Development Updates info.btree.at

Hi all,

with permission and encouragement from @clemens I'm starting this thread to write about my development progress on the newest b.tree application version, which is still pre-alpha but should be fully open source when finished. I will try to keep this thread mostly to the technical aspects.

Short backstory: My beekeeping management application has been running since 2014. I do not have a big user base, but as I need it for my own beekeeping operation I do not care. It was also my first project to learn programming. Over the years it has grown into a behemoth of mixed PHP and JavaScript code, and every fix or change brings a handful of new bugs. Additionally, it depends on a few paid licenses, which did not allow me to make it open source.

In 2021 I looked for ways to improve the situation and decided to build everything new from the ground up, while keeping it possible to migrate the old database (my database design is lacking and a little bit hard to migrate, as I designed everything when I started programming with no knowledge at all). As you can imagine, 8 years of data is something you want to keep. This is a major undertaking for me, as I only work on it in my free time, am no professional programmer, and have no funding to outsource parts of it.

Since 2022 I'm more or less finished with my biology studies and currently unemployed. On the positive side, while writing job applications and waiting for the beekeeping season to kick in, I have more time to spend on my application.

Dev Backend (see dev branch for most recent changes):

Dev Frontend:

In my upcoming first post I will write about the design of my backend and why and which tools I use.

Cheers
Hannes


Dear Hannes,

thank you very much for sharing this project with the community under a free software license. I will be curious to try running the application when I can find some time.

Keep up the spirit!

With kind regards,
Andreas.

P.S.: In the same manner as we tried to inspire @marten.schoonman and @iconize of BEEP fame [1], and the Hiveeyes backend component [2], may I suggest using the AGPL-3.0 license for this project? That would, in layman's terms, extend the obligation to publish amendments to the software also when "just running it as a web application".


  1. https://github.com/beepnl/BEEP

  2. https://github.com/daq-tools/kotori

Hi Andreas,

thanks for the input, I will adopt that license, as it sounds like the better choice for the project.

Cheers
Hannes

Hi all,

the goal of this post on my backend decisions is to help fellow programmers in their decision making, or to give them some ideas when they want to build something similar. For me, this type of post would have helped a lot at the beginning.

Backstory: It took me quite a while to decide how to tackle the problem of writing a new backend. Salvage my old PHP code into a full-fledged backend or not? As I wrote everything in pure PHP and used no ORM, this was not really fruitful. After a few months of jumping between different PHP frameworks I made a full stop and went for a node.js approach. Why node.js? Mainly because I will use JavaScript on the frontend and it always felt more natural to me. Of course everything will be written in TypeScript; although it still gives me a lot of headaches, it not only prevents bugs but is also very helpful during development (at least if you write "good" types).

Goal: node.js backend with lots of business logic already included.

Database: MySQL, the sole and simple reason was that my current app database is running on MySQL. I also have written a few helpful VIEWS which I can reuse quite nicely.

Server: Standard express configuration, with the controllers containerized with awilix. I may drop the containers again, but haven't decided yet.

Security:

  • cors for CORS middleware (for non-public API requests)
  • helmet and hpp to secure HTTP headers
  • express-rate-limit to prevent public API overuse

Authentication: JSON Web Token strategy (password). This was the one I could get my head around the "easiest". The access token is short-lived, and asynchronous frontend calls wait if the first one needs to refresh it. The refresh token is tied to the user agent to allow multiple tokens for the same user on different devices. I still don't know how I should handle an expired refresh token: forcefully log out the user, or show a password popup.
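To make the short-lived access token idea concrete, here is a minimal client-side sketch (my own illustration, not the actual b.tree code) that decodes a JWT payload and checks whether the token is expired or about to expire, so the frontend knows when a refresh is due:

```javascript
// Hypothetical helper, not the b.tree implementation: decode the JWT
// payload (base64url) and compare its `exp` claim (seconds since epoch)
// against the current time, with a small safety margin.
function isTokenExpired(jwt, skewSeconds = 30) {
  const payloadPart = jwt.split('.')[1];
  if (!payloadPart) return true; // malformed token: treat as expired
  try {
    const payload = JSON.parse(
      Buffer.from(payloadPart, 'base64url').toString('utf8')
    );
    if (typeof payload.exp !== 'number') return true;
    return payload.exp <= Date.now() / 1000 + skewSeconds;
  } catch {
    return true; // undecodable token: treat as expired
  }
}
```

Note that this only inspects the token locally to decide when to refresh; the server still verifies the signature on every request.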

ORM: knex was my base; it has a great, simple CLI for database migration logic. It also allowed me to write the MySQL VIEWS in raw SQL. As I did not want a full-grown ORM but just a bit of a helper, I additionally decided on objection.js, which is the most raw knex variant I could find. It had a rather gentle learning curve and felt right to use.

Validation: objection.js allows simple JSON schema validation, which uses ajv (with ajv-formats) under the hood. Nevertheless, the first validation should already happen at the router with express-validator (errors are thrown with boom). I'm still undecided whether I should use joi; currently express-validator seems fine for me, but joi would allow better reusability. So far I have only implemented router validation on the auth routes, so this part is still in development.

Mailer: Standard nodemailer to handle mailing, and the mjml framework for creating the email templates, as I can reuse these from my old app.

Internationalisation: After some trial and error, the best way was to simply let the frontend decide which language to use. This means the server response currently contains all languages, and the frontend displays the correct one based on the user settings. This turned out to be a great idea; I don't know where I picked it up.
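As a sketch of what this multi-language response idea can look like (the field names are made up for illustration, not the actual b.tree payload format): the API returns every translation, and a tiny frontend helper picks the right one:

```javascript
// Illustration only: the server embeds all translations, the client
// selects by user language with an English fallback.
function pickTranslation(field, lang, fallback = 'en') {
  if (field === null || typeof field !== 'object') return field;
  return field[lang] ?? field[fallback] ?? Object.values(field)[0];
}

// e.g. a task type as it might come from the API
const taskName = { en: 'Feeding', de: 'Fütterung' };
console.log(pickTranslation(taskName, 'de')); // Fütterung
console.log(pickTranslation(taskName, 'fr')); // falls back to English: Feeding
```

The trade-off of this approach is payload size versus being able to switch languages without another round trip to the server.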

Datatables.net: I'm a big fan and user of Datatables, but I will probably drop it because it does not play nicely with Vue on my frontend. My current application uses Datatables for about 90% of its views, so this is a big hit, as I could have salvaged a lot. Now I need to write my own lazy-loading tables with export, multi-row edit etc., which will probably cost me some time, but at least then all my libraries are free of license fees.

Testing: Nothing so far. I know this is really bad practice, but rapid and rough prototyping did not play nicely with a write-tests-first approach, as I don't really understand most of the stuff I'm coding yet. :)

Small helpful helpers which I believe are always a good choice on bigger projects: dayjs for date handling, lodash as a collection of little helper functions (e.g. comparing two objects for equality), and node-schedule so I don't need to set up cron jobs on the server.
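For readers unfamiliar with the lodash helper mentioned above: _.isEqual(a, b) does a deep value comparison. A tiny (deliberately non-exhaustive) sketch of the idea:

```javascript
// Simplified sketch of what lodash's isEqual does for plain objects and
// arrays; the real implementation also handles Dates, Maps, cycles, etc.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => deepEqual(a[key], b[key]));
}

console.log(deepEqual({ hive: 1, notes: ['fed'] }, { hive: 1, notes: ['fed'] })); // true
```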

What would I do differently? Right now I would probably go for Rust on the backend, simply because I would love to try it and it could come in handy for future data science projects.

Cheers
Hannes


Could you share some details about it: a short description, links, or some screenshots of the main features? There is software out there to manage bee boxes, beekeepers' inspections, a feeding protocol in autumn, or an electronic "Standbuch" to track animal pharmaceuticals, … so it would be nice to have a rough idea what b.tree actually does. From an outsider's perspective it is not completely clear to me.

Simply said, it is a record-keeping tool (feeding, checkups, harvesting, treatment), one of many on the market. The main reason for me was to create a simple tool for the yearly official inspection as a certified organic beekeeper. It is nothing special, but back then I was unhappy with the available applications when managing more than a few colonies, and thus decided to create my own. Nowadays the available software is a lot better, and I often recommend the following commercial apps: for hobby beekeepers BeeInTouch (German), and for professional beekeepers MyApiary (English).

As for my own, you can visit info.btree.at for more information, where I have written documentation. My "niche" is probably medium-sized beekeeping operations, because of quick editing features and simple table-like formats for data, without too much unnecessary display information. My current number of registered apiaries and hives may seem high (see the attached, quickly sketched plot), but the number of active users (logged in within the last 6 months) is around 150. A total of 40 users paid for premium access last year (50 € / year incl. VAT), which allows me to run a fast managed server in Germany and pay for the licenses.

Cheers
Hannes


Hi all,

first post on frontend decisions.

Currently my frontend is a mixture of PHP templating and jQuery. Interestingly enough, PHP, which I always felt is a great templating language, is nowadays mostly used on the backend, while modern JavaScript frameworks with their SSR (server-side rendering) have more or less come full circle back to PHP, which was always SSR in my understanding.

Selecting a frontend framework could be easy, but if you research the various JavaScript frameworks you quickly feel overwhelmed, as you don't want to invest time into a technology which may be obsolete again in a year. The biggest out there are probably React and Angular, and I played around with both, but they felt too big for me when starting out. After the big ones I looked at the "fresher" options Svelte and Vue.js; both felt very good, with a lower entry barrier in my opinion. With Vue.js I also already have some experience, because I use Nuxt for my personal homepage. Therefore the decision was made to start working with Vue.js, as overthinking things doesn't help either. Especially the new Vue.js 3.0 composition API and the new script setup feel a lot like how I handled my jQuery code, with a lot of added benefits.

The first chores were the user login / register tasks, which I actually don't like. Nevertheless it worked quite well with Vue.js, especially because with the component setup you can reuse, for example, e-mail inputs on multiple pages without much hassle. I'm already looking forward to refactoring to clean up excess code (DRY), which I really enjoy.

I'm also pleased that my logic for the token setup, with promises waiting if a refresh is needed, works quite elegantly (in my opinion), but I definitely need to ask some professional programmers about this topic. My axios interceptor currently looks something like this, and timeouts are always questionable. ;)

// Access token expired
if (err.response.status === 401 && !retry) {
  retry = true;
  // return the refreshed request so the caller receives its result
  return api.post('/v1/auth/refresh', storeToken.getRefreshToken).then(
    (res) => {
      storeToken.refreshToken(res.data.result);
      retry = false;
      return api(originalConfig);
    },
    async (error) => {
      await AuthService.logout();
      return Promise.reject(error);
    }
  );
} else if (err.response.status === 401) {
  // if we have multiple async calls to our API,
  // we loop and wait until the refresh token is regenerated
  while (retry === true) {
    await new Promise((r) => setTimeout(r, 500));
  }
  return api(originalConfig);
}
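A common alternative to the 500 ms polling loop (just a pattern sketch on my side, not a claim about how the app should do it) is to keep one shared refresh promise, so all concurrent 401 handlers await the same refresh call instead of guessing timeouts:

```javascript
// Pattern sketch: deduplicate concurrent token refreshes. `doRefresh` is a
// placeholder for the actual API call (e.g. api.post('/v1/auth/refresh')).
let refreshPromise = null;

function refreshOnce(doRefresh) {
  if (!refreshPromise) {
    refreshPromise = doRefresh().finally(() => {
      refreshPromise = null; // allow a fresh refresh next time
    });
  }
  return refreshPromise;
}
```

Each interceptor would then `await refreshOnce(...)` and retry its original request; only the first caller actually triggers the refresh.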

A short "demo" video of the first parts of the frontend:

Cheers
Hannes


Hi all,

frontend decisions, next part. After long days of coding I have come to love Vue.js, and to hate it in some parts, especially forwarding and watching props between components. This is probably mainly due to my misunderstanding of how it should be done correctly.

As the first chores were not so much fun, I decided to work on the more enjoyable parts → the apiary map, as I have always liked generating maps. I don't know for sure, but the dominating map library is probably Leaflet.js, and I already have some experience with it. Reality hit hard after seeing that https://leafletjs.com/ was created by an awesome Ukrainian, Vladimir Agafonkin. One more reason to hate this ongoing war. The great part about open source is that it does not know any borders and all people are welcome (at least in my world).

Nevertheless, Leaflet.js thankfully integrates very easily with Vue.js thanks to an official repo: GitHub - vue-leaflet/vue-leaflet: vue-leaflet compatible with vue3. The library showed me again how easy it is to create interactive maps, but I had some hiccups with updating the markers reactively, as I tried to cache as much as possible.

Here is a demo of how it currently looks:

Cheers
Hannes


I'd just like to mention that we used Vue in the Bee Observer project as a tool to generate a UI for configuring the node via a captive portal. The Vue code was generated on a computer and then uploaded as static code to the node. We found it difficult to work on this Vue code as a shared resource with different developers, so we did nearly no bugfixing or improvements.

Static sites are also the easiest in my head, and I will not use any SSR in the near future either. Maybe I'll look into a PWA or generating an Electron app, but those are just thoughts. For building, GitHub Actions are very powerful; for example, my personal homepage is automatically built and uploaded when I commit to the master branch. It also invalidates the Amazon AWS cache, which I use for asset caching (probably not needed for my 2 visitors a month, but I like to over-engineer things ;) ). You can look at my workflow here: btree_info/deploy.yml at master · HannesOberreiter/btree_info · GitHub
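For readers who have not used GitHub Actions for static deployments, a workflow along these lines is typical. This is a generic sketch with placeholder secret names and an example third-party SFTP action, not a copy of the linked btree_info file:

```yaml
# Sketch: build a static site on pushes to master and upload it via SFTP.
name: deploy
on:
  push:
    branches: [master]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build
      # Example SFTP upload step; host/user/key live in repository secrets.
      - uses: wlixcc/SFTP-Deploy-Action@v1.2.4
        with:
          server: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          ssh_private_key: ${{ secrets.DEPLOY_KEY }}
          local_path: ./dist/*
          remote_path: /var/www/html
```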

If you have multiple people working on a project, a good Git setup and contribution documentation are a must-have, in addition to test cases. Otherwise you need to check every piece of changed code extensively.

Cheers
Hannes

Hi all,

frontend decisions, next part. After some mapping fun last time, I struggled a little bit with my webpack build setup. After some research I came across a "modern" frontend dev tool, https://vitejs.dev/. It is actually not that new (2019) and I don't know why I skipped it, especially since it is the recommended build tool for Vue 3 (you can also use it with React). The transition from webpack was fast, as the settings for Vite are pretty simple and the tool itself is really blazing fast (see video, last part, where I update a nested component). The big advantage of Vite is that it only rebuilds changed modules (HMR, hot module replacement), so there is no need to rebuild the full app on each edit. At least that is how I understand it.

After getting the build setup running again, the next task was again "chores", e.g. user settings, credentials editing and company settings. On this note, I have a really "bad" user management system, as when I started the database and app I never thought about multiple "users" and "companies". If I were to create a new database from scratch, I would use and absolutely recommend a third-party user authentication system (e.g. Auth0, OpenID etc.), not only to prevent my own headaches but also to improve security.

Cheers
Hannes


Just FYI, I can see the shared videos as web-embedded video with Chrome only, not with FF. So in case you get an error

The media could not be loaded, either because the server or network failed or because the format is not supported.

try another browser. Btw, thanks for sharing, Hannes!

Thanks for the info. Yeah, you are correct, although it should be supported: WebM video format | Can I use... Support tables for HTML5, CSS3, etc

The same is true for Safari, where it also won't work inside the browser.

I may make a playlist on YouTube, where the videos get converted automatically.

Edit: YouTube Playlist App Dev - YouTube

Cheers
Hannes

Hi all,

frontend and a little bit of backend decisions, next part. While happily building my frontend I ran into some brain twisters with my "REST" API design, though I never followed REST design strictly anyway.

Some cases:

  • Deleting or getting a thousand rows at once by ID: with REST I'm somewhat limited by the maximum URL length, as GET/DELETE requests by definition don't take body params.
  • Special endpoints for things like moving task dates in a calendar, where I felt that doing the work in the backend is more future-proof than offloading everything onto the frontend.
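One common workaround for the URL-length issue in the first point (a general pattern, not necessarily what b.tree implements) is to move bulk operations to a POST endpoint that takes the IDs in the request body, and to chunk very long ID lists on the client:

```javascript
// Sketch: split a long ID list into request-sized chunks, e.g. for
// POST /v1/tasks/batchDelete { ids: [...] } (the endpoint name is made up).
function chunkIds(ids, size = 500) {
  const chunks = [];
  for (let i = 0; i < ids.length; i += size) {
    chunks.push(ids.slice(i, i + size));
  }
  return chunks;
}

console.log(chunkIds([1, 2, 3, 4, 5], 2)); // [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```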

I already said at the beginning, in my backend decisions, that I want to keep as much business logic as possible in the backend, which does not seem to play nicely with a strict REST design.

Nevertheless, after some fine-tuning I finally finished my first datatable, which is kind of a big step for me, as I want to show most of my app data as tables.

Some features I wanted at minimum: server-side loading of big data, saving state in local storage, custom column display, basic search and ordering, and making it composable for my other tables. There are still some bugs and mini-optimisations needed, but I'm already quite happy with the result and looking forward to building my other 20… tables :).
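The server-side part of such a table boils down to applying search, ordering and paging before returning the rows plus a total count. A minimal in-memory sketch of that contract (in the real app this would be a knex query; the parameter names are my own):

```javascript
// Sketch of a server-side table endpoint's core logic: filter, order and
// page an array of rows; returns the page plus the total for the pager.
function queryTable(rows, { page = 1, limit = 10, search = '', orderBy = null, direction = 'asc' } = {}) {
  let result = rows;
  if (search) {
    const q = search.toLowerCase();
    result = result.filter((row) =>
      Object.values(row).some((value) => String(value).toLowerCase().includes(q))
    );
  }
  if (orderBy) {
    result = [...result].sort((a, b) =>
      a[orderBy] > b[orderBy] ? 1 : a[orderBy] < b[orderBy] ? -1 : 0
    );
    if (direction === 'desc') result.reverse();
  }
  return {
    total: result.length, // total after filtering, for the pager
    page,
    limit,
    data: result.slice((page - 1) * limit, page * limit),
  };
}
```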

Cheers
Hannes


Hi all,

are there any MySQL database magicians in the Hiveeyes community? I'm currently running into performance issues with my VIEWS and would need some help; I don't want to go back to raw queries in my code (which is what I did in my current app to improve performance).

Of course I am willing to pay pocket money for help.

I can provide a database dump with example data, and here is the raw SQL code (the *_view_.*.sql files), especially 20220125124845_view_queens_locations.sql, 20220125123313_view_hives_locations.sql and 20220125152944_view_tasks_apiaries.sql.

Cheers
Hannes

[edit] For further discussion and the solution see the separate topic Problems with MySQL/MariaDB performance


Hi all,

after my short excursion into database engineering for dummies, I'm back on track building stuff. In my database schema I may or may not have made some bad decisions for the future. For my own sanity I introduced multiple redundant columns, which improves my code a lot and reduces the use of VIEWS in queries.

My original design of "tasks" (e.g. feeding, treatment, …) looked like this:

task - 1:1 - hive - 1:n - movedate - n:1 - apiary - 1:1 - company

This means that to figure out whether a "task" belongs to the current user, I had to walk the "whole" chain. Now I have introduced redundant user_id columns, to filter some tables directly without relying on joins and VIEWS.

task - n:1 - company
hive - n:1 - company
task - n:1 - hive - 1:n - movedate - n:1 - apiary - n:1 - company

I also don't have to care about cascades "that" much anymore, as a "task" can now exist without a hive or apiary. It actually should not happen, but it eases my mind.

As for the frontend, I was able to finish a lot of "bigger" chunks. This also came with the realisation that I really need to start writing tests soon, something I have completely ignored in my whole self-taught programming career. So far I'm leaning towards starting with e2e tests, and to reduce the need for "new" languages I'm thinking about using https://www.cypress.io/ for frontend and backend API testing. I don't know the downsides of using Cypress for the backend, but for me it is probably better to learn one tool well instead of different flavours again.

Here is a short video of my finished Dropbox integration, which took me around 4 days of pain, but I somehow managed. The official SDK has a lot of missing TypeScript types, and the documentation was lacking. The integration is also available in my current app, but this time it is a lot cleaner and more secure, with access and refresh tokens. Currently I also support uploading to my own server, which I will not support in the new app.

Cheers
Hannes

Hi all,

beekeeping season is slowly kicking in, but still manage to get time in for my app development.

A little side story on why I have come to love TypeScript: a few days ago a user reported a bug in my old code, where my queen rearing logic wasn't working for some of his rearing methods. After a lot of trial and error the solution was rather simple; it was a type error which only mattered if there were more than 10 rearing steps:

if (value.position > sel_position) // "10" > "5" = false
if (parseInt(value.position) > parseInt(sel_position)) // 10 > 5 = true
// https://javascript.info/comparison#string-comparison

Now to some stats, as I love them. I would say I'm around 75% finished with the reimplementation of my old code, and there are a lot of structural changes compared to the old app.

  • Old Frontend:
    • 234 files, 46,318 lines of code
  • New Frontend:
    • 188 files, 18,769 lines of code
  • Old Backend:
    • 61 files, 11,048 lines of code
  • New Backend:
    • 138 files, 9,052 lines of code

For the frontend I could trim down a lot of code, as my old version had a lot of redundant pieces. In addition, Vue.js helps me reduce code chunks a lot, especially with its easy component system.

The new backend has more than double the old file count, which is due to my "new" thinking of moving away from behemoth single files towards smaller files for different logic. For example, in the old app I had one router file (controller.php); now each main route has its own file (apiary.route.ts, hive.route.ts, …).

On the frontend I also try to implement "best" practices from Vue.js. One of these is a flat file structure. Initially I liked it a lot, but with the growing number of files I'm now a little bit unsure, as the components directory keeps growing:

Although the problem lies with me, as I still open files manually via mouse-clicking. I really need to get used to opening files with keyboard search; then the flat file structure really makes sense and works great:

Lastly, a video of my queen rearing logic, which was a big part, and I'm quite happy to have reimplemented it a lot better than in my old app:

Cheers
Hannes


Hi all,

last week was heavy for me. I created my first technical alpha to test deployment and build automation. Again, lots of new stuff to learn. Now I know why Google Firebase and similar tools are loved by developers, as building a full-stack app from scratch is really time-demanding and tough.

First I had to upgrade my testing server, as it was really slow when compiling and serving stuff. The current setup looks like this and it's pretty fast (without user load, that is ;)):

The +10GB volume is reserved for my database, as the local disk is not "stable" and could be lost if the server goes down. The server is backed up each day and is set up with nginx as HTTP server and reverse proxy, certbot for automatic setup of SSL certificates, and Docker for my API server image and database image.

In addition I set up a cron job which auto-updates/upgrades the server packages and restarts daily in case of any memory leaks. I don't know if that is good practice, but it's still better than missing crucial security patches.

After many trials and errors I managed to set up nginx and learned how to use a reverse proxy with Docker containers. It seems to be a good idea to set a fixed subnet for your Docker network, as the gateway address changed twice, breaking my reverse proxy without me knowing why.

Now my docker-compose.yml looks like this:

...
networks:
  btree-db-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16

and my upstream.conf like this:

# path: /etc/nginx/conf.d/upstream.conf
upstream btree_at_api {
    server 172.18.0.1:1338; # Gateway + Port
}

After setting up the API and starting my Docker containers, I had to plan for database backups. After a bit of searching I found databack/mysql-backup, a Docker container whose job is exactly this; it was rather easy to set up. The only question was where to save the backups.

Although Amazon AWS would be a good choice (the only downside being that you support Amazon's domination of the market) and it would be supported by the Docker container, I thought I'd go the challenging way, as I have a secure Nextcloud server running, so the "easiest" solution was to set up a connection to it. Nextcloud uses WebDAV, so I had to install davfs2 as a driver on my server to be able to mount the Nextcloud disk. Thanks to a rather straightforward guide I somehow managed to do it: Guide Mount Nextcloud WebDAV on Linux

After the backend was done, I moved on to the frontend, which is "only" static as I do not use SSR, so deployment is easy: I set up a GitHub Action to auto-build and push via SFTP into the server's www folder. It was my first SPA, and I ran into problems when refreshing the browser. I solved it with a try_files $uri $uri/ /index.html; directive inside my nginx config:

server {
    ....
   # This part is to remove the service worker from cache, also very important when building a PWA app
   location = /service-worker.js {
        expires off;
        add_header Cache-Control no-cache;
        access_log off;
    }
   .....
    # SPA reload bug workaround https://megamorf.gitlab.io/2020/07/18/fix-404-error-when-opening-spa-urls/
   location / {
      try_files $uri $uri/ /index.html;
    }

}

Now everything runs more or less smoothly. The last part was to auto-build the Docker container for my API from GitHub. The Docker Hub auto-build feature was disabled in 2021 (Changes to Docker Hub Autobuilds - Docker), but thankfully there are already GitHub Actions which build and push the container to Docker Hub.

Lastly, as previously mentioned, I started building e2e tests for my backend. I gave up on Cypress, as there was no clear how-to for API testing. Now I'm going with the mocha and chai combo, plus supertest for HTTP access. It again took me quite a while to get running (the main problem was that my node server was not closing after mocha testing; after hours with wtfnode I found out that nodemailer was the problem and I had introduced a memory leak, which I then fixed, so testing already paid off :) )

mocha test

Cheers
Hannes

Hi all,

this week was about writing tests for my backend and the CI test implementation.

First of all, I'm really happy that I forced myself to write tests; I already found a few bugs which I would surely have missed before going live. The testing also seems rather fast, which means I can keep it running during development.

Currently I have written close to 500 test cases, some of them nested, which comes down to ~8s on my local machine.

testing-local

On GitHub for my CI implementation it boils down to ~1-2 minutes.

Pollution of test database

I did not know how to handle the continuous pollution of my test database, as the tests run automatically on change. First I tried to drop the database and create a new one, but this left me with some user permission problems. Next I tried to roll back all migrations and then migrate back up, which took quite a long time, as I have already written a lot of migration files.

My final solution was to a) migrate to the latest database version and b) truncate all tables (not needed on CI). Step b) was solved with some raw SQL queries: first I fetch the table names from the schema information, then disable the foreign key checks and loop over all tables. Here is the full code which runs before the tests:

before(async function () {
    this.timeout(10000); // standard mocha time-out is 2s
    console.log('  knex migrate latest ...');
    await knexInstance.migrate.latest();
    if (process.env.ENVIRONMENT !== 'ci') {
      console.log('  knex truncate tables ...');
      // knexInstance.migrate.rollback({ all: true }) was too slow
      await knexInstance.raw('SET FOREIGN_KEY_CHECKS = 0;');
      const tables = await knexInstance
        .table('information_schema.tables')
        .select('table_name', 'table_schema', 'table_type')
        .where('table_type', 'BASE TABLE')
        .where('table_schema', knexConfig.connection.database);
      for (const t of tables) {
        if (
          !(
            ['KnexMigrations', 'KnexMigrations_lock'].includes(t.TABLE_NAME) ||
            t.TABLE_NAME.includes('innodb')
          )
        )
          await knexInstance.raw(`TRUNCATE ${t.TABLE_NAME};`);
      }
      await knexInstance.raw('SET FOREIGN_KEY_CHECKS = 1;');
    }
    global.app = require(process.cwd() + '/dist/api/app.bootstrap');
    global.server = global.app.server;
});

Mocha peculiarities

Here are some cases which took me quite a while to figure out, with mocha.

done()

Normally you have to close your tests and before/after hooks with done(), e.g.:

  before((done) => {
    ....
    done();
  });

If you use promises or async functions, you should not use the done() callback:

  before(async () => {
    await someAsyncSetup(); // no done() needed, mocha awaits the promise
  });

global

If you want to share variables across multiple test files, there is the global object, e.g. for your test user login:

global.demoUser = {
  email: `test@btree.at`,
  password: 'test_btree',
  name: 'Test Beekeeper',
  lang: 'en',
  newsletter: false,
  source: '0',
};

Closing server

As mentioned last time, I had to use wtfnode to find out why mocha was not closing automatically. This time it was knex which did not close my connection, so you have to call knex.destroy().

  after(async () => {
    global.app.boot.stop();
    global.app.dbServer.stop();
    await knexInstance.destroy(); // otherwise knex keeps the process alive
  });

CI / GitHub Action

The goal was to automatically run the tests on a pull request against the main branch. This one was again a little bit tricky, as you cannot really test it on your local machine.

I first played around with how to create a database; the first idea was to use a service container (which is a Docker container). But after a while I figured out that MySQL is actually preinstalled on the Linux runners, just not active, so you only need to start it.

# Start MySQL (preinstalled but inactive on the runner)
sudo systemctl start mysql
# Create our testing database
mysql -e 'CREATE DATABASE ${{ env.DB_DATABASE }};' -u${{ env.DB_USER }} -p${{ env.DB_PASSWORD }}

Next up were again permission problems, as newer MySQL versions do not allow simple password access by default, which I could solve with some SQL commands after some googling.

# Change identifier method for our testing user
mysql -e "ALTER USER '${{ env.DB_USER }}'@'localhost' IDENTIFIED WITH mysql_native_password BY '${{ env.DB_PASSWORD }}';" -u${{ env.DB_USER }} -p${{ env.DB_PASSWORD }}
# Let MySQL know that privileges changed 
mysql -e "flush privileges;" -u${{ env.DB_USER }} -p${{ env.DB_PASSWORD }}

The final working action can be found here: btree_server/test.yml at main · HannesOberreiter/btree_server · GitHub

Cheers
Hannes


Hi all,

beekeeping season is going strong and the last queen rearing series is done for this year. Quite happy with the outcome: 42 new queens ready this year, more than enough for myself.

Development has slowed down a little due to lack of time, but I'm still writing tests. I also started writing tests for the frontend, this time with Cypress. Overall, my two main impressions of Cypress e2e are:

  • positive: you can see the results
  • negative: quite slow to test the whole app

It is probably something you only do before publishing a new version, as you cannot really let all tests run during development. But you can also run only a subset of tests, for example for a new page or form you are writing; this works in parallel without any problems.

Anyway, I feel a little drained from writing more tests and will probably continue writing features and maybe finish up the backend (iCal, public API for hive scales, statistics).

Here is a video of the integration test. As you can see, at the end it only tests whether the page is present and loaded; this part of course needs more in-depth tests in the future.

Cheers
Hannes
