Hi all,
Last week was a heavy one for me. I created my first technical alpha to test deployment and build automation. Again, lots of new stuff to learn. Now I know why Google Firebase and similar tools are loved by developers, as building a full-stack app from scratch is really time-demanding and tough.
First I had to upgrade my testing server, as it was really slow when compiling and serving stuff. The current setup looks like this, and it's pretty fast (without user load, that is ;)):
The +10GB volume is reserved for my database, as the local disk is not “stable” and could be lost if the server goes down. The server runs a backup each day and is set up with nginx as HTTP server and reverse proxy, certbot for automatic setup of SSL certificates, and docker for my API server image and database image.
In addition I set up a cronjob which auto-updates/upgrades the server packages and reboots the machine daily in case of any memory leaks. I don't know if that is good practice, but it's still better than missing any crucial security patches.
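For reference, on a Debian/Ubuntu box this fits into two cron entries, roughly like this (the times and the plain apt-get approach are my assumptions, not necessarily my exact file):

# /etc/cron.d/nightly-maintenance (sketch)
# update and upgrade packages every night at 04:30
30 4 * * * root apt-get update && apt-get -y upgrade
# reboot at 05:00 to clear out any leaked memory
0 5 * * * root /sbin/shutdown -r now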
After many trials and errors I managed to set up nginx and learned how to use a reverse proxy with Docker containers. It seems to be a good idea to pin a fixed subnet for your Docker network: mine changed twice, my reverse proxy stopped working, and I did not know why.
Now my docker-compose.yml looks like this:
...
networks:
  btree-db-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
and my upstream.conf like this:
# path: /etc/nginx/conf.d/upstream.conf
upstream btree_at_api {
    server 172.18.0.1:1338; # Gateway + Port
}
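For context, the site config that uses this upstream then only needs a proxy_pass. A minimal sketch (the domain and file path are placeholders, and the ssl_certificate lines are the ones certbot adds):

# path: /etc/nginx/conf.d/api.conf (placeholder path)
server {
    listen 443 ssl;
    server_name api.example.com;               # placeholder domain
    # ssl_certificate / ssl_certificate_key lines are added by certbot

    location / {
        proxy_pass http://btree_at_api;        # the upstream defined above
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}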
After setting up the API and starting my Docker containers, I had to plan for database backups. After a bit of searching I found databack/mysql-backup, a Docker container whose job is exactly this, and it was rather easy to set up. The only open question was where to save the backups.
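The service itself is only a few lines in docker-compose.yml. Roughly something like this (service name, credentials and target folder are placeholders, and the environment variable names should be double-checked against the databack/mysql-backup README):

services:
  ...
  db-backup:
    image: databack/mysql-backup
    environment:
      DB_SERVER: btree-db            # name of the database service in the compose file (placeholder)
      DB_USER: backup
      DB_PASS: ${DB_BACKUP_PASS}
      DB_DUMP_FREQ: 1440             # minutes between dumps, i.e. once a day
      DB_DUMP_BEGIN: "0230"          # start time as HHMM
      DB_DUMP_TARGET: /db            # where the dumps land inside the container
    volumes:
      - /mnt/nextcloud/backups:/db   # local folder (later the mounted Nextcloud share)
    networks:
      - btree-db-network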
Although Amazon AWS would be a good choice (the only downside being that you support Amazon's domination of the market) and it would be supported by the Docker container, I thought I'd go the challenging way: I have a secure Nextcloud server running, so the “easiest” solution was to set up a connection to it. Nextcloud uses WebDAV, so I had to install davfs2 as a driver on my server to be able to mount the Nextcloud disk. Thanks to a rather straightforward guide I somehow managed to do it: Guide Mount Nextcloud WebDAV on Linux
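In case someone wants to do the same, the short version is roughly this (URL, username and mount point are placeholders; the Nextcloud app password goes into /etc/davfs2/secrets):

# install the WebDAV filesystem driver (Debian/Ubuntu)
sudo apt-get install davfs2

# mount the Nextcloud files endpoint on a local folder
sudo mkdir -p /mnt/nextcloud
sudo mount -t davfs https://cloud.example.com/remote.php/dav/files/USERNAME/ /mnt/nextcloud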
After the backend was done, I moved on to the frontend, which is “only” static as I do not use SSR, so deployment is easy: a GitHub Action auto-builds the app and pushes it with SFTP onto the server's www folder (a sketch of such a workflow follows after the config below). It was my first SPA and I ran into problems when the browser is refreshed. I solved it with a try_files $uri $uri/ /index.html; line inside my nginx config:
server {
    ....
    # This part is to remove the service worker from cache, also very important when building a PWA app
    location = /service-worker.js {
        expires off;
        add_header Cache-Control no-cache;
        access_log off;
    }
    .....
    # SPA reload bug workaround https://megamorf.gitlab.io/2020/07/18/fix-404-error-when-opening-spa-urls/
    location / {
        try_files $uri $uri/ /index.html;
    }
}
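The deploy workflow mentioned above could look roughly like this. It is only a sketch using plain scp over SSH instead of a dedicated SFTP action, and the secret name, host, user and build folder are assumptions:

# .github/workflows/deploy-frontend.yml (sketch)
name: Deploy frontend
on:
  push:
    branches: [main]
jobs:
  build-and-upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run build
      - name: Upload to the www folder
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H example.com >> ~/.ssh/known_hosts
          scp -i ~/.ssh/id_ed25519 -r dist/* deploy@example.com:/var/www/html/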
Now everything runs more or less smoothly. The last part was to auto-build the Docker container for my API from GitHub. But the Docker Hub autobuild feature was disabled in 2021 (Changes to Docker Hub Autobuilds - Docker); thankfully there are already GitHub Actions which build the container and push it to Docker Hub.
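A minimal sketch with the official Docker actions (image name and secret names are placeholders):

# .github/workflows/docker-image.yml (sketch)
name: Build and push API image
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: myuser/btree-api:latest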
Lastly, as previously mentioned, I started building e2e tests for my backend. I gave up on cypress as there was no clear how-to for API testing. Now I'm going with the mocha and chai combo, plus supertest for the HTTP access. It again took me quite a while to get running. The main problem was that my Node server was not closing after the mocha tests; after hours with wtfnode I found out that nodemailer was the problem and that I had introduced a memory leak. So I fixed it, which is already a good argument for testing :)

Cheers
Hannes