As a web developer, I am always working on projects of varying size and usefulness. Still, I’d like these projects to make their way online eventually. The only problem is that web hosting is not always cheap, and getting a domain and renting web space for every single project would eventually cost a small fortune to keep running. So I set out to find a reliable way to build a Multi-Site VPS (Virtual Private Server) that hosts all of my projects in a single, easy-to-manage package.
I will go into detail on how to configure the Multi-Site VPS in such a way that it will be able to host several web-based applications using Docker to containerize them, Portainer to manage the Docker containers, and Traefik to serve as a reverse proxy in order to send incoming traffic to the right container. As an example, I will then show how to launch a WordPress website through this method.
Installing and Configuring CentOS 7
You are free to pick whichever Linux distro you like. Many of the better-known distros support Docker just fine, so there isn’t really a wrong choice. For this specific guide, however, I will be using the Red Hat-based CentOS 7.
Updating OS & installing packages on the Multi-Site VPS
Assuming you have your clean CentOS 7 server running and are connected to it (over SSH, for example), I recommend running some updates before we start on the rest of the server.
# Tell the package manager to check for updates
sudo yum check-update
# Apply those updates
sudo yum update -y
# Reboot the server to make sure everything is updated properly
sudo reboot
The server will now restart, so wait a few minutes before attempting to reconnect to your server. Once you reconnect, run the following commands to install the DNF package manager. We will need that later to install some specific packages that are not available on Yum.
# Install the DNF package manager
sudo yum install dnf -y
# Make sure the DNF package manager is on the latest version
# (sometimes installing does not get the latest version)
sudo dnf update -y
# Reboot the server once more
sudo reboot
Again, wait for the reboot to complete, and reconnect. Throughout this guide, I will use Nano as the text editor, simply because I prefer it. If you’d rather use Vi or Vim, that works too, but the instructions assume Nano. It is not installed on CentOS by default, so you need to install it first.
# Install the nano text editor
sudo yum install nano -y
Finally, we will enable automatic updates through the dnf-automatic package. This will make sure that all installed packages are supplied with the latest security updates, without us having to update the server manually over time. We will also have to edit a config file in order for it to work correctly.
# Install the DNF automatic updater
sudo yum install dnf-automatic -y
# Edit the DNF automatic update config file through nano
nano /etc/dnf/automatic.conf
Replace the current contents of automatic.conf with the following lines:
[commands]
# What kind of upgrade to perform:
# default = all available upgrades
# security = only the security upgrades
upgrade_type = default
random_sleep = 0
# Maximum time in seconds to wait until the system is on-line and able to
# connect to remote repositories.
network_online_timeout = 60
# To just receive updates use dnf-automatic-notifyonly.timer
# Whether updates should be downloaded when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
download_updates = yes
# Whether updates should be applied when they are available, by
# dnf-automatic.timer. notifyonly.timer, download.timer and
# install.timer override this setting.
apply_updates = yes
[emitters]
# Name to use for this system in messages that are emitted. Default is the
# hostname.
# system_name = my-host
# How to send messages. Valid options are stdio, email and motd. If
# emit_via includes stdio, messages will be sent to stdout; this is useful
# to have cron send the messages. If emit_via includes email, this
# program will send email itself according to the configured options.
# If emit_via includes motd, /etc/motd file will have the messages. if
# emit_via includes command_email, then messages will be send via a shell
# command compatible with sendmail.
# Default is email,stdio.
# If emit_via is None or left blank, no messages will be sent.
emit_via = motd
[email]
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
# Name of the host to connect to to send email messages.
email_host = localhost
[command]
# The shell command to execute. This is a Python format string, as used in
# str.format(). The format function will pass a shell-quoted argument called
# `body`.
# command_format = "cat"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
[command_email]
# The shell command to use to send email. This is a Python format string,
# as used in str.format(). The format function will pass shell-quoted arguments
# called body, subject, email_from, email_to.
# command_format = "mail -Ssendwait -s {subject} -r {email_from} {email_to}"
# The contents of stdin to pass to the command. It is a format string with the
# same arguments as `command_format`.
# stdin_format = "{body}"
# The address to send email messages from.
email_from = root@example.com
# List of addresses to send messages to.
email_to = root
[base]
# This section overrides dnf.conf
# Use this to filter DNF core messages
debuglevel = 1
If you are running a mail server, you could edit the applicable configs in the file to point towards your mail server, so that you will automatically get sent reports about the automatic updates. Lastly, we enable automatic updates through systemctl.
# Enable the DNF automatic updater
sudo systemctl enable --now dnf-automatic.timer
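If you want to confirm that the timer was actually picked up, systemd can list it for you. This is just a sanity check, and the exact output varies by system:

```shell
# Confirm the dnf-automatic timer is scheduled. On a machine without
# systemd this prints a notice instead of failing.
if command -v systemctl >/dev/null 2>&1; then
  systemctl list-timers 'dnf-automatic*' --no-pager || true
  timer_status="listed dnf-automatic timers above"
else
  timer_status="systemctl not available on this machine"
fi
echo "$timer_status"
```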
Configuring IPv6
Next, we will install some tools in order to configure the way we can connect to our server.
# Install net-tools and httpd-tools
sudo yum install net-tools httpd-tools -y
Seeing as IPv4 addresses are getting scarce, we are going to configure the server to accept IPv6 connections as well.
# Edit the sysctl.conf file
sudo nano /etc/sysctl.conf
Once you open the file in your text editor, go to the bottom of the file, and add the following line. Then save and exit:
# Enable ipv6
net.ipv6.conf.eth0.disable_ipv6 = 0
Next, we are going to make sure the server knows which IPv6 address to default to. Most hosting providers supply an IPv6 address for your server; you can usually find it somewhere in the configuration panel on the provider’s website. Once you find your IPv6 address, copy it, and execute the following commands:
# Load the settings from the configuration file
sudo sysctl -p
# Add the ipv6 address to the default network
sudo ip addr add [paste your ipv6 address here] dev eth0
# Define the default ipv6 route (fe80::1 is a common gateway, but check your provider's docs)
sudo ip route add default via fe80::1 dev eth0
# Edit the sysconfig network file
sudo nano /etc/sysconfig/network
The last command opens a file once again. Add the following line to the bottom of this file, then save and exit.
# Enable ipv6 mode
NETWORKING_IPV6=yes
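To verify that the IPv6 setup took effect, a quick check like the following helps. It only reports success on a machine that actually has a global IPv6 address configured, so don’t be surprised if it comes up empty elsewhere:

```shell
# Check whether the interface now has a global IPv6 address assigned.
if ip -6 addr show scope global 2>/dev/null | grep -q inet6; then
  ipv6_status="global IPv6 address present"
else
  ipv6_status="no global IPv6 address found yet"
fi
echo "$ipv6_status"
```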
Enabling a user account with root access
I prefer my servers to have at least a bit of extra security. In this step, we will add a new account with root access and deny logging in as root. We will then disable password login and enable login through an SSH key, which makes the server a lot more secure by refusing any SSH access without the correct key. After that, we add 2FA (two-factor authentication) through Google Authenticator to our login, which is a big help in case our key is ever compromised. And finally, we will change the port of our SSH connection just to make things a little more obscure for possible threats.
To start, let’s make a new user and give it root access.
# Replace every instance of newUserName with your chosen username
# Add a new user
adduser newUserName
# Give the new user root access
usermod -aG wheel newUserName
# Change the password of the new user
passwd newUserName
After the last command, you are prompted to enter a new password for the user. Enter the new password twice. Connect to the server using the new user and password.
# Connect to the server through SSH
ssh newUserName@yourServerIp
Enabling logging in through SSH key
To enable logging in with SSH keys, you will need to generate an SSH key pair, consisting of a public key, and a private key. Once you have the key pair, we are going to configure the server to only accept an SSH connection if the ssh-agent of the client is loaded up with the correct private key. Open the public key file, and copy its contents to the clipboard. We are now going to allow the created keys to work for this new user account. Execute the following commands to create the .ssh directory, and apply the correct directory permissions:
# Create the .ssh directory in the users home folder
mkdir ~/.ssh
# Set the correct folder permissions
sudo chmod 700 ~/.ssh
# Edit the authorized_keys file
nano ~/.ssh/authorized_keys
Once you are in the text editor, paste the copied public key, save, and quit. Then change the file permissions of the authorized_keys file:
# Set the correct file permissions
sudo chmod 600 ~/.ssh/authorized_keys
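As an aside, if you do not have a key pair yet, one can be generated on your local machine with ssh-keygen. The file name here is just an example, and in practice you would also want to protect the key with a passphrase rather than leaving it empty as shown:

```shell
# Generate an ed25519 key pair on your LOCAL machine, not on the server.
# -f sets the output path; -N "" skips the passphrase for brevity here.
ssh-keygen -t ed25519 -N "" -C "vps-login-key" -f ./my_vps_key
# This is the public half that goes into authorized_keys on the server
cat ./my_vps_key.pub
```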
In order for the keys to work, we need to configure the SSH server to accept key authentication and deny password authentication and root login. Open the following file as such:
# Edit the SSH server config
sudo nano /etc/ssh/sshd_config
Once opened, look for the following lines that I commented out, and replace them with the lines below that.
# PubkeyAuthentication no
PubkeyAuthentication yes
# PasswordAuthentication yes
PasswordAuthentication no
# PermitRootLogin yes
PermitRootLogin no
Save and exit the file. Next, we are going to reload the SSH server so that our new config is applied.
# Reload the SSH server, applying the new configs
sudo systemctl reload sshd
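Since a typo in sshd_config can lock you out entirely, it is worth knowing that sshd has a built-in test mode that parses the config without touching the running service. A small sketch (run it with sudo on the server itself, ideally before reloading):

```shell
# sshd -t parses sshd_config and reports syntax errors without
# affecting the running SSH service.
if command -v sshd >/dev/null 2>&1; then
  sshd -t 2>/dev/null && sshd_check="sshd config OK" \
    || sshd_check="config check failed - run 'sudo sshd -t' for details"
else
  sshd_check="sshd not found on this machine"
fi
echo "$sshd_check"
```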
Enabling 2FA
Next up, 2FA. This will ask anyone that wants to log in for the appropriate authenticator code generated by the Google Authenticator app. We will first need access to EPEL packages in order to install the Google authenticator package.
Generating the Google Authenticator QR
Execute the following commands:
# Enable access to EPEL packages through yum
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y
# Install the google authenticator package
sudo yum install google-authenticator -y
# Launch the google authenticator configuration utility
google-authenticator
This will start the Google-authenticator configuration utility. Answer the first prompt with “y”. You will then see a generated QR code show up in your terminal. Download and open the Google Authenticator app on your phone, tap the plus icon in the bottom-right, and pick “Scan QR-code”. Use your camera to scan the QR code from the terminal, and finish the registration process. The terminal will also give you some emergency codes. Make sure to save these to a safe location, in case you lose access to the authenticator. Afterward, answer the prompts in the terminal with the following answers: “y, y, n, y”.
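As an aside, the interactive prompts can, as far as I can tell, also be answered up front with command-line flags; check google-authenticator --help on your server to confirm the exact options before relying on this:

```shell
# The flags mirror the prompt answers used above: -t time-based tokens,
# -d disallow token reuse, -f write the config file without asking,
# -r 3 -R 30 rate-limit to 3 logins per 30 seconds, -W keep the default
# time window (the "n" answer).
ga_status="google-authenticator is not installed on this machine"
if command -v google-authenticator >/dev/null 2>&1; then
  google-authenticator -t -d -f -r 3 -R 30 -W
  ga_status="authenticator configured non-interactively"
fi
echo "$ga_status"
```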
Configuring the server to use 2FA
Now the authenticator is linked, we still need to tell the server to use the authenticator during login. For this, we use Linux’s PAM (Pluggable Authentication Module). Execute the following command:
# Edit the PAM config for SSH access
sudo nano /etc/pam.d/sshd
Add the first line to the bottom of the file. Then find the second line, and comment it out.
auth required pam_google_authenticator.so nullok
# auth substack password-auth
Save and exit the file. Then edit the SSH config once more:
sudo nano /etc/ssh/sshd_config
Find the first line and replace it with the second. Then add the third line to the bottom of the file.
# ChallengeResponseAuthentication no
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
Through the above, we tell the server to only enable specific ways of logging in through SSH. We only want it to match the public and private keys, and accept the authenticator. The authenticator is called through the “keyboard-interactive” option. Finally, we restart the SSH server once more to apply all the changes.
# Restart the SSH server to apply changes to the config
sudo systemctl restart sshd.service
Now test the applied configuration by reconnecting to the server. The server will no longer ask for a password; instead it checks your active SSH key and then asks for the verification code from your Google Authenticator app. After entering the code, you are reconnected to your server.
Enabling Firewall
Now we will enable a basic firewall using firewalld, CentOS’s built-in firewall, in order to fend off any attacks on unattended ports. We only want the ports that we actually use to be available.
# Turn on the firewall
sudo systemctl enable firewalld --now
# Enable port 80 for http access
sudo firewall-cmd --permanent --zone=public --add-service=http
# Enable port 443 for https access
sudo firewall-cmd --permanent --zone=public --add-service=https
# Enable port 22 for SSH access
sudo firewall-cmd --permanent --zone=public --add-service=ssh
# Apply all changes
sudo firewall-cmd --reload
(Optional) Changing the SSH port
As an added layer of security, you could also change your SSH port. This will only befuddle the simplest of SSH attacks, but it is a little safer and very easy to set up. We add the following rule to our firewall.
# Replace xxxxx with any custom port number you want.
# To prevent overlapping port numbers, it is safest to pick a number between 10000-65535
sudo firewall-cmd --add-port=xxxxx/tcp --permanent
# Apply all changes
sudo firewall-cmd --reload
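If you are unsure which number to pick, coreutils can choose a random one in the suggested range for you. The variable name here is just illustrative:

```shell
# Pick a random port in the suggested 10000-65535 range.
# shuf ships with coreutils, so it is available on CentOS.
new_ssh_port=$(shuf -i 10000-65535 -n 1)
echo "Candidate SSH port: $new_ssh_port"
```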
Now we need to change the SSH config of the server to recognize the new port.
# Edit the SSH server config
sudo nano /etc/ssh/sshd_config
Then find the following line, and change it accordingly.
# Again, fill in your chosen port number
Port xxxxx
# Save and close
There are still some internal configs that need to be changed, since SELinux only allows SSH on port 22 by default. Luckily, there is a package that helps out with that: with semanage, we can easily configure certain elements of the SELinux security layer that comes with CentOS.
# Install the package containing semanage
sudo dnf install policycoreutils-python -y
# Allow the chosen port for SSH in the SELinux config (replace xxxxx again)
sudo semanage port -a -t ssh_port_t -p tcp xxxxx
# If SELinux reports that the port is already defined, modify it instead
sudo semanage port -m -t ssh_port_t -p tcp xxxxx
# Reload the ssh server.
sudo systemctl restart sshd
Disconnect from the server once more, and reconnect to it using the chosen port number.
# Reconnect to the server through SSH using the newly defined port
ssh user@server.tld -p xxxxx
And to finish off our configurations, we are going to disable access to the default SSH port.
# Remove the default SSH access from the firewall
sudo firewall-cmd --remove-service=ssh --permanent
# Reload the firewall to apply changes
sudo firewall-cmd --reload
So now we have achieved the following security measures on our Multi-Site VPS:
- Enabled IPv6 access to our server
- Disabled logging into the root account, and made a user with root access
- Enabled logging in with an SSH key
- Enabled 2FA authentication
- Enabled the firewall
- Changed the SSH port
Installing Docker on the Multi-Site VPS
Docker is an application that lets us run containerized images. These containers are set up to run only what an application specifically needs, making them very resource-efficient. This way, we do not have to install any of the packages you would usually find on a web server, like Apache, MySQL, or PHP, directly on our Multi-Site VPS. We are going to run our projects as separate stacks of containers to make managing them a breeze. First, we are going to install the docker package.
# We need the yum config manager which is not installed by default
sudo yum install yum-utils -y
# Next we use the config manager to add the docker repo to yum
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Now we are going to install the docker package and its dependencies
sudo yum install docker-ce docker-ce-cli containerd.io -y
# Now we start the docker daemon, and enable running docker on startup
sudo systemctl enable docker --now
Docker is now running in the background of our Multi-Site VPS. We can check this by running the “sudo docker” command in bash. It will show the help menu for docker commands. Now we are going to install the docker-compose package so that we can use docker-compose files to more easily add docker containers to our daemon.
# Install the docker-compose package
sudo yum install docker-compose -y
Lastly, Docker should not have to be run with sudo, so we add our user to the docker group using the following command. Note that you have to log out and back in before the group change takes effect.
# This command will use the $USER environment variable, which is the current user
sudo usermod -aG docker $USER
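A quick way to check whether the group change is active in your current session:

```shell
# Group changes only apply to new login sessions, so after the usermod
# you must log out and back in (or run 'newgrp docker') before the
# docker group shows up here.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  group_status="docker group active - sudo no longer needed"
else
  group_status="docker group not active yet - log out and back in"
fi
echo "$group_status"
```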
Installing Portainer on the Multi-Site VPS
Portainer is a great application that gives you a GUI in which you can manage your docker containers. It is very easy to set up and provides a more visual method of making sure everything is working as intended. Think of it as a general management interface for your Multi-Site VPS. It also makes making docker stacks using docker-compose configs a lot easier.
# We use docker to create the portainer volume, which will persist the saved data used by portainer
docker volume create portainer_data
# Now we will run the portainer container
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
cr.portainer.io/portainer/portainer-ce:latest
The previous command works like this:
- docker run starts a new container
- -d starts the container detached, making it run as a background process
- The two -p flags expose ports 8000 and 9443, which Portainer needs to run correctly
- --name portainer gives this container the name “portainer”
- --restart=always ensures that, whenever Docker is running, it will try to restart Portainer if it happens to not be running
- -v /var/run/docker.sock:/var/run/docker.sock lets Portainer hook into the Docker daemon. This is what gives the container access to the Docker application running on the server.
- -v portainer_data:/data persists the data Portainer saves to the portainer_data volume. One example is the login credentials you need to access the Portainer dashboard.
- cr.portainer.io/portainer/portainer-ce:latest is simply where the image for this container is located; Docker will download and run it
Once Docker has downloaded the image and started the container, you still need to enable access to the dashboard by configuring the firewall. Execute the following commands:
# Add the portainer dashboard port to the firewall
sudo firewall-cmd --permanent --zone=public --add-port=9443/tcp
# Reload the firewall to apply changes
sudo firewall-cmd --reload
Now, visit “https://yourdomain.tld:9443” in your browser to access the installation wizard (note the https: Portainer serves its dashboard over HTTPS on port 9443). The process is straightforward. Once you have selected the local environment, the menu on the left side of the page should be populated with several options, including the “Stacks” option.
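If the page does not load, a quick reachability check from the server itself can help narrow the problem down. This is just a diagnostic sketch:

```shell
# -k is needed because Portainer ships a self-signed certificate by
# default; --max-time keeps the check from hanging.
if command -v curl >/dev/null 2>&1; then
  http_code=$(curl -k -s -o /dev/null -w "%{http_code}" --max-time 5 \
    https://localhost:9443 2>/dev/null || true)
else
  http_code="curl not installed"
fi
echo "Portainer responded with HTTP status: ${http_code:-000}"
```

A 200 or a redirect status here but no page in your browser usually points at the firewall or at your DNS setup rather than at Portainer itself.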
Installing Traefik on the Multi-Site VPS
The last step before our Multi-Site VPS environment is ready to serve some projects is to run the Traefik docker container. Traefik is a reverse-proxy engine that makes configuring a Multi-Site server a breeze. Instead of making a configuration for every project, we can supply docker containers with labels that Traefik will hook into in order to receive a request from a user that will lead to the appropriate linked container. It will also automatically apply a Let’s Encrypt SSL certificate to our sites if we do so desire.
Creating the Traefik configuration file
First, we will create a directory in our user’s home directory to save a copy of the Traefik config file. That way the config persists, and we are able to edit it outside the Traefik container itself.
# Create the traefik directory in the current user's home folder
sudo mkdir ~/traefik
# Edit the traefik.yml file
nano ~/traefik/traefik.yml
Next, paste the following config in the traefik.yml file
global:
  checkNewVersion: true
  sendAnonymousUsage: false  # true by default

# (Optional) Log information
# ---
# log:
#   level: ERROR  # DEBUG, INFO, WARNING, ERROR, CRITICAL
#   format: common  # common, json, logfmt
#   filePath: /var/log/traefik/traefik.log

# (Optional) Accesslog
# ---
# accesslog:
#   format: common  # common, json, logfmt
#   filePath: /var/log/traefik/access.log

# (Optional) Enable API and Dashboard
# ---
# api:
#   dashboard: true  # true by default
#   insecure: true  # Don't do this in production!

# Entry Points configuration
# ---
entryPoints:
  web:
    address: :80
    # (Optional) Redirect to HTTPS
    # ---
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https

  websecure:
    address: :443

# Configure your CertificateResolver here...
# ---
certificatesResolvers:
  staging:
    acme:
      email: your-email@domain.tld
      storage: /etc/traefik/certs/acme.json
      caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
      httpChallenge:
        entryPoint: web

  production:
    acme:
      email: your-email@domain.tld
      storage: /etc/traefik/certs/acme.json
      caServer: "https://acme-v02.api.letsencrypt.org/directory"
      httpChallenge:
        entryPoint: web

# (Optional) Overwrite Default Certificates
# tls:
#   stores:
#     default:
#       defaultCertificate:
#         certFile: /etc/traefik/certs/cert.pem
#         keyFile: /etc/traefik/certs/cert-key.pem

# (Optional) Disable TLS version 1.0 and 1.1
#   options:
#     default:
#       minVersion: VersionTLS12

providers:
  docker:
    exposedByDefault: true  # Default is true
  file:
    # watch for dynamic configuration changes
    directory: /etc/traefik
    watch: true
Note: Make sure to change the listed email addresses in the certificateResolvers part of this config with your email address, or SSL certificates might not work!
Here follows a brief explanation of the config:
- Under the global part, we let Traefik check for new versions, and we disable sending telemetry data
- Under the entryPoints part, we define two entry points: web and websecure. Web covers HTTP access and websecure covers HTTPS access. In web we also define a rule where HTTP automatically redirects to HTTPS for better security.
- Under the certificateResolvers part, we define two stages on which we can apply SSL certificates. Let’s Encrypt also supports giving out staging certificates for sites not running in production for testing purposes. For us to create containers that will have these SSL certificates applied, we need to define if a container is running in staging or production. With this config, Traefik knows when to apply a test certificate or a real one.
- Under the providers part, we expose the Docker daemon by default, so Traefik automatically detects new running containers and can hook onto them. We also point the file provider at the config directory and enable watching it, so we can edit the configuration while Traefik is running.
Save and exit the file. Now Traefik is ready to be started.
Starting the Traefik container
Open Portainer, and in the leftmost menu, click on “Stacks”. Then at the top of that page, click on “+ Add Stack”. Then on the next screen enter the name “Traefik” and then copy and paste the following docker-compose config in the web-editor box.
version: '3'

volumes:
  traefik-ssl-certs:
    driver: local

services:
  traefik:
    image: "traefik:v2.5"
    container_name: "traefik"
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/USER/traefik:/etc/traefik
      - traefik-ssl-certs:/ssl-certs
      - /var/run/docker.sock:/var/run/docker.sock:ro
Note: Make sure to change “USER” in the volumes part of the config to your Linux username.
Let’s go through this config as well.
- Version defines the Compose file format version, which determines what configuration features are available.
- At volumes, we create another persistent volume that will save the certificates that Traefik will generate for our containers.
- At services, we define which containers must be created within this stack. In this case, we start a container named Traefik using the Traefik v2.5 image. We ensure it will automatically restart unless we specifically tell Docker to stop the container. We expose port 80 for HTTP and port 443 for HTTPS. Finally, we bind the config we made to the correct folder within the container, persist all generated certificates to the created volume, and provide (read-only) access to the Docker daemon on the server.
Scroll all the way down the page, and click on “Deploy the Stack”. Once the page refreshes, the Traefik container will run in the background. We can now deploy additional stacks that Traefik will detect when correctly configured. We can then define incoming domain requests and bind them to specific containers to serve many sites from the same server, depending on which domain a user tried to access.
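As a side note, the same stack could also be deployed without Portainer, straight from the shell with docker-compose. This sketch assumes you saved the config above as docker-compose.yml in the current directory; the file name and location are up to you:

```shell
# Deploy and inspect the stack from the CLI instead of Portainer.
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  docker-compose up -d
  docker-compose ps
  compose_status="stack deployed from the CLI"
else
  compose_status="docker-compose or docker-compose.yml not found here"
fi
echo "$compose_status"
```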
Traefik Example: WordPress on a Multi-Site VPS
As an example of how Traefik works, we will deploy a WordPress stack that will be able to be reached through a URL of a domain that we own. Let’s assume that we own a domain called mywordpress.com and we configured the domain to point towards the Multi-Site VPS.
Creating the WordPress stack
We open portainer and once again we go to the Stacks menu, and click on “+ Add Stack”. We name the stack “mywordpress” and then use the following docker-compose config.
version: '3.1'

services:
  mywordpress:
    image: wordpress
    restart: unless-stopped
    container_name: mywordpress
    labels:
      traefik.enable: "true"
      traefik.http.routers.mywordpress.entrypoints: "web,websecure"
      traefik.http.routers.mywordpress.rule: "Host(`mywordpress.com`)"
      traefik.http.routers.mywordpress.tls: "true"
      traefik.http.routers.mywordpress.tls.certresolver: "production"
      traefik.http.services.mywordpress.loadbalancer.server.port: "80"
    environment:
      WORDPRESS_DB_HOST: "yourdomain.tld:8001"
      WORDPRESS_DB_USER: "username"
      WORDPRESS_DB_PASSWORD: "password"
      WORDPRESS_DB_NAME: "wp_mywordpress"
    volumes:
      - /home/USER/mywordpress:/var/www/html
    network_mode: traefik_default

  mywordpress-db:
    image: mysql:5.7
    restart: unless-stopped
    container_name: mywordpress-db
    ports:
      - 8001:3306
    environment:
      MYSQL_DATABASE: "wp_mywordpress"
      MYSQL_USER: "username"
      MYSQL_PASSWORD: "password"
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db-mywordpress:/var/lib/mysql

volumes:
  db-mywordpress:
Note: Make sure to use your own descriptive container names, your own domain, and your own usernames and passwords in the docker-compose config above.
Explaining the WordPress stack
WordPress
Once more, let’s explain what we do here.
- We start a container named “mywordpress” using the WordPress image, which Docker will download for us from the Docker repository.
- We configure the container to restart unless we stop it manually.
- Next, we give the container some labels that Traefik can interact with.
- traefik.enable – Allow Traefik to notice this container
- traefik.http.routers.[site name].entrypoints – Defines what entry points can be accessed for this container. In this case, we allow accessing ports 80 and 443.
- routers.[site name].rule – Here we define the host. The host will be the domain Traefik will recognize in the request, and then send that request to this container.
- routers.[site name].tls – Here we enable applying the automatic SSL certificates.
- routers.[site name].tls.certresolver – Here we decide on which resolver we use. We defined “staging” and “production” in the config. We use “production” in this case since this will be a live public website.
- services.mywordpress.loadbalancer.server.port – Here we define what port within the container the request is sent to. Also, this will use Traefik’s internal load balancer.
- In Environment, we define some Environment Variables that the container can use. To find out which variables are available, you need to read the image’s documentation. WordPress has its own supported Environment Variables listed on its Docker page. In this case, I define what database and which credentials WordPress should use.
- In volumes, we tell docker to sync the contents of the WordPress webroot folder to a folder on the server that we can reach. This way we can easily configure the config, fix problems, or do other maintenance to the WordPress site.
- In network_mode we tell docker to add this container to the Traefik network. Only when a container is in the same network as Traefik, will it be able to detect the container.
MySQL
- We define a name for the container
- We configure its restart behavior
- We bind port 8001 on the server to port 3306 in the container (the MySQL default port). So when port 8001 is called on our server, Docker forwards the request to this MySQL container, which receives it as if it had been sent to port 3306.
- We set up the database through Environment Variables
- We link the created database to a volume, so the data persists even after the container stops.
Afterward, we also need to update our firewall once again to allow access to the MySQL database.
# Add the MySQL database port to the firewall
sudo firewall-cmd --permanent --zone=public --add-port=8001/tcp
# Reload the firewall to apply changes
sudo firewall-cmd --reload
Note: There are ways to expose the database only to the WordPress container in the same stack, which is likely the better approach. However, this works as well; just make sure you use secure credentials.
Note 2: If you decide to add more sites that use a MySQL database, you should pick a different unused port instead of port 8001.
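Before assigning a new host port to a second database, it helps to check that nothing is listening on it yet. The port number here is just an example:

```shell
# Check whether a candidate host port is already in use on the server.
# ss ships with iproute2 on CentOS; 8002 is an example candidate.
candidate_port=8002
if ss -tln 2>/dev/null | grep -q ":$candidate_port "; then
  port_status="port $candidate_port is already in use - pick another"
else
  port_status="port $candidate_port looks free"
fi
echo "$port_status"
```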
Once the stack runs and the firewall is updated, you can reach the site through the Host URL you defined as a label. Notice that the site automatically redirects to HTTPS; it might take some time for Traefik to pick up a new certificate. Using this method, you can now deploy as many WordPress sites or other projects on your server through Docker as you please, as long as your server has the free resources to do so, of course!
Thank you for reading my first-ever guide! I would really appreciate any comment or feedback! Also, feel free to check out my blog where I write about anything going on in my life.