Setting Up a Production-Ready VPS
Set up a VPS or home server with Docker and Traefik as a reverse proxy. Learn SSH config, firewall rules, and SSL with Let's Encrypt. Covers both commercial VPS and Raspberry Pi setups for secure and optimized web hosting.
Setting up a production-ready Virtual Private Server (VPS) is easier than you might expect. With the right steps and tools, you can have your environment running smoothly with minimal overhead. In this guide, I’ll walk you through the process of configuring a production-ready VPS, including security, deployment, and scalability, using simple tools that don’t require deep domain expertise.
Introduction and Project Setup
A production-ready VPS offers several advantages over serverless platforms, such as consistent billing, greater control over your infrastructure, and the ability to mitigate platform-specific issues. The challenge here was to deploy a simple web application on a VPS from scratch.
For this setup, the requirements for "production-ready" included:
- A DNS record pointing to the server.
- The application must be up and running.
- Security best practices for protection.
- High availability and automated deployment for ease of use.
- Website monitoring for notifications if the site becomes unavailable.
Rather than using complex tools like Kubernetes or infrastructure-as-code solutions like Terraform, the goal was to keep the setup simple and minimal.
VPS Selection and Initial Setup
Setting up a production-ready Virtual Private Server (VPS) involves selecting a provider that aligns with your requirements and budget. Below is a comparison of popular VPS providers, along with an optional Raspberry Pi setup for a home VPS:
| Provider | CPU | RAM | Storage | Bandwidth | Price (per month, cheapest comparable option) | Location |
|---|---|---|---|---|---|---|
| DigitalOcean | 2 vCPU | 4 GB | 80 GB SSD | 4 TB | ~€24 | Multiple |
| Hostinger | 1 vCPU | 4 GB | 50 GB SSD | 4 TB | €5 (first year, then €9-14) | Multiple |
| OVH | 2 vCPU | 4 GB | 80 GB SSD | ∞ | €11 (first year, then €13) | Multiple |
| AWS Lightsail | 2 vCPU | 4 GB | 80 GB SSD | 4 TB | €24 | Multiple |
| Raspberry Pi | Quad-Core ARM Cortex-A72 | 4 GB | microSD (up to 1 TB) | ∞ | ~€100 (one-time) | Home Network |
Note: Prices and specifications are approximate and may vary based on the provider and any ongoing promotions. The Raspberry Pi setup is a cost-effective solution for light workloads but may not match the performance of commercial VPS providers.
When choosing a VPS provider, consider factors such as performance requirements, budget constraints, data center locations, and additional features like backups or managed services. For home setups, ensure that your internet service provider allows server hosting and that you have a reliable power source to minimize downtime.
I chose Ubuntu 20.04 LTS as the operating system and set up a base installation to avoid unnecessary bloat. Here's a quick rundown of the initial steps:
- Set up the VPS: I installed the OS and configured a strong password for the root user.
- SSH Configuration: I added a secure SSH public key for login to avoid password-based authentication.
- User Setup: A new user was added with sudo privileges using the adduser and usermod commands.
adduser myuser
usermod -aG sudo myuser
The new user was tested by switching to it and running a command with sudo.
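A minimal check, using the example username from above:
su - myuser
sudo whoami   # prints "root" if the sudo group membership took effect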
DNS Configuration
Next, I pointed a domain name to the server. The process involved purchasing a domain and adding the necessary A record to the DNS configuration. Here's how to set up DNS on a service like Hostinger:
- Add A Record: Point the domain (here, the doom subdomain) to your VPS's IP address.
- Verify DNS Propagation: Use nslookup or another DNS tool to confirm that your domain resolves to the correct IP address.
nslookup doom.mydomain.com
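If dig is installed on your local machine, it gives the same answer in a shorter form (the subdomain is the example used throughout this guide):
dig +short doom.mydomain.com   # should print your VPS's public IP address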
SSH Hardening and Security
Security was one of the primary considerations during setup. I hardened SSH to prevent unauthorized access and protect the server from brute-force attacks. Here's how:
Add your local SSH public key to the VPS:
Before proceeding with changes to the SSH configuration, ensure that your local id_rsa.pub key is copied to the remote environment.
Generate an SSH Key Pair (if you don't have one):
Open a terminal and run:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
Press Enter to accept the default file location (~/.ssh/id_rsa) and provide a passphrase when prompted (optional).
Copy Your Public Key to the VPS:
Use `ssh-copy-id` to add your public key to the remote server's authorized keys
ssh-copy-id myuser@doom.mydomain.com
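Before changing the SSH daemon configuration, it's worth confirming that key-based login works; the command below forces public-key authentication so a silent password fallback doesn't mask a problem:
ssh -o PreferredAuthentications=publickey myuser@doom.mydomain.com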
Edit the SSH configuration file:
Open /etc/ssh/sshd_config in a text editor:
sudo vim /etc/ssh/sshd_config
Note: a vim cheat sheet can come in handy if you're not used to the editor.
Disable PAM authentication:
Set PAM authentication to no:
UsePAM no
Disable Root Login:
I also disabled root login via SSH.
PermitRootLogin no
Disable Password Authentication:
With your key in place, turn off password-based logins:
PasswordAuthentication no
Restart SSH:
After making changes, restart the SSH service.
sudo systemctl restart sshd
This ensures that only users with SSH keys can log in.
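Before closing your current session, it's wise to validate the configuration and test a fresh login from a second terminal, so a typo can't lock you out:
sudo sshd -t                    # no output means the config file parses cleanly
ssh myuser@doom.mydomain.com    # run from your local machine in a new terminal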
Web Application Deployment (Initial Approach)
To get the application up and running, I chose the HTTP Doom project, a Docker-based HTTP service that serves the classic Doom game. This project is simple but effective for testing web applications. Here's how to set it up:
Install Docker and Docker Compose:
I installed Docker and Docker Compose on the VPS:
sudo apt install docker.io
sudo apt install docker-compose
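Two optional follow-ups, sketched here: adding your user to the docker group avoids prefixing every Docker command with sudo (log out and back in for it to take effect), and a version check tells you whether you have the Compose v2 plugin (the `docker compose` subcommand used in the rest of this guide) or only the legacy `docker-compose` binary shipped by the apt package:
sudo usermod -aG docker myuser   # optional: run docker without sudo after re-login
docker compose version           # present if the Compose v2 plugin is installed
docker-compose version           # the standalone v1 binary installed above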
Run the Container:
Once the image was pulled, I ran the container on the VPS:
docker run -p 80:8080 mattipaksula/http-doom
The application was now accessible on port 80.
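A quick way to confirm the container answers on port 80 (run from your local machine, using the DNS name configured earlier):
curl -sS -o /dev/null -w "%{http_code}\n" http://doom.mydomain.com   # a 200 status code means the app is reachable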
Running the App with Compose
Instead of running the app directly with docker run, I opted to set up a compose file. This approach allows for better configuration and scalability.
Configure Docker Compose: I used the docker-compose.yml file for deployment.
Note: it's a good idea to create a dedicated directory on your VPS to hold the compose file and keep things organized.
services:
  app:
    image: mattipaksula/http-doom
    ports:
      - 80:8080
Run Docker Compose: I brought up the stack using:
docker compose up -d
This method ensures the app stays easily configurable as we add more services later.
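To confirm the stack came up (run from the directory that contains the compose file):
docker compose ps         # the app service should be listed as running
docker compose logs app   # tail the container's output if something looks off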
Firewall Configuration
A firewall is essential to securing the server. Using ufw (Uncomplicated Firewall, packaged with Ubuntu), I configured the firewall rules to block unnecessary ports and allow only essential ones.
Deny all inbound requests and allow outbound by default:
sudo ufw default deny incoming
sudo ufw default allow outgoing
Allow SSH Connections:
sudo ufw allow OpenSSH
Note: if you configured a custom SSH port, this is where you need to allow it; replace OpenSSH with the custom port number.
Allow HTTP Connections:
sudo ufw allow 80
Allow connections to the Traefik insecure WebUI (dashboard):
sudo ufw allow 8080
Allow HTTPS Connections:
sudo ufw allow 443
Apply changes:
sudo ufw enable
This configuration ensures only necessary ports are open and helps prevent unauthorized access.
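You can review the resulting rule set at any time:
sudo ufw status verbose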
Change the compose file:
With Traefik about to take over routing (next section), the app no longer needs to publish a port directly, so the ports mapping is removed:
services:
  app:
    image: mattipaksula/http-doom
Reverse Proxy Setup with Traefik
To streamline the deployment further, I set up a reverse proxy using Traefik. Traefik automatically handles routing, load balancing, and HTTPS certificates, making it ideal for a production environment.
Set Up Traefik: I added Traefik as a service in the docker-compose.yml file.
services:
  reverse-proxy:
    image: traefik:v3.1
    command:
      # WebUI dashboard
      - "--api.insecure=true"
      - "--providers.docker"
      # Don't expose Docker containers by default; each one must opt in with a label
      - "--providers.docker.exposedbydefault=false"
    ports:
      - "80:80"
      - "443:443"
      # WebUI port (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    ...
Configure the Doom Service: I added a label to the HTTP Doom container to let Traefik know how to route traffic:
services:
  reverse-proxy:
    ...
  app:
    image: mattipaksula/http-doom
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.doom.rule=Host(`doom.mydomain.com`)"
Apply the Changes: After updating docker-compose.yml, I recreated the services so the new labels take effect (a plain docker compose restart does not apply compose file changes).
docker compose up -d
Now, traffic directed to doom.mydomain.com would be routed through Traefik, which would forward requests to the HTTP Doom container.
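To check that the routing works, hit the domain again and, since the dashboard port was opened in the firewall section, glance at the Traefik dashboard:
curl -sS -o /dev/null -w "%{http_code}\n" http://doom.mydomain.com   # should return 200, now served via Traefik
The dashboard should be reachable at http://<your-vps-ip>:8080/dashboard/ and list the doom router once Traefik has picked it up.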
TLS Certificate Configuration
For security, I wanted to ensure that all HTTP traffic is encrypted using TLS. Traefik integrates with Let's Encrypt to automatically provision SSL certificates.
- Enable TLS in Traefik: I added the following lines to the Traefik configuration to automatically obtain TLS certificates:
services:
  reverse-proxy:
    image: traefik:v3.1
    command:
      - "--api.insecure=true"
      - "--providers.docker"
      - "--providers.docker.exposedbydefault=false"
      # the HTTP-01 challenge is answered on port 80, so a web entrypoint is needed alongside websecure
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.httpchallenge=true"
      - "--certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.myresolver.acme.email=youremail@domain.com"
    ports:
      - "80:80"
      - "443:443"
      # WebUI port (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      - letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    image: mattipaksula/http-doom
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.doom.rule=Host(`doom.mydomain.com`)"
      - "traefik.http.routers.doom.entrypoints=websecure"
      - "traefik.http.routers.doom.tls.certresolver=myresolver"

volumes:
  letsencrypt:
Test the Setup: After reloading the services, I verified that accessing https://doom.mydomain.com served the HTTPS version of the Doom application.
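A couple of standard ways to confirm a valid certificate was issued:
curl -s -o /dev/null -w "%{http_code}\n" https://doom.mydomain.com   # prints 200 only if the TLS handshake and certificate validation succeed
openssl s_client -connect doom.mydomain.com:443 -servername doom.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates
The second command prints the certificate issuer (Let's Encrypt) and its validity dates.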

High Availability and Automated Deployment
To ensure availability, you can run multiple instances of your application. Traefik will automatically load balance traffic across them.
Modify your docker-compose.yml to define a replicas count:
services:
  reverse-proxy:
    ...
  app:
    image: mattipaksula/http-doom
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.doom.rule=Host(`doom.mydomain.com`)"
      - "traefik.http.routers.doom.entrypoints=websecure"
      - "traefik.http.routers.doom.tls.certresolver=myresolver"
    deploy:
      mode: replicated
      replicas: 3
    restart: always
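Because the app service no longer publishes a host port, Compose can start several replicas without port conflicts. A quick way to see them after bringing the stack up:
docker compose up -d
docker compose ps   # three app containers should be listed, all routed by Traefik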
You can also use Watchtower to automatically monitor and update your running containers whenever a new image is pushed:
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - "--label-enable"
      - "--interval"
      - "30"
      - "--rolling-restart"
  reverse-proxy:
    ...
  app:
    # pin the image tag explicitly so Watchtower knows which tag to watch for new pushes
    image: mattipaksula/http-doom:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.doom.rule=Host(`doom.mydomain.com`)"
      - "traefik.http.routers.doom.entrypoints=websecure"
      - "traefik.http.routers.doom.tls.certresolver=myresolver"
      - "com.centurylinklabs.watchtower.enable=true"
    deploy:
      mode: replicated
      replicas: 3
    restart: always
Steps to Monitor Uptime with UptimeRobot:
- Sign Up for UptimeRobot:
- Go to UptimeRobot and sign up for an account.
- Create a New Monitor:
- Once logged in, click on Add New Monitor.
- Choose HTTP(s) as the monitor type.
- Enter your website's URL (e.g., https://doom.mydomain.com).
- Set the monitoring interval (e.g., every 5 minutes).
- Optionally, configure notifications (via email, Slack, etc.) to get alerts if the website is down.
- Save the Monitor:
- Click Create Monitor after configuring the details.
UptimeRobot Integration with Your Setup
After creating your monitor, UptimeRobot will periodically check the availability of your domain. If your site becomes unavailable, UptimeRobot will notify you according to your notification settings (email, Slack, etc.).
Conclusion
Setting up a production-ready VPS is much easier than it seems, thanks to simple tools like Docker, Traefik, and automatic SSL certificate management with Let's Encrypt. By following these steps, you can deploy a secure, scalable application without the need for complex setups. Whether you're hosting a simple web app or a more complex service, the principles outlined here can be adapted for many use cases.
In this case, HTTP Doom served as a simple example to demonstrate the power of containerization and reverse proxy setups. If you follow this approach, your production VPS setup will be secure, easily maintainable, and scalable.
Source
The content in this guide is based on a combination of various sources and insights, including:
- Dreams of Code YouTube Tutorial: This video helped shape the foundational concepts around setting up a reverse proxy with Traefik and managing containerized services. The guide draws upon techniques discussed in the tutorial to configure services like the reverse proxy, Traefik for load balancing, and automated deployment using Docker and Watchtower. Watch the full video here.
- Traefik Documentation: Key concepts around using Traefik for reverse proxying and automated SSL certificate management with Let's Encrypt were inspired by the official Traefik documentation.
- UptimeRobot: UptimeRobot was used for monitoring the availability of the deployed website. You can set up uptime monitoring using UptimeRobot's free plan to receive notifications when your site goes down. More information can be found at UptimeRobot.
- Docker Hub - mattipaksula/http-doom: The example application http-doom from the Docker Hub repository is used in the configuration as a sample app. For more details, visit the Docker Hub page for the image.
- Watchtower Documentation: For automating deployment with Docker containers, Watchtower allows monitoring and automatic updating of container images. Learn more from the Watchtower GitHub repository.
This setup combines modern DevOps practices to ensure that your application is resilient, scalable, and monitored, providing high availability and seamless deployments.
Disclaimer: this post was written with the help of AI, translating and compiling my notes and making it easier to read... hopefully.