Building a Secure Nginx Reverse Proxy with Docker and Let’s Encrypt


Introduction

In recent years, the rapid growth of AI-powered services—such as ChatGPT, Google Gemini, and Claude—has lowered the barriers for individuals and small teams to develop and publish web applications. While this accessibility is a huge advantage, running applications securely has never been more important. One indispensable tool in achieving a secure environment is the reverse proxy.

What Is a Reverse Proxy?

The flow can be pictured with this Mermaid diagram:

graph LR
    U((User))
    RP[Reverse Proxy<br/>Nginx]
    A[Application<br/>Server]
    
    U -->|HTTPS| RP
    RP -->|HTTP| A
    
    style U fill:#f9f,stroke:#333,stroke-width:2px
    style RP fill:#90EE90,stroke:#333,stroke-width:2px
    style A fill:#87CEEB,stroke:#333,stroke-width:2px
    
    %% Annotations for explanation
    subgraph Internet
        U
    end
    
    subgraph Internal Network
        RP
        A
    end

Think of a reverse proxy as a gatekeeper that stands between your end users and the actual web application. Here’s a simplified scenario:

  • When a user tries to access your site, the reverse proxy is the first point of contact.
  • The proxy can detect suspicious traffic or malicious requests, blocking them before they ever reach the application server.
  • It also manages secure communication by encrypting traffic (HTTPS), which helps protect sensitive data from eavesdropping.

By inserting this “gatekeeper” in front of your application servers, you can significantly reduce security risks and centralize the management of incoming traffic.

Why Do You Need a Reverse Proxy?

1. Enhanced Security

  • Shielding the App Server
    Your application server won’t be directly exposed to the public Internet, limiting potential attack vectors.
  • Blocking Malicious Requests
    Suspicious IP addresses or harmful requests can be filtered out.
  • Encrypted Communication
    HTTPS ensures that data in transit is encrypted, protecting it from interception.

2. Streamlined Operations

  • Single Domain, Multiple Apps
    Manage multiple internal services under a single domain and let the reverse proxy handle all routing.
  • Centralized SSL Certificate Management
    Issue and renew certificates in one place, rather than configuring encryption on each individual service.
  • Unified Access Logs
    Keep a centralized record of access data for easier auditing and troubleshooting.

3. Improved Performance

  • Caching Static Content
    Store frequently requested files to reduce load on the application server.
  • Load Balancing
    Distribute incoming traffic across multiple back-end servers to ensure steady performance.
  • Network Optimization
    Compress and optimize data transfer to enhance page-loading times.

What You’ll Achieve with This Guide

In this guide, you’ll learn how to set up an Nginx reverse proxy equipped with automated SSL certificate renewal, using Docker and Let’s Encrypt. Here’s a quick overview of what you can accomplish:

  • Automatic HTTPS Integration
    Your web application will be served over a secure connection without manual certificate handling.
  • Stronger Security Posture
    Encrypt incoming and outgoing traffic, and gain more control over who can reach your back-end services.
  • Hassle-Free Certificate Renewal
    Certificates will renew automatically, letting you focus on development rather than maintenance tasks.
  • Convenient Docker Deployment
    Spin up or tear down your environment quickly, making this setup ideal for personal projects or small teams.

System Architecture

This project runs both Nginx (as the reverse proxy) and Certbot (for SSL management) in Docker containers, creating a secure, containerized web server environment.

Key Components

  1. Nginx Container (Reverse Proxy)
    • Accepts all external traffic and forwards requests to the internal application server(s).
    • Terminates HTTPS traffic, handling encryption details so your back-end can remain simpler.
    • Uses a lightweight Alpine Linux–based image for efficiency.
  2. Certbot Container (SSL Certificate Management)
    • Obtains free SSL certificates from Let’s Encrypt.
    • Automatically checks for renewal every 12 hours.
    • Applies new certificates to Nginx without manual intervention.

Configuration Files

  • nginx.conf: Core Nginx settings (e.g., worker processes, logging, and global directives).
  • default.conf: Site-specific rules (e.g., SSL configurations, proxy directives).
  • docker-compose.yml: High-level container orchestration settings.
  • init-letsencrypt.sh: Script for initial setup and certificate generation.

Built-In Security Measures

This setup incorporates several critical security practices:

  • Encrypted Connections
    Only TLS 1.2 and 1.3 are enabled, ensuring modern and secure encryption methods.
  • Certificate Automation
    Let’s Encrypt handles certificate issuing and renewals, which are automatically applied to Nginx.
  • Access Controls
    • All HTTP requests are redirected to HTTPS.
    • Proper proxy headers help preserve client IP information and ensure accurate logging.
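As a concrete illustration of the proxy-header point above, here is a hedged sketch of what those directives typically look like in an Nginx server block (the back-end address `app:8080` is a placeholder, not this project's actual upstream):

```nginx
# Sketch: forward the original client details to the back-end
# so application logs show the real visitor IP, not the proxy's.
location / {
    proxy_pass http://app:8080;               # placeholder back-end address
    proxy_set_header Host $host;              # preserve the requested hostname
    proxy_set_header X-Real-IP $remote_addr;  # real client IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;  # was the request http or https?
}
```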

By deploying Nginx as a reverse proxy with automatic SSL certificate management, you can shield your web applications from direct exposure to the Internet while simplifying operations. Whether you’re an individual developer or part of a small team, this Docker-based environment offers a robust, scalable solution for modern web services.

Overview of Key Files

This project is built around several essential files, each playing a specific part in creating an automated, secure web server environment. By working together, these files streamline the process of configuring Nginx, issuing SSL certificates, and managing Docker containers.


Configuration Files

default.conf

The default.conf file is where you set up critical Nginx directives that govern how your site handles secure connections and forwards requests to your back-end. Its main tasks include:

  • Redirecting HTTP to HTTPS
    Ensures users always land on a secure version of your site.
  • Applying SSL Certificates
    Specifies which certificates to serve and how to encrypt traffic.
  • Defining Upstream Servers
    Points incoming requests to your internal application servers or services.
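Putting those three tasks together, a default.conf along these lines is a reasonable sketch (the domain, upstream address, and certificate paths are placeholders following Let's Encrypt's usual layout; the repository's actual file may differ):

```nginx
# Port 80: redirect everything to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Port 443: terminate TLS and proxy to the back-end.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://app:8080;   # placeholder upstream application
    }
}
```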

nginx.conf

Meanwhile, nginx.conf covers the more general “global” settings for your Nginx environment, such as:

  • Logging Format
    Determines how request and error logs are recorded, which is essential for monitoring and troubleshooting.
  • Worker Process Configuration
    Adjusts how many worker processes Nginx spawns to handle traffic efficiently.
  • General System Behavior
    Sets the foundational parameters that affect the entire server’s performance and security posture.
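For orientation, a minimal nginx.conf covering those global concerns might look like this (values are illustrative defaults, not the project's exact settings):

```nginx
user  nginx;
worker_processes  auto;            # one worker per CPU core

events {
    worker_connections  1024;      # max simultaneous connections per worker
}

http {
    # Access-log format: who asked for what, and how it went.
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer"';
    access_log  /var/log/nginx/access.log  main;

    include /etc/nginx/conf.d/*.conf;   # pulls in default.conf
}
```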

Docker-Related Files

docker-compose.yml

This file orchestrates how all containers in your system work together. Here you’ll define:

  • Nginx Container Setup
    Pulls and configures the Nginx image and maps ports for external access.
  • Certbot Container Setup
    Integrates Let’s Encrypt to handle SSL certificate generation and renewal.
  • Certificate Auto-Renewal Process
    Schedules how often to check for expiring certificates and updates them automatically.

By centralizing these components, docker-compose.yml simplifies deployment, making it easy to bring your entire system up or down with a single command.
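A hedged sketch of such a docker-compose.yml is shown below; the service names, volume paths, and the certbot entrypoint loop are assumptions based on the common Nginx-plus-Certbot pattern, not the repository's exact file:

```yaml
services:
  nginx-proxy:
    build: .
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certbot/conf:/etc/letsencrypt   # certificates, shared with certbot
      - ./certbot/www:/var/www/certbot    # ACME challenge files

  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    # Wake every 12 hours and renew any certificate close to expiry.
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
```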

Dockerfile

The Dockerfile is used to customize your Nginx container image, typically starting with a minimal Alpine Linux base. In this file, you’ll:

  • Copy in Required Configuration Files
    Place your default.conf and other custom settings inside the container.
  • Install Any Additional Packages
    Incorporate packages or tools necessary for your specific needs.
  • Optimize the Final Image
    Keep your container image as lightweight as possible, which is good for both security and performance.
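Those three steps translate into a very short Dockerfile; the sketch below assumes the stock `nginx:alpine` base image, and the extra package is purely illustrative:

```dockerfile
FROM nginx:alpine

# Replace the stock site definition with our reverse-proxy rules.
COPY nginx.conf   /etc/nginx/nginx.conf
COPY default.conf /etc/nginx/conf.d/default.conf

# Optional extras, e.g. curl for container health checks.
RUN apk add --no-cache curl
```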

Setup Script

init-letsencrypt.sh

This shell script automates the initial provisioning and configuration steps required for a secure environment, including:

  • Obtaining SSL Certificates
    Fetches certificates from Let’s Encrypt and places them where Nginx can use them.
  • Initializing Nginx Settings
    Makes any necessary changes to your Nginx configuration before the service fully launches.
  • Adjusting Certificate Permissions
    Ensures the certificates have the right access privileges, so that Nginx and Certbot can operate smoothly.

All of these tasks happen behind the scenes, reducing the risk of manual errors and speeding up the overall setup process.


Further Resources

If you’d like to review the specific contents of these files, check out the project’s GitHub repository for the most up-to-date code. You can also watch the upcoming tutorial video for a step-by-step demonstration of how everything fits together.


Setup Steps

Now let’s walk through the actual environment deployment. Even if you’re new to Docker or Nginx, you can follow these steps without worry.

Prerequisites

Before you begin, make sure you have:

  1. A Server with Docker Installed
    Any cloud or on-premise machine running Docker should work.
  2. A Registered Domain
    Use a custom domain that suits your project.
  3. Correct DNS Settings
    The domain’s DNS must point to your server’s IP address to validate certificates properly.

Step-by-Step Overview

Setting up this environment can be broken down into two main phases:

  1. Testing Mode
    • Use Let’s Encrypt’s staging environment to verify your configuration.
    • This helps you avoid hitting rate limits if you need to fix any mistakes in your setup.
  2. Production Deployment
    • Once everything checks out, switch from staging to production.
    • Acquire the official certificates and enable the full HTTPS environment.

By following this incremental process—especially testing first—you can confirm your configuration is solid without risking rate-limit blocks from Let’s Encrypt. After you’ve verified that everything is running smoothly, you’ll have a fully secure setup ready for public access.

Concrete Setup Steps

1. Clone the Repository

Begin by downloading the project files to your server or development machine:

git clone https://github.com/superdoccimo/rev.git
cd rev

This gives you access to all necessary Docker and Nginx configurations as well as the initialization scripts.

2. Update Initial Settings

Open the init-letsencrypt.sh script in a text editor and modify the following variables:

domains=(your-domain.com)   # e.g., example.com
email="admin@example.com"   # Replace with your email address
staging=1                   # 1 = Test mode

  • domains: Use the domain you’ve pointed to your server.
  • email: Important for receiving notifications about certificate renewals or potential issues.
  • staging: Start with test mode to avoid hitting Let’s Encrypt’s rate limits during initial setup.
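To make the staging flag less mysterious, here is a hedged sketch of the kind of branching such a script performs (certbot's `--staging` option is real; the variable names mirror the script, but the exact logic inside init-letsencrypt.sh is an assumption):

```shell
#!/bin/sh
# Sketch: choose certbot arguments based on the staging flag.
staging=1                     # 1 = test mode, 0 = production

if [ "$staging" != "0" ]; then
  staging_arg="--staging"     # use Let's Encrypt's staging CA (generous rate limits)
else
  staging_arg=""              # production CA issues browser-trusted certificates
fi

# Print the resulting command instead of running it, for illustration.
echo "certbot certonly --webroot $staging_arg -d your-domain.com"
```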

3. Run in Test Mode

Make the script executable and then launch it:

chmod +x init-letsencrypt.sh
sudo ./init-letsencrypt.sh

Key checks:

  1. Ensure no error messages appear in the terminal.
  2. Visit the domain in your web browser to confirm it’s loading. In test mode, you may see a certificate warning; that’s normal at this stage.

4. Switch to Production

Once you confirm everything works in test mode, switch to production. Edit init-letsencrypt.sh again and set:

staging=0

Then run the script once more:

sudo ./init-letsencrypt.sh

This time, Let’s Encrypt will issue a valid SSL certificate for public use. Your site should now load securely without any certificate warnings.


Troubleshooting

If something goes wrong, try the following:

  1. Check Nginx Logs
    docker compose logs nginx-proxy
    Look for clues about misconfiguration or networking issues.
  2. Common Pitfalls
    • Is your domain’s DNS correctly pointing to the server’s IP address?
    • Are ports 80 (HTTP) and 443 (HTTPS) open and not blocked by a firewall?
    • Have you configured any cloud provider’s firewall or security group rules to allow inbound traffic on these ports?

For more detailed instructions and advanced examples, consult the project’s GitHub README or check out the tutorial video linked within that repository.


Customization Options

After the basic setup is complete, you can personalize various parts of your environment based on your application’s needs.

1. Back-End Server Configuration

Within default.conf, you can point to the actual application server or services you want to protect:

location / {
    proxy_pass http://10.0.0.37:8080;  # Adjust this line
}

  • Change the IP address and port number to match your back-end service.
  • If you have multiple services, add additional location blocks for each one.

2. SSL/TLS Settings

Tailor your TLS settings to meet specific security policies:

ssl_protocols TLSv1.2 TLSv1.3;    # Which TLS versions to support
ssl_session_timeout 1d;           # Session validity
ssl_session_cache shared:SSL:10m; # Cache size

  • You can also update your cipher suites or enable HTTP/2 for better performance.
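Those two ideas combined might look like the following sketch; the cipher list is illustrative rather than a vetted security policy, and note that the standalone `http2` directive requires Nginx 1.25.1 or newer:

```nginx
listen 443 ssl;
http2 on;                        # on Nginx < 1.25.1, use "listen 443 ssl http2;" instead

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;   # let modern clients choose the cipher
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
```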

3. Certificate Renewal Frequency

In docker-compose.yml, locate the Certbot service section to adjust how often the renewal loop checks for expiring certificates:

sleep 12h

If you want more frequent checks (e.g., every 6 hours), tweak the value accordingly.
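That `sleep` value typically sits inside the Certbot service's entrypoint loop; a hedged sketch of the 6-hour variant (the surrounding entrypoint in the repository may be written differently):

```yaml
certbot:
  image: certbot/certbot
  # Check for expiring certificates every 6 hours instead of 12.
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 6h & wait $${!}; done;'"
```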

4. Additional Customization Ideas

  • Logging
    • Change where access and error logs are stored.
    • Implement log rotation to prevent disk space from filling up.
  • Resource Limits
    • Set container memory or CPU constraints in the Docker Compose file.
    • Tune Nginx worker processes for high-traffic scenarios.
  • Ads.txt and Access Control
    • Place an ads.txt file at the root if you run ads.
    • Use IP whitelisting or basic auth to restrict sensitive endpoints.
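The whitelisting and basic-auth ideas combine in Nginx roughly as follows (a sketch: the path, network range, and credentials file are placeholders, and the `.htpasswd` file would be created separately with a tool such as `htpasswd`):

```nginx
# Restrict an admin area: allow trusted-LAN visitors straight through,
# and ask everyone else for a password.
location /admin {
    satisfy any;                      # pass if EITHER check succeeds

    allow 192.168.0.0/24;             # trusted LAN (placeholder range)
    deny  all;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_pass http://app:8080;       # placeholder back-end address
}
```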

Always test any changes before going live. For instance:

docker compose exec nginx-proxy nginx -t

This command checks whether your Nginx configuration is valid. If no errors are reported, you can reload or restart the service without worry.

By fine-tuning each component step by step, you’ll end up with a robust, secure, and versatile reverse proxy solution to fit your specific needs.

Managing Multiple Sites

One of the most powerful features of a reverse proxy is its ability to effortlessly handle multiple websites or services. Here’s a hypothetical scenario to illustrate how you might use it:

  • A blog (e.g., WordPress) running on port 8080
  • A portfolio site running on port 8888

Without a reverse proxy, your URLs might look like this:

http://example.com:8080 (Blog)
http://example.com:8888 (Portfolio)

However, by placing Nginx in front of these applications, you can transform those addresses into more user-friendly URLs:

https://blog.example.com
https://portfolio.example.com

or, if you prefer subfolders instead of subdomains:

https://example.com/blog
https://example.com/portfolio

Sample Configuration

Below are two different approaches to routing traffic to separate back-end services.

Using Subdomains

server {
    server_name blog.example.com;
    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    server_name portfolio.example.com;
    location / {
        proxy_pass http://localhost:8888;
    }
}

In this setup, each domain (blog.example.com and portfolio.example.com) points to a different service. Users don’t need to deal with unusual port numbers, and you can assign distinct SSL certificates for each subdomain.

Using Paths

server {
    server_name example.com;
    
    location /blog {
        proxy_pass http://localhost:8080;
    }
    
    location /portfolio {
        proxy_pass http://localhost:8888;
    }
}

With this method, everything is accessed through a single domain (example.com), and each service resides under a dedicated path (e.g., /blog or /portfolio). This option simplifies your domain management since you only need one SSL certificate and one DNS entry.
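One subtlety with the path-based approach: as written above, the /blog prefix is forwarded to the back-end, which must therefore expect URLs like /blog/post-1. If the back-end instead serves from its own root, a trailing slash on both directives strips the prefix — a standard Nginx behavior, though you should verify it against your application's URL handling:

```nginx
location /blog/ {
    # The trailing slash on proxy_pass replaces "/blog/" with "/",
    # so the back-end sees /post-1 rather than /blog/post-1.
    proxy_pass http://localhost:8080/;
}
```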

The subdomain-based setup can be pictured with this Mermaid diagram:

graph LR
    U((Internet))
    
    subgraph PC
        subgraph "Nginx Container"
            N[Reverse Proxy<br/>Nginx :443]
        end
        
        subgraph "Local Services"
            B[Blog<br/>:8080]
            P[Portfolio<br/>:8888]
        end
    end
    
    U -->|"https://blog.example.com"| N
    U -->|"https://portfolio.example.com"| N
    N -->|"proxy_pass<br/>localhost:8080"| B
    N -->|"proxy_pass<br/>localhost:8888"| P
    
    style U fill:#f9f,stroke:#333,stroke-width:2px
    style N fill:#90EE90,stroke:#333,stroke-width:2px
    style B fill:#87CEEB,stroke:#333,stroke-width:2px
    style P fill:#FFB6C1,stroke:#333,stroke-width:2px

Practical Use Case

In a real-world network, you might have different servers on the same LAN handling various tasks. Your reverse proxy can forward requests to each internal server based on subdomain or path:

graph TB
    U((Internet))
    
    subgraph "Internal LAN (192.168.0.0/24)"
        subgraph "Proxy Server (192.168.0.10)"
            N[Reverse Proxy<br/>Nginx :443]
        end
        
        WP["WordPress Server<br/>(192.168.0.20:8080)"]
        APP["Application Server<br/>(192.168.0.30:3000)"]
        DB["Database Server<br/>(192.168.0.40:5432)"]
    end
    
    U -->|"https://blog.example.com"| N
    U -->|"https://app.example.com"| N
    N -->|"proxy_pass<br/>192.168.0.20:8080"| WP
    N -->|"proxy_pass<br/>192.168.0.30:3000"| APP
    APP -.->|"internal<br/>communication"| DB
    
    style U fill:#f9f,stroke:#333,stroke-width:2px
    style N fill:#90EE90,stroke:#333,stroke-width:2px
    style WP fill:#87CEEB,stroke:#333,stroke-width:2px
    style APP fill:#FFB6C1,stroke:#333,stroke-width:2px
    style DB fill:#DDA0DD,stroke:#333,stroke-width:2px

  • WordPress Server (LAN IP 192.168.0.20, listening on port 8080)
  • Application Server (LAN IP 192.168.0.30, listening on port 3000)

By mapping subdomains or paths to these internal IP addresses and ports, you maintain a clean and professional URL structure while keeping your back-end infrastructure hidden. You can further refine this approach to apply unique security policies, caching rules, or even load balancing across multiple instances of each site.

This flexibility makes it easy to expand or alter your services without ever exposing their real ports or IP addresses directly to the public Internet.


Tip: If you plan to set up distinct SSL certificates for subdomains, ensure that you have corresponding DNS entries (e.g., blog and portfolio CNAME records) pointing to the same IP address that hosts the reverse proxy.
