
      How To Use Traefik v2 as a Reverse Proxy for Docker Containers on Ubuntu 20.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy. This is because you only want to expose ports 80 and 443 to the rest of the world.

      Traefik is a Docker-aware reverse proxy that includes a monitoring dashboard. Traefik v1 has been widely used for a while, and you can follow this earlier tutorial to install Traefik v1. But in this tutorial, you’ll install and configure Traefik v2, which includes quite a few differences.

      The biggest difference between Traefik v1 and v2 is that frontends and backends were removed and their combined functionality spread out across routers, middlewares, and services. Previously a backend did the job of making modifications to requests and getting that request to whatever was supposed to handle it. Traefik v2 provides more separation of concerns by introducing middlewares that can modify requests before sending them to a service. Middlewares make it easier to specify a single modification step that might be used by a lot of different routes so that they can be reused (such as HTTP Basic Auth, which you’ll see later). A router can also use many different middlewares.

      In this tutorial you’ll configure Traefik v2 to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

      Prerequisites

      To complete this tutorial, you will need the following:

      Step 1 — Configuring and Running Traefik

      The Traefik project has an official Docker image, so you will use that to run Traefik in a Docker container.

      But before you get your Traefik container up and running, you need to create a configuration file and set up an encrypted password so you can access the monitoring dashboard.

      You’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

      • sudo apt-get install apache2-utils

      Then generate the password with htpasswd. Substitute secure_password with the password you’d like to use for the Traefik admin user:

      • htpasswd -nb admin secure_password

      The output from the program will look like this:

      Output

      admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

      You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.
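
      Under the hood, the basicAuth setup you’ll configure later checks the standard HTTP Authorization header that a client sends. As a quick illustration (using the placeholder credentials from this step), here is how that header is built:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header a client sends for HTTP Basic Auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# Placeholder credentials from this tutorial:
print(basic_auth_header("admin", "secure_password"))
```

      The server side stores only the hash produced by htpasswd; Traefik decodes this header and verifies the password against that hash on each request.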

      To configure the Traefik server, you’ll create two new configuration files called traefik.toml and traefik_dynamic.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. These files let you configure the Traefik server and the various integrations, or providers, that you want to use. In this tutorial, you will use three of Traefik’s available providers: api, docker, and acme. The last of these, acme, supports TLS certificates using Let’s Encrypt.

      Create and open traefik.toml using nano or your preferred text editor:

      • nano traefik.toml

      First, you want to specify the ports that Traefik should listen on using the entryPoints section of your config file. You want two entry points because you want to listen on ports 80 and 443. Let’s call these web (port 80) and websecure (port 443).

      Add the following configurations:

      traefik.toml

      [entryPoints]
        [entryPoints.web]
          address = ":80"
          [entryPoints.web.http.redirections.entryPoint]
            to = "websecure"
            scheme = "https"
      
        [entryPoints.websecure]
          address = ":443"
      

      Note that you are also automatically redirecting traffic to be handled over TLS.
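
      The redirection block effectively rewrites the scheme of every incoming URL before the request is served. A minimal sketch of that rewrite (the helper name here is illustrative, not part of Traefik):

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, keeping host, path, and query,
    mirroring what the web -> websecure entry-point redirection does."""
    parts = urlsplit(url)
    return urlunsplit(("https", parts.netloc, parts.path, parts.query, parts.fragment))

print(redirect_to_https("http://blog.your_domain/wp-admin?x=1"))
# -> https://blog.your_domain/wp-admin?x=1
```

      In practice Traefik answers the plain-HTTP request with a redirect status code and a Location header pointing at the rewritten URL, and the browser retries over TLS on port 443.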

      Next, configure the Traefik api, which gives you access to both the API and your dashboard interface. The [api] heading on its own would be enough, because the dashboard is enabled by default, but you’ll enable it explicitly for the time being.

      Add the following code:

      traefik.toml

      ...
      [api]
        dashboard = true
      

      To finish securing your web requests you want to use Let’s Encrypt to generate valid TLS certificates. Traefik v2 supports Let’s Encrypt out of the box and you can configure it by creating a certificates resolver of the type acme.

      Let’s configure your certificates resolver now using the name lets-encrypt:

      traefik.toml

      ...
      [certificatesResolvers.lets-encrypt.acme]
        email = "your_email@your_domain"
        storage = "acme.json"
        [certificatesResolvers.lets-encrypt.acme.tlsChallenge]
      

      This section is called acme because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so to have Traefik generate certificates for your hosts, set the email key to your email address. You then specify that you will store the information that you will receive from Let’s Encrypt in a JSON file called acme.json.

      The acme.tlsChallenge section allows you to specify how Let’s Encrypt can verify that the certificate should be issued. You’re configuring it to complete the challenge during the TLS handshake over port 443.

      Finally, you need to configure Traefik to work with Docker.

      Add the following configurations:

      traefik.toml

      ...
      [providers.docker]
        watch = true
        network = "web"
      

      The docker provider enables Traefik to act as a proxy in front of Docker containers. You’ve configured the provider to watch for new containers on the web network, which you’ll create soon.

      Our final configuration uses the file provider. With Traefik v2, static and dynamic configurations can’t be mixed and matched. To get around this, you will use traefik.toml to define your static configurations and then keep your dynamic configurations in another file, which you will call traefik_dynamic.toml. Here you are using the file provider to tell Traefik that it should read in dynamic configurations from a different file.

      Add the following file provider:

      traefik.toml

      ...
      [providers.file]
        filename = "traefik_dynamic.toml"

      Your completed traefik.toml will look like this:

      traefik.toml

      [entryPoints]
        [entryPoints.web]
          address = ":80"
          [entryPoints.web.http.redirections.entryPoint]
            to = "websecure"
            scheme = "https"
      
        [entryPoints.websecure]
          address = ":443"
      
      [api]
        dashboard = true
      
      [certificatesResolvers.lets-encrypt.acme]
        email = "your_email@your_domain"
        storage = "acme.json"
        [certificatesResolvers.lets-encrypt.acme.tlsChallenge]
      
      [providers.docker]
        watch = true
        network = "web"
      
      [providers.file]
        filename = "traefik_dynamic.toml"
      

      Save and close the file.

      Now let’s create traefik_dynamic.toml.

      The dynamic configuration values that you need to keep in their own file are the middlewares and the routers. To put your dashboard behind a password you need to customize the API’s router and configure a middleware to handle HTTP basic authentication. Let’s start by setting up the middleware.

      The middleware is configured on a per-protocol basis and since you’re working with HTTP you’ll specify it as a section chained off of http.middlewares. Next comes the name of your middleware so that you can reference it later, followed by the type of middleware that it is, which will be basicAuth in this case. Let’s call your middleware simpleAuth.

      Create and open a new file called traefik_dynamic.toml:

      • nano traefik_dynamic.toml

      Add the following code. This is where you’ll paste the output from the htpasswd command:

      traefik_dynamic.toml

      [http.middlewares.simpleAuth.basicAuth]
        users = [
          "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
        ]
      

      To configure the router for the api, you’ll once again chain off of the protocol name, but instead of http.middlewares, you’ll use http.routers followed by the name of the router. In this case, the api provides its own named router that you can configure using the [http.routers.api] section. You’ll also configure the domain that you plan on using with your dashboard by setting the rule key to a host match, set the entrypoint to websecure, and include simpleAuth in the middlewares.

      Add the following configurations:

      traefik_dynamic.toml

      ...
      [http.routers.api]
        rule = "Host(`your_domain`)"
        entrypoints = ["websecure"]
        middlewares = ["simpleAuth"]
        service = "api@internal"
        [http.routers.api.tls]
          certResolver = "lets-encrypt"
      

      The web entry point handles port 80, while the websecure entry point uses port 443 for TLS/SSL. You automatically redirect all of the traffic on port 80 to the websecure entry point to force secure connections for all requests.

      Notice the last three lines here configure a service, enable tls, and configure certResolver to "lets-encrypt". Services are the final step to determining where a request is finally handled. The api@internal service is a built-in service that sits behind the API that you expose. Just like routers and middlewares, services can be configured in this file, but you won’t need to do that to achieve your desired result.

      Your completed traefik_dynamic.toml file will look like this:

      traefik_dynamic.toml

      [http.middlewares.simpleAuth.basicAuth]
        users = [
          "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
        ]
      
      [http.routers.api]
        rule = "Host(`your_domain`)"
        entrypoints = ["websecure"]
        middlewares = ["simpleAuth"]
        service = "api@internal"
        [http.routers.api.tls]
          certResolver = "lets-encrypt"
      

      Save the file and exit the editor.

      With these configurations in place, you will now start Traefik.

      Step 2 — Running the Traefik Container

      In this step you will create a Docker network for the proxy to share with containers. You will then access the Traefik dashboard. The Docker network is necessary so that you can use it with applications that are run using Docker Compose.

      Create a new Docker network called web:

      • docker network create web

      When the Traefik container starts, you will add it to this network. Then you can add additional containers to this network later for Traefik to proxy to.

      Next, create an empty file that will hold your Let’s Encrypt information. You’ll share this into the container so Traefik can use it:

      • touch acme.json

      Traefik will only be able to use this file if the root user inside of the container has exclusive read and write access to it. To do this, lock down the permissions on acme.json so that only the owner of the file has read and write permission:

      • chmod 600 acme.json
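
      If you prefer to script this setup, the same permission change can be expressed with Python’s os.chmod; the temporary path below is only for illustration:

```python
import os
import stat
import tempfile

# Create an empty acme.json-style file and restrict it to owner read/write,
# the equivalent of `chmod 600 acme.json`.
path = os.path.join(tempfile.mkdtemp(), "acme.json")
open(path, "w").close()
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600: owner read/write, no group or other access
```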

      Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.

      Finally, create the Traefik container with this command:

      • docker run -d \
      •   -v /var/run/docker.sock:/var/run/docker.sock \
      •   -v $PWD/traefik.toml:/traefik.toml \
      •   -v $PWD/traefik_dynamic.toml:/traefik_dynamic.toml \
      •   -v $PWD/acme.json:/acme.json \
      •   -p 80:80 \
      •   -p 443:443 \
      •   --network web \
      •   --name traefik \
      •   traefik:v2.2

      This command is a little long. Let’s break it down.

      You use the -d flag to run the container in the background as a daemon. You then share your docker.sock file into the container so that the Traefik process can listen for changes to containers. You also share the traefik.toml and traefik_dynamic.toml configuration files into the container, as well as acme.json.

      Next, you map ports :80 and :443 of your Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

      You set the network of the container to web, and you name the container traefik.

      Finally, you use the traefik:v2.2 image for this container to guarantee that you’re not running a different version than the one this tutorial was written for.

      A Docker image’s ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but you’ve configured all of your settings in the traefik.toml file.

      With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the routers, services, and middlewares that Traefik has registered. You can try to access the monitoring dashboard by pointing your browser to https://monitor.your_domain/dashboard/ (the trailing / is required).

      You will be prompted for your username and password, which are admin and the password you configured in Step 1.

      Once logged in, you’ll see the Traefik interface:

      Empty Traefik dashboard

      You will notice that there are already some routers and services registered, but those are the ones that come with Traefik and the router configuration that you wrote for the API.

      You now have your Traefik proxy running, and you’ve configured it to work with Docker and monitor other containers. In the next step you will start some containers for Traefik to proxy.

      Step 3 — Registering Containers with Traefik

      With the Traefik container running, you’re ready to run applications behind it. Let’s launch the following containers behind Traefik:

      1. A blog using the official WordPress image.
      2. A database management server using the official Adminer image.

      You’ll manage both of these applications with Docker Compose using a docker-compose.yml file.

      Create and open the docker-compose.yml file in your editor:

      • nano docker-compose.yml

      Add the following lines to the file to specify the version and the networks you’ll use:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      

      You use Docker Compose version 3 because it’s the newest major version of the Compose file format.

      For Traefik to recognize your applications, they must be part of the same network, and since you created the network manually, you pull it in by specifying the network name of web and setting external to true. Then you define another network so that you can connect your exposed containers to a database container that you won’t expose through Traefik. You’ll call this network internal.

      Next, you’ll define each of your services, one at a time. Let’s start with the blog container, which you’ll base on the official WordPress image. Add this configuration to the bottom of the file:

      docker-compose.yml

      ...
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
      - traefik.http.routers.blog.tls=true
      - traefik.http.routers.blog.tls.certresolver=lets-encrypt
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, you’re telling Docker Compose to get the value from your shell and pass it through when you create the container. You will define this environment variable in your shell before starting the containers. This way you don’t hard-code passwords into the configuration file.
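
      As a sketch of this pass-through behavior (the dictionary names below are illustrative, not part of Docker Compose itself):

```python
import os

# A key with no value under `environment:` in docker-compose.yml tells
# Compose to copy the value from the shell that runs `docker-compose up`.
os.environ["WORDPRESS_DB_PASSWORD"] = "secure_database_password"  # placeholder

declared = {"WORDPRESS_DB_PASSWORD": None}  # None = "no value" in the YAML
container_env = {
    key: val if val is not None else os.environ.get(key, "")
    for key, val in declared.items()
}
print(container_env["WORDPRESS_DB_PASSWORD"])
```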

      The labels section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads these so it knows how to treat containers. Here’s what each of these labels does:

      • traefik.http.routers.blog.rule=Host(`blog.your_domain`) creates a new router for your container and then specifies the routing rule used to determine if a request matches this container.
      • traefik.http.routers.blog.tls=true specifies that this router should use TLS.
      • traefik.http.routers.blog.tls.certresolver=lets-encrypt specifies that the certificates resolver that you created earlier, called lets-encrypt, should be used to get a certificate for this route.
      • traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

      With this configuration, all traffic sent to your Docker host on port 80 or 443 with the domain of blog.your_domain will be routed to the blog container.
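
      To make the routing rule concrete, here is a toy matcher (not Traefik’s actual implementation) showing how a Host(...) rule selects requests by hostname:

```python
import re

def matches_host_rule(rule: str, request_host: str) -> bool:
    """Toy illustration of a Traefik Host(...) router rule: extract the
    backticked hostname and compare it to the request's Host header."""
    m = re.fullmatch(r"Host\(`([^`]+)`\)", rule)
    return bool(m) and request_host.lower() == m.group(1).lower()

rule = "Host(`blog.your_domain`)"
print(matches_host_rule(rule, "blog.your_domain"))      # matches the blog router
print(matches_host_rule(rule, "db-admin.your_domain"))  # falls through to other routers
```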

      You assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

      Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, you must run your mysql container before starting your blog container.

      Next, configure the MySQL service:

      docker-compose.yml

      services:
      ...
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      

      You’re using the official MySQL 5.7 image for this container. You’ll notice that you’re once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables will need to be set to the same value to make sure that your WordPress container can communicate with MySQL. You don’t want to expose the mysql container to Traefik or the outside world, so you’re only assigning this container to the internal network. Since Traefik has access to the Docker socket, it will still expose a router for the mysql container by default, so you add the label traefik.enable=false to specify that Traefik should not expose this container.

      Finally, define the Adminer container:

      docker-compose.yml

      services:
      ...
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
            - traefik.http.routers.adminer.tls=true
            - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      This container is based on the official Adminer image. The network and depends_on configuration for this container exactly match what you’re using for the blog container.

      The line traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`) tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container over port 8080.

      Your completed docker-compose.yml file will look like this:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
            - traefik.http.routers.blog.tls=true
            - traefik.http.routers.blog.tls.certresolver=lets-encrypt
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      
        adminer:
          image: adminer:4.6.3-standalone
      labels:
            - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
            - traefik.http.routers.adminer.tls=true
            - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      Save the file and exit the text editor.

      Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables:

      • export WORDPRESS_DB_PASSWORD=secure_database_password
      • export MYSQL_ROOT_PASSWORD=secure_database_password

      Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.

      With these variables set, run the containers using docker-compose:

      • docker-compose up -d

      Now watch the Traefik admin dashboard while it populates.

      Populated Traefik dashboard

      If you explore the Routers section you will find routers for adminer and blog configured with TLS:

      HTTP Routers w/ TLS

      Navigate to blog.your_domain, substituting your_domain with your domain. You’ll be redirected to a TLS connection and you can now complete the WordPress setup:

      WordPress setup screen

      Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn’t exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a hostname.

      On the Adminer login screen, enter root for Username, enter mysql for Server, and enter the value you set for MYSQL_ROOT_PASSWORD for the Password. Leave Database empty. Now press Login.

      Once logged in, you’ll see the Adminer user interface.

      Adminer connected to the MySQL database

      Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

      Conclusion

      In this tutorial, you configured Traefik v2 to proxy requests to other applications in Docker containers.

      Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik container when you add new applications to proxy traffic to since Traefik notices the changes immediately through the Docker socket file it’s monitoring.

      To learn more about what you can do with Traefik v2, head over to the official Traefik documentation.



      Source link

      How To Use Traefik as a Reverse Proxy for Docker Containers on Ubuntu 20.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy since you only want to expose ports 80 and 443 to the rest of the world.

      Traefik is a Docker-aware reverse proxy that includes its own monitoring dashboard. In this tutorial, you’ll use Traefik to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

      Prerequisites

      To follow this tutorial, you will need the following:

      Step 1 — Configuring and Running Traefik

      The Traefik project has an official Docker image, so we will use that to run Traefik in a Docker container.

      But before we get our Traefik container up and running, we need to create a configuration file and set up an encrypted password so we can access the monitoring dashboard.

      We’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

      • sudo apt-get install apache2-utils

      Then generate the password with htpasswd. Substitute secure_password with the password you’d like to use for the Traefik admin user:

      • htpasswd -nb admin secure_password

      The output from the program will look like this:

      Output

      admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

      You’ll use your unique output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy your entire output line so you can paste it later. Do not use the example output.

      To configure the Traefik server, we’ll create a new configuration file called traefik.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. This file lets us configure the Traefik server and various integrations, or providers, that we want to use. In this tutorial, we will use three of Traefik’s available providers: api, docker, and acme. The last of these, acme supports TLS certificates using Let’s Encrypt.

      Open up your new file in nano or your favorite text editor:

      First, add two named entry points, http and https, which all backends will have access to by default:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      

      We’ll configure the http and https entry points later in this file.

      Next, configure the api provider, which gives you access to a dashboard interface. This is where you’ll paste the output from the htpasswd command:

      traefik.toml

      ...
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:your_encrypted_password"]
      
      [api]
      entrypoint="dashboard"
      

      The dashboard is a separate web application that will run within the Traefik container. We set the dashboard to run on port 8080.

      The entrypoints.dashboard section configures how we’ll be connecting with the api provider, and the entrypoints.dashboard.auth.basic section configures HTTP Basic Authentication for the dashboard. Use the output from the htpasswd command you just ran for the value of the users entry. You could specify additional logins by separating them with commas.

      We’ve defined our first entryPoint, but we’ll need to define others for standard HTTP and HTTPS communication that isn’t directed towards the api provider. The entryPoints section configures the addresses that Traefik and the proxied containers can listen on. Add these lines to the file underneath the entryPoints heading:

      traefik.toml

      ...
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      ...
      

      The http entry point handles port 80, while the https entry point uses port 443 for TLS/SSL. We automatically redirect all of the traffic on port 80 to the https entry point to force secure connections for all requests.

      Next, add this section to configure Let’s Encrypt certificate support for Traefik:

      traefik.toml

      ...
      [acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      

      This section is called acme because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so in order to have Traefik generate certificates for our hosts, set the email key to your email address. We then specify that we will store the information that we will receive from Let’s Encrypt in a JSON file called acme.json. The entryPoint key needs to point to the entry point handling port 443, which in our case is the https entry point.

      The key onHostRule dictates how Traefik should go about generating certificates. We want to fetch our certificates as soon as our containers with specified hostnames are created, and that’s what the onHostRule setting will do.

      The acme.httpChallenge section allows us to specify how Let’s Encrypt can verify that the certificate should be generated. We’re configuring it to serve a file as part of the challenge through the http entrypoint.

      Finally, let’s configure the docker provider by adding these lines to the file:

      traefik.toml

      ...
      [docker]
      domain = "your_domain"
      watch = true
      network = "web"
      

      The docker provider enables Traefik to act as a proxy in front of Docker containers. We’ve configured the provider to watch for new containers on the web network, which we’ll create soon, and expose them as subdomains of your_domain.

      At this point, traefik.toml should have the following contents:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:your_encrypted_password"]
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      
      [api]
      entrypoint="dashboard"
      
      [acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      
      [docker]
      domain = "your_domain"
      watch = true
      network = "web"
      

      Save the file and exit the editor. With these configurations in place, we can initialize Traefik.

      Step 2 — Running the Traefik Container

      Next, create a Docker network for the proxy to share with containers. The Docker network is necessary so that we can use it with applications that are run using Docker Compose. Let’s call this network web:

      • docker network create web

      When the Traefik container starts, we will add it to this network. Then we can add additional containers to this network later for Traefik to proxy to.

      Next, create an empty file that will hold our Let’s Encrypt information. We’ll share this into the container so Traefik can use it:

      Traefik will only be able to use this file if the root user inside of the container has unique read and write access to it. To do this, lock down the permissions on acme.json so that only the owner of the file has read and write permission:

      Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.

      Finally, create the Traefik container with this command:

      • docker run -d
      • -v /var/run/docker.sock:/var/run/docker.sock
      • -v $PWD/traefik.toml:/traefik.toml
      • -v $PWD/acme.json:/acme.json
      • -p 80:80
      • -p 443:443
      • -l traefik.frontend.rule=Host:monitor.your_domain
      • -l traefik.port=8080
      • --network web
      • --name traefik
      • traefik:1.7-alpine

      The command is a little long so let’s break it down.

      We use the -d flag to run the container in the background as a daemon. We then share our docker.sock file into the container so that the Traefik process can listen for changes to containers. We also share the traefik.toml configuration file and the acme.json file we created into the container.

      Next, we map ports :80 and :443 of our Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

      Then we set up two Docker labels that tell Traefik to direct traffic for the hostname monitor.your_domain to port :8080 within the Traefik container, which will expose the monitoring dashboard.

      We set the network of the container to web, and we name the container traefik.

      Finally, we use the traefik:1.7-alpine image for this container, because it’s small.

      A Docker image’s ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but we’ve configured all of our settings in the traefik.toml file.

      With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the frontends and backends that Traefik has registered. Access the monitoring dashboard by pointing your browser to https://monitor.your_domain. You will be prompted for your username and password, which are admin and the password you configured in Step 1.

      Once logged in, you’ll see an interface similar to this:

      Empty Traefik dashboard

      There isn’t much to see just yet, but leave this window open, and you will see the contents change as you add containers for Traefik to manage.

      We now have our Traefik proxy running, configured to work with Docker, and ready to monitor other Docker containers. Let’s add some containers for Traefik to proxy.

      Step 3 — Registering Containers with Traefik

      With the Traefik container running, you’re ready to run applications behind it. Let’s launch the following containers behind Traefik:

      1. A blog using the official WordPress image.
      2. A database management server using the official Adminer image.

      We’ll manage both of these applications with Docker Compose using a docker-compose.yml file.

      Create and open the docker-compose.yml file in your editor:

      Add the following lines to the file to specify the version and the networks we’ll use:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      

      We use Docker Compose version 3 because it’s the newest major version of the Compose file format.

      For Traefik to recognize our applications, they must be part of the same network, and since we created the network manually, we pull it in by specifying the network name of web and setting external to true. Then we define another network so that we can connect our exposed containers to a database container that we won’t expose through Traefik. We’ll call this network internal.

      Next, we’ll define each of our services, one at a time. Let’s start with the blog container, which we’ll base on the official WordPress image. Add this configuration to the bottom of your file:

      docker-compose.yml

      ...
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.your_domain
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, we’re telling Docker Compose to get the value from our shell and pass it through when we create the container. We will define this environment variable in our shell before starting the containers. This way we don’t hard-code passwords into the configuration file.
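      The passthrough mechanics can be sketched in plain shell: a variable exported in the calling shell is what Compose substitutes, while an unexported one yields an empty value (secure_database_password is a placeholder):

```shell
#!/bin/sh
# Simulate Compose's environment passthrough: an env key with no value in
# docker-compose.yml is filled from the shell that runs docker-compose.
unset WORDPRESS_DB_PASSWORD
echo "without export: '${WORDPRESS_DB_PASSWORD:-}'"

export WORDPRESS_DB_PASSWORD=secure_database_password
echo "with export:    '${WORDPRESS_DB_PASSWORD}'"
```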

      The labels section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads these so it knows how to treat containers. Here’s what each of these labels does:

      • traefik.backend specifies the name of the backend service in Traefik (which points to the actual blog container).
      • traefik.frontend.rule=Host:blog.your_domain tells Traefik to examine the host requested and if it matches the pattern of blog.your_domain it should route the traffic to the blog container.
      • traefik.docker.network=web specifies which network to look under for Traefik to find the internal IP for this container. Since our Traefik container has access to all of the Docker info, it would potentially take the IP for the internal network if we didn’t specify this.
      • traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

      With this configuration, traffic that reaches our Docker host’s port 80 with the hostname blog.your_domain will be routed to the blog container.

      We assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

      Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, we must run our mysql container before starting our blog container.
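      Note that depends_on only controls start order; it does not wait for MySQL to be ready to accept connections. If you want Compose to track database readiness, you can add a healthcheck to the mysql service. A sketch, with illustrative interval values:

```yaml
  mysql:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
```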

      Next, configure the MySQL service by adding this configuration to the bottom of your file:

      docker-compose.yml

      ...
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      

      We’re using the official MySQL 5.7 image for this container. You’ll notice that we’re once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables will need to be set to the same value to make sure that our WordPress container can communicate with MySQL. We don’t want to expose the mysql container to Traefik or the outside world, so we’re only assigning this container to the internal network. Since Traefik has access to the Docker socket, the process will still expose a frontend for the mysql container by default, so we’ll add the label traefik.enable=false to specify that Traefik should not expose this container.

      Finally, add this configuration to the bottom of your file to define the Adminer container:

      docker-compose.yml

      ...
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.your_domain
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      This container is based on the official Adminer image. The network and depends_on configuration for this container exactly match what we’re using for the blog container.

      However, since we’re directing all of the traffic to port 80 on our Docker host directly to the blog container, we need to configure this container differently in order for traffic to make it to our adminer container. The line traefik.frontend.rule=Host:db-admin.your_domain tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container.

      At this point, docker-compose.yml should have the following contents:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.your_domain
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.your_domain
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      Save the file and exit the text editor.

      Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables before you start your containers:

      • export WORDPRESS_DB_PASSWORD=secure_database_password
      • export MYSQL_ROOT_PASSWORD=secure_database_password

      Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.
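      Because a mismatch between the two values would leave WordPress unable to reach MySQL, you may want a small pre-flight check before launching. A hypothetical guard script (the password value is a placeholder):

```shell
#!/bin/sh
# Refuse to continue unless both password variables are set and identical.
export WORDPRESS_DB_PASSWORD=secure_database_password
export MYSQL_ROOT_PASSWORD=secure_database_password

if [ -z "${WORDPRESS_DB_PASSWORD}" ] || \
   [ "${WORDPRESS_DB_PASSWORD}" != "${MYSQL_ROOT_PASSWORD}" ]; then
    echo "Password variables are missing or do not match" >&2
    exit 1
fi
echo "Passwords match"
```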

      With these variables set, run the containers using docker-compose:

      • docker-compose up -d

      Now take another look at the Traefik admin dashboard. You’ll see that there is now a backend and a frontend for the two exposed servers:

      Populated Traefik dashboard

      Navigate to blog.your_domain. You’ll be redirected to a TLS connection and can now complete the WordPress setup:

      WordPress setup screen

      Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn’t exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a hostname.

      On the Adminer login screen, set the System dropdown menu to MySQL. Now enter mysql for the Server, enter root for the username, and enter the value you set for MYSQL_ROOT_PASSWORD for Password. Leave Database empty. Now press Login.

      Once logged in, you’ll see the Adminer user interface:

      Adminer connected to the MySQL database

      Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

      Conclusion

      In this tutorial, you configured Traefik to proxy requests to other applications in Docker containers.

      Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik container when you add new applications to proxy traffic because Traefik notices the changes immediately through the Docker socket file that it’s monitoring.

      To learn more about what you can do with Traefik, head over to the official Traefik documentation.




      How To Configure Nginx as a Web Server and Reverse Proxy for Apache on One Ubuntu 20.04 Server


      The author selected the Electronic Frontier Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Apache and Nginx are two popular open-source web servers often used with PHP. It can be useful to run both of them on the same virtual machine when hosting multiple websites that have varied requirements. The general solution for running two web servers on a single system is to either use multiple IP addresses or different port numbers.

      Servers that have both IPv4 and IPv6 addresses can be configured to serve Apache sites on one protocol and Nginx sites on the other, but this isn’t currently practical, as IPv6 adoption by ISPs is still not widespread. Having a different port number like 81 or 8080 for the second web server is another solution, but sharing URLs with port numbers (such as http://your_domain:81) isn’t always reasonable or ideal.

      In this tutorial you’ll configure Nginx as both a web server and as a reverse proxy for Apache – all on a single server.

      Depending on the web application, code changes might be required to make Apache reverse-proxy-aware, especially when SSL sites are configured. To avoid this, you will install an Apache module called mod_rpaf which rewrites certain environment variables so it appears that Apache is directly handling requests from web clients.

      We will host four domain names on one server. Two will be served by Nginx: nginx1.your_domain (the default virtual host) and nginx2.your_domain. The remaining two, apache1.your_domain and apache2.your_domain, will be served by Apache. We’ll also configure Apache to serve PHP applications using PHP-FPM, which offers better performance over mod_php.

      Prerequisites

      To complete this tutorial, you’ll need the following:

      • A new Ubuntu 20.04 server configured by following the Initial Server Setup with Ubuntu 20.04, with a sudo non-root user and a firewall.
      • Four fully-qualified domain names configured to point to your server’s IP address. See Step 3 of How To Set Up a Host Name with DigitalOcean for an example of how to do this. If you host your domains’ DNS elsewhere, you should create appropriate A records there instead.

      Step 1 — Installing Apache and PHP-FPM

      Let’s start by installing Apache and PHP-FPM.

      In addition to Apache and PHP-FPM, we will also install the PHP FastCGI Apache module, libapache2-mod-fastcgi, to support FastCGI web applications.

      First, update your package list to ensure you have the latest packages:

      • sudo apt update

      Next, install the Apache and PHP-FPM packages:

      • sudo apt install apache2 php-fpm

      The FastCGI Apache module isn’t available in Ubuntu’s repository, so download it from kernel.org and install it using the dpkg command:

      • wget https://mirrors.edge.kernel.org/ubuntu/pool/multiverse/liba/libapache-mod-fastcgi/libapache2-mod-fastcgi_2.4.7~0910052141-1.2_amd64.deb
      • sudo dpkg -i libapache2-mod-fastcgi_2.4.7~0910052141-1.2_amd64.deb

      Next, let’s change Apache’s default configuration to use PHP-FPM.

      Step 2 — Configuring Apache and PHP-FPM

      In this step we will change Apache’s port number to 8080 and configure it to work with PHP-FPM using the mod_fastcgi module. Rename Apache’s ports.conf configuration file:

      • sudo mv /etc/apache2/ports.conf /etc/apache2/ports.conf.default

      Create a new ports.conf file with the port set to 8080:

      • echo "Listen 8080" | sudo tee /etc/apache2/ports.conf

      Note: Web servers are generally set to listen on 127.0.0.1:8080 when configuring a reverse proxy but doing so would set the value of PHP’s environment variable SERVER_ADDR to the loopback IP address instead of the server’s public IP. Our aim is to set up Apache in such a way that its websites do not see a reverse proxy in front of it. So, we will configure it to listen on 8080 on all IP addresses.

      Next we’ll create a virtual host file for Apache. The <VirtualHost> directive in this file will be set to serve sites only on port 8080.

      Disable the default virtual host:

      • sudo a2dissite 000-default

      Then create a new virtual host file, using the existing default site:

      • sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/001-default.conf

      Now open the new configuration file:

      • sudo nano /etc/apache2/sites-available/001-default.conf

      Change the listening port to 8080:

      /etc/apache2/sites-available/001-default.conf

      <VirtualHost *:8080>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/html
          ErrorLog ${APACHE_LOG_DIR}/error.log
          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>
      

      Save the file and activate the new configuration file:

      • sudo a2ensite 001-default

      Then reload Apache:

      • sudo systemctl reload apache2

      Install the net-tools package which contains the netstat command:

      • sudo apt install net-tools

      Verify that Apache is now listening on 8080:

      • sudo netstat -plunt

      The output should look like the following example, with apache2 listening on 8080:

      Output

      Active Internet connections (only servers)
      Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
      tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1086/sshd
      tcp6       0      0 :::8080                 :::*                    LISTEN      4678/apache2
      tcp6       0      0 :::22                   :::*                    LISTEN      1086/sshd

      Once you verify that Apache is listening on the correct port, you can configure support for PHP and FastCGI.

      Step 3 — Configuring Apache to Use mod_fastcgi

      Apache serves PHP pages using mod_php by default, but it requires additional configuration to work with PHP-FPM.

      Note: If you are trying this tutorial on an existing installation of LAMP with mod_php, disable it first with sudo a2dismod php7.4.

      We will be adding a configuration block for mod_fastcgi, which depends on mod_action. mod_action is disabled by default, so we first need to enable it:

      • sudo a2enmod actions

      Rename the existing FastCGI configuration file:

      • sudo mv /etc/apache2/mods-enabled/fastcgi.conf /etc/apache2/mods-enabled/fastcgi.conf.default

      Create a new configuration file:

      • sudo nano /etc/apache2/mods-enabled/fastcgi.conf

      Add the following directives to the file to pass requests for .php files to the PHP-FPM UNIX socket:

      /etc/apache2/mods-enabled/fastcgi.conf

      <IfModule mod_fastcgi.c>
        AddHandler fastcgi-script .fcgi
        FastCgiIpcDir /var/lib/apache2/fastcgi
        AddType application/x-httpd-fastphp .php
        Action application/x-httpd-fastphp /php-fcgi
        Alias /php-fcgi /usr/lib/cgi-bin/php-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php-fcgi -socket /run/php/php7.4-fpm.sock -pass-header Authorization
        <Directory /usr/lib/cgi-bin>
          Require all granted
        </Directory>
      </IfModule>
      

      Save the changes and perform a configuration test:

      • sudo apachectl -t

      Note: If you see the warning Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message, you can safely ignore it for now. We’ll configure server names later.

      Reload Apache as long as Syntax OK is displayed:

      • sudo systemctl reload apache2

      Now let’s make sure we can serve PHP from Apache.

      Step 4 — Verifying PHP Functionality

      Let’s make sure that PHP works by creating a phpinfo() file and accessing it from a web browser.

      Create the file /var/www/html/info.php, which contains a call to the phpinfo function:

      • echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/info.php

      Note that if you followed the initial server setup in the Prerequisites section, then you enabled the UFW firewall. Port 8080 is not yet accessible from outside the server, so let’s open it for now; we’ll restrict public access to this port in Step 10.

      First allow port 8080 through the firewall:

      • sudo ufw allow 8080

      Since we are going to secure our Apache domains, let’s go ahead and make sure TLS traffic on port 443 can enter.

      Allow Apache Full to permit traffic on ports 80 and 443:

      • sudo ufw allow "Apache Full"

      Now check your firewall status:

      • sudo ufw status

      If you followed the prerequisites, then the output will look like this:

      Output

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      Apache Full                ALLOW       Anywhere
      8080                       ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)
      Apache Full (v6)           ALLOW       Anywhere (v6)
      8080 (v6)                  ALLOW       Anywhere (v6)

      You will see that port 8080 and Apache Full are allowed alongside any other firewall rules. Now let’s view our info.php page.

      To see the info.php in a browser, go to http://your_server_ip:8080/info.php. This will give you a list of the configuration settings PHP is using. You’ll see output similar to this:

      phpinfo Server API

      phpinfo PHP Variables

      At the top of the page, check that Server API says FPM/FastCGI. About two-thirds of the way down the page, the PHP Variables section will tell you the SERVER_SOFTWARE is Apache on Ubuntu. These confirm that mod_fastcgi is active and Apache is using PHP-FPM to process PHP files.

      Step 5 — Creating Virtual Hosts for Apache

      Let’s create Apache virtual host files for the domains apache1.your_domain and apache2.your_domain. To do that, we’ll first create document root directories for both sites and place some default files in those directories so we can easily test our configuration.

      First, create the document root directories:

      • sudo mkdir -v /var/www/apache1.your_domain /var/www/apache2.your_domain

      Then create an index file for each site:

      • echo "<h1 style='color: green;'>Apache 1</h1>" | sudo tee /var/www/apache1.your_domain/index.html
      • echo "<h1 style='color: red;'>Apache 2</h1>" | sudo tee /var/www/apache2.your_domain/index.html

      Then create a phpinfo() file for each site so we can test that PHP is configured properly.

      • echo "<?php phpinfo(); ?>" | sudo tee /var/www/apache1.your_domain/info.php
      • echo "<?php phpinfo(); ?>" | sudo tee /var/www/apache2.your_domain/info.php

      Now create the virtual host file for apache1.your_domain:

      • sudo nano /etc/apache2/sites-available/apache1.your_domain.conf

      Add the following code to the file to define the host:

      /etc/apache2/sites-available/apache1.your_domain.conf

          <VirtualHost *:8080>
              ServerName apache1.your_domain
              ServerAlias www.apache1.your_domain
              DocumentRoot /var/www/apache1.your_domain
              <Directory /var/www/apache1.your_domain>
                  AllowOverride All
              </Directory>
          </VirtualHost>
      

      The line AllowOverride All enables .htaccess support.

      These are only the most basic directives. For a complete guide on setting up virtual hosts in Apache, see How To Set Up Apache Virtual Hosts on Ubuntu 18.04.

      Save and close the file. Then create a similar configuration for apache2.your_domain. First create the file:

      • sudo nano /etc/apache2/sites-available/apache2.your_domain.conf

      Then add the configuration to the file:

      /etc/apache2/sites-available/apache2.your_domain.conf

          <VirtualHost *:8080>
              ServerName apache2.your_domain
              ServerAlias www.apache2.your_domain
              DocumentRoot /var/www/apache2.your_domain
              <Directory /var/www/apache2.your_domain>
                  AllowOverride All
              </Directory>
          </VirtualHost>
      

      Save the file and exit the editor.

      Now that both Apache virtual hosts are set up, enable the sites using the a2ensite command. This creates a symbolic link to the virtual host file in the sites-enabled directory:

      • sudo a2ensite apache1.your_domain
      • sudo a2ensite apache2.your_domain

      Check Apache for configuration errors again:

      • sudo apachectl -t

      You’ll see Syntax OK displayed if there are no errors. If you see anything else, review the configuration and try again.

      Reload Apache to apply the changes once your configuration is error-free:

      • sudo systemctl reload apache2

      To confirm the sites are working, open http://apache1.your_domain:8080 and http://apache2.your_domain:8080 in your browser and verify that each site displays its index.html file.

      You’ll see the following results:

      apache1 index page

      apache2 index page

      Also, ensure that PHP is working by accessing the info.php files for each site. Visit http://apache1.your_domain:8080/info.php and http://apache2.your_domain:8080/info.php in your browser.

      You’ll see the same PHP configuration spec list on each site as you saw in Step 4.

      We now have two websites hosted on Apache at port 8080. Let’s configure Nginx next.

      Step 6 — Installing and Configuring Nginx

      In this step we’ll install Nginx and configure the domains nginx1.your_domain and nginx2.your_domain as Nginx’s virtual hosts. For a complete guide on setting up virtual hosts in Nginx, see How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 20.04.

      Install Nginx using the apt package manager:

      • sudo apt install nginx

      Then remove the default virtual host’s symlink since we won’t be using it any more:

      • sudo rm /etc/nginx/sites-enabled/default

      We’ll create our own default site later (nginx1.your_domain).

      Now we’ll create virtual hosts for Nginx using the same procedure we used for Apache. First create document root directories for both the websites:

      • sudo mkdir -v /usr/share/nginx/nginx1.your_domain /usr/share/nginx/nginx2.your_domain

      We’ll keep the Nginx web sites in /usr/share/nginx, which is where Nginx wants them by default. You could put them under /var/www/html with the Apache sites, but this separation may help you associate sites with Nginx.

      As you did with Apache’s virtual hosts, create index and phpinfo() files for testing after setup is complete:

      • echo "<h1 style='color: green;'>Nginx 1</h1>" | sudo tee /usr/share/nginx/nginx1.your_domain/index.html
      • echo "<h1 style='color: red;'>Nginx 2</h1>" | sudo tee /usr/share/nginx/nginx2.your_domain/index.html
      • echo "<?php phpinfo(); ?>" | sudo tee /usr/share/nginx/nginx1.your_domain/info.php
      • echo "<?php phpinfo(); ?>" | sudo tee /usr/share/nginx/nginx2.your_domain/info.php

      Now create a virtual host file for the domain nginx1.your_domain:

      • sudo nano /etc/nginx/sites-available/nginx1.your_domain

      Nginx calls the server { . . . } areas of a configuration file server blocks. Create a server block for the primary virtual host, nginx1.your_domain. The default_server configuration directive makes this the default virtual host, which will process any HTTP requests that do not match another virtual host.

      /etc/nginx/sites-available/nginx1.your_domain

      server {
          listen 80 default_server;
      
          root /usr/share/nginx/nginx1.your_domain;
          index index.php index.html index.htm;
      
          server_name nginx1.your_domain www.nginx1.your_domain;
          location / {
              try_files $uri $uri/ /index.php;
          }
      
          location ~ \.php$ {
              fastcgi_pass unix:/run/php/php7.4-fpm.sock;
              include snippets/fastcgi-php.conf;
          }
      }
      

      Save and close the file. Now create a virtual host file for Nginx’s second domain, nginx2.your_domain:

      • sudo nano /etc/nginx/sites-available/nginx2.your_domain

      Add the following to the file:

      /etc/nginx/sites-available/nginx2.your_domain

      server {
          root /usr/share/nginx/nginx2.your_domain;
          index index.php index.html index.htm;
      
          server_name nginx2.your_domain www.nginx2.your_domain;
          location / {
              try_files $uri $uri/ /index.php;
          }
      
          location ~ \.php$ {
              fastcgi_pass unix:/run/php/php7.4-fpm.sock;
              include snippets/fastcgi-php.conf;
          }
      }
      

      Save and close the file.

      Enable both sites by creating symbolic links to the sites-enabled directory:

      • sudo ln -s /etc/nginx/sites-available/nginx1.your_domain /etc/nginx/sites-enabled/nginx1.your_domain
      • sudo ln -s /etc/nginx/sites-available/nginx2.your_domain /etc/nginx/sites-enabled/nginx2.your_domain

      Test the Nginx configuration to ensure there are no configuration issues:

      • sudo nginx -t

      Then reload Nginx if there are no errors:

      • sudo systemctl reload nginx

      Now access the phpinfo() file for both Nginx virtual hosts in a web browser by visiting http://nginx1.your_domain/info.php and http://nginx2.your_domain/info.php. Look under the PHP Variables sections again.

      ["SERVER_SOFTWARE"] should say nginx, indicating that the files were directly served by Nginx. ["DOCUMENT_ROOT"] should point to the directory you created earlier in this step for each Nginx site.

      At this point, we have installed Nginx and created two virtual hosts. Next we will configure Nginx to proxy requests meant for domains hosted on Apache.

      Step 7 — Configuring Nginx for Apache’s Virtual Hosts

      Let’s create an additional Nginx virtual host with multiple domain names in the server_name directives. Requests for these domain names will be proxied to Apache.

      Create a new Nginx virtual host file to forward requests to Apache:

      • sudo nano /etc/nginx/sites-available/apache

      Add the following code block; it specifies the names of both Apache virtual host domains and proxies their requests to Apache. Remember to use the public IP address in proxy_pass:

      /etc/nginx/sites-available/apache

      server {
          listen 80;
          server_name apache1.your_domain www.apache1.your_domain apache2.your_domain www.apache2.your_domain;
      
          location / {
              proxy_pass http://your_server_ip:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      }
      

      Save the file and enable this new virtual host by creating a symbolic link:

      • sudo ln -s /etc/nginx/sites-available/apache /etc/nginx/sites-enabled/apache

      Test the configuration to ensure there are no errors:

      • sudo nginx -t

      If there are no errors, reload Nginx:

      • sudo systemctl reload nginx

      Open the browser and access the URL http://apache1.your_domain/info.php in your browser. Scroll down to the PHP Variables section and check the values displayed.

      The variables SERVER_SOFTWARE and DOCUMENT_ROOT confirm that this request was handled by Apache. The variables HTTP_X_REAL_IP and HTTP_X_FORWARDED_FOR were added by Nginx and should show the public IP address of the computer you’re using to access the URL (if you accessed Apache directly on port 8080 you would not see these variables).

      We have successfully set up Nginx to proxy requests for specific domains to Apache. Next, let’s configure Apache to set the REMOTE_ADDR variable as if it were handling these requests directly.

      Step 8 — Installing and Configuring mod_rpaf

      In this step you’ll install an Apache module called mod_rpaf which rewrites the values of REMOTE_ADDR, HTTPS and HTTP_PORT based on the values provided by a reverse proxy. Without this module, some PHP applications would require code changes to work seamlessly from behind a proxy. This module is present in Ubuntu’s repository as libapache2-mod-rpaf but it is outdated and doesn’t support certain configuration directives. Instead, we will install it from source.

      Move to your home directory and install the packages needed to build the module:

      • cd ~
      • sudo apt install unzip build-essential apache2-dev

      Download the latest stable release from GitHub:

      • wget https://github.com/gnif/mod_rpaf/archive/stable.zip

Extract the downloaded file:

• unzip stable.zip

Change into the new directory containing the files:

• cd mod_rpaf-stable

Compile and install the module:

• make
• sudo make install

      Next, create a file in the mods-available directory that will load the rpaf module:

      • sudo nano /etc/apache2/mods-available/rpaf.load

      Add the following code to the file to load the module:

      /etc/apache2/mods-available/rpaf.load

      LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf.so
      

      Save the file and exit the editor.

      Create another file in this directory called rpaf.conf that will contain the configuration directives for mod_rpaf:

      • sudo nano /etc/apache2/mods-available/rpaf.conf

      Add the following code block to configure mod_rpaf, making sure to specify the IP address of your server:

      /etc/apache2/mods-available/rpaf.conf

          <IfModule mod_rpaf.c>
              RPAF_Enable             On
              RPAF_Header             X-Real-Ip
              RPAF_ProxyIPs           your_server_ip 
              RPAF_SetHostName        On
              RPAF_SetHTTPS           On
              RPAF_SetPort            On
          </IfModule>
      

      Here’s a brief description of each directive. See the mod_rpaf README file for more information.

      • RPAF_Header – The header to use for the client’s real IP address.
      • RPAF_ProxyIPs – The proxy IP to adjust HTTP requests for.
      • RPAF_SetHostName – Updates the vhost name so ServerName and ServerAlias work.
      • RPAF_SetHTTPS – Sets the HTTPS environment variable based on the value contained in X-Forwarded-Proto.
• RPAF_SetPort – Sets the SERVER_PORT environment variable. Useful for when Apache is behind an SSL proxy.
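Conceptually, mod_rpaf’s rewrite can be sketched in a few lines of Python. The `rpaf_rewrite` helper below is hypothetical and only illustrates the trust model: the header value is honored only when the connection actually comes from one of the listed proxy IPs.

```python
def rpaf_rewrite(remote_addr, headers, proxy_ips, header="X-Real-Ip"):
    """Replace the remote address with the proxy-supplied header value,
    but only for connections originating from a trusted proxy
    (the equivalent of RPAF_ProxyIPs). Direct requests keep their
    real peer address, so the header cannot be spoofed."""
    if remote_addr in proxy_ips and header in headers:
        return headers[header]
    return remote_addr

# Request relayed by a trusted proxy at 198.51.100.10:
print(rpaf_rewrite("198.51.100.10", {"X-Real-Ip": "203.0.113.7"}, {"198.51.100.10"}))
# Direct (untrusted) request: the header is ignored.
print(rpaf_rewrite("203.0.113.99", {"X-Real-Ip": "10.0.0.1"}, {"198.51.100.10"}))
```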

Save rpaf.conf and enable the module:

• sudo a2enmod rpaf

This creates symbolic links of the files rpaf.load and rpaf.conf in the mods-enabled directory. Now do a configuration test:

• sudo apachectl configtest

Reload Apache if there are no errors:

      • sudo systemctl reload apache2

Access the phpinfo() pages http://apache1.your_domain/info.php and http://apache2.your_domain/info.php in your browser and check the PHP Variables section. The REMOTE_ADDR variable will now be your local computer’s public IP address as well.

      Now let’s set up TLS/SSL encryption for each site.

      Step 9 — Setting Up HTTPS Websites with Let’s Encrypt (Optional)

In this step we will configure TLS/SSL certificates for both domains hosted on Apache. We’ll obtain the certificates through [Let’s Encrypt](https://letsencrypt.org). Nginx supports SSL termination, so we can set up SSL without modifying Apache’s configuration files. The mod_rpaf module ensures that the required environment variables are set on Apache to make applications work seamlessly behind an SSL reverse proxy.

First we will separate the server {...} blocks of the two domains so that each can have its own SSL certificate. Open the file /etc/nginx/sites-available/apache in your editor:

      • sudo nano /etc/nginx/sites-available/apache

      Modify the file so that it looks like this, with apache1.your_domain and apache2.your_domain in their own server blocks:

      /etc/nginx/sites-available/apache

          server {
              listen 80;
              server_name apache1.your_domain www.apache1.your_domain;
      
              location / {
                  proxy_pass http://your_server_ip:8080;
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header X-Forwarded-Proto $scheme;
              }
          }
          server {
              listen 80;
              server_name apache2.your_domain www.apache2.your_domain;
      
              location / {
                  proxy_pass http://your_server_ip:8080;
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header X-Forwarded-Proto $scheme;
              }
          }
      

      We’ll use Certbot to generate our TLS/SSL certificates. Its Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary.

Install certbot using snapd:

      • sudo snap install --classic certbot

      Once it’s installed, use the certbot command to generate the certificates for apache1.your_domain and www.apache1.your_domain:

      • sudo certbot --agree-tos --no-eff-email --email your-email --nginx -d apache1.your_domain -d www.apache1.your_domain

      This command tells Certbot to use the nginx plugin, using -d to specify the names we’d like the certificate to be valid for.

      Now execute the command for the second domain:

• sudo certbot --agree-tos --no-eff-email --email your-email --nginx -d apache2.your_domain -d www.apache2.your_domain

      Access one of Apache’s domains in your browser using the https:// prefix; visit https://apache1.your_domain/info.php or https://apache2.your_domain/info.php.

      Look in the PHP Variables section. The variable SERVER_PORT has been set to 443 and HTTPS set to on, as though Apache was directly accessed over HTTPS. With these variables set, PHP applications do not have to be specially configured to work behind a reverse proxy.
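The effect of these variables on application logic can be sketched in Python. The `request_scheme` helper is hypothetical; it mirrors the common PHP pattern of checking the HTTPS environment variable, which mod_rpaf sets from the proxy’s X-Forwarded-Proto header:

```python
def request_scheme(env):
    """Decide whether a request arrived over HTTPS, the way a typical
    PHP app checks the HTTPS server variable. Because mod_rpaf sets
    HTTPS=on and SERVER_PORT=443 behind the SSL-terminating proxy,
    the app needs no proxy-specific code."""
    if env.get("HTTPS", "").lower() == "on":
        return "https"
    return "http"

print(request_scheme({"HTTPS": "on", "SERVER_PORT": "443"}))  # https
print(request_scheme({}))                                     # http
```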

      Now let’s disable direct access to Apache.

      Step 10 — Blocking Direct Access to Apache (Optional)

Since Apache is listening on port 8080 on the public IP address, it is accessible to everyone. You can block it by working the following iptables command into your firewall rule set.

      • sudo iptables -I INPUT -p tcp --dport 8080 ! -s your_server_ip -j REJECT --reject-with tcp-reset

      Be sure to use your server’s IP address in place of the highlighted example. Once port 8080 is blocked in your firewall, test that Apache is unreachable on it. Open your web browser and try accessing one of Apache’s domain names on port 8080. For example: http://apache1.your_domain:8080

The browser should display an “Unable to connect” or “Webpage is not available” error message. With the iptables tcp-reset option in place, an outsider would see no difference between port 8080 and a port that doesn’t have any service on it.

Note: iptables rules do not survive a system reboot by default. There are multiple ways to preserve iptables rules, but the easiest is to use iptables-persistent in Ubuntu’s repository. Explore this article to learn more about how to configure iptables.

      Now let’s configure Nginx to serve static files for the Apache sites.

      Step 11 — Serving Static Files Using Nginx (Optional)

      When Nginx proxies requests for Apache’s domains, it sends every file request for that domain to Apache. Nginx is faster than Apache at serving static files like images, JavaScript and style sheets. So let’s configure Nginx’s apache virtual host file to directly serve static files but send PHP requests on to Apache.

      Open the file /etc/nginx/sites-available/apache in your editor:

      • sudo nano /etc/nginx/sites-available/apache

      You’ll need to add two additional location blocks to each server block, as well as modify the existing location sections. Additionally, you’ll need to tell Nginx where to find the static files for each site.

      If you’ve decided not to use SSL and TLS certificates, modify your file so it looks like this:

      /etc/nginx/sites-available/apache

      server {
          listen 80;
          server_name apache2.your_domain www.apache2.your_domain;
    root /var/www/apache2.your_domain;
          index index.php index.htm index.html;
      
          location / {
              try_files $uri $uri/ /index.php;
          }
      
    location ~ \.php$ {
              proxy_pass http://your_server_ip:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      
    location ~ /\.ht {
              deny all;
          }
      }
      
      server {
          listen 80;
          server_name apache1.your_domain www.apache1.your_domain;
    root /var/www/apache1.your_domain;
          index index.php index.htm index.html;
      
          location / {
              try_files $uri $uri/ /index.php;
          }
      
    location ~ \.php$ {
        proxy_pass http://your_server_ip:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      
    location ~ /\.ht {
              deny all;
          }
      }
      

      If you also want HTTPS to be available, use the following configuration instead:

      /etc/nginx/sites-available/apache

      server {
          listen 80;
          server_name apache2.your_domain www.apache2.your_domain;
    root /var/www/apache2.your_domain;
          index index.php index.htm index.html;
      
          location / {
              try_files $uri $uri/ /index.php;
          }
      
    location ~ \.php$ {
              proxy_pass http://your_server_ip:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      
    location ~ /\.ht {
              deny all;
          }
      
          listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/apache2.your_domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/apache2.your_domain/privkey.pem;
          include /etc/letsencrypt/options-ssl-nginx.conf;
          ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
      }
      
      server {
          listen 80;
          server_name apache1.your_domain www.apache1.your_domain;
    root /var/www/apache1.your_domain;
          index index.php index.htm index.html;
      
          location / {
              try_files $uri $uri/ /index.php;
          }
      
    location ~ \.php$ {
        proxy_pass http://your_server_ip:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      
    location ~ /\.ht {
              deny all;
          }
      
          listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/apache1.your_domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/apache1.your_domain/privkey.pem;
          include /etc/letsencrypt/options-ssl-nginx.conf;
          ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
      }
      

      The try_files directive makes Nginx look for files in the document root and directly serve them. If the file has a .php extension, the request is passed to Apache. Even if the file is not found in the document root, the request is passed on to Apache so that application features like permalinks work without problems.

Warning: The location ~ /\.ht directive is very important; it prevents Nginx from serving the contents of Apache configuration files like .htaccess and .htpasswd, which contain sensitive information.
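The routing decision made by the configuration above can be sketched in Python. The `route` helper is hypothetical (Nginx performs this matching natively); it only illustrates the order of decisions:

```python
import os

def route(uri, docroot):
    """Mimic the location/try_files logic: refuse dotfile config files,
    send .php requests to the Apache backend, serve existing static
    files directly, and fall through to Apache (via /index.php) for
    anything not found on disk so permalinks keep working."""
    if uri.startswith("/.ht"):
        return "deny"
    if uri.endswith(".php"):
        return "proxy to Apache"
    path = os.path.join(docroot, uri.lstrip("/"))
    if os.path.isfile(path):
        return "serve static file"
    return "proxy to Apache"

print(route("/info.php", "/var/www/apache1.your_domain"))   # proxy to Apache
print(route("/.htaccess", "/var/www/apache1.your_domain"))  # deny
```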

Save the file and perform a configuration test:

• sudo nginx -t

Reload Nginx if the test succeeds:

• sudo systemctl reload nginx

      To verify things are working, you can examine Apache’s log files in /var/log/apache2 and see the GET requests for the info.php files of apache2.your_domain and apache1.your_domain. Use the tail command to see the last few lines of the file, and use the -f switch to watch the file for changes:

      • sudo tail -f /var/log/apache2/other_vhosts_access.log

      Now visit apache1.your_domain/info.php or apache2.your_domain/info.php in your browser and then look at the output from the log. You’ll see that Apache is indeed replying (your port will be 80 or 443 depending on whether or not you secured the instance):

      Output

      apache2.your_domain:80 your_server_ip - - [27/Aug/2020:18:18:34 -0400] "GET /info.php HTTP/1.0" 200 20414 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"

      Then visit the index.html page for each site and you won’t see any log entries from Apache. Nginx is serving them.

      When you’re done observing the log file, press CTRL+C to stop tailing it.

      With this setup, Apache will not be able to restrict access to static files. Access control for static files would need to be configured in Nginx’s apache virtual host file, but that’s beyond the scope of this tutorial.

      Conclusion

You now have one Ubuntu server with Nginx serving nginx1.your_domain and nginx2.your_domain, along with Apache serving apache1.your_domain and apache2.your_domain. Though Nginx is acting as a reverse proxy for Apache, Nginx’s proxy service is transparent, and connections to Apache’s domains appear to be served directly from Apache itself. You can use this method to serve secure and static sites.


