      Build NGINX with PageSpeed From Source


      Updated by Linode

      Written by Linode


      What is Google PageSpeed?

      PageSpeed is a set of modules for NGINX and Apache that optimize and measure the page performance of websites. Optimization is done by minifying static assets such as CSS and JavaScript, which decreases page load time. PageSpeed Insights is a tool that measures your site’s performance and makes recommendations for further improvements based on the results.

      There are currently two ways to get PageSpeed and NGINX working together:

      • Compile NGINX with support for PageSpeed, then compile PageSpeed.
      • Compile PageSpeed as a dynamic module to use with NGINX, whether NGINX was installed from source or a binary.

        Note

        Installing NGINX from source requires several manual installation steps, and tasks like version upgrades will also need to be performed manually. To install NGINX using a package manager instead, see the NGINX section.

      This guide will show how to compile both NGINX and PageSpeed. If you would prefer to use PageSpeed as a module for NGINX, see this NGINX blog post for instructions.

      Before You Begin

      • You should not have a pre-existing installation of NGINX. If you do, back up the configuration files if you want to retain their information, and then purge NGINX.

      • You will need root access to the system, or a user account with sudo privileges.

      • Set your system’s hostname.

      • Update your system’s packages.

      Considerations for a Self-Compiled NGINX Installation

      Filesystem Locations: When you compile NGINX from source, the entire installation, including configuration files, is located in /usr/local/nginx/. This is in contrast to an installation from a package manager, which places its configuration files in /etc/nginx/.

      Built-in Modules: When you compile NGINX from source, no additional modules are included unless explicitly specified, which means that HTTPS is not supported by default. Below is the output of nginx -V for a build produced by the PageSpeed automated install script on Ubuntu 16.04, with no additional modules or options specified.

        
      root@localhost:~# /usr/local/nginx/sbin/nginx -V
      nginx version: nginx/1.13.8
      built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5)
      configure arguments: --add-module=/root/incubator-pagespeed-ngx-latest-stable
      
      

      Contrast this output with the same command run on the same Ubuntu system but with the binary installed from NGINX’s repository:

        
      root@localhost:~# nginx -V
      nginx version: nginx/1.13.8
      built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
      built with OpenSSL 1.0.1f 6 Jan 2014 (running with OpenSSL 1.0.2g  1 Mar 2016)
      TLS SNI support enabled
      configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
      
      

      Build NGINX and PageSpeed

      The official PageSpeed documentation provides a bash script to automate the installation process.

      Note

      The automated installation script will install several compilation tools needed to install PageSpeed. If you are using a production environment, ensure you uninstall any packages that are no longer needed after the installation has completed.

      1. If you plan to serve your website using TLS, install the SSL libraries needed to compile the HTTPS module for NGINX:

        CentOS/Fedora

        yum install openssl-devel
        

        Ubuntu/Debian

        apt install libssl-dev
        
      2. Run the Automated Install bash command to start the installation:

        bash <(curl -f -L -sS https://ngxpagespeed.com/install) \
        --nginx-version latest
        
      3. During the build process, you’ll be asked if you want to build NGINX with any additional modules. The PageSpeed module is already included, so you don’t need to add it here.

        The options below are a recommended starting point; you can also add more specialized options for your particular use case. These options retain the directory paths, user and group names of pre-built NGINX binaries, and enable the SSL and HTTP/2 modules for HTTPS connections:

        --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module
        
      4. Next you’ll be asked if you want to build NGINX. You’ll be shown the destination directories for logs, configuration files and binaries. If these look correct, answer Y to continue.

          
        Configuration summary
          + using system PCRE library
          + using OpenSSL library: /usr/bin/openssl
          + using system zlib library
        
          nginx path prefix: "/etc/nginx"
          nginx binary file: "/usr/sbin/nginx"
          nginx modules path: "/usr/lib/nginx/modules"
          nginx configuration prefix: "/etc/nginx"
          nginx configuration file: "/etc/nginx/nginx.conf"
          nginx pid file: "/var/run/nginx.pid"
          nginx error log file: "/var/log/nginx/error.log"
          nginx http access log file: "/var/log/nginx/access.log"
          nginx http client request body temporary files: "/var/cache/nginx/client_temp"
          nginx http proxy temporary files: "/var/cache/nginx/proxy_temp"
          nginx http fastcgi temporary files: "/var/cache/nginx/fastcgi_temp"
          nginx http uwsgi temporary files: "/var/cache/nginx/uwsgi_temp"
          nginx http scgi temporary files: "/var/cache/nginx/scgi_temp"
        
        Build nginx? [Y/n]
        
        
      5. If the build was successful, you’ll see the following message:

          
        Nginx installed with ngx_pagespeed support compiled-in.
        
        If this is a new installation you probably need an init script to
        manage starting and stopping the nginx service.  See:
          http://wiki.nginx.org/InitScripts
        
        You'll also need to configure ngx_pagespeed if you haven't yet:
          https://developers.google.com/speed/pagespeed/module/configuration
        
        
      6. When you want to update NGINX, back up your configuration files and repeat steps two through four above to build with the new source version.

      Control NGINX

      NGINX can be controlled either by creating a systemd service or by calling the binary directly. Choose one of these methods and do not mix them. If you start NGINX using the binary commands, for example, systemd will not be aware of the process and will try to start another NGINX instance if you run systemctl start nginx, which will fail.

      systemd

      1. In a text editor, create /lib/systemd/system/nginx.service and add the following unit file from the NGINX wiki:

        /lib/systemd/system/nginx.service

        [Unit]
        Description=The NGINX HTTP and reverse proxy server
        After=syslog.target network.target remote-fs.target nss-lookup.target
        
        [Service]
        Type=forking
        PIDFile=/run/nginx.pid
        ExecStartPre=/usr/sbin/nginx -t
        ExecStart=/usr/sbin/nginx
        ExecReload=/bin/kill -s HUP $MAINPID
        ExecStop=/bin/kill -s QUIT $MAINPID
        PrivateTmp=true
        
        [Install]
        WantedBy=multi-user.target
      2. Enable NGINX to start on boot and start the server:

        systemctl enable nginx
        systemctl start nginx
        
      3. NGINX can now be controlled as with any other systemd-controlled process:

        systemctl stop nginx
        systemctl restart nginx
        systemctl status nginx
        

      NGINX binary

      You can use NGINX’s binary to control the process directly without making a startup file for your init system.

      1. Start NGINX:

        /usr/sbin/nginx
        
      2. Reload the configuration:

        /usr/sbin/nginx -s reload
        
      3. Stop NGINX:

        /usr/sbin/nginx -s stop
        

      Configuration

      NGINX

      1. The compiled options specified above differ from the source defaults, so some additional configuration is necessary. Replace example.com in the following commands with your Linode’s public IP address or domain name:

        useradd --no-create-home nginx
        mkdir -p /var/cache/nginx/client_temp
        mkdir /etc/nginx/conf.d/
        mkdir /var/www/example.com
        chown nginx:nginx /var/www/example.com
        mv /etc/nginx/nginx.conf.default /etc/nginx/nginx.conf.backup-default
        
      2. In NGINX terminology, a Server Block equates to a website (similar to a Virtual Host in Apache terminology). Each NGINX site’s configuration should live in its own file, named in the format example.com.conf and located in /etc/nginx/conf.d/.

        If you followed this guide or our Getting Started with NGINX series, then your site’s configuration will be in a server block in a file stored in /etc/nginx/conf.d/. If you do not have this setup, then you likely have the server block directly in /etc/nginx/nginx.conf. See Server Block Examples in the NGINX docs for more info.

        Create a configuration file for your site with a basic server block inside:

        /etc/nginx/conf.d/example.com.conf

        server {
            listen       80;
            listen       [::]:80;
            server_name  example.com www.example.com;
            access_log   logs/example.access.log main;
            error_log    logs/example.error.log error;
        
            root         /var/www/example.com/;
        
        }
      3. Start NGINX:

        systemd:

        systemctl start nginx
        

        Other init systems:

        /usr/sbin/nginx
        
      4. Verify NGINX is working by going to your site’s domain or IP address in a web browser. You should see the NGINX welcome page:

        NGINX welcome page

      PageSpeed

      1. Create PageSpeed’s cache location and change its ownership to the nginx user and group:

        mkdir /var/cache/ngx_pagespeed/
        chown nginx:nginx /var/cache/ngx_pagespeed/
        
      2. Add the PageSpeed directives to your site configuration’s server block as shown below.

        /etc/nginx/conf.d/example.com.conf

        server {
        
              ...
        
            pagespeed on;
            pagespeed FileCachePath "/var/cache/ngx_pagespeed/";
            pagespeed RewriteLevel OptimizeForBandwidth;
        
            location ~ ".pagespeed.([a-z].)?[a-z]{2}.[^.]{10}.[^.]+" {
                add_header "" "";
                }
        
            location ~ "^/pagespeed_static/" { }
            location ~ "^/ngx_pagespeed_beacon$" { }
        
            }

        Note

        RewriteLevel OptimizeForBandwidth is a safer choice than the default CoreFilters rewrite level.
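
        If you want specific optimizations on top of OptimizeForBandwidth, individual PageSpeed filters can be enabled explicitly. As an optional illustration (not a required part of this setup), the directive below enables two common rewriting filters:

          pagespeed EnableFilters rewrite_css,rewrite_javascript;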
      3. PageSpeed needs to know where your system’s SSL certificates are stored so that it can fetch your site’s HTTPS resources. If your site is already set up with a TLS certificate, add the two directives below to your site’s server block, pointing to the correct locations for your system.

        pagespeed SslCertDirectory directory;
        pagespeed SslCertFile file;
        
      4. Reload your configuration:

        /usr/sbin/nginx -s reload
        
      5. Test that PageSpeed is running and that NGINX is successfully serving pages. Substitute example.com in the curl command with your Linode’s domain name or IP address.

        curl -I -X GET example.com
        

        The output should be similar to the following. If the response includes an HTTP 200 status and the X-Page-Speed header with the PageSpeed version number, everything is working correctly.

          
        HTTP/1.1 200 OK
        Server: nginx/1.13.8
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        Vary: Accept-Encoding
        Date: Tue, 23 Jan 2018 16:50:23 GMT
        X-Page-Speed: 1.12.34.3-0
        Cache-Control: max-age=0, no-cache
        
        
      6. Use PageSpeed Insights to test your site for additional improvement areas.


      This guide is published under a CC BY-ND 4.0 license.




      How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose


      Introduction

      There are multiple ways to enhance the flexibility and security of your Node.js application. Using a reverse proxy like Nginx offers you the ability to load balance requests, cache static content, and implement Transport Layer Security (TLS). Enabling encrypted HTTPS on your server ensures that communication to and from your application remains secure.

      Implementing a reverse proxy with TLS/SSL on containers involves a different set of procedures from working directly on a host operating system. For example, if you were obtaining certificates from Let’s Encrypt for an application running on a server, you would install the required software directly on your host. Containers allow you to take a different approach. Using Docker Compose, you can create containers for your application, your web server, and the Certbot client that will enable you to obtain your certificates. By following these steps, you can take advantage of the modularity and portability of a containerized workflow.

      In this tutorial, you will deploy a Node.js application with an Nginx reverse proxy using Docker Compose. You will obtain TLS/SSL certificates for the domain associated with your application and ensure that it receives a high security rating from SSL Labs. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 18.04 server, a non-root user with sudo privileges, and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
      • Docker and Docker Compose installed on your server. For guidance on installing Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04. For guidance on installing Compose, follow Step 1 of How To Install Docker Compose on Ubuntu 18.04.
      • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Cloning and Testing the Node Application

      As a first step, we will clone the repository with the Node application code, which includes the Dockerfile that we will use to build our application image with Compose. We can first test the application by building and running it with the docker run command, without a reverse proxy or SSL.

      In your non-root user’s home directory, clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git node_project

      Change to the node_project directory:
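
      • cd node_project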

      In this directory, there is a Dockerfile that contains instructions for building a Node application using the Docker node:10 image and the contents of your current project directory. You can look at the contents of the Dockerfile by typing:
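
      • cat Dockerfile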

      Output

      FROM node:10
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
      WORKDIR /home/node/app
      COPY package*.json ./
      RUN npm install
      COPY . .
      COPY --chown=node:node . .
      USER node
      EXPOSE 8080
      CMD [ "node", "app.js" ]

      These instructions build a Node image by copying the project code from the current directory to the container and installing dependencies with npm install. They also take advantage of Docker's caching and image layering by separating the copy of package.json and package-lock.json, containing the project's listed dependencies, from the copy of the rest of the application code. Finally, the instructions specify that the container will be run as the non-root node user with the appropriate permissions set on the application code and node_modules directories.

      For more information about this Dockerfile and Node image best practices, please see the complete discussion in Step 3 of How To Build a Node.js Application with Docker.

      To test the application without SSL, you can build and tag the image using docker build and the -t flag. We will call the image node-demo, but you are free to name it something else:

      • docker build -t node-demo .

      Once the build process is complete, you can list your images with docker images:
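
      • docker images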

      You will see the following output, confirming the application image build:

      Output

      REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
      node-demo           latest    23961524051d   7 seconds ago   896MB
      node                10        8a752d5af4ce   10 days ago     894MB

      Next, create the container with docker run. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a memorable name.

      Run the following command to create and run the container:

      • docker run --name node-demo -p 80:8080 -d node-demo

      Inspect your running containers with docker ps:
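
      • docker ps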

      You will see output confirming that your application container is running:

      Output

      CONTAINER ID   IMAGE       COMMAND         CREATED          STATUS          PORTS                  NAMES
      4133b72391da   node-demo   "node app.js"   17 seconds ago   Up 16 seconds   0.0.0.0:80->8080/tcp   node-demo

      You can now visit your domain to test your setup: http://example.com. Remember to replace example.com with your own domain name. Your application will display the following landing page:

      Application Landing Page

      Now that you have tested the application, you can stop the container and remove the images. Use docker ps again to get your CONTAINER ID:
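
      • docker ps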

      Output

      CONTAINER ID   IMAGE       COMMAND         CREATED          STATUS          PORTS                  NAMES
      4133b72391da   node-demo   "node app.js"   17 seconds ago   Up 16 seconds   0.0.0.0:80->8080/tcp   node-demo

      Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:
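
      • docker stop 4133b72391da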

      You can now remove the stopped container and all of the images, including unused and dangling images, with docker system prune and the -a flag:
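
      • docker system prune -a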

      Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.

      With your application image tested, you can move on to building the rest of your setup with Docker Compose.

      Step 2 — Defining the Web Server Configuration

      With our application Dockerfile in place, we can create a configuration file to run our Nginx container. We will start with a minimal configuration that will include our domain name, document root, proxy information, and a location block to direct Certbot's requests to the .well-known directory, where it will place a temporary file to validate that the DNS for our domain resolves to our server.

      First, create a directory in the current project directory for the configuration file:
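
      • mkdir nginx-conf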

      Open the file with nano or your favorite editor:

      • nano nginx-conf/nginx.conf

      Add the following server block to proxy user requests to your Node application container and to direct Certbot's requests to the .well-known directory. Be sure to replace example.com with your own domain name:

      ~/node_project/nginx-conf/nginx.conf

      server {
              listen 80;
              listen [::]:80;
      
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
      
              server_name example.com www.example.com;
      
              location / {
                      proxy_pass http://nodejs:8080;
              }
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      }
      

      This server block will allow us to start the Nginx container as a reverse proxy, which will pass requests to our Node application container. It will also allow us to use Certbot's webroot plugin to obtain certificates for our domain. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.

      Once you have finished editing, save and close the file. To learn more about Nginx server and location block algorithms, please refer to this article on Understanding Nginx Server and Location Block Selection Algorithms.

      With the web server configuration details in place, we can move on to creating our docker-compose.yml file, which will allow us to create our application services and the Certbot container we will use to obtain our certificates.

      Step 3 — Creating the Docker Compose File

      The docker-compose.yml file will define our services, including the Node application and web server. It will specify details like named volumes, which will be critical to sharing SSL credentials between containers, as well as network and port information. It will also allow us to specify specific commands to run when our containers are created. This file is the central resource that will define how our services will work together.

      Open the file in your current directory:
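
      • nano docker-compose.yml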

      First, define the application service:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
      

      The nodejs service definition includes the following:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the application image build. In this case, it's the current project directory.
      • dockerfile: This specifies the Dockerfile that Compose will use for the build — the Dockerfile you looked at in Step 1.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.

      Note that we are not including bind mounts with this service, since our setup is focused on deployment rather than development. For more information, please see the Docker documentation on bind mounts and volumes.

      To enable communication between the application and web server containers, we will also add a bridge network called app-network below the restart definition:

      ~/node_project/docker-compose.yml

      services:
        nodejs:
      ...
          networks:
            - app-network
      

      A user-defined bridge network like this enables communication between containers on the same Docker daemon host. This streamlines traffic and communication within your application, since it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, you can be selective about opening only the ports you need to expose your frontend services.
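
      Once the stack is up and running (after Step 4), you can optionally confirm which containers are attached to this network. Assuming Compose prefixes the network name with the project directory, node_project, the command would be:

      • docker network inspect node_project_app-network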

      Next, define the webserver service:

      ~/node_project/docker-compose.yml

      ...
       webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
          depends_on:
            - nodejs
          networks:
            - app-network
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes:

      • image: This tells Compose to pull the latest Nginx image from Docker Hub.
      • ports: This exposes port 80 to enable the configuration options we've defined in our Nginx configuration.

      We have also specified the following named volumes and bind mounts:

      • web-root:/var/www/html: This will add our site's static assets, copied to a volume called web-root, to the /var/www/html directory on the container.
      • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
      • certbot-etc:/etc/letsencrypt: This will mount the relevant Let's Encrypt certificates and keys for our domain to the appropriate directory on the container.
      • certbot-var:/var/lib/letsencrypt: This mounts Let's Encrypt's default working directory to the appropriate directory on the container.

      Next, add the configuration options for the certbot container. Be sure to replace the domain and email information with your own domain name and contact email:

      ~/node_project/docker-compose.yml

      ...
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com  -d www.example.com 
      

      This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc, the Let's Encrypt working directory in certbot-var, and the application code in web-root.

      Again, we've used depends_on to specify that the certbot container should be started once the webserver service is running.

      We've also included a command option that specifies the command to run when the container is started. It includes the certonly subcommand with the following options:

      • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication.
      • --webroot-path: This specifies the path of the webroot directory.
      • --email: Your preferred email for registration and recovery.
      • --agree-tos: This specifies that you agree to ACME's Subscriber Agreement.
      • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
      • --staging: This tells Certbot that you would like to use Let's Encrypt's staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let's Encrypt's rate limits documentation.
      • -d: This allows you to specify domain names you would like to apply to your request. In this case, we've included example.com and www.example.com. Be sure to replace these with your own domain preferences.

      As a final step, add the volume and network definitions. Be sure to replace the username here with your own non-root user:

      ~/node_project/docker-compose.yml

      ...
      volumes:
        certbot-etc:
        certbot-var:
        web-root:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/views/
            o: bind
      
      networks:
        app-network:
          driver: bridge
      

      Our named volumes include our Certbot certificate and working directory volumes, and the volume for our site's static assets, web-root. In most cases, the default driver for Docker volumes is the local driver, which on Linux accepts options similar to the mount command. Thanks to this, we are able to specify a list of driver options with driver_opts that mount the views directory on the host, which contains our application's static assets, to the volume at runtime. The directory contents can then be shared between containers. For more information about the contents of the views directory, please see Step 2 of How To Build a Node.js Application with Docker.
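
      For reference, the driver_opts above are equivalent to what you would pass to docker volume create if you were creating the volume by hand. This is shown only as an illustration of what Compose does for you, not as a step to run:

      • docker volume create --driver local --opt type=none --opt o=bind --opt device=/home/sammy/node_project/views web-root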

      The docker-compose.yml file will look like this when finished:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          networks:
            - app-network
      
        webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
          depends_on:
            - nodejs
          networks:
            - app-network
      
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com  -d www.example.com 
      
      volumes:
        certbot-etc:
        certbot-var:
        web-root:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/views/
            o: bind
      
      networks:
        app-network:
          driver: bridge  
      

      With the service definitions in place, you are ready to start the containers and test your certificate requests.

      Step 4 — Obtaining SSL Certificates and Credentials

      We can start our containers with docker-compose up, which will create and run our containers and services in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

      Create the services with docker-compose up and the -d flag, which will run the nodejs and webserver containers in the background:
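
      • docker-compose up -d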

      You will see output confirming that your services have been created:

      Output

      Creating nodejs ... done
      Creating webserver ... done
      Creating certbot ... done

      Using docker-compose ps, check the status of your services:
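
      • docker-compose ps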

      If everything was successful, your nodejs and webserver services should be Up and the certbot container will have exited with a 0 status message:

      Output

        Name                 Command               State          Ports
      ------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      nodejs      node app.js                      Up       8080/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp

      If you see anything other than Up in the State column for the nodejs and webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

      • docker-compose logs service_name

      You can now check that your credentials have been mounted to the webserver container with docker-compose exec:

      • docker-compose exec webserver ls -la /etc/letsencrypt/live

      If your request was successful, you will see output like this:

      Output

      total 16
      drwx------ 3 root root 4096 Dec 23 16:48 .
      drwxr-xr-x 9 root root 4096 Dec 23 16:48 ..
      -rw-r--r-- 1 root root  740 Dec 23 16:48 README
      drwxr-xr-x 2 root root 4096 Dec 23 16:48 example.com

      Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

      Open docker-compose.yml:
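
      • nano docker-compose.yml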

      Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition should now look like this:

      ~/node_project/docker-compose.yml

      ...
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      ...
      

      You can now run docker-compose up to recreate the certbot container and its relevant volumes. We will also include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

      • docker-compose up --force-recreate --no-deps certbot

      You will see output indicating that your certificate request was successful:

      Output

      certbot      | IMPORTANT NOTES:
      certbot      |  - Congratulations! Your certificate and chain have been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
      certbot      |    Your key file has been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
      certbot      |    Your cert will expire on 2019-03-26. To obtain a new or tweaked
      certbot      |    version of this certificate in the future, simply run certbot
      certbot      |    again. To non-interactively renew *all* of your certificates, run
      certbot      |    "certbot renew"
      certbot      |  - Your account credentials have been saved in your Certbot
      certbot      |    configuration directory at /etc/letsencrypt. You should make a
      certbot      |    secure backup of this folder now. This configuration directory will
      certbot      |    also contain certificates and private keys obtained by Certbot so
      certbot      |    making regular backups of this folder is ideal.
      certbot      |  - If you like Certbot, please consider supporting our work by:
      certbot      |
      certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
      certbot      |    Donating to EFF:                    https://eff.org/donate-le
      certbot      |
      certbot exited with code 0

      With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

      Step 5 — Modifying the Web Server Configuration and Service Definition

      Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS and specifying our SSL certificate and key locations. It will also involve specifying our Diffie-Hellman group, which we will use for Perfect Forward Secrecy.

      Since you are going to recreate the webserver service to include these additions, you can stop it now:

      • docker-compose stop webserver

      Next, create a directory in your current project directory for your Diffie-Hellman key:
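
      • mkdir dhparam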

      Generate your key with the openssl command:

      • sudo openssl dhparam -out /home/sammy/node_project/dhparam/dhparam-2048.pem 2048

      It will take a few moments to generate the key.

      To add the relevant Diffie-Hellman and SSL information to your Nginx configuration, first remove the Nginx configuration file you created earlier:
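
      • rm nginx-conf/nginx.conf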

      Open another version of the file:

      • nano nginx-conf/nginx.conf

      Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

      ~/node_project/nginx-conf/nginx.conf

      
      server {
              listen 80;
              listen [::]:80;
              server_name example.com www.example.com;
      
              location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
              }
      
              location / {
                      rewrite ^ https://$host$request_uri? permanent;
              }
      }
      
      server {
              listen 443 ssl http2;
              listen [::]:443 ssl http2;
              server_name example.com www.example.com;
      
              server_tokens off;
      
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      
              ssl_buffer_size 8k;
      
              ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
      
              ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
              ssl_prefer_server_ciphers on;
      
              ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
      
              ssl_ecdh_curve secp384r1;
              ssl_session_tickets off;
      
              ssl_stapling on;
              ssl_stapling_verify on;
              resolver 8.8.8.8;
      
              location / {
                      try_files $uri @nodejs;
              }
      
              location @nodejs {
                      proxy_pass http://nodejs:8080;
                      add_header X-Frame-Options "SAMEORIGIN" always;
                      add_header X-XSS-Protection "1; mode=block" always;
                      add_header X-Content-Type-Options "nosniff" always;
                      add_header Referrer-Policy "no-referrer-when-downgrade" always;
                      add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
                      # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
                      # enable strict transport security only if you understand the implications
              }
      
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
      }
      

      The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that directs HTTP requests to the root directory to HTTPS.

      The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04. This block also includes a series of options to ensure that you are using the most up-to-date SSL protocols and ciphers, and that OCSP stapling is turned on. OCSP stapling allows you to offer a time-stamped response from your certificate authority during the initial TLS handshake, which can speed up the authentication process.
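
      As an optional check once your site is live again (and assuming the openssl command-line tool is installed on your local machine), you can verify that the server staples an OCSP response during the handshake:

      • echo | openssl s_client -connect example.com:443 -servername example.com -status 2>/dev/null | grep -A 2 "OCSP response"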

      The block also specifies your SSL and Diffie-Hellman credentials and key locations.

      Finally, we've moved the proxy pass information to this block, including a location block with a try_files directive that points requests to our aliased Node.js application container, and a location block for that alias, which includes security headers that will enable us to get A ratings on the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out; enable it only if you understand the implications and have assessed its "preload" functionality.

      Once you have finished editing, save and close the file.

      Before recreating the webserver service, you will need to add a few things to the service definition in your docker-compose.yml file, including relevant port information for HTTPS and a Diffie-Hellman volume definition.

      Open the file:
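
      • nano docker-compose.yml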

      In the webserver service definition, add the following port mapping and the dhparam named volume:

      ~/node_project/docker-compose.yml

      ...
       webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - dhparam:/etc/ssl/certs
          depends_on:
            - nodejs
          networks:
            - app-network
      

      Next, add the dhparam volume to your volumes definitions:

      ~/node_project/docker-compose.yml

      ...
      volumes:
        ...
        dhparam:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/dhparam/
            o: bind
      

      Similarly to the web-root volume, the dhparam volume will mount the Diffie-Hellman key stored on the host to the webserver container.

      Save and close the file when you are finished editing.

      Recreate the webserver service:

      • docker-compose up -d --force-recreate --no-deps webserver

      Check your services with docker-compose ps:
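
      • docker-compose ps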

      You should see output indicating that your nodejs and webserver services are running:

      Output

        Name                 Command               State                     Ports
      ----------------------------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      nodejs      node app.js                      Up       8080/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp

      Finally, you can visit your domain to ensure that everything is working as expected. Navigate your browser to https://example.com, making sure to substitute example.com with your own domain name. You will see the following landing page:

      Application Landing Page

      You should also see the lock icon in your browser's security indicator. If you would like, you can navigate to the SSL Labs Server Test landing page or the Security Headers server test landing page. The configuration options we've included should earn your site an A rating on both.

      Step 6 — Renewing Certificates

      Let's Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will schedule a cron job using a script that will renew our certificates and reload our Nginx configuration.

      Open a script called ssl_renew.sh in your project directory:
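
      • nano ssl_renew.sh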

      Add the following code to the script to renew your certificates and reload your web server configuration:

      ~/node_project/ssl_renew.sh

      #!/bin/bash
      
      /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew --dry-run \
      && /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver
      

      In addition to specifying the location of our docker-compose binary, we also specify the location of our docker-compose.yml file in order to run docker-compose commands. In this case, we are using docker-compose run to start a certbot container and to override the command provided in our service definition with another: the renew subcommand, which will renew certificates that are close to expiring. We've included the --dry-run option here to test our script.

      The script then uses docker-compose kill to send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.

      Close the file when you are finished editing. Make it executable:
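
      • chmod +x ssl_renew.sh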

      Next, open your root crontab file to run the renewal script at a specified interval:
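
      • sudo crontab -e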

      If this is your first time editing this file, you will be asked to choose an editor:

      crontab

      no crontab for root - using an empty one
      Select an editor.  To change later, run 'select-editor'.
        1. /bin/ed
        2. /bin/nano        <---- easiest
        3. /usr/bin/vim.basic
        4. /usr/bin/vim.tiny
      Choose 1-4 [2]: 
      ...
      

      At the bottom of the file, add the following line:

      crontab

      ...
      */5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.

      After five minutes, check cron.log to see whether or not the renewal request has succeeded:

      • tail -f /var/log/cron.log

      You should see output confirming a successful renewal:

      Output

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates below have not been saved.)

      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/example.com/fullchain.pem (success)
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates above have not been saved.)
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      Killing webserver ... done

      You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

      crontab

      ...
      0 12 * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      You will also want to remove the --dry-run option from your ssl_renew.sh script:

      ~/node_project/ssl_renew.sh

      #!/bin/bash
      
      /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew \
      && /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver
      

      Your cron job will ensure that your Let's Encrypt certificates don't lapse by renewing them when they are eligible.

      Conclusion

      You have used containers to set up and run a Node application with an Nginx reverse proxy. You have also secured SSL certificates for your application's domain and set up a cron job to renew these certificates when necessary.

      If you are interested in learning more about Let's Encrypt plugins, please see our articles on using the Nginx plugin or the standalone plugin.

      To learn more about Docker Compose and multi-container applications, the Compose documentation is a great resource.




      How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes


      Introduction

      Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress Resources, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services. Popular Ingress Controllers include Nginx, Contour, HAProxy, and Traefik. Ingresses provide a more efficient and flexible alternative to setting up multiple LoadBalancer services, each of which uses its own dedicated Load Balancer.

      In this guide, we’ll set up the Kubernetes-maintained Nginx Ingress Controller, and create some Ingress Resources to route traffic to several dummy backend services. Once we’ve set up the Ingress, we’ll install cert-manager into our cluster to manage and provision TLS certificates for encrypting HTTP traffic to the Ingress.

      Prerequisites

      Before you begin with this guide, you should have the following available to you:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled
      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation.
      • A domain name and DNS A records which you can point to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
      • The Helm package manager installed on your local machine and Tiller installed on your cluster, as detailed in How To Install Software on Kubernetes Clusters with the Helm Package Manager.
      • The wget command-line utility installed on your local machine. You can install wget using the package manager built into your operating system.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Setting Up Dummy Backend Services

      Before we deploy the Ingress Controller, we’ll first create and roll out two dummy echo Services to which we’ll route external traffic using the Ingress. The echo Services will run the hashicorp/http-echo web server container, which returns a page containing a text string passed in when the web server is launched. To learn more about http-echo, consult its GitHub Repo, and to learn more about Kubernetes Services, consult Services from the official Kubernetes docs.

      On your local machine, create and edit a file called echo1.yaml using nano or your favorite editor:
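
      • nano echo1.yaml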

      Paste in the following Service and Deployment manifest:

      echo1.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo1
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo1
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo1
      spec:
        selector:
          matchLabels:
            app: echo1
        replicas: 2
        template:
          metadata:
            labels:
              app: echo1
          spec:
            containers:
            - name: echo1
              image: hashicorp/http-echo
              args:
              - "-text=echo1"
              ports:
              - containerPort: 5678
      

      In this file, we define a Service called echo1 which routes traffic to Pods with the app: echo1 label selector. It accepts TCP traffic on port 80 and routes it to port 5678, http-echo's default port.

      We then define a Deployment, also called echo1, which manages Pods with the app: echo1 Label Selector. We specify that the Deployment should have 2 Pod replicas, and that the Pods should start a container called echo1 running the hashicorp/http-echo image. We pass in the text parameter and set it to echo1, so that the http-echo web server returns echo1. Finally, we open port 5678 on the Pod container.

      Once you're satisfied with your dummy Service and Deployment manifest, save and close the file.

      Then, create the Kubernetes resources using kubectl create with the -f flag, specifying the file you just saved as a parameter:

      • kubectl create -f echo1.yaml

      You should see the following output:

      Output

      service/echo1 created
      deployment.apps/echo1 created

      Verify that the Service started correctly by confirming that it has a ClusterIP, the internal IP on which the Service is exposed:
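
      • kubectl get svc echo1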

      You should see the following output:

      Output

      NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      echo1   ClusterIP   10.245.222.129   <none>        80/TCP    60s

      This indicates that the echo1 Service is now available internally at 10.245.222.129 on port 80. It will forward traffic to containerPort 5678 on the Pods it selects.
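
      If you would like to see the two Pod replicas behind this Service, you can also list the Pods by their label:

      • kubectl get pods -l app=echo1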

      Now that the echo1 Service is up and running, repeat this process for the echo2 Service.

      Create and open a file called echo2.yaml:
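
      • nano echo2.yaml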

      echo2.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: echo2
      spec:
        ports:
        - port: 80
          targetPort: 5678
        selector:
          app: echo2
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echo2
      spec:
        selector:
          matchLabels:
            app: echo2
        replicas: 1
        template:
          metadata:
            labels:
              app: echo2
          spec:
            containers:
            - name: echo2
              image: hashicorp/http-echo
              args:
              - "-text=echo2"
              ports:
              - containerPort: 5678
      

      Here, we essentially use the same Service and Deployment manifest as above, but name and relabel the Service and Deployment echo2. In addition, to provide some variety, we create only 1 Pod replica. We ensure that we set the text parameter to echo2 so that the web server returns the text echo2.

      Save and close the file, and create the Kubernetes resources using kubectl:

      • kubectl create -f echo2.yaml

      You should see the following output:

      Output

      service/echo2 created
      deployment.apps/echo2 created

      Once again, verify that the Service is up and running:
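
      • kubectl get svc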

      You should see both the echo1 and echo2 Services with assigned ClusterIPs:

      Output

      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      echo1        ClusterIP   10.245.222.129   <none>        80/TCP    6m6s
      echo2        ClusterIP   10.245.128.224   <none>        80/TCP    6m3s
      kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP   4d21h

      Now that our dummy echo web services are up and running, we can move on to rolling out the Nginx Ingress Controller.

      Step 2 — Setting Up the Kubernetes Nginx Ingress Controller

      In this step, we'll roll out the Kubernetes-maintained Nginx Ingress Controller. Note that there are several Nginx Ingress Controllers; the Kubernetes community maintains the one used in this guide and Nginx Inc. maintains kubernetes-ingress. The instructions in this tutorial are based on those from the official Kubernetes Nginx Ingress Controller Installation Guide.

      The Nginx Ingress Controller consists of a Pod that runs the Nginx web server and watches the Kubernetes Control Plane for new and updated Ingress Resource objects. An Ingress Resource is essentially a list of traffic routing rules for backend Services. For example, an Ingress rule can specify that HTTP traffic arriving at the path /web1 should be directed towards the web1 backend web server. Using Ingress Resources, you can also perform host-based routing: for example, routing requests that hit web1.your_domain.com to the backend Kubernetes Service web1.
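
      To make the rule format concrete, a minimal path-based Ingress might look like the following sketch, where web1 is a hypothetical backend Service used purely for illustration:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: web-ingress
      spec:
        rules:
        - http:
            paths:
            - path: /web1
              backend:
                serviceName: web1
                servicePort: 80

      In this guide, we'll use host-based rules for the echo Services instead.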

      In this case, because we’re deploying the Ingress Controller to a DigitalOcean Kubernetes cluster, the Controller will create a LoadBalancer Service that spins up a DigitalOcean Load Balancer to which all external traffic will be directed. This Load Balancer will route external traffic to the Ingress Controller Pod running Nginx, which then forwards traffic to the appropriate backend Services.

      We'll begin by first creating the Kubernetes resources required by the Nginx Ingress Controller. These consist of ConfigMaps containing the Controller's configuration, Role-based Access Control (RBAC) Roles to grant the Controller access to the Kubernetes API, and the actual Ingress Controller Deployment. To see a full list of these required resources, consult the manifest from the Kubernetes Nginx Ingress Controller’s GitHub repo.

      To create these mandatory resources, use kubectl apply and the -f flag to specify the manifest file hosted on GitHub:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

      We use apply instead of create here so that in the future we can incrementally apply changes to the Ingress Controller objects instead of completely overwriting them. To learn more about apply, consult Managing Resources from the official Kubernetes docs.

      You should see the following output:

      Output

      namespace/ingress-nginx created
      configmap/nginx-configuration created
      configmap/tcp-services created
      configmap/udp-services created
      serviceaccount/nginx-ingress-serviceaccount created
      clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
      role.rbac.authorization.k8s.io/nginx-ingress-role created
      rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
      clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
      deployment.extensions/nginx-ingress-controller created

      This output also serves as a convenient summary of all the Ingress Controller objects created from the mandatory.yaml manifest.
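
      As a concrete illustration of the difference, re-running the same apply command later is harmless; Kubernetes should simply report each untouched object as unchanged, whereas kubectl create would fail with "already exists" errors:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

      Output

      namespace/ingress-nginx unchanged
      . . .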

      Next, we'll create the Ingress Controller LoadBalancer Service, which will create a DigitalOcean Load Balancer that will load balance and route HTTP and HTTPS traffic to the Ingress Controller Pod deployed in the previous command.

      To create the LoadBalancer Service, once again kubectl apply a manifest file containing the Service definition:

      • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

      You should see the following output:

      Output

      service/ingress-nginx created

      Now, confirm that the DigitalOcean Load Balancer was successfully created by fetching the Service details with kubectl:

      • kubectl get svc --namespace=ingress-nginx

      You should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer:

      Output

      NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
      ingress-nginx   LoadBalancer   10.245.247.67   203.0.113.0   80:32486/TCP,443:32096/TCP   20h

      Note down the Load Balancer's external IP address, as you'll need it in a later step.

      This load balancer receives traffic on HTTP and HTTPS ports 80 and 443, and forwards it to the Ingress Controller Pod. The Ingress Controller will then route the traffic to the appropriate backend Service.
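
      Before any Ingress rules exist, requests to the Load Balancer's IP should fall through to the controller's default backend, so a quick smoke test (substituting your own external IP for 203.0.113.0) looks like this:

      • curl http://203.0.113.0

      Output

      default backend - 404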

      We can now point our DNS records at this external Load Balancer and create some Ingress Resources to implement traffic routing rules.

      Step 3 — Creating the Ingress Resource

      Let's begin by creating a minimal Ingress Resource to route traffic directed at a given subdomain to a corresponding backend Service.

      In this guide, we'll use the test domain example.com. You should substitute this with the domain name you own.

      We'll first create a simple rule to route traffic directed at echo1.example.com to the echo1 backend service and traffic directed at echo2.example.com to the echo2 backend service.

      Begin by opening up a file called echo_ingress.yaml in your favorite editor:

      nano echo_ingress.yaml

      Paste in the following ingress definition:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
      spec:
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      When you've finished editing your Ingress rules, save and close the file.

      Here, we've specified that we'd like to create an Ingress Resource called echo-ingress, and route traffic based on the Host header. An HTTP request Host header specifies the domain name of the target server. To learn more about Host request headers, consult the Mozilla Developer Network definition page. Requests with host echo1.example.com will be directed to the echo1 backend set up in Step 1, and requests with host echo2.example.com will be directed to the echo2 backend.

      You can now create the Ingress using kubectl:

      • kubectl apply -f echo_ingress.yaml

      You'll see the following output confirming the Ingress creation:

      Output

      ingress.extensions/echo-ingress created
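
      If your DNS records aren't in place yet, you can still exercise these rules immediately: since routing is based on the Host header, you can set that header manually and point curl at the Load Balancer's external IP from Step 2 (substitute your own IP for 203.0.113.0):

      • curl -H "Host: echo1.example.com" http://203.0.113.0

      This should return echo1, because the Ingress Controller never consults DNS, only the Host header of the incoming request.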

      To test the Ingress, navigate to your DNS management service and create A records for echo1.example.com and echo2.example.com pointing to the DigitalOcean Load Balancer's external IP. The Load Balancer's external IP is the external IP address for the ingress-nginx Service, which we fetched in the previous step. If you are using DigitalOcean to manage your domain's DNS records, consult How to Manage DNS Records to learn how to create A records.
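
      Before testing, you can confirm that the new records have propagated; one quick check, assuming the dig utility (from the dnsutils or bind-utils package) is installed, is:

      • dig +short echo1.example.com

      This should print the Load Balancer's external IP. If it prints nothing, wait a few minutes for DNS propagation and try again.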

      Once you've created the necessary echo1.example.com and echo2.example.com DNS records, you can test the Ingress Controller and Resource you've created using the curl command line utility.

      From your local machine, curl the echo1 Service:

      • curl echo1.example.com

      You should get the following response from the echo1 service:

      Output

      echo1

      This confirms that your request to echo1.example.com is being correctly routed through the Nginx ingress to the echo1 backend Service.

      Now, perform the same test for the echo2 Service:

      • curl echo2.example.com

      You should get the following response from the echo2 Service:

      Output

      echo2

      This confirms that your request to echo2.example.com is being correctly routed through the Nginx ingress to the echo2 backend Service.

      At this point, you've successfully set up a basic Nginx Ingress to perform virtual host-based routing. In the next step, we'll install cert-manager using Helm to provision TLS certificates for our Ingress and enable the more secure HTTPS protocol.

      Step 4 — Installing and Configuring Cert-Manager

      In this step, we'll use Helm to install cert-manager into our cluster. cert-manager is a Kubernetes service that provisions TLS certificates from Let's Encrypt and other certificate authorities and manages their lifecycles. Certificates can be requested and configured by annotating Ingress Resources with the certmanager.k8s.io/issuer annotation, appending a tls section to the Ingress spec, and configuring one or more Issuers to specify your preferred certificate authority. To learn more about Issuer objects, consult the official cert-manager documentation on Issuers.

      We'll begin by using Helm to install cert-manager into our cluster:

      • helm install --name cert-manager --namespace kube-system --version v0.4.1 stable/cert-manager

      You should see the following output:

      Output

      . . .
      NOTES:
      cert-manager has been deployed successfully!

      In order to begin issuing certificates, you will need to set up a ClusterIssuer
      or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

      More information on the different types of issuers and how to configure them
      can be found in our documentation:

      https://cert-manager.readthedocs.io/en/latest/reference/issuers.html

      For information on how to configure cert-manager to automatically provision
      Certificates for Ingress resources, take a look at the `ingress-shim`
      documentation:

      https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html

      This indicates that the cert-manager installation was successful.
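
      As an additional sanity check, you can confirm that the cert-manager Pod is running in the kube-system namespace (the app=cert-manager label here assumes the chart's default labels):

      • kubectl get pods --namespace kube-system -l app=cert-manager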

      Before we begin issuing certificates for our Ingress hosts, we need to create an Issuer, which specifies the certificate authority from which signed x509 certificates can be obtained. In this guide, we'll use the Let's Encrypt certificate authority, which provides free TLS certificates and offers both a staging server for testing your certificate configuration, and a production server for rolling out verifiable TLS certificates.

      Let's create a test Issuer to make sure the certificate provisioning mechanism is functioning correctly. Open a file named staging_issuer.yaml in your favorite text editor:

      nano staging_issuer.yaml
      

      Paste in the following ClusterIssuer manifest:

      staging_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-staging
      spec:
        acme:
          # The ACME server URL
          server: https://acme-staging-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-staging
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Here we specify that we'd like to create a ClusterIssuer object called letsencrypt-staging, and use the Let's Encrypt staging server. We'll later use the production server to roll out our certificates, but the production server rate-limits requests made against it, so for testing purposes it's best to use the staging URL.

      We then specify an email address to register with ACME, and create a Kubernetes Secret called letsencrypt-staging to store the ACME account's private key. We also enable the HTTP-01 challenge mechanism. To learn more about these parameters, consult the official cert-manager documentation on Issuers.

      Roll out the ClusterIssuer using kubectl:

      • kubectl create -f staging_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-staging created
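
      You can optionally describe the new ClusterIssuer to confirm that its ACME account was registered (the exact status fields vary by cert-manager version, but you should see a Ready condition with status True):

      • kubectl describe clusterissuer letsencrypt-staging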

      Now that we've created our Let's Encrypt staging Issuer, we're ready to modify the Ingress Resource we created above and enable TLS encryption for the echo1.example.com and echo2.example.com paths.

      Open up echo_ingress.yaml once again in your favorite editor:

      nano echo_ingress.yaml

      Add the following to the Ingress Resource manifest:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-staging
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-staging
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here we add some annotations to specify the ingress.class, which determines the Ingress Controller that should be used to implement the Ingress Rules. In addition, we define the cluster-issuer to be letsencrypt-staging, the certificate Issuer we just created.

      Finally, we add a tls block to specify the hosts for which we want to acquire certificates, and to name the Secret in which cert-manager will store the issued certificate and its private key.

      When you're done making changes, save and close the file.

      We'll now update the existing Ingress Resource using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      You should see the following output:

      Output

      ingress.extensions/echo-ingress configured

      You can use kubectl describe to track the state of the Ingress changes you've just applied:

      • kubectl describe ingress

      You should see the following output:

      Output

      Events:
        Type    Reason             Age               From                      Message
        ----    ------             ----              ----                      -------
        Normal  CREATE             14m               nginx-ingress-controller  Ingress default/echo-ingress
        Normal  UPDATE             1m (x2 over 13m)  nginx-ingress-controller  Ingress default/echo-ingress
        Normal  CreateCertificate  1m                cert-manager              Successfully created Certificate "letsencrypt-staging"

      Once the certificate has been successfully created, you can run an additional describe on it to further confirm its successful creation:

      • kubectl describe certificate

      You should see the following output in the Events section:

      Output

      Events:
        Type    Reason          Age  From          Message
        ----    ------          ---- ----          -------
        Normal  CreateOrder     50s  cert-manager  Created new ACME order, attempting validation...
        Normal  DomainVerified  15s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
        Normal  DomainVerified  3s   cert-manager  Domain "echo1.example.com" verified with "http-01" validation
        Normal  IssueCert       3s   cert-manager  Issuing certificate...
        Normal  CertObtained    1s   cert-manager  Obtained certificate from ACME server
        Normal  CertIssued      1s   cert-manager  Certificate issued successfully

      This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for the two domains configured.
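
      You can also verify that cert-manager stored the issued certificate in the Secret named by the tls block, letsencrypt-staging in this case:

      • kubectl get secret letsencrypt-staging

      The Secret should be of type kubernetes.io/tls and contain tls.crt and tls.key entries.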

      We're now ready to send a request to a backend echo server to test that HTTPS is functioning correctly.

      Run the following wget command to send a request to echo1.example.com and print the response headers to STDOUT:

      • wget --save-headers -O- echo1.example.com

      You should see the following output:

      Output

      URL transformed to HTTPS due to an HSTS policy
      --2018-12-11 14:38:24--  https://echo1.example.com/
      Resolving echo1.example.com (echo1.example.com)... 203.0.113.0
      Connecting to echo1.example.com (echo1.example.com)|203.0.113.0|:443... connected.
      ERROR: cannot verify echo1.example.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
        Unable to locally verify the issuer's authority.
      To connect to echo1.example.com insecurely, use `--no-check-certificate'.

      This indicates that HTTPS has successfully been enabled, but the certificate cannot be verified as it's a fake temporary certificate issued by the Let's Encrypt staging server.
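
      If you'd like to inspect the staging certificate directly, one way, sketched here with the standard openssl client, is to pull the certificate from the server and print its issuer:

      • echo | openssl s_client -connect echo1.example.com:443 -servername echo1.example.com 2>/dev/null | openssl x509 -noout -issuer

      This should print an issuer containing Fake LE Intermediate X1, confirming that the staging chain is in place.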

      Now that we've tested that everything works using this temporary fake certificate, we can roll out production certificates for the two hosts echo1.example.com and echo2.example.com.

      Step 5 — Rolling Out Production Issuer

      In this step we’ll modify the procedure used to provision staging certificates, and generate a valid, verifiable production certificate for our Ingress hosts.

      To begin, we'll first create a production certificate ClusterIssuer.

      Open a file called prod_issuer.yaml in your favorite editor:

      nano prod_issuer.yaml
      

      Paste in the following manifest:

      prod_issuer.yaml

      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          # The ACME server URL
          server: https://acme-v02.api.letsencrypt.org/directory
          # Email address used for ACME registration
          email: your_email_address_here
          # Name of a secret used to store the ACME account private key
          privateKeySecretRef:
            name: letsencrypt-prod
          # Enable the HTTP-01 challenge provider
          http01: {}
      

      Note the different ACME server URL, and the letsencrypt-prod secret key name.

      When you're done editing, save and close the file.

      Now, roll out this Issuer using kubectl:

      • kubectl create -f prod_issuer.yaml

      You should see the following output:

      Output

      clusterissuer.certmanager.k8s.io/letsencrypt-prod created

      Update echo_ingress.yaml to use this new Issuer. Open the file in your editor:

      nano echo_ingress.yaml

      Make the following changes to the file:

      echo_ingress.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: echo-ingress
        annotations:  
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
      spec:
        tls:
        - hosts:
          - echo1.example.com
          - echo2.example.com
          secretName: letsencrypt-prod
        rules:
        - host: echo1.example.com
          http:
            paths:
            - backend:
                serviceName: echo1
                servicePort: 80
        - host: echo2.example.com
          http:
            paths:
            - backend:
                serviceName: echo2
                servicePort: 80
      

      Here, we update both the cluster-issuer annotation and the TLS secretName to letsencrypt-prod.

      Once you're satisfied with your changes, save and close the file.

      Roll out the changes using kubectl apply:

      • kubectl apply -f echo_ingress.yaml

      Output

      ingress.extensions/echo-ingress configured

      Wait a couple of minutes for the Let's Encrypt production server to issue the certificate. You can track its progress using kubectl describe on the certificate object:

      • kubectl describe certificate letsencrypt-prod

      Once you see the following output, the certificate has been issued successfully:

      Output

      Events:
        Type    Reason          Age    From          Message
        ----    ------          ----   ----          -------
        Normal  CreateOrder     4m4s   cert-manager  Created new ACME order, attempting validation...
        Normal  DomainVerified  3m30s  cert-manager  Domain "echo2.example.com" verified with "http-01" validation
        Normal  DomainVerified  3m18s  cert-manager  Domain "echo1.example.com" verified with "http-01" validation
        Normal  IssueCert       3m18s  cert-manager  Issuing certificate...
        Normal  CertObtained    3m16s  cert-manager  Obtained certificate from ACME server
        Normal  CertIssued      3m16s  cert-manager  Certificate issued successfully

      We'll now perform a test using curl to verify that HTTPS is working correctly:

      • curl echo1.example.com

      You should see the following:

      Output

      <html>
      <head><title>308 Permanent Redirect</title></head>
      <body>
      <center><h1>308 Permanent Redirect</h1></center>
      <hr><center>nginx/1.15.6</center>
      </body>
      </html>

      This indicates that HTTP requests are being redirected to use HTTPS.

      Run curl on https://echo1.example.com:

      • curl https://echo1.example.com

      You should now see the following output:

      Output

      echo1

      You can run the previous command with the verbose -v flag to dig deeper into the certificate handshake and to verify the certificate information.
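
      For example, the following should show the TLS handshake along with the certificate's subject, validity dates, and a Let's Encrypt issuer (the exact format of the verbose output depends on your curl and TLS library versions):

      • curl -v https://echo1.example.com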

      At this point, you've successfully configured HTTPS using a Let's Encrypt certificate for your Nginx Ingress.

      Conclusion

      In this guide, you set up an Nginx Ingress to load balance and route external requests to backend Services inside of your Kubernetes cluster. You also secured the Ingress by installing the cert-manager certificate provisioner and setting up a Let's Encrypt certificate for two host paths.

      There are many alternatives to the Nginx Ingress Controller. To learn more, consult Ingress controllers from the official Kubernetes documentation.


