
      How To Secure Nginx with Let’s Encrypt on CentOS 8



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Let’s Encrypt is a certificate authority (CA) that provides free certificates for Transport Layer Security (TLS) encryption. It simplifies the process of creation, validation, signing, installation, and renewal of certificates by providing a software client—Certbot.

      In this tutorial you’ll set up a TLS/SSL certificate from Let’s Encrypt on a CentOS 8 server running Nginx as a web server. Additionally, you will automate the certificate renewal process using a cron job.

      Prerequisites

      In order to complete this guide, you will need:

      • One CentOS 8 server set up by following the CentOS 8 Initial Server Setup guide, including a non-root user with sudo privileges and a firewall.
      • Nginx installed on the CentOS 8 server with a configured server block. You can learn how to set this up by following our tutorial How To Install Nginx on CentOS 8.
      • A fully registered domain name. This tutorial will use your_domain as an example throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
        • An A record with your_domain pointing to your server’s public IP address.
        • An A record with www.your_domain pointing to your server’s public IP address.

      Step 1 — Installing the Certbot Let’s Encrypt Client

      First, you need to install the certbot software package. Log in to your CentOS 8 machine as your non-root user:
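
      • ssh sammy@your_server_ip

      Substitute your own non-root username for sammy and your server’s IP address for your_server_ip.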

      The certbot package is not available through the package manager by default. You will need to enable the EPEL repository to install Certbot.

      To add the CentOS 8 EPEL repository, run the following command:

      • sudo dnf install epel-release

      When asked to confirm the installation, type y and press ENTER.

      Now that you have access to the extra repository, install all of the required packages:

      • sudo dnf install certbot python3-certbot-nginx

      This will install Certbot itself and the Nginx plugin for Certbot, which is needed to run the program.

      The installation process will ask you about importing a GPG key. Confirm it so the installation can complete.

      You have now installed the Let’s Encrypt client, but before obtaining certificates, you need to make sure that all required ports are open. To do this, you will update your firewall settings in the next step.

      Step 2 — Updating the Firewall Rules

      Since your prerequisite setup enables firewalld, you will need to adjust the firewall settings in order to allow external connections on your Nginx web server.

      To check which services are already enabled, run the command:

      • sudo firewall-cmd --permanent --list-all

      You’ll receive output like this:

      Output

      public
        target: default
        icmp-block-inversion: no
        interfaces:
        sources:
        services: cockpit dhcpv6-client http ssh
        ports:
        protocols:
        masquerade: no
        forward-ports:
        source-ports:
        icmp-blocks:
        rich rules:

      If you do not see http in the services list, enable it by running:

      • sudo firewall-cmd --permanent --add-service=http

      To allow https traffic, run the following command:

      • sudo firewall-cmd --permanent --add-service=https

      To apply the changes, you’ll need to reload the firewall service:

      • sudo firewall-cmd --reload

      Now that you’ve opened up your server to https traffic, you’re ready to run Certbot and fetch your certificates.

      Step 3 — Obtaining a Certificate

      Now you can request an SSL certificate for your domain.

      When you generate an SSL certificate for Nginx using the certbot Let’s Encrypt client, the client automatically obtains and installs a new SSL certificate that is valid for the domains you provide as parameters.

      If you want to install a single certificate that is valid for multiple domains or subdomains, you can pass them as additional parameters to the command. The first domain name in the list of parameters will be the base domain used by Let’s Encrypt to create the certificate, and for that reason you will pass the top-level domain name as first in the list, followed by any additional subdomains or aliases:

      • sudo certbot --nginx -d your_domain -d www.your_domain

      This runs certbot with the --nginx plugin, and the base domain will be your_domain. To execute the interactive installation and obtain a certificate that covers only a single domain, run the certbot command with:

      • sudo certbot --nginx -d your_domain

      The certbot utility can also prompt you for domain information during the certificate request procedure. To use this functionality, call certbot without any domains:
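
      • sudo certbot --nginx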

      You will receive a step-by-step guide to customize your certificate options. Certbot will ask you to provide an email address for lost key recovery and notices and to agree to the terms of service. If you did not specify your domains on the command line, Certbot will look for a server_name directive and will give you a list of the domain names found. If your server block files do not specify the domain they serve explicitly using the server_name directive, Certbot will ask you to provide domain names manually.

      For better security, Certbot will automatically configure a redirect of all traffic on port 80 to port 443.

      When the installation successfully finishes, you will receive a message similar to this:

      Output

      IMPORTANT NOTES:
       - Congratulations! Your certificate and chain have been saved at:
         /etc/letsencrypt/live/your_domain/fullchain.pem
         Your key file has been saved at:
         /etc/letsencrypt/live/your_domain/privkey.pem
         Your cert will expire on 2021-02-26. To obtain a new or tweaked
         version of this certificate in the future, simply run certbot
         again with the "certonly" option. To non-interactively renew *all*
         of your certificates, run "certbot renew"
       - If you like Certbot, please consider supporting our work by:

         Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
         Donating to EFF:                    https://eff.org/donate-le

      The generated certificate files will be available within a subdirectory named after your base domain in the /etc/letsencrypt/live directory.

      Now that you have finished using Certbot, you can check your SSL certificate status. Verify the status of your SSL certificate by opening the following link in your preferred web browser (don’t forget to replace your_domain with your base domain):

      https://www.ssllabs.com/ssltest/analyze.html?d=your_domain
      

      This site contains an SSL test from SSL Labs, which will start automatically. At the time of this writing, default settings will give an A rating.

      You can now access your website using the https prefix. However, you must renew certificates periodically to keep this setup working. In the next step, you will automate this renewal process.

      Step 4 — Setting Up Auto-Renewal

      Let’s Encrypt certificates are valid for 90 days, but it’s recommended that you renew the certificates every 60 days to allow for a margin of error. The Certbot Let’s Encrypt client has a renew command that automatically checks the currently installed certificates and tries to renew them if they are less than 30 days away from the expiration date.

      You can test automatic renewal for your certificates by running this command:

      • sudo certbot renew --dry-run

      The output will be similar to this:

      Output

      Saving debug log to /var/log/letsencrypt/letsencrypt.log

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Processing /etc/letsencrypt/renewal/your_domain.conf
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Cert not due for renewal, but simulating renewal for dry run
      Plugins selected: Authenticator nginx, Installer nginx
      Renewing an existing certificate
      Performing the following challenges:
      http-01 challenge for your_domain
      Waiting for verification...
      Cleaning up challenges

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      new certificate deployed with reload of nginx server; fullchain is
      /etc/letsencrypt/live/your_domain/fullchain.pem
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates below have not been saved.)

      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/your_domain/fullchain.pem (success)
      ...

      Notice that if you created a bundled certificate with multiple domains, only the base domain name will show in the output, but the renewal will work for all domains included in this certificate.

      A practical way to ensure your certificates will not get outdated is to create a cron job that will periodically execute the automatic renewal command for you. Since the renewal first checks for the expiration date and only executes the renewal if the certificate is less than 30 days away from expiration, it is safe to create a cron job that runs every week, or even every day.

      Edit the crontab to create a new job that will run the renewal twice per day. To edit the crontab for the root user, run:
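
      • sudo crontab -e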

      Your text editor will open the default crontab, which is an empty text file at this point. Enter insert mode by pressing i and add in the following line:

      crontab

      0 0,12 * * * python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew --quiet
      

      When you’re finished, press ESC to leave insert mode, then :wq and ENTER to save and exit the file. To learn more about the text editor Vi and its successor Vim, check out our Installing and Using the Vim Text Editor on a Cloud Server tutorial.

      This will create a new cron job that will execute at noon and midnight every day. python -c 'import random; import time; time.sleep(random.random() * 3600)' will select a random minute within the hour for your renewal tasks.

      The renew command for Certbot will check all certificates installed on the system and update any that are set to expire in less than thirty days. --quiet tells Certbot not to output information or wait for user input.

      More detailed information about renewal can be found in the Certbot documentation.

      Conclusion

      In this guide, you installed the Let’s Encrypt client Certbot, downloaded SSL certificates for your domain, and set up automatic certificate renewal. If you have any questions about using Certbot, you can check the official Certbot documentation.

      You can also check the official Let’s Encrypt blog for important updates from time to time.




      How To Secure Node.js Applications with a Content Security Policy


      The author selected the Free Software Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      When the browser loads a page, it executes a lot of code to render the content. The code could be from the same origin as the root document, or a different origin. By default, the browser does not distinguish between the two and executes any code requested by a page regardless of the source. Attackers exploit this behavior to inject malicious scripts into the page, which are then executed because the browser has no way of determining whether the content is harmful. These situations are where a Content Security Policy (CSP) can provide protection.

      A CSP is an HTTP header that provides an extra layer of security against code-injection attacks, such as cross-site scripting (XSS), clickjacking, and other similar exploits. It facilitates the creation of an “allowlist” of trusted content and blocks the execution of code from sources not present in the allowlist. It also reports any policy violations to a URL of your choice, so that you can keep abreast of potential security attacks.

      With the CSP header, you can specify approved sources for content on your site that the browser can load. Any code that is not from the approved sources will be blocked from executing, which makes it considerably more difficult for an attacker to inject content and siphon data.

      In this tutorial, you’ll review the different protections the CSP header offers by implementing one in an example Node.js application. You’ll also collect JSON reports of CSP violations to catch problems and fix exploits quickly.

      Prerequisites

      To follow this tutorial, you will need the following:

      • A recent version of Node.js installed on your machine. Follow the steps in the relevant How To Install Node.js tutorial for your operating system to set up a Node.js development environment.

      You should also use a recent browser version, preferably Chrome, as it has the best support for CSP level 3 directives at the time of writing this article (November 2020). Also, make sure to disable any third-party extensions while testing the CSP implementation so that they don’t interfere with the violation reports rendered in the console.

      Step 1 — Setting Up the Demo Project

      To demonstrate the process of creating a Content Security Policy, we’ll work through the entire process of implementing one for this demo project. It’s a one-page website with a variety of content that approximates a typical website or application. It includes a small Vue.js application, YouTube embeds, and some images sourced from Unsplash. It also uses Google fonts and the Bootstrap framework, which is loaded over a content delivery network (CDN).

      In this step, you’ll set up the demo project on your test server or local machine and view it in your browser.

      First, clone the project to your filesystem using the following command:

      • git clone https://github.com/do-community/csp-demo

      Once your project directory is set up, change into it with the following command:
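
      • cd csp-demo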

      Next, install the dependencies specified in the package.json file with the following command. You use the express package to set up the web server, while nodemon helps to automatically restart the node application when it detects file changes in the directory:
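
      • npm install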

      Once you’ve installed the dependencies, enter the following command to start the web server on port 5500:
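
      • npm start

      This assumes the project’s package.json defines a start script (for example, one that runs server.js with nodemon).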

      You can now visit your_server_ip:5500 or localhost:5500 in your browser to view the demo page. You will find the text Hello World!, a YouTube embed, and some images on the page.

      CSP Demo

      In the next section, we’ll implement a CSP policy that covers only the most basic protections. We’ll then build on that in the subsequent sections as we uncover all the legitimate resources that we need to allow on the page.

      Step 2 — Implementing a Basic CSP

      Let’s go ahead and write a CSP policy that restricts fonts, images, scripts, styles, and embeds to those originating from the current host only. The following is the response header that achieves this:

      Content-Security-Policy: default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; frame-src 'self';
      

      Here’s an explanation of the policy directives in this header:

      • font-src defines the sources from which fonts can be loaded.
      • img-src defines the sources from which image loading is permitted.
      • script-src controls any script-loading privileges on a web page.
      • style-src is the directive for allowing stylesheet sources.
      • frame-src defines allowed sources for frame embeds. (It was deprecated in CSP level 2, but reinstated in level 3.)
      • default-src defines a fallback policy for certain directives if they are not explicitly specified in the header. Here is a complete list of the directives that fall back to default-src.

      In this example, all the specified directives are assigned the 'self' keyword in their source list. This indicates that only resources from the current host (including the URL scheme and port number) should be allowed to execute. For example, script-src 'self' allows the execution of scripts from the current host, but it blocks all other script sources.

      Let’s go ahead and add the header to our Node.js project.

      Leave your app running and open a new terminal window to work with your server.js file:
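
      • nano server.js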

      Next, add the CSP header from the example in an Express middleware layer. This ensures that you’re including the header in every response from the server:

      server.js

      const express = require('express');
      const bodyParser = require('body-parser');
      const path = require('path');
      const app = express();
      
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy',
          "default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; frame-src 'self'"
        );
        next();
      });
      
      app.use(bodyParser.json());
      app.use(express.static(path.join(__dirname)));
      
      app.get('/', (req, res) => {
        res.sendFile(path.join(__dirname + '/index.html'));
      });
      
      const server = app.listen(process.env.PORT || 5500, () => {
        const { port } = server.address();
        console.log(`Server running on PORT ${port}`);
      });
      

      Save the file and reload the project in your browser. You’ll notice that the page is completely broken.

      Our CSP header is working as expected and all the external sources that we included on the page have been blocked from loading because they violate the defined policy. However, this is not an ideal way to test a brand-new policy since it can break a website when violations occur.

      Broken page after adding basic CSP

      This is why the Content-Security-Policy-Report-Only header exists. You can use it instead of Content-Security-Policy to prevent the browser from enforcing the policy, while still reporting the violations that occur—this means that you can refine the policy without putting your site at risk. Once you’re happy with your policy, you can switch back to the enforcing header so that the protections are activated.

      Go ahead and replace the Content-Security-Policy header with Content-Security-Policy-Report-Only in your server.js file:

      Add the following highlighted code:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self'; frame-src 'self'"
        );
        next();
      });
      . . .
      

      Save the file and reload the page in your browser. The page returns to a working state, but the browser console still reports the CSP violations. Each violation is prefixed with [Report Only] to indicate that the policy is not enforced.

      Report only mode is now active

      In this section, we created the initial implementation of our CSP and set it to report-only mode so that we can refine it without causing the site to break. In the next section, we’ll fix the violations triggered through our initial CSP.

      Step 3 — Fixing Policy Violations

      The policy we implemented in the previous section triggered several violations because we restricted all resources to the origin only—however, we have several third-party assets on the page.

      The two ways to fix CSP violations are: approving the sources in the policy, or removing the code that triggers the violations. Since legitimate resources are triggering all the violations, we’ll concentrate mainly on the former option in this section.

      Open up your browser console. It will display all the current violations of the CSP. Let’s fix each of these issues.

      Allowing the Stylesheets

      The first two violations in the console are from the Google fonts and Bootstrap stylesheets, which you’re loading from https://fonts.googleapis.com and https://cdn.jsdelivr.net respectively. You can allow them both on the page through the style-src directive:

      style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net;
      

      This specifies that CSS files from the origin host, https://fonts.googleapis.com, and https://cdn.jsdelivr.net should be allowed to load on the page. This policy is quite broad, because it allows any stylesheet from the allowlisted domains (not just the ones you’re currently using).

      We can be more specific by using the exact file or directory we’d like to allow instead:

      style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css;
      

      Now only the exact stylesheet specified will be allowed to load. All other stylesheets will be blocked, even if they originate from https://cdn.jsdelivr.net.

      Update the CSP header with the revised style-src directive as follows. When you reload the page, both violations will be resolved:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self'; img-src 'self'; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self';"
        );
        next();
      });
      . . .
      

      Allowing the Image Sources

      The images you’re using on the page are from a single source: https://images.unsplash.com. Let’s allow it through the img-src directive as follows:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self'; img-src 'self' https://images.unsplash.com; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self'"
        );
        next();
      });
      . . .
      

      Allowing the YouTube Embed

      You can allow valid sources for nested browsing contexts, which use elements such as <iframe>, through the frame-src directive. If this directive is absent, the browser will look for the child-src directive, which subsequently falls back to the default-src directive.

      Our current policy limits frame embeds to the origin host. Let’s add https://www.youtube.com to the allowlist so that the CSP doesn’t block YouTube embeds from loading once we enforce the policy:

      frame-src 'self' https://www.youtube.com;
      

      Note that the www subdomain is significant here. If you have an embed from https://youtube.com, it will be blocked according to this policy unless you also add https://youtube.com to the allowlist:

      frame-src 'self' https://www.youtube.com https://youtube.com;
      

      Here’s the updated CSP header. Change the frame-src directive as follows:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self'; img-src 'self' https://images.unsplash.com; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
        );
        next();
      });
      . . .
      

      Allowing the Font Files

      Google fonts violations

      The Google fonts stylesheet contains references to several font files from https://fonts.gstatic.com. You will find that these files currently violate the defined font-src policy (currently 'self'), so you need to account for them in your revised policy:

      font-src 'self' https://fonts.gstatic.com;
      

      Update the font-src directive in the CSP header as follows:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self'; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
        );
        next();
      });
      . . .
      

      Once you reload the page, the console will no longer report violations for the Google fonts files.

      Allowing the Vue.js Script

      A Vue.js script loaded over a CDN is rendering the Hello World! text at the top of the page. We’ll allow its execution on the page through the script-src directive. As mentioned earlier, it’s important to be specific when allowing CDN sources so we don’t open up our site to other, potentially malicious scripts that are hosted on that domain.

      script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/vue.min.js;
      

      In this example, you’re only allowing the exact script at that URL to execute on the page. If you want to switch between development and production builds, you’ll need to add the other script URL to the allowlist as well, or you can allow all the resources present in the /dist location, to cover both cases at once:

      script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/;
      

      Here’s the updated CSP header with the relevant changes:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
        );
        next();
      });
      . . .
      

      At this point, we’ve successfully allowed all the external files and scripts that our page relies on. But we still have one more CSP violation to resolve due to the presence of an inline script on the page. We’ll explore a few solutions to this problem in the next section.

      Step 4 — Handling Inline Sources

      Although you can approve inline code (such as JavaScript code in a <script> tag) within a CSP using the 'unsafe-inline' keyword, it is not recommended because it greatly increases the risk of a code-injection attack.

      The following example policy allows the execution of any inline script on the page, but this is not safe for the reasons mentioned above:

      script-src 'self' 'unsafe-inline' https://unpkg.com/vue@3.0.2/dist/;
      

      The best way to avoid using unsafe-inline is to move the inline code to an external file and reference it that way. This is a better approach for caching, minification, and maintainability, and it also makes the CSP easier to modify in the future.

      However, if you absolutely must use inline code, there are two major ways to add it to your allowlist safely.

      Option 1 — Using a Hash

      This method requires that you calculate a SHA hash that is based on the script itself and then add it to the script-src directive. In recent versions of Chrome, you don’t even need to generate the hash yourself as it’s already included in the CSP violation error in the console:

      [Report Only] Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' https://unpkg.com/vue@3.0.2/dist/". Either the 'unsafe-inline' keyword, a hash ('sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='), or a nonce ('nonce-...') is required to enable inline execution.
      

      The highlighted section here is the exact SHA256 hash that you would need to add to the script-src directive to allow the execution of the specific inline script that triggered the violation.

      Copy the hash and add it to your CSP as follows:

      script-src 'self' https://unpkg.com/vue@3.0.2/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM=';
      

      The disadvantage to this approach is that if the contents of the script changes, the generated hash will be different, which will trigger a violation.

      Option 2 — Using a Nonce

      The second way to allow the execution of inline code is by using a nonce. These are random strings that you can use to allow a complete block of code regardless of its content.

      Here’s an example of a nonce value in use:

      script-src 'self' https://unpkg.com/vue@3.0.2/dist/ 'nonce-EDNnf03nceIOfn39fn3e9h3sdfa'
      

      The value of the nonce in the CSP must match the nonce attribute on the script:

      <script nonce="EDNnf03nceIOfn39fn3e9h3sdfa">
        // Some inline code
      </script>
      

      Nonces must be unguessable and dynamically generated each time the page is loaded so that an attacker is unable to use them for the execution of a malicious script. If you decide to implement this option, you can use the crypto package to generate a nonce as following:

      const crypto = require('crypto');
      let nonce = crypto.randomBytes(16).toString('base64');
      

      We will opt for the hash method in this tutorial since it’s more practical for our use case.
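
      If you decide to use a nonce instead, one way to wire it up is to generate the value per request in the existing Express middleware and pass it to the view layer. The following is a minimal sketch under that assumption; the res.locals.cspNonce name and the use of a template engine (rather than this demo’s static index.html) are illustrative choices, not part of the project:

      server.js (sketch)

      . . .
      const crypto = require('crypto');

      app.use(function (req, res, next) {
        // Generate a fresh, unguessable nonce for every request.
        const nonce = crypto.randomBytes(16).toString('base64');
        // Expose it to the view layer so the <script nonce="..."> attribute can match.
        res.locals.cspNonce = nonce;
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          `script-src 'self' 'nonce-${nonce}'`
        );
        next();
      });
      . . .


      The template that renders the page would then print res.locals.cspNonce into the nonce attribute of the inline <script> tag, so the value in the header and the attribute always match. For the rest of this tutorial, though, we’ll stick with the hash method.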

      Update the script-src directive in your CSP header to include the SHA256 hash of the sole inline script as follows:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com;"
        );
        next();
      });
      . . .
      

      This removes the final CSP violation error that the inline script triggers from the console.

      In the next section, we will monitor the effects of our CSP in a production environment.

      Step 5 — Monitoring Violations

      Once you have your CSP in place, you need to keep an eye on its effects while it’s in use. For example, you might forget to allow a legitimate source in production, or an attacker might try to exploit an XSS attack vector, which you need to identify and stop immediately.

      Without some form of active reporting in place, there’s no way to know of these events. This is why the report-to directive exists. It specifies a location that the browser should POST a JSON-formatted violation report to, in the event it has to take action based on the CSP.

      To use this directive, you need to add an additional header to specify an endpoint for the Reporting API:

      Report-To: {"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"http://your_server_ip:5500/__cspreport__"}],"include_subdomains":true}
      

      Once that is set, specify the group name in the report-to directive as follows:

      report-to csp-endpoint;
      

      Here’s the updated portion of the server.js file with the changes. Be sure to replace the <your_server_ip> placeholder in the Report-To header with your actual server IP address:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Report-To',
          '{"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"https://<your_server_ip>:5500/__cspreport__"}],"include_subdomains":true}'
        );
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-to csp-endpoint;"
        );
        next();
      });
      . . .
      

      The report-to directive is intended to replace the now-deprecated report-uri directive, but most browsers don’t support it yet (as of November 2020). So, for compatibility with current browsers while staying ready for future browser releases, you should specify both report-uri and report-to in your CSP. If a browser supports the latter, it will ignore the former:

      server.js

      . . .
      app.use(function (req, res, next) {
        res.setHeader(
          'Report-To',
          '{"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"https://your_server_ip:5500/__cspreport__"}],"include_subdomains":true}'
        );
        res.setHeader(
          'Content-Security-Policy-Report-Only',
          "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-to csp-endpoint; report-uri /__cspreport__;"
        );
        next();
      });
      . . .
      

      The /__cspreport__ route needs to exist on the server as well; add it to your file as follows:

      server.js

      . . .
      app.get('/', (req, res) => {
        res.sendFile(path.join(__dirname + '/index.html'));
      });
      
      app.post('/__cspreport__', (req, res) => {
        console.log(req.body);
      });
      . . .
      

      Some browsers send the Content-Type of the report payload as application/csp-report, while others use application/json. If the report-to directive is supported, the Content-Type should be application/reports+json.

      To account for all the possible Content-Type values, you have to set up some configuration on your Express server:

      server.js

      . . .
      app.use(
        bodyParser.json({
          type: ['application/json', 'application/csp-report', 'application/reports+json'],
        })
      );
      . . .
      

      At this point, any CSP violations will be sent to the /__cspreport__ route and subsequently logged to the terminal.

      You can try it out by adding a resource from a source that is not compliant with the current CSP, or by modifying the inline script in the index.html file as follows:

      index.html

      . . .
      <script>
        new Vue({
          el: '#vue',
          render(createElement) {
            return createElement('h1', 'Hello World!');
          },
        });
        console.log("Hello")
      </script>
      . . .
      

      This will trigger a violation because the hash of the script is now different from what you included in the CSP header.

      Here’s a typical violation report from a browser using the report-uri:

      {
        'csp-report': {
          'document-uri': 'http://localhost:5500/',
          referrer: '',
          'violated-directive': 'script-src-elem',
          'effective-directive': 'script-src-elem',
          'original-policy': "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-uri /__cspreport__;",
          disposition: 'report',
          'blocked-uri': 'inline',
          'line-number': 58,
          'source-file': 'http://localhost:5500/',
          'status-code': 200,
          'script-sample': ''
        }
      }
      

      The parts of this report are:

      • document-uri: The page the violation occurred on.
      • referrer: The page’s referrer.
      • blocked-uri: The resource that violated the page’s policy (an inline script in this case).
      • line-number: The line number where the inline code begins.
      • violated-directive: The specific directive that was violated. (script-src-elem in this case, which falls back to script-src.)
      • original-policy: The complete policy of the page.

      If the browser supports the report-to directive, the payload will have a structure similar to the following. Notice how it differs from the report-uri payload while still carrying the same information:

      [{
          "age": 16796,
          "body": {
              "blocked-uri": "https://vimeo.com",
              "disposition": "enforce",
              "document-uri": "https://localhost:5500/",
              "effective-directive": "frame-src",
              "line-number": 58,
              "original-policy": "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-uri /__cspreport__;",
              "referrer": "",
              "script-sample": "",
              "sourceFile": "https://localhost:5500/",
              "violated-directive": "frame-src"
          },
          "type": "csp",
          "url": "https://localhost:5500/",
          "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"
      }]
      

      Note: The report-to directive is supported only in secure contexts, which means that you need to set up your Express server with a valid HTTPS certificate, otherwise you won’t be able to test or use it.

      In this section, we successfully set up CSP monitoring on our server so that we can detect and fix problems quickly. Let’s go ahead and finish this tutorial by enforcing the final policy in the next step.

      Step 6 — Publishing the Final Policy

      Once you’re confident your CSP is set up correctly (ideally after leaving it in production for a few days or weeks in report-only mode), you can enforce it by changing the CSP header from Content-Security-Policy-Report-Only to Content-Security-Policy.

      In addition to reporting the violations, this will stop unauthorized resources from being executed on the page, leading to a safer experience for your visitors.

      Here is the final version of the server.js file:

      server.js

      const express = require('express');
      const bodyParser = require('body-parser');
      const path = require('path');
      const app = express();
      
      app.use(function (req, res, next) {
        res.setHeader(
          'Report-To',
          '{"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"http://your_server_ip:5500/__cspreport__"}],"include_subdomains":true}'
        );
        res.setHeader(
          'Content-Security-Policy',
          "default-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self' https://images.unsplash.com; script-src 'self' https://cdn.jsdelivr.net/npm/vue@2.6.12/dist/ 'sha256-INJfZVfoUd61ITRFLf63g+S/NJAfswGDl15oK0iXgYM='; style-src 'self' https://fonts.googleapis.com https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css; frame-src 'self' https://www.youtube.com https://youtube.com; report-to csp-endpoint; report-uri /__cspreport__;"
        );
        next();
      });
      
      app.use(
        bodyParser.json({
          type: [
            'application/json',
            'application/csp-report',
            'application/reports+json',
          ],
        })
      );
      app.use(express.static(path.join(__dirname)));
      
      app.get('/', (req, res) => {
        res.sendFile(path.join(__dirname + '/index.html'));
      });
      
      app.post('/__cspreport__', (req, res) => {
        console.log(req.body);
      });
      
      const server = app.listen(process.env.PORT || 5500, () => {
        const { port } = server.address();
        console.log(`Server running on PORT ${port}`);
      });
      

      Browser support
      The CSP header is supported in all browsers with the exception of Internet Explorer, which uses the non-standard X-Content-Security-Policy header instead. If you need to support IE, you have to issue the CSP twice in the response headers.

      The latest version of the CSP spec (level 3) also introduced some newer directives that are not well supported at the moment. Examples include the script-src-elem and prefetch-src directives. Make sure to use the appropriate fallbacks when setting up those directives to ensure the protections remain active in browsers that have not caught up yet.

      Conclusion

      In this article, you’ve set up an effective Content Security Policy for a Node.js application and monitored the violations. If you have an older or more complex website, it will require a wider policy setup that covers all the bases. However, setting up a thorough policy is worth the effort as it makes it a lot harder for an attacker to exploit your website to steal user data.

      You can find the final code from this tutorial in this GitHub repository.

      For more security-related articles, check out our Security topic page. If you would like to learn more about working with Node.js, you can read our How To Code in Node.js series.




      How To Deploy a Scalable and Secure Django Application with Kubernetes


      Introduction

      In this tutorial you’ll deploy a containerized Django polls application into a Kubernetes cluster.

      Django is a powerful web framework that can help you get your Python application off the ground quickly. It includes several convenient features like an object-relational mapper, user authentication, and a customizable administrative interface for your application. It also includes a caching framework and encourages clean app design through its URL Dispatcher and Template system.

      In How to Build a Django and Gunicorn Application with Docker, the Django Tutorial Polls application was modified according to the Twelve-Factor methodology for building scalable, cloud-native web apps. This containerized setup was scaled and secured with an Nginx reverse-proxy and Let’s Encrypt-signed TLS certificates in How To Scale and Secure a Django Application with Docker, Nginx, and Let’s Encrypt. In this final tutorial in the From Containers to Kubernetes with Django series, the modernized Django polls application will be deployed into a Kubernetes cluster.

      Kubernetes is a powerful open-source container orchestrator that automates the deployment, scaling and management of containerized applications. Kubernetes objects like ConfigMaps and Secrets allow you to centralize and decouple configuration from your containers, while controllers like Deployments automatically restart failed containers and enable quick scaling of container replicas. TLS encryption is enabled with an Ingress object and the ingress-nginx open-source Ingress Controller. The cert-manager Kubernetes add-on renews and issues certificates using the free Let’s Encrypt certificate authority.

      Prerequisites

      To follow this tutorial, you will need:

      • A Kubernetes 1.15+ cluster with role-based access control (RBAC) enabled. This setup will use a DigitalOcean Kubernetes cluster, but you are free to create a cluster using another method.
      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation. If you are using a DigitalOcean Kubernetes cluster, please refer to How to Connect to a DigitalOcean Kubernetes Cluster to learn how to connect to your cluster using kubectl.
      • A registered domain name. This tutorial will use your_domain.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • An ingress-nginx Ingress Controller and the cert-manager TLS certificate manager installed into your cluster and configured to issue TLS certificates. To learn how to install and configure an Ingress with cert-manager, please consult How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
      • An A DNS record with your_domain.com pointing to the Ingress Load Balancer’s public IP address. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
      • An S3 object storage bucket such as a DigitalOcean Space to store your Django project’s static files and a set of Access Keys for this Space. To learn how to create a Space, consult the How to Create Spaces product documentation. To learn how to create Access Keys for Spaces, consult Sharing Access to Spaces with Access Keys. With minor changes, you can use any object storage service that the django-storages plugin supports.
      • A PostgreSQL server instance, database, and user for your Django app. With minor changes, you can use any database that Django supports.
      • A Docker Hub account and public repository. For more information on creating these, please see Repositories from the Docker documentation.
      • The Docker engine installed on your local machine. Please see How to Install and Use Docker on Ubuntu 18.04 to learn more.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Cloning and Configuring the Application

      In this step we’ll clone the application code from GitHub and configure settings like database credentials and object storage keys.

      The application code and Dockerfile can be found in the polls-docker branch of the Django Tutorial Polls App GitHub repository. This repo contains code for the Django documentation’s sample Polls application, which teaches you how to build a polling application from scratch.

      The polls-docker branch contains a Dockerized version of this Polls app. To learn how the Polls app was modified to work effectively in a containerized environment, please see How to Build a Django and Gunicorn Application with Docker.

      Begin by using git to clone the polls-docker branch of the Django Tutorial Polls App GitHub repository to your local machine:

      • git clone --single-branch --branch polls-docker https://github.com/do-community/django-polls.git

      Navigate into the django-polls directory:
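
      • cd django-polls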

      This directory contains the Django application Python code, a Dockerfile that Docker will use to build the container image, as well as an env file that contains a list of environment variables to be passed into the container’s running environment. Inspect the Dockerfile:

      Output

      FROM python:3.7.4-alpine3.10

      ADD django-polls/requirements.txt /app/requirements.txt

      RUN set -ex \
        && apk add --no-cache --virtual .build-deps postgresql-dev build-base \
        && python -m venv /env \
        && /env/bin/pip install --upgrade pip \
        && /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
        && runDeps="$(scanelf --needed --nobanner --recursive /env \
          | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
          | sort -u \
          | xargs -r apk info --installed \
          | sort -u)" \
        && apk add --virtual rundeps $runDeps \
        && apk del .build-deps

      ADD django-polls /app
      WORKDIR /app

      ENV VIRTUAL_ENV /env
      ENV PATH /env/bin:$PATH

      EXPOSE 8000

      CMD ["gunicorn", "--bind", ":8000", "--workers", "3", "mysite.wsgi"]

      This Dockerfile uses the official Python 3.7.4 Docker image as a base, and installs Django and Gunicorn’s Python package requirements, as defined in the django-polls/requirements.txt file. It then removes some unnecessary build files, copies the application code into the image, and sets the execution PATH. Finally, it declares that port 8000 will be used to accept incoming container connections, and runs gunicorn with 3 workers, listening on port 8000.

      To learn more about each of the steps in this Dockerfile, please see Step 6 of How to Build a Django and Gunicorn Application with Docker.

      Now, build the image using docker build:
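
      • docker build -t polls .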

      We name the image polls using the -t flag and pass in the current directory as a build context, the set of files to reference when constructing the image.

      After Docker builds and tags the image, list available images using docker images:
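
      • docker images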

      You should see the polls image listed:

      Output

      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      polls               latest              80ec4f33aae1        2 weeks ago         197MB
      python              3.7.4-alpine3.10    f309434dea3a        8 months ago        98.7MB
      

      Before we run the Django container, we need to configure its running environment using the env file present in the current directory. This file will be passed into the docker run command used to run the container, and Docker will inject the configured environment variables into the container’s running environment.

      Open the env file with nano or your favorite editor:
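
      • nano env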

      django-polls/env

      DJANGO_SECRET_KEY=
      DEBUG=True
      DJANGO_ALLOWED_HOSTS=
      DATABASE_ENGINE=postgresql_psycopg2
      DATABASE_NAME=polls
      DATABASE_USERNAME=
      DATABASE_PASSWORD=
      DATABASE_HOST=
      DATABASE_PORT=
      STATIC_ACCESS_KEY_ID=
      STATIC_SECRET_KEY=
      STATIC_BUCKET_NAME=
      STATIC_ENDPOINT_URL=
      DJANGO_LOGLEVEL=info
      

      Fill in missing values for the following keys:

      • DJANGO_SECRET_KEY: Set this to a unique, unpredictable value, as detailed in the Django docs. One method of generating this key is provided in Adjusting the App Settings of the Scalable Django App tutorial.
      • DJANGO_ALLOWED_HOSTS: This variable secures the app and prevents HTTP Host header attacks. For testing purposes, set this to *, a wildcard that will match all hosts. In production you should set this to your_domain.com. To learn more about this Django setting, consult Core Settings from the Django docs.
      • DATABASE_USERNAME: Set this to the PostgreSQL database user created in the prerequisite steps.
      • DATABASE_NAME: Set this to polls or the name of the PostgreSQL database created in the prerequisite steps.
      • DATABASE_PASSWORD: Set this to the PostgreSQL user password created in the prerequisite steps.
      • DATABASE_HOST: Set this to your database’s hostname.
      • DATABASE_PORT: Set this to your database’s port.
      • STATIC_ACCESS_KEY_ID: Set this to your Space or object storage’s access key.
      • STATIC_SECRET_KEY: Set this to your Space or object storage’s access key Secret.
      • STATIC_BUCKET_NAME: Set this to your Space name or object storage bucket.
      • STATIC_ENDPOINT_URL: Set this to the appropriate Spaces or object storage endpoint URL, for example https://your_space_name.nyc3.digitaloceanspaces.com if your Space is located in the nyc3 region.

      Once you’ve finished editing, save and close the file.

      In the next step we’ll run the configured container locally and create the database schema. We’ll also upload static assets like stylesheets and images to object storage.

      Step 2 — Creating the Database Schema and Uploading Assets to Object Storage

      With the container built and configured, use docker run to override the CMD set in the Dockerfile and create the database schema using the manage.py makemigrations and manage.py migrate commands:

      • docker run --env-file env polls sh -c "python manage.py makemigrations && python manage.py migrate"

      We run the polls:latest container image, pass in the environment variable file we just modified, and override the Dockerfile command with sh -c "python manage.py makemigrations && python manage.py migrate", which will create the database schema defined by the app code.

      If you’re running this for the first time you should see:

      Output

      No changes detected
      Operations to perform:
        Apply all migrations: admin, auth, contenttypes, polls, sessions
      Running migrations:
        Applying contenttypes.0001_initial... OK
        Applying auth.0001_initial... OK
        Applying admin.0001_initial... OK
        Applying admin.0002_logentry_remove_auto_add... OK
        Applying admin.0003_logentry_add_action_flag_choices... OK
        Applying contenttypes.0002_remove_content_type_name... OK
        Applying auth.0002_alter_permission_name_max_length... OK
        Applying auth.0003_alter_user_email_max_length... OK
        Applying auth.0004_alter_user_username_opts... OK
        Applying auth.0005_alter_user_last_login_null... OK
        Applying auth.0006_require_contenttypes_0002... OK
        Applying auth.0007_alter_validators_add_error_messages... OK
        Applying auth.0008_alter_user_username_max_length... OK
        Applying auth.0009_alter_user_last_name_max_length... OK
        Applying auth.0010_alter_group_name_max_length... OK
        Applying auth.0011_update_proxy_permissions... OK
        Applying polls.0001_initial... OK
        Applying sessions.0001_initial... OK

      This indicates that the database schema has successfully been created.

      If you’re running migrate a subsequent time, Django will perform a no-op unless the database schema has changed.

      Next, we’ll run another instance of the app container and use an interactive shell inside of it to create an administrative user for the Django project.

      • docker run -i -t --env-file env polls sh

      This will provide you with a shell prompt inside of the running container which you can use to create the Django user:

      • python manage.py createsuperuser

      Enter a username, email address, and password for your user, and after creating the user, hit CTRL+D to quit the container and kill it.

      Finally, we’ll generate the static files for the app and upload them to the DigitalOcean Space using collectstatic. Note that this may take a bit of time to complete.

      • docker run --env-file env polls sh -c "python manage.py collectstatic --noinput"

      After these files are generated and uploaded, you’ll receive the following output:

      Output

      121 static files copied.

      We can now run the app:

      • docker run --env-file env -p 80:8000 polls

      Output

      [2019-10-17 21:23:36 +0000] [1] [INFO] Starting gunicorn 19.9.0
      [2019-10-17 21:23:36 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
      [2019-10-17 21:23:36 +0000] [1] [INFO] Using worker: sync
      [2019-10-17 21:23:36 +0000] [7] [INFO] Booting worker with pid: 7
      [2019-10-17 21:23:36 +0000] [8] [INFO] Booting worker with pid: 8
      [2019-10-17 21:23:36 +0000] [9] [INFO] Booting worker with pid: 9

      Here, we run the default command defined in the Dockerfile, gunicorn --bind :8000 --workers 3 mysite.wsgi:application, and expose container port 8000 so that port 80 on your local machine gets mapped to port 8000 of the polls container.

      You should now be able to navigate to the polls app using your web browser by typing http://localhost in the URL bar. Since there is no route defined for the / path, you’ll likely receive a 404 Page Not Found error, which is expected.

      Navigate to http://localhost/polls to see the Polls app interface:

      Polls Apps Interface

      To view the administrative interface, visit http://localhost/admin. You should see the Polls app admin authentication window:

      Polls Admin Auth Page

      Enter the administrative username and password you created with the createsuperuser command.

      After authenticating, you can access the Polls app’s administrative interface:

      Polls Admin Main Interface

      Note that static assets for the admin and polls apps are being delivered directly from object storage. To confirm this, consult Testing Spaces Static File Delivery.

      When you are finished exploring, hit CTRL+C in the terminal window running the Docker container to kill the container.
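
      If the container keeps running in the background for any reason, you can list it and remove it manually. Here container_id is a placeholder for the ID shown by docker ps:

      • docker ps
      • docker rm -f container_id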

      With the Django app Docker image tested, static assets uploaded to object storage, and database schema configured and ready for use with your app, you’re ready to upload your Django app image to an image registry like Docker Hub.

      Step 3 — Pushing the Django App Image to Docker Hub

      To roll your app out on Kubernetes, your app image must be uploaded to a registry like Docker Hub. Kubernetes will pull the app image from its repository and then deploy it to your cluster.

      You can use a private Docker registry, like DigitalOcean Container Registry, currently free in Early Access, or a public Docker registry like Docker Hub. Docker Hub also allows you to create private Docker repositories. A public repository allows anyone to see and pull the container images, while a private repository allows you to restrict access to you and your team members.

      In this tutorial we’ll push the Django image to the public Docker Hub repository created in the prerequisites. You can also push your image to a private repository, but pulling images from a private repository is beyond the scope of this article. To learn more about authenticating Kubernetes with Docker Hub and pulling private images, please see Pull an Image from a Private Registry from the Kubernetes docs.

      Begin by logging in to Docker Hub on your local machine:

      • docker login

      Output

      Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
      Username:

      Enter your Docker Hub username and password to login.

      The Django image currently has the polls:latest tag. To push it to your Docker Hub repo, re-tag the image with your Docker Hub username and repo name:

      • docker tag polls:latest your_dockerhub_username/your_dockerhub_repo_name:latest

      Push the image to the repo:

      • docker push sammy/sammy-django:latest

      In this tutorial the Docker Hub username is sammy and the repo name is sammy-django. You should replace these values with your own Docker Hub username and repo name.

      You’ll see some output that updates as image layers are pushed to Docker Hub.
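
      If you’d like to double-check the tag locally before or after pushing, you can list the image. The following uses the example names from this tutorial, so substitute your own:

      • docker image ls sammy/sammy-django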

      Now that your image is available to Kubernetes on Docker Hub, you can begin rolling it out in your cluster.

      Step 4 — Setting Up the ConfigMap

      When we ran the Django container locally, we passed the env file into docker run to inject configuration variables into the runtime environment. On Kubernetes, configuration variables can be injected using ConfigMaps and Secrets.

      ConfigMaps should be used to store non-confidential configuration information like app settings, and Secrets should be used for sensitive information like API keys and database credentials. They are both injected into containers in a similar fashion, but Secrets have additional access control and security features like encryption at rest. Secrets also store data in base64, while ConfigMaps store data in plain text.

      To begin, create a directory called yaml in which we’ll store our Kubernetes manifests, and navigate into it:

      • mkdir yaml
      • cd yaml

      Open a file called polls-configmap.yaml in nano or your preferred text editor:

      • nano polls-configmap.yaml

      Paste in the following ConfigMap manifest:

      polls-configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: polls-config
      data:
        DJANGO_ALLOWED_HOSTS: "*"
        STATIC_ENDPOINT_URL: "https://your_space_name.space_region.digitaloceanspaces.com"
        STATIC_BUCKET_NAME: "your_space_name"
        DJANGO_LOGLEVEL: "info"
        DEBUG: "True"
        DATABASE_ENGINE: "postgresql_psycopg2"
      

      We’ve extracted the non-sensitive configuration from the env file modified in Step 1 and pasted it into a ConfigMap manifest called polls-config. Be sure to copy in the same values that you entered into the env file in Step 1.

      For testing purposes leave DJANGO_ALLOWED_HOSTS as * to disable Host header-based filtering. In a production environment you should set this to your app’s domain.
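
      For example, in production the DJANGO_ALLOWED_HOSTS entry in polls-configmap.yaml might instead look like this, using the placeholder domain from this tutorial:

        DJANGO_ALLOWED_HOSTS: "your_domain.com"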

      When you’re done editing the file, save and close it.

      Create the ConfigMap in your cluster using kubectl apply:

      • kubectl apply -f polls-configmap.yaml

      Output

      configmap/polls-config created
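
      If you’d like to verify the values stored in the cluster, you can print the ConfigMap back out:

      • kubectl get configmap polls-config -o yaml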

      With the ConfigMap created, we’ll create the Secret used by our app in the next step.

      Step 5 — Setting Up the Secret

      Secret values must be base64-encoded, which means creating Secret objects in your cluster is slightly more involved than creating ConfigMaps. You can repeat the process from the previous step, manually base64-encoding Secret values and pasting them into a manifest file. You can also create them using an environment variable file, kubectl create, and the --from-env-file flag, which we’ll do in this step.
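
      To illustrate the manual approach, each value would first be base64-encoded and then pasted under the data field of a Secret manifest. For example, encoding a made-up password would look like this:

      • echo -n 'your_db_password' | base64

      The --from-env-file method lets kubectl handle this encoding for you, which is why we use it here.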

      We’ll once again use the env file from Step 1, removing variables inserted into the ConfigMap. Make a copy of the env file called polls-secrets in the yaml directory:

      • cp ../env ./polls-secrets

      Edit the file in your preferred editor:

      • nano polls-secrets

      polls-secrets

      DJANGO_SECRET_KEY=
      DEBUG=True
      DJANGO_ALLOWED_HOSTS=
      DATABASE_ENGINE=postgresql_psycopg2
      DATABASE_NAME=polls
      DATABASE_USERNAME=
      DATABASE_PASSWORD=
      DATABASE_HOST=
      DATABASE_PORT=
      STATIC_ACCESS_KEY_ID=
      STATIC_SECRET_KEY=
      STATIC_BUCKET_NAME=
      STATIC_ENDPOINT_URL=
      DJANGO_LOGLEVEL=info
      

      Delete all the variables inserted into the ConfigMap manifest. When you’re done, it should look like this:

      polls-secrets

      DJANGO_SECRET_KEY=your_secret_key
      DATABASE_NAME=polls
      DATABASE_USERNAME=your_django_db_user
      DATABASE_PASSWORD=your_django_db_user_password
      DATABASE_HOST=your_db_host
      DATABASE_PORT=your_db_port
      STATIC_ACCESS_KEY_ID=your_space_access_key
      STATIC_SECRET_KEY=your_space_access_key_secret
      

      Be sure to use the same values used in Step 1. When you’re done, save and close the file.

      Create the Secret in your cluster using kubectl create secret:

      • kubectl create secret generic polls-secret --from-env-file=polls-secrets

      Output

      secret/polls-secret created

      Here we create a Secret object called polls-secret and pass in the secrets file we just created.

      You can inspect the Secret using kubectl describe:

      • kubectl describe secret polls-secret

      Output

      Name:         polls-secret
      Namespace:    default
      Labels:       <none>
      Annotations:  <none>

      Type:  Opaque

      Data
      ====
      DATABASE_PASSWORD:     8 bytes
      DATABASE_PORT:         5 bytes
      DATABASE_USERNAME:     5 bytes
      DJANGO_SECRET_KEY:     14 bytes
      STATIC_ACCESS_KEY_ID:  20 bytes
      STATIC_SECRET_KEY:     43 bytes
      DATABASE_HOST:         47 bytes
      DATABASE_NAME:         5 bytes
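
      The describe output only lists value sizes. If you’d like to confirm an individual value, you can decode it with kubectl and base64, shown here for the DATABASE_NAME key:

      • kubectl get secret polls-secret -o jsonpath='{.data.DATABASE_NAME}' | base64 --decode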

      At this point you’ve stored your app’s configuration in your Kubernetes cluster using the Secret and ConfigMap object types. We’re now ready to deploy the app into the cluster.

      Step 6 — Rolling Out the Django App Using a Deployment

      In this step you’ll create a Deployment for your Django app. A Kubernetes Deployment is a controller that can be used to manage stateless applications in your cluster. A controller is a control loop that regulates workloads by scaling them up or down. Controllers also restart and clear out failed containers.

      Deployments control one or more Pods, the smallest deployable unit in a Kubernetes cluster. Pods enclose one or more containers. To learn more about the different types of workloads you can launch, please review An Introduction to Kubernetes.

      Begin by opening a file called polls-deployment.yaml in your favorite editor:

      • nano polls-deployment.yaml

      Paste in the following Deployment manifest:

      polls-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: polls-app
        labels:
          app: polls
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: polls
        template:
          metadata:
            labels:
              app: polls
          spec:
            containers:
              - image: your_dockerhub_username/app_repo_name:latest
                name: polls
                envFrom:
                - secretRef:
                    name: polls-secret
                - configMapRef:
                    name: polls-config
                ports:
                  - containerPort: 8000
                    name: gunicorn
      

      Fill in the appropriate container image name, referencing the Django Polls image you pushed to Docker Hub in Step 3.

      Here we define a Kubernetes Deployment called polls-app and label it with the key-value pair app: polls. We specify that we’d like to run two replicas of the Pod defined below the template field.

      Using envFrom with secretRef and configMapRef, we specify that all the data from the polls-secret Secret and polls-config ConfigMap should be injected into the containers as environment variables. The ConfigMap and Secret keys become the environment variable names.

      Finally, we expose containerPort 8000 and name it gunicorn.

      To learn more about configuring Kubernetes Deployments, please consult Deployments from the Kubernetes documentation.

      When you’re done editing the file, save and close it.

      Create the Deployment in your cluster using kubectl apply -f:

      • kubectl apply -f polls-deployment.yaml

      Output

      deployment.apps/polls-app created

      Check that the Deployment rolled out correctly using kubectl get:

      • kubectl get deploy polls-app

      Output

      NAME        READY   UP-TO-DATE   AVAILABLE   AGE
      polls-app   2/2     2            2           6m38s

      If you encounter an error or something isn’t quite working, you can use kubectl describe to inspect the failed Deployment:

      • kubectl describe deploy polls-app

      You can inspect the two Pods using kubectl get pod:

      • kubectl get pod

      Output

      NAME                         READY   STATUS    RESTARTS   AGE
      polls-app-847f8ccbf4-2stf7   1/1     Running   0          6m42s
      polls-app-847f8ccbf4-tqpwm   1/1     Running   0          6m57s
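
      If you’d like to confirm that the ConfigMap and Secret values were injected into the containers, you can print a Pod’s environment, substituting one of your Pod names from the output above:

      • kubectl exec polls-app-847f8ccbf4-2stf7 -- env | grep DJANGO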

      Two replicas of your Django app are now up and running in the cluster. To access the app, you need to create a Kubernetes Service, which we’ll do next.

      Step 7 — Allowing External Access using a Service

      In this step, you’ll create a Service for your Django app. A Kubernetes Service is an abstraction that allows you to expose a set of running Pods as a network service. Using a Service you can create a stable endpoint for your app that does not change as Pods die and are recreated.

      There are multiple Service types, including:

      • ClusterIP Services, which expose the Service on a cluster-internal IP.
      • NodePort Services, which expose the Service on each Node at a static port, called the NodePort.
      • LoadBalancer Services, which provision a cloud load balancer to direct external traffic to the Pods in your cluster (via NodePorts, which it creates automatically).

      To learn more about these, please see Service from the Kubernetes docs.

      In our final setup we’ll use a ClusterIP Service that is exposed using an Ingress and the Ingress Controller set up in the prerequisites for this guide. For now, to test that everything is functioning correctly, we’ll create a temporary NodePort Service to access the Django app.

      Begin by creating a file called polls-svc.yaml using your favorite editor:

      • nano polls-svc.yaml

      Paste in the following Service manifest:

      polls-svc.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: polls
        labels:
          app: polls
      spec:
        type: NodePort
        selector:
          app: polls
        ports:
          - port: 8000
            targetPort: 8000
      

      Here we create a NodePort Service called polls and give it the app: polls label. We then select backend Pods labeled app: polls and route traffic to port 8000 on those Pods.
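
      Because we don’t set a nodePort explicitly, Kubernetes assigns a random port from its NodePort range (30000-32767 by default). If you’d prefer a fixed port, you could optionally pin one yourself. The following sketch shows the ports section only, using 30800 as an arbitrary in-range value:

        ports:
          - port: 8000
            targetPort: 8000
            nodePort: 30800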

      When you’re done editing the file, save and close it.

      Roll out the Service using kubectl apply:

      • kubectl apply -f polls-svc.yaml

      Output

      service/polls created

      Confirm that your Service was created using kubectl get svc:

      • kubectl get svc polls

      Output

      NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
      polls   NodePort   10.245.197.189   <none>        8000:32654/TCP   59s

      This output shows the Service’s cluster-internal IP and NodePort (32654). To connect to the Service, we need the external IP addresses of our cluster nodes:

      • kubectl get node -o wide

      Output

      NAME                   STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION          CONTAINER-RUNTIME
      pool-7no0qd9e0-364fd   Ready    <none>   27h   v1.18.8   10.118.0.5    203.0.113.1   Debian GNU/Linux 10 (buster)   4.19.0-10-cloud-amd64   docker://18.9.9
      pool-7no0qd9e0-364fi   Ready    <none>   27h   v1.18.8   10.118.0.4    203.0.113.2   Debian GNU/Linux 10 (buster)   4.19.0-10-cloud-amd64   docker://18.9.9
      pool-7no0qd9e0-364fv   Ready    <none>   27h   v1.18.8   10.118.0.3    203.0.113.3   Debian GNU/Linux 10 (buster)   4.19.0-10-cloud-amd64   docker://18.9.9

      In your web browser, visit your Polls app using any Node’s external IP address and the NodePort. Given the output above, the app’s URL would be: http://203.0.113.1:32654/polls.

      You should see the same Polls app interface that you accessed locally earlier in this tutorial:

      Polls Apps Interface

      You can repeat the same test using the /admin route: http://203.0.113.1:32654/admin. You should see the same Admin interface as before:

      Polls Admin Auth Page

      At this stage, you’ve rolled out two replicas of the Django Polls app container using a Deployment. You’ve also created a stable network endpoint for these two replicas, and made it externally accessible using a NodePort Service.

      The final step in this tutorial is to secure external traffic to your app using HTTPS. To do this we’ll use the ingress-nginx Ingress Controller installed in the prerequisites, and create an Ingress object to route external traffic to the polls Kubernetes Service.

      Step 8 — Configuring HTTPS Using Nginx Ingress and cert-manager

      Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress objects, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services.

      In the prerequisites you installed the ingress-nginx Ingress Controller and cert-manager TLS certificate automation add-on. You also set up staging and production ClusterIssuers for your domain using the Let’s Encrypt certificate authority, and created an Ingress to test certificate issuance and TLS encryption to two dummy backend Services. Before continuing with this step, you should delete the echo-ingress Ingress created in the prerequisite tutorial:

      • kubectl delete ingress echo-ingress

      If you’d like you can also delete the dummy Services and Deployments using kubectl delete svc and kubectl delete deploy, but this is not essential to complete this tutorial.

      You should also have created a DNS A record with your_domain.com pointing to the Ingress Load Balancer’s public IP address. If you’re using a DigitalOcean Load Balancer, you can find this IP address in the Load Balancers section of the Control Panel. If you are also using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.

      If you’re using DigitalOcean Kubernetes, also ensure that you’ve implemented the workaround described in Step 5 of How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

      Once you have an A record pointing to the Ingress Controller Load Balancer, you can create an Ingress for your_domain.com and the polls Service.

      Open a file called polls-ingress.yaml using your favorite editor:

      • nano polls-ingress.yaml

      Paste in the following Ingress manifest:

      polls-ingress.yaml
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: polls-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
          cert-manager.io/cluster-issuer: "letsencrypt-staging"
      spec:
        tls:
        - hosts:
          - your_domain.com
          secretName: polls-tls
        rules:
        - host: your_domain.com
          http:
            paths:
            - backend:
                serviceName: polls
                servicePort: 8000
      

      We create an Ingress object called polls-ingress and annotate it to instruct the control plane to use the ingress-nginx Ingress Controller and staging ClusterIssuer. We also enable TLS for your_domain.com and store the certificate and private key in a secret called polls-tls. Finally, we define a rule to route traffic for the your_domain.com host to the polls Service on port 8000.

      When you’re done editing the file, save and close it.

      Create the Ingress in your cluster using kubectl apply:

      • kubectl apply -f polls-ingress.yaml

      Output

      ingress.networking.k8s.io/polls-ingress created

      You can use kubectl describe to track the state of the Ingress you just created:

      • kubectl describe ingress polls-ingress

      Output

      Name:             polls-ingress
      Namespace:        default
      Address:          workaround.your_domain.com
      Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
      TLS:
        polls-tls terminates your_domain.com
      Rules:
        Host               Path  Backends
        ----               ----  --------
        your_domain.com
                                 polls:8000 (10.244.0.207:8000,10.244.0.53:8000)
      Annotations:         cert-manager.io/cluster-issuer: letsencrypt-staging
                           kubernetes.io/ingress.class: nginx
      Events:
        Type    Reason             Age   From                      Message
        ----    ------             ----  ----                      -------
        Normal  CREATE             51s   nginx-ingress-controller  Ingress default/polls-ingress
        Normal  CreateCertificate  51s   cert-manager              Successfully created Certificate "polls-tls"
        Normal  UPDATE             25s   nginx-ingress-controller  Ingress default/polls-ingress

      You can also run a describe on the polls-tls Certificate to further confirm its successful creation:

      • kubectl describe certificate polls-tls

      Output

      . . .
      Events:
        Type    Reason     Age    From          Message
        ----    ------     ----   ----          -------
        Normal  Issuing    3m33s  cert-manager  Issuing certificate as Secret does not exist
        Normal  Generated  3m32s  cert-manager  Stored new private key in temporary Secret resource "polls-tls-v9lv9"
        Normal  Requested  3m32s  cert-manager  Created new CertificateRequest resource "polls-tls-drx9c"
        Normal  Issuing    2m58s  cert-manager  The certificate has been successfully issued

      This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for your_domain.com.

      Given that we used the staging ClusterIssuer, most web browsers won’t trust the fake Let’s Encrypt certificate that it issued, so navigating to your_domain.com will bring you to an error page.

      To send a test request, we’ll use wget from the command-line:

      • wget -O - http://your_domain.com/polls

      Output

      . . .
      ERROR: cannot verify your_domain.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
        Unable to locally verify the issuer's authority.
      To connect to your_domain.com insecurely, use `--no-check-certificate'.

      We’ll use the suggested --no-check-certificate flag to bypass certificate validation:

      • wget --no-check-certificate -q -O - http://your_domain.com/polls

      Output

      <link rel="stylesheet" type="text/css" href="https://your_space.nyc3.digitaloceanspaces.com/django-polls/static/polls/style.css">

      <p>No polls are available.</p>

      This output shows the HTML for the /polls interface page, also confirming that the stylesheet is being served from object storage.

      Now that you’ve successfully tested certificate issuance using the staging ClusterIssuer, you can modify the Ingress to use the production ClusterIssuer.

      Open polls-ingress.yaml for editing once again:

      • nano polls-ingress.yaml

      Modify the cluster-issuer annotation:

      polls-ingress.yaml
      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: polls-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
          cert-manager.io/cluster-issuer: "letsencrypt-prod"
      spec:
        tls:
        - hosts:
          - your_domain.com
          secretName: polls-tls
        rules:
        - host: your_domain.com
          http:
            paths:
            - backend:
                serviceName: polls
                servicePort: 8000
      

      When you’re done, save and close the file. Update the Ingress using kubectl apply:

      • kubectl apply -f polls-ingress.yaml

      Output

      ingress.networking.k8s.io/polls-ingress configured

      You can use kubectl describe certificate polls-tls and kubectl describe ingress polls-ingress to track the certificate issuance status:

      • kubectl describe ingress polls-ingress

      Output

      . . .
      Events:
        Type    Reason             Age                From                      Message
        ----    ------             ----               ----                      -------
        Normal  CREATE             23m                nginx-ingress-controller  Ingress default/polls-ingress
        Normal  CreateCertificate  23m                cert-manager              Successfully created Certificate "polls-tls"
        Normal  UPDATE             76s (x2 over 22m)  nginx-ingress-controller  Ingress default/polls-ingress
        Normal  UpdateCertificate  76s                cert-manager              Successfully updated Certificate "polls-tls"

      The above output confirms that the new production certificate was successfully issued and stored in the polls-tls Secret.

      Navigate to your_domain.com/polls in your web browser to confirm that HTTPS encryption is enabled and everything is working as expected. You should see the Polls app interface:

      Polls Apps Interface

      Verify that HTTPS encryption is active in your web browser. If you’re using Google Chrome, arriving at the above page without any errors confirms that everything is working correctly. In addition, you should see a padlock in the URL bar. Clicking on the padlock will allow you to inspect the Let’s Encrypt certificate details.
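
      If you’d also like to inspect the certificate from the command line, one option is openssl’s s_client, assuming openssl is installed locally. The issuer and validity dates should correspond to a Let’s Encrypt production certificate:

      • echo | openssl s_client -connect your_domain.com:443 -servername your_domain.com 2>/dev/null | openssl x509 -noout -issuer -dates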

      As a final cleanup task, you can optionally switch the polls Service type from NodePort to the internal-only ClusterIP type.

      Modify polls-svc.yaml using your editor:

      • nano polls-svc.yaml

      Change the type from NodePort to ClusterIP:

      polls-svc.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: polls
        labels:
          app: polls
      spec:
        type: ClusterIP
        selector:
          app: polls
        ports:
          - port: 8000
            targetPort: 8000
      

      When you’re done editing the file, save and close it.

      Roll out the changes using kubectl apply:

      • kubectl apply -f polls-svc.yaml --force

      Output

      service/polls configured

      Confirm that your Service was modified using kubectl get svc:

      Output

      NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
      polls   ClusterIP   10.245.203.186   <none>        8000/TCP   22s

      This output shows that the Service type is now ClusterIP. The only way to access it is via your domain and the Ingress created in this step.

      Conclusion

      In this tutorial you deployed a scalable, HTTPS-secured Django app into a Kubernetes cluster. Static content is served directly from object storage, and the number of running Pods can be quickly scaled up or down using the replicas field in the polls-app Deployment manifest.
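
      For example, you could scale up to five replicas either by editing the replicas field and re-running kubectl apply, or imperatively with kubectl scale:

      • kubectl scale deployment polls-app --replicas=5

      Note that a later kubectl apply of the manifest will reset the count to whatever replicas value is defined there.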

      If you’re using a DigitalOcean Space, you can also enable delivery of static assets via a content delivery network and create a custom subdomain for your Space. Please consult Enabling CDN from How to Set Up a Scalable Django App with DigitalOcean Managed Databases and Spaces to learn more.

      To review the rest of the series, please visit our From Containers to Kubernetes with Django series page.


