      How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DO Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      A Docker registry is a storage and content delivery system for named Docker images, which are the industry standard for containerized applications. A private Docker registry allows you to securely share your images within your team or organization with more flexibility and control when compared to public ones. By hosting your private Docker registry directly in your Kubernetes cluster, you can achieve higher speeds, lower latency, and better availability, all while having control over the registry.

      The underlying registry storage is delegated to external drivers. The default storage system is the local filesystem, but you can swap this for a cloud-based storage driver. DigitalOcean Spaces is an S3-compatible object storage designed for developer teams and businesses that want a scalable, simple, and affordable way to store and serve vast amounts of data, and is very suitable for storing Docker images. It has a built-in CDN network, which can greatly reduce latency when frequently accessing images.

      In this tutorial, you’ll deploy your private Docker registry to your DigitalOcean Kubernetes cluster using Helm, backed up by DigitalOcean Spaces for storing data. You’ll create API keys for your designated Space, install the Docker registry to your cluster with custom configuration, configure Kubernetes to properly authenticate with it, and test it by running a sample deployment on the cluster. At the end of this tutorial, you’ll have a secure, private Docker registry installed on your DigitalOcean Kubernetes cluster.

      Prerequisites

      Before you begin this tutorial, you’ll need:

      • Docker installed on the machine that you'll access your cluster from. For Ubuntu 18.04, visit How To Install and Use Docker on Ubuntu 18.04 (you only need to complete the first step); for other distributions, visit Docker's website.

      • A DigitalOcean Kubernetes cluster with your connection configuration set as the kubectl default. Instructions on how to configure kubectl appear under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

      • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

      • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete only steps 1 and 2 of How To Install Software on Kubernetes Clusters with the Helm Package Manager.

      • The Nginx Ingress Controller and Cert-Manager installed on the cluster. For a guide on how to do this, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

      • A domain name with two DNS A records pointed to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to create A records. In this tutorial, we’ll refer to the A records as registry.example.com and k8s-test.example.com.

      Step 1 — Configuring and Installing the Docker Registry

      In this step, you will create a configuration file for the registry deployment and install the Docker registry to your cluster with the given config using the Helm package manager.

      During the course of this tutorial, you will use a configuration file called chart_values.yaml to override some of the default settings for the Docker registry Helm chart. Helm calls its packages charts; these are sets of files that outline a related selection of Kubernetes resources. You'll edit the settings to specify DigitalOcean Spaces as the underlying storage system and to enable HTTPS access by wiring up Let's Encrypt TLS certificates.

      As part of the prerequisites, you created the echo1 and echo2 services and an echo_ingress Ingress for testing purposes; you will not need these in this tutorial, so you can now delete them.

      Start off by deleting the ingress by running the following command:

      • kubectl delete -f echo_ingress.yaml

      Then, delete the two test services:

      • kubectl delete -f echo1.yaml && kubectl delete -f echo2.yaml

      The kubectl delete command accepts the file to delete when passed the -f parameter.

      Create a folder that will serve as your workspace (this tutorial uses ~/docker-registry as an example name; any directory will do):
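
      • mkdir ~/docker-registry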

      Navigate to it by running:
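
      • cd ~/docker-registry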

      Now, using your text editor, create your chart_values.yaml file:
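
      • nano chart_values.yaml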

      Add the following lines, ensuring you replace the highlighted lines with your details:

      chart_values.yaml

      ingress:
        enabled: true
        hosts:
          - registry.example.com
        annotations:
          kubernetes.io/ingress.class: nginx
          certmanager.k8s.io/cluster-issuer: letsencrypt-prod
          nginx.ingress.kubernetes.io/proxy-body-size: "30720m"
        tls:
          - secretName: letsencrypt-prod
            hosts:
              - registry.example.com
      
      storage: s3
      
      secrets:
        htpasswd: ""
        s3:
          accessKey: "your_space_access_key"
          secretKey: "your_space_secret_key"
      
      s3:
        region: your_space_region
        regionEndpoint: your_space_region.digitaloceanspaces.com
        secure: true
        bucket: your_space_name
      

      The first block, ingress, configures the Kubernetes Ingress that will be created as a part of the Helm chart deployment. The Ingress object exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. The overridden values are:

      • enabled: set to true to enable the Ingress.
      • hosts: a list of hosts from which the Ingress will accept traffic.
      • annotations: key-value metadata that provides further direction to other parts of Kubernetes on how to treat the Ingress. You set the Ingress Controller to nginx, set the Let's Encrypt cluster issuer to the production variant (letsencrypt-prod), and tell the nginx controller to accept files with a maximum size of 30 GB, which is a sensible limit for even the largest Docker images.
      • tls: this subcategory configures Let's Encrypt HTTPS. You populate its hosts list with your example domain name, which defines the secure hosts for which this Ingress will accept HTTPS traffic.

      Then, you set the registry's storage backend to s3 — the other available option is filesystem. Here s3 indicates using a remote storage system compatible with the industry-standard Amazon S3 API, which DigitalOcean Spaces fulfills.

      In the next block, secrets, you configure keys for accessing your DO Space under the s3 subcategory. Finally, in the s3 block, you configure the parameters specifying your Space.

      Save and close your file.

      Now, if you haven't already done so, set up your A records to point to the Load Balancer you created as part of the Nginx Ingress Controller installation in the prerequisite tutorial. To see how to set your DNS on DigitalOcean, see How to Manage DNS Records.

      Next, ensure your Space isn't empty. The Docker registry won't run at all if you don't have any files in your Space. To get around this, upload a file. Navigate to the Spaces tab, find your Space, click the Upload File button, and upload any file you'd like. You could upload the configuration file you just created.

      Empty file uploaded to empty Space

      Before installing anything via Helm, you need to refresh its cache to pull in the latest information about your chart repositories. To do this, run the following command:
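
      • helm repo update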

      Now, you'll deploy the Docker registry chart with this custom configuration via Helm by running:

      • helm install stable/docker-registry -f chart_values.yaml --name docker-registry

      You'll see the following output:

      Output

      NAME:   docker-registry
      ...
      NAMESPACE: default
      STATUS: DEPLOYED
      
      RESOURCES:
      ==> v1/ConfigMap
      NAME                     DATA  AGE
      docker-registry-config   1     1s
      
      ==> v1/Pod(related)
      NAME                              READY  STATUS             RESTARTS  AGE
      docker-registry-54df68fd64-l26fb  0/1    ContainerCreating  0         1s
      
      ==> v1/Secret
      NAME                    TYPE    DATA  AGE
      docker-registry-secret  Opaque  3     1s
      
      ==> v1/Service
      NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
      docker-registry  ClusterIP  10.245.131.143  <none>       5000/TCP  1s
      
      ==> v1beta1/Deployment
      NAME             READY  UP-TO-DATE  AVAILABLE  AGE
      docker-registry  0/1    1           0          1s
      
      ==> v1beta1/Ingress
      NAME             HOSTS                 ADDRESS  PORTS    AGE
      docker-registry  registry.example.com           80, 443  1s
      
      NOTES:
      1. Get the application URL by running these commands:
        https://registry.example.com/

      Helm lists all the resources it created as a result of the Docker registry chart deployment. The registry is now accessible from the domain name you specified earlier.

      You've configured and deployed a Docker registry on your Kubernetes cluster. Next, you will test the availability of the newly deployed Docker registry.

      Step 2 — Testing Pushing and Pulling

      In this step, you'll test your newly deployed Docker registry by pushing and pulling images to and from it. Currently, the registry is empty. To have something to push, you need to have an image available on the machine you're working from. Let's use the mysql Docker image.

      Start off by pulling mysql from the Docker Hub:
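
      • sudo docker pull mysql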

      Your output will look like this:

      Output

      Using default tag: latest
      latest: Pulling from library/mysql
      27833a3ba0a5: Pull complete
      ...
      e906385f419d: Pull complete
      Digest: sha256:a7cf659a764732a27963429a87eccc8457e6d4af0ee9d5140a3b56e74986eed7
      Status: Downloaded newer image for mysql:latest

      You now have the image available locally. To inform Docker where to push it, you'll need to tag it with the host name, like so:

      • sudo docker tag mysql registry.example.com/mysql

      Then, push the image to the new registry:

      • sudo docker push registry.example.com/mysql

      This command will run successfully and indicate that your new registry is properly configured and accepting traffic — including pushing new images. If you see an error, double check your steps against steps 1 and 2.

      To test pulling from the registry cleanly, first delete the local mysql images with the following command:

      • sudo docker rmi registry.example.com/mysql && sudo docker rmi mysql

      Then, pull it from the registry:

      • sudo docker pull registry.example.com/mysql

      This command will take a few seconds to complete. If it runs successfully, that means your registry is working correctly. If it shows an error, double check what you have entered against the previous commands.

      You can list Docker images available locally by running the following command:
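
      • sudo docker images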

      You'll see output listing the images available on your local machine, along with their ID and date of creation.

      Your Docker registry is configured. You've pushed an image to it and verified you can pull it down. Now let's add authentication so that only authorized users can access the images.

      Step 3 — Adding Account Authentication and Configuring Kubernetes Access

      In this step, you'll set up username and password authentication for the registry using the htpasswd utility.

      The htpasswd utility comes with the Apache web server; you can use it to create files that store usernames and passwords for basic authentication of HTTP users. The format of htpasswd files is username:hashed_password (one per line), which is portable enough to allow other programs to use it as well.

      To make htpasswd available on the system, you'll need to install it by running:

      • sudo apt install apache2-utils -y

      Note:
      If you're running this tutorial from a Mac, you'll need to use the following command to make htpasswd available on your machine:

      • docker run --rm -v ${PWD}:/app -it httpd htpasswd -b -c /app/htpasswd_file sammy password

      Create an empty htpasswd_file by executing the following command:
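
      • touch htpasswd_file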

      Add a username and password combination to htpasswd_file:

      • htpasswd -B htpasswd_file username

      Docker requires the password to be hashed using the bcrypt algorithm, which is why we pass the -B parameter. The bcrypt algorithm is a password hashing function based on the Blowfish block cipher, with a work factor parameter that specifies how expensive the hash computation will be.

      Remember to replace username with your desired username. When run, htpasswd will ask you for the accompanying password and add the combination to htpasswd_file. You can repeat this command for as many users as you wish to add.

      Now, show the contents of htpasswd_file by running the following command:
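
      • cat htpasswd_file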

      Select and copy the contents shown.

      To add authentication to your Docker registry, you'll need to edit chart_values.yaml and add the contents of htpasswd_file in the htpasswd variable.

      Open chart_values.yaml for editing:
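
      • nano chart_values.yaml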

      Find the line that looks like this:

      chart_values.yaml

        htpasswd: ""
      

      Edit it to match the following, replacing htpasswd_file_contents with the contents you copied from the htpasswd_file:

      chart_values.yaml

        htpasswd: |-
          htpasswd_file_contents
      

      Be careful with the indentation: each line of the file contents must be indented by four spaces.

      Once you've added your contents, save and close the file.

      To propagate the changes to your cluster, run the following command:

      • helm upgrade docker-registry stable/docker-registry -f chart_values.yaml

      The output will be similar to that shown when you first deployed your Docker registry:

      Output

      Release "docker-registry" has been upgraded. Happy Helming! LAST DEPLOYED: ... NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==> v1/ConfigMap NAME DATA AGE docker-registry-config 1 3m8s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE docker-registry-6c5bb7ffbf-ltnjv 1/1 Running 0 3m7s ==> v1/Secret NAME TYPE DATA AGE docker-registry-secret Opaque 4 3m8s ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE docker-registry ClusterIP 10.245.128.245 <none> 5000/TCP 3m8s ==> v1beta1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE docker-registry 1/1 1 1 3m8s ==> v1beta1/Ingress NAME HOSTS ADDRESS PORTS AGE docker-registry registry.example.com 159.89.215.50 80, 443 3m8s NOTES: 1. Get the application URL by running these commands: https://registry.example.com/

      This command calls Helm and instructs it to upgrade an existing release, in your case docker-registry, with its chart defined in stable/docker-registry in the chart repository, after applying the chart_values.yaml file.

      Now, you'll try pulling an image from the registry again:

      • sudo docker pull registry.example.com/mysql

      The output will look like the following:

      Output

      Using default tag: latest
      Error response from daemon: Get https://registry.example.com/v2/mysql/manifests/latest: no basic auth credentials

      It correctly failed because you provided no credentials. This means that your Docker registry is properly requiring authentication for requests.

      To log in to the registry, run the following command:

      • sudo docker login registry.example.com

      Remember to replace registry.example.com with your domain address. It will prompt you for a username and password. If it shows an error, double check what your htpasswd_file contains. You must define the username and password combination in the htpasswd_file, which you created earlier in this step.

      To test the login, you can try to pull again by running the following command:

      • sudo docker pull registry.example.com/mysql

      The output will look similar to the following:

      Output

      Using default tag: latest
      latest: Pulling from mysql
      Digest: sha256:f2dc118ca6fa4c88cde5889808c486dfe94bccecd01ca626b002a010bb66bcbe
      Status: Image is up to date for registry.example.com/mysql:latest

      You've now configured Docker and can log in securely. To configure Kubernetes to log in to your registry, run the following command:

      • sudo kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/sammy/.docker/config.json --type=kubernetes.io/dockerconfigjson

      You will see the following output:

      Output

      secret/regcred created

      This command creates a secret in your cluster with the name regcred, takes the contents of the JSON file where Docker stores the credentials, and parses it as dockerconfigjson, which defines a registry credential in Kubernetes.

      You've used htpasswd to create a login config file, configured the registry to authenticate requests, and created a Kubernetes secret containing the login credentials. Next, you will test the integration between your Kubernetes cluster and registry.

      Step 4 — Testing Kubernetes Integration by Running a Sample Deployment

      In this step, you'll run a sample deployment with an image stored in the in-cluster registry to test the connection between your Kubernetes cluster and registry.

      In the last step, you created a secret, called regcred, containing login credentials for your private registry. It may contain login credentials for multiple registries, in which case you'll have to update the Secret accordingly.

      You can specify which secret Kubernetes should use when pulling containers in the pod definition by specifying imagePullSecrets. This step is necessary when the Docker registry requires authentication.

      You'll now deploy a sample Hello World image from your private Docker registry to your cluster. First, in order to push it, you'll pull it to your machine by running the following command:

      • sudo docker pull paulbouwer/hello-kubernetes:1.5

      Then, tag it by running:

      • sudo docker tag paulbouwer/hello-kubernetes:1.5 registry.example.com/paulbouwer/hello-kubernetes:1.5

      Finally, push it to your registry:

      • sudo docker push registry.example.com/paulbouwer/hello-kubernetes:1.5

      Delete it from your machine as you no longer need it locally:

      • sudo docker rmi registry.example.com/paulbouwer/hello-kubernetes:1.5

      Now, you'll deploy the sample Hello World application. First, create a new file, hello-world.yaml, using your text editor:
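
      • nano hello-world.yaml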

      Next, you'll define the Deployment, along with a Service and an Ingress to make the app accessible from outside the cluster. Add the following lines, replacing the highlighted lines with your domains:

      hello-world.yaml

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: hello-kubernetes-ingress
        annotations:
          kubernetes.io/ingress.class: nginx
          nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        rules:
        - host: k8s-test.example.com
          http:
            paths:
            - path: /
              backend:
                serviceName: hello-kubernetes
                servicePort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-kubernetes
      spec:
        type: NodePort
        ports:
        - port: 80
          targetPort: 8080
        selector:
          app: hello-kubernetes
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-kubernetes
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello-kubernetes
        template:
          metadata:
            labels:
              app: hello-kubernetes
          spec:
            containers:
            - name: hello-kubernetes
              image: registry.example.com/paulbouwer/hello-kubernetes:1.5
              ports:
              - containerPort: 8080
            imagePullSecrets:
            - name: regcred
      

      First, you define the Ingress for the Hello World deployment, which you will route through the Load Balancer that the Nginx Ingress Controller owns. Then, you define a service that can access the pods created in the deployment. In the actual deployment spec, you specify the image as the one located in your registry and set imagePullSecrets to regcred, which you created in the previous step.

      Save and close the file. To deploy this to your cluster, run the following command:

      • kubectl apply -f hello-world.yaml

      You'll see the following output:

      Output

      ingress.extensions/hello-kubernetes-ingress created
      service/hello-kubernetes created
      deployment.apps/hello-kubernetes created

      You can now navigate to your test domain — the second A record, k8s-test.example.com in this tutorial. You will see the Kubernetes Hello world! page.

      Hello World page

      The Hello World page lists some environment information, like the Linux kernel version and the internal ID of the pod the request was served from. You can also access your Space via the web interface to see the images you've worked with in this tutorial.

      If you want to delete this Hello World deployment after testing, run the following command:

      • kubectl delete -f hello-world.yaml

      You've created a sample Hello World deployment to test if Kubernetes is properly pulling images from your private registry.

      Conclusion

      You have now successfully deployed your own private Docker registry on your DigitalOcean Kubernetes cluster, using DigitalOcean Spaces as the storage layer underneath. There is no practical limit to how many images you can store: Spaces scales as needed while providing the same security and robustness throughout. In production, though, you should always strive to optimize your Docker images as much as possible; take a look at the How To Optimize Docker Images for Production tutorial.




      How to Set Up a Scalable Django App with DigitalOcean Managed Databases and Spaces


      Introduction

      Django is a powerful web framework that can help you get your Python application or website off the ground quickly. It includes several convenient features like an object-relational mapper, a Python API, and a customizable administrative interface for your application. It also includes a caching framework and encourages clean app design through its URL Dispatcher and Template system.

      Out of the box, Django includes a minimal web server for testing and local development, but it should be paired with a more robust serving infrastructure for production use cases. Django is often rolled out with an Nginx web server to handle static file requests and HTTPS redirection, and a Gunicorn WSGI server to serve the app.

      In this guide, we will augment this setup by offloading static files like Javascript and CSS stylesheets to DigitalOcean Spaces, and optionally delivering them using a Content Delivery Network, or CDN, which stores these files closer to end users to reduce transfer times. We’ll also use a DigitalOcean Managed PostgreSQL database as our data store to simplify the data layer and avoid having to manually configure a scalable PostgreSQL database.

      Prerequisites

      Before you begin with this guide, you should have the following available to you:

      Step 1 — Installing Packages from the Ubuntu Repositories

      To begin, we’ll download and install all of the items we need from the Ubuntu repositories. We’ll use the Python package manager pip to install additional components a bit later.

      We need to first update the local apt package index and then download and install the packages.

      In this guide, we’ll use Django with Python 3. To install the necessary libraries, log in to your server and type:

      • sudo apt update
      • sudo apt install python3-pip python3-dev libpq-dev curl postgresql-client

      This will install pip, the Python development files needed to build Gunicorn, the libpq header files needed to build the Psycopg PostgreSQL Python adapter, and the PostgreSQL command-line client.

      Hit Y and then ENTER when prompted to begin downloading and installing the packages.

      Next, we’ll configure the database to work with our Django app.

      Step 2 — Creating the PostgreSQL Database and User

      We’ll now create a database and database user for our Django application.

      To begin, grab the Connection Parameters for your cluster by navigating to Databases from the Cloud Control Panel, and clicking into your database. You should see a Connection Details box containing some parameters for your cluster. Note these down.

      Back on the command line, log in to your cluster using these credentials and the psql PostgreSQL client we just installed:

      • psql -U doadmin -h host -p port -d database

      When prompted, enter the password displayed alongside the doadmin Postgres username, and hit ENTER.

      You will be given a PostgreSQL prompt from which you can manage the database.

      First, create a database for your project called polls:
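
      • CREATE DATABASE polls;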

      Note: Every Postgres statement must end with a semicolon, so make sure that your command ends with one if you are experiencing issues.

      We can now switch to the polls database:
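
      • \c polls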

      Next, create a database user for the project. Make sure to select a secure password:

      • CREATE USER myprojectuser WITH PASSWORD 'password';

      We'll now modify a few of the connection parameters for the user we just created. This will speed up database operations so that the correct values do not have to be queried and set each time a connection is established.

      We are setting the default encoding to UTF-8, which Django expects. We are also setting the default transaction isolation scheme to "read committed", which blocks reads from uncommitted transactions. Lastly, we are setting the timezone. By default, our Django projects will be set to use UTC. These are all recommendations from the Django project itself.

      Enter the following commands at the PostgreSQL prompt:

      • ALTER ROLE myprojectuser SET client_encoding TO 'utf8';
      • ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed';
      • ALTER ROLE myprojectuser SET timezone TO 'UTC';

      Now we can give our new user access to administer our new database:

      • GRANT ALL PRIVILEGES ON DATABASE polls TO myprojectuser;

      When you are finished, exit out of the PostgreSQL prompt by typing:
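
      • \q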

      Your Django app is now ready to connect to and manage this database.

      In the next step, we'll install virtualenv and create a Python virtual environment for our Django project.

      Step 3 — Creating a Python Virtual Environment for your Project

      Now that we've set up our database to work with our application, we'll create a Python virtual environment that will isolate this project's dependencies from the system's global Python installation.

      To do this, we first need access to the virtualenv command. We can install this with pip.

      Upgrade pip and install the package by typing:

      • sudo -H pip3 install --upgrade pip
      • sudo -H pip3 install virtualenv

      With virtualenv installed, we can create a directory to store our Python virtual environments and make one to use with the Django polls app.

      Create a directory called envs and navigate into it:
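
      • mkdir envs
      • cd envs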

      Within this directory, create a Python virtual environment called polls by typing:
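
      • virtualenv polls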

      This will create a directory called polls within the envs directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for our project.

      Before we install our project's Python requirements, we need to activate the virtual environment. You can do that by typing:

      • source polls/bin/activate

      Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (polls)user@host:~/envs$.

      With your virtual environment active, install Django, Gunicorn, and the psycopg2 PostgreSQL adaptor with the local instance of pip:

      Note: When the virtual environment is activated (when your prompt has (polls) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment's copy of the tool is always named pip, regardless of the Python version.

      • pip install django gunicorn psycopg2-binary

      You should now have all of the software you need to run the Django polls app. In the next step, we'll create a Django project and install this app.

      Step 4 — Creating the Polls Django Application

      We can now set up our sample application. In this tutorial, we'll use the Polls demo application from the Django documentation. It consists of a public site that allows users to view polls and vote in them, and an administrative control panel that allows the admin to modify, create, and delete polls.

      In this guide, we'll skip through the tutorial steps, and simply clone the final application from the DigitalOcean Community django-polls repo.

      If you'd like to complete the steps manually, create a directory called django-polls in your home directory and navigate into it:

      • cd
      • mkdir django-polls
      • cd django-polls

      From there, you can follow the Writing your first Django app tutorial from the official Django documentation. When you're done, skip to Step 5.

      If you just want to clone the finished app, navigate to your home directory and use git to clone the django-polls repo:

      • cd
      • git clone https://github.com/do-community/django-polls.git

      cd into it, and list the directory contents:
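
      • cd django-polls
      • ls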

      You should see the following objects:

      Output

      LICENSE README.md manage.py mysite polls templates

      manage.py is the main command-line utility used to manipulate the app. polls contains the polls app code, and mysite contains project-scope code and settings. templates contains custom template files for the administrative interface. To learn more about the project structure and files, consult Creating a Project from the official Django documentation.

      Before running the app, we need to adjust its default settings and connect it to our database.

      Step 5 — Adjusting the App Settings

      In this step, we'll modify the Django project's default configuration to increase security, connect Django to our database, and collect static files into a local directory.

      Begin by opening the settings file in your text editor:

      • nano ~/django-polls/mysite/settings.py

      Start by locating the ALLOWED_HOSTS directive. This defines a list of the addresses or domain names that you want to use to connect to the Django instance. An incoming request with a Host header not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.

      In the square brackets, list the IP addresses or domain names associated with your Django server. Each item should be listed in quotations with entries separated by a comma. Your list will also include localhost, since you will be proxying connections through a local Nginx instance. If you wish to include requests for an entire domain and any subdomains, prepend a period to the beginning of the entry.

      In the snippet below, there are a few commented out examples that demonstrate what these entries should look like:

      ~/django-polls/mysite/settings.py

      . . .
      
      # The simplest case: just add the domain name(s) and IP addresses of your Django server
      # ALLOWED_HOSTS = [ 'example.com', '203.0.113.5']
      # To respond to 'example.com' and any subdomains, start the domain with a dot
      # ALLOWED_HOSTS = ['.example.com', '203.0.113.5']
      ALLOWED_HOSTS = ['your_server_domain_or_IP', 'second_domain_or_IP', . . ., 'localhost']
      
      . . . 
      

      Next, find the section of the file that configures database access. It will start with DATABASES. The configuration in the file is for a SQLite database. We already created a PostgreSQL database for our project, so we need to adjust these settings.

      We will tell Django to use the psycopg2 database adaptor we installed with pip, instead of the default SQLite engine. We’ll also reuse the Connection Parameters referenced in Step 2. You can always find this information from the Managed Databases section of the DigitalOcean Cloud Control Panel.

      Update the file with your database settings: the database name (polls), the database username, the database user's password, and the database host and port. Be sure to replace the database-specific values with your own information:

      ~/django-polls/mysite/settings.py

      . . .
      
      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.postgresql_psycopg2',
              'NAME': 'polls',
              'USER': 'myprojectuser',
              'PASSWORD': 'password',
              'HOST': 'managed_db_host',
              'PORT': 'managed_db_port',
          }
      }
      
      . . .
      

      Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. This is necessary so that Nginx can handle requests for these items. The following line tells Django to place them in a directory called static in the base project directory:

      ~/django-polls/mysite/settings.py

      . . .
      
      STATIC_URL = '/static/'
      STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
      

      Save and close the file when you are finished.

      At this point, you've configured the Django project's database, security, and static files settings. If you followed the polls tutorial from the start and did not clone the GitHub repo, you can move on to Step 6. If you cloned the GitHub repo, there remains one additional step.

      The Django settings file contains a SECRET_KEY variable that is used to create hashes for various Django objects. It's important that it is set to a unique, unpredictable value. The SECRET_KEY variable has been scrubbed from the GitHub repository, so we'll create a new one using a function built into the django Python package called get_random_secret_key(). From the command line, open up a Python interpreter:
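
      • python3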

      You should see the following output and prompt:

      Output

      Python 3.6.7 (default, Oct 22 2018, 11:32:17)
      [GCC 8.2.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>>

      Import the get_random_secret_key function from the Django package, then call the function:

      • from django.core.management.utils import get_random_secret_key
      • get_random_secret_key()

      Copy the resulting key to your clipboard.

      Exit the Python interpreter by pressing CTRL+D.

      Next, open up the settings file in your text editor once again:

      • nano ~/django-polls/mysite/settings.py
      

      Locate the SECRET_KEY variable and paste in the key you just generated:

      ~/django-polls/mysite/settings.py

      . . .
      
      # SECURITY WARNING: keep the secret key used in production secret!
      SECRET_KEY = 'your_secret_key_here'
      
      . . .
      

      Save and close the file.

      We'll now test the app locally using the Django development server to ensure that everything's been correctly configured.

      Step 6 — Testing the App

      Before we run the Django development server, we need to use the manage.py utility to create the database schema and collect static files into the STATIC_ROOT directory.

      Navigate into the project's base directory, and create the initial database schema in our PostgreSQL database using the makemigrations and migrate commands:

      • cd django-polls
      • ./manage.py makemigrations
      • ./manage.py migrate

      makemigrations will create the migrations, or database schema changes, based on the changes made to Django models. migrate will apply these migrations to the database schema. To learn more about migrations in Django, consult Migrations from the official Django documentation.

      Create an administrative user for the project by typing:

      • ./manage.py createsuperuser

      You will have to select a username, provide an email address, and choose and confirm a password.

      We can collect all of the static content into the directory location we configured by typing:

      • ./manage.py collectstatic

      The static files will then be placed in a directory called static within your project directory.

      If you followed the initial server setup guide, you should have a UFW firewall protecting your server. In order to test the development server, we'll have to allow access to the port we'll be using.

      Create an exception for port 8000 by typing:
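
      • sudo ufw allow 8000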

      Testing the App Using the Django Development Server

      Finally, you can test your project by starting the Django development server with this command:

      • ./manage.py runserver 0.0.0.0:8000

      In your web browser, visit your server's domain name or IP address followed by :8000 and the polls path:

      • http://server_domain_or_IP:8000/polls

      You should see the Polls app interface:

      Polls App Interface

      To check out the admin interface, visit your server's domain name or IP address followed by :8000 and the administrative interface's path:

      • http://server_domain_or_IP:8000/admin

      You should see the Polls app admin authentication window:

      Polls Admin Auth Page

      Enter the administrative username and password you created with the createsuperuser command.

      After authenticating, you can access the Polls app's administrative interface:

      Polls Admin Main Interface

      When you are finished exploring, hit CTRL-C in the terminal window to shut down the development server.

      Testing the App Using Gunicorn

      The last thing we want to do before offloading static files is test Gunicorn to make sure that it can serve the application. We can do this by entering our project directory and using gunicorn to load the project's WSGI module:

      • gunicorn --bind 0.0.0.0:8000 mysite.wsgi

      This will start Gunicorn on the same interface that the Django development server was running on. You can go back and test the app again.

      Note: The admin interface will not have any of the styling applied since Gunicorn does not know how to find the static CSS content responsible for this.

      We passed Gunicorn a module by specifying the relative path to Django's wsgi.py file, the entry point to our application. This file defines a function called application, which communicates with the application. To learn more, consult the WSGI specification (PEP 3333).
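
      For reference, here is a minimal sketch of what a freshly generated Django project's wsgi.py typically contains; the file shipped in the django-polls repo should be equivalent, but check your own copy:

      ~/django-polls/mysite/wsgi.py

      import os
      
      from django.core.wsgi import get_wsgi_application
      
      # Tell Django which settings module to use before building the WSGI application.
      os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
      
      # Gunicorn imports this module and looks up the `application` callable,
      # which is exactly what `mysite.wsgi:application` refers to.
      application = get_wsgi_application()
      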

      When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

      We'll now offload the application’s static files to DigitalOcean Spaces.

      Step 7 — Offloading Static Files to DigitalOcean Spaces

      At this point, Gunicorn can serve our Django application but not its static files. Usually we'd configure Nginx to serve these files, but in this tutorial we'll offload them to DigitalOcean Spaces using the django-storages plugin. This allows you to easily scale Django by centralizing its static content and freeing up server resources. In addition, you can deliver this static content using the DigitalOcean Spaces CDN.

      For a full guide on offloading Django static files to Object storage, consult How to Set Up Object Storage with Django.

      Installing and Configuring django-storages

      We'll begin by installing the django-storages Python package. The django-storages package provides Django with the S3Boto3Storage storage backend that uses the boto3 library to upload files to any S3-compatible object storage service.

      To start, install the django-storages and boto3 Python packages using pip:

      • pip install django-storages boto3

      Next, open your app's Django settings file again:

      • nano ~/django-polls/mysite/settings.py

      Navigate down to the INSTALLED_APPS section of the file, and append storages to the list of installed apps:

      ~/django-polls/mysite/settings.py

      . . .
      
      INSTALLED_APPS = [
          . . .
          'django.contrib.staticfiles',
          'storages',
      ]
      
      . . .
      

      Scroll further down the file to the STATIC_URL we previously modified. We'll now overwrite these values and append new S3Boto3Storage backend parameters. Delete the code you entered earlier, and add the following blocks, which include access and location information for your Space. Remember to replace the highlighted values here with your own information:

      ~/django-polls/mysite/settings.py

      . . .
      
      # Static files (CSS, JavaScript, Images)
      # https://docs.djangoproject.com/en/2.1/howto/static-files/
      
      AWS_ACCESS_KEY_ID = 'your_spaces_access_key'
      AWS_SECRET_ACCESS_KEY = 'your_spaces_secret_key'
      
      AWS_STORAGE_BUCKET_NAME = 'your_space_name'
      AWS_S3_ENDPOINT_URL = 'spaces_endpoint_URL'
      AWS_S3_OBJECT_PARAMETERS = {
          'CacheControl': 'max-age=86400',
      }
      AWS_LOCATION = 'static'
      AWS_DEFAULT_ACL = 'public-read'
      
      STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
      
      STATIC_URL = '{}/{}/'.format(AWS_S3_ENDPOINT_URL, AWS_LOCATION)
      STATIC_ROOT = 'static/'
      

      We define the following configuration items:

      • AWS_ACCESS_KEY_ID: The Access Key ID for the Space, which you created in the tutorial prerequisites. If you didn’t create a set of Access Keys, consult Sharing Access to Spaces with Access Keys.
      • AWS_SECRET_ACCESS_KEY: The secret key for the DigitalOcean Space.
      • AWS_STORAGE_BUCKET_NAME: Your DigitalOcean Space name.
      • AWS_S3_ENDPOINT_URL: The endpoint URL used to access the object storage service. For DigitalOcean, this will be something like https://nyc3.digitaloceanspaces.com depending on the Space region.
      • AWS_S3_OBJECT_PARAMETERS: Sets the cache control headers on static files.
      • AWS_LOCATION: Defines a directory within the object storage bucket where all static files will be placed.
      • AWS_DEFAULT_ACL: Defines the access control list (ACL) for the static files. Setting it to public-read ensures that the files are publicly accessible to end users.
      • STATICFILES_STORAGE: Sets the storage backend Django will use to offload static files. This backend should work with any S3-compatible backend, including DigitalOcean Spaces.
      • STATIC_URL: Specifies the base URL that Django should use when generating URLs for static files. Here, we combine the endpoint URL and the static files subdirectory to construct a base URL for static files.
      • STATIC_ROOT: Specifies where to collect static files locally before copying them to object storage.

      From now on, when you run collectstatic, Django will upload your app's static files to the Space. When you start Django, it'll begin serving static assets like CSS and Javascript from this Space.

      Before we test that this is all functioning correctly, we need to configure Cross-Origin Resource Sharing (CORS) headers for our Spaces files or access to certain static assets may be denied by your web browser.

      CORS headers tell the web browser that an application running at one domain can access scripts or resources located at another. In this case, we need to allow cross-origin resource sharing for our Django server's domain so that requests for static files in the Space are not denied by the web browser.

      To begin, navigate to the Settings page of your Space using the Cloud Control Panel:

      Screenshot of the Settings tab

      In the CORS Configurations section, click Add.

      CORS advanced settings

      Here, under Origin, enter the wildcard origin, *.

      Warning: When you deploy your app into production, be sure to change this value to your exact origin domain (including the http:// or https:// protocol). Leaving this as the wildcard origin is insecure, and we do this here only for testing purposes since setting the origin to http://example.com:8000 (using a nonstandard port) is currently not supported.

      Under Allowed Methods, select GET.

      Click on Add Header, and in the text box that appears, enter Access-Control-Allow-Origin.

      Set Access Control Max Age to 600 so that the header we just created expires every 10 minutes.

      Click Save Options.

      From now on, objects in your Space will contain the appropriate Access-Control-Allow-Origin response headers, allowing modern secure web browsers to fetch these files across domains.

      At this point, you can optionally enable the CDN for your Space, which will serve these static files from a distributed network of edge servers. To learn more about CDNs, consult Using a CDN to Speed Up Static Content Delivery. This can significantly improve web performance. If you don't want to enable the CDN for your Space, skip ahead to the next section, Testing Spaces Static File Delivery.

      Enabling CDN (Optional)

      To activate static file delivery via the DigitalOcean Spaces CDN, begin by enabling the CDN for your DigitalOcean Space. To learn how to do this, consult How to Enable the Spaces CDN from the DigitalOcean product documentation.

      Once you've enabled the CDN for your Space, navigate to it using the Cloud Control Panel. You should see a new Endpoints link under your Space name:

      List of Space Endpoints

      These endpoints should contain your Space name.

      Notice the addition of a new Edge endpoint. This endpoint routes requests for Spaces objects through the CDN, serving them from the edge cache as much as possible. Note down this Edge endpoint, as we'll use it to configure the django-storages plugin.

      Next, edit your app's Django settings file once again:

      • nano ~/django-polls/mysite/settings.py

      Navigate down to the Static Files section we recently modified. Add the AWS_S3_CUSTOM_DOMAIN parameter to configure the django-storages plugin CDN endpoint and update the STATIC_URL parameter to use this new CDN endpoint:

      ~/django-polls/mysite/settings.py

      . . .
      
      # Static files (CSS, JavaScript, Images)
      # https://docs.djangoproject.com/en/2.1/howto/static-files/
      
      # Moving static assets to DigitalOcean Spaces as per:
      # https://www.digitalocean.com/community/tutorials/how-to-set-up-object-storage-with-django
      AWS_ACCESS_KEY_ID = 'your_spaces_access_key'
      AWS_SECRET_ACCESS_KEY = 'your_spaces_secret_key'
      
      AWS_STORAGE_BUCKET_NAME = 'your_space_name'
      AWS_S3_ENDPOINT_URL = 'spaces_endpoint_URL'
      AWS_S3_CUSTOM_DOMAIN = 'spaces_edge_endpoint_URL'
      AWS_S3_OBJECT_PARAMETERS = {
          'CacheControl': 'max-age=86400',
      }
      AWS_LOCATION = 'static'
      AWS_DEFAULT_ACL = 'public-read'
      
      STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
      
      STATIC_URL = '{}/{}/'.format(AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
      STATIC_ROOT = 'static/'
      

      Here, replace the spaces_edge_endpoint_URL with the Edge endpoint you just noted down, truncating the https:// prefix. For example, if the Edge endpoint URL is https://example.sfo2.cdn.digitaloceanspaces.com, AWS_S3_CUSTOM_DOMAIN should be set to example.sfo2.cdn.digitaloceanspaces.com.

      When you're done, save and close the file.

      When you start Django, it will now serve static content using the CDN for your DigitalOcean Space.

      Testing Spaces Static File Delivery

      We'll now test that Django is correctly serving static files from our DigitalOcean Space.

      Navigate to your Django app directory:
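
      • cd ~/django-polls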

      From here, run collectstatic to collect and upload static files to your DigitalOcean Space:

      • python manage.py collectstatic

      You should see the following output:

      Output

      You have requested to collect static files at the destination
      location as specified in your settings.
      
      This will overwrite existing files!
      Are you sure you want to do this?
      
      Type 'yes' to continue, or 'no' to cancel:

      Type yes and hit ENTER to confirm.

      You should then see output like the following:

      Output

      121 static files copied.

      This confirms that Django successfully uploaded the polls app static files to your Space. You can navigate to your Space using the Cloud Control Panel, and inspect the files in the static directory.

      Next, we'll verify that Django is rewriting the appropriate URLs.

      Start the Gunicorn server:

      • gunicorn --bind 0.0.0.0:8000 mysite.wsgi

      In your web browser, visit your server's domain name or IP address followed by :8000 and /admin:

      • http://server_domain_or_IP:8000/admin
      

      You should once again see the Polls app admin authentication window, this time with correct styling.

      Now, use your browser's developer tools to inspect the page contents and reveal the source file storage locations.

      To do this using Google Chrome, right-click the page, and select Inspect.

      You should see the following window:

      Chrome Dev Tools Window

      From here, click on Sources in the toolbar. In the list of source files in the left-hand pane, you should see /admin/login under your Django server's domain, and static/admin under your Space's CDN endpoint. Within static/admin, you should see both the css and fonts directories.

      This confirms that CSS stylesheets and fonts are correctly being served from your Space's CDN.

      When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

      You can disable your active Python virtual environment by entering deactivate:
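
      • deactivate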

      Your prompt should return to normal.

      At this point you've successfully offloaded static files from your Django server, and are serving them from object storage. We can now move on to configuring Gunicorn to start automatically as a system service.

      Step 8 — Creating systemd Socket and Service Files for Gunicorn

      In Step 6 we tested that Gunicorn can interact with our Django application, but we should implement a more robust way of starting and stopping the application server. To accomplish this, we'll make systemd service and socket files.

      The Gunicorn socket will be created at boot and will listen for connections. When a connection occurs, systemd will automatically start the Gunicorn process to handle the connection.

      Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:

      • sudo nano /etc/systemd/system/gunicorn.socket

      Inside, we will create a [Unit] section to describe the socket, a [Socket] section to define the socket location, and an [Install] section to make sure the socket is created at the right time. Add the following code to the file:

      /etc/systemd/system/gunicorn.socket

      [Unit]
      Description=gunicorn socket
      
      [Socket]
      ListenStream=/run/gunicorn.sock
      
      [Install]
      WantedBy=sockets.target
      

      Save and close the file when you are finished.

      Next, create and open a systemd service file for Gunicorn with sudo privileges in your text editor. The service filename should match the socket filename with the exception of the extension:

      • sudo nano /etc/systemd/system/gunicorn.service

      Start with the [Unit] section, which specifies metadata and dependencies. We'll put a description of our service here and tell the init system to only start this after the networking target has been reached. Because our service relies on the socket from the socket file, we need to include a Requires directive to indicate that relationship:

      /etc/systemd/system/gunicorn.service

      [Unit]
      Description=gunicorn daemon
      Requires=gunicorn.socket
      After=network.target
      

      Next, we'll open up the [Service] section. We'll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We'll give group ownership to the www-data group so that Nginx can communicate easily with Gunicorn.

      We'll then map out the working directory and specify the command to use to start the service. In this case, we'll have to specify the full path to the Gunicorn executable, which is installed within our virtual environment. We will bind the process to the Unix socket we created within the /run directory so that the process can communicate with Nginx. We log all data to standard output so that the journald process can collect the Gunicorn logs. We can also specify any optional Gunicorn tweaks here, like the number of worker processes. Here, we run Gunicorn with 3 worker processes.

      Add the following Service section to the file. Be sure to replace the username listed here with your own username:

      /etc/systemd/system/gunicorn.service

      [Unit]
      Description=gunicorn daemon
      Requires=gunicorn.socket
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      WorkingDirectory=/home/sammy/django-polls
      ExecStart=/home/sammy/envs/polls/bin/gunicorn \
                --access-logfile - \
                --workers 3 \
                --bind unix:/run/gunicorn.sock \
                mysite.wsgi:application
      

      Finally, we'll add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

      /etc/systemd/system/gunicorn.service

      [Unit]
      Description=gunicorn daemon
      Requires=gunicorn.socket
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      WorkingDirectory=/home/sammy/django-polls
      ExecStart=/home/sammy/envs/polls/bin/gunicorn \
                --access-logfile - \
                --workers 3 \
                --bind unix:/run/gunicorn.sock \
                mysite.wsgi:application
      
      [Install]
      WantedBy=multi-user.target
      

      With that, our systemd service file is complete. Save and close it now.

      We can now start and enable the Gunicorn socket. This will create the socket file at /run/gunicorn.sock now and at boot. When a connection is made to that socket, systemd will automatically start the gunicorn.service to handle it:

      • sudo systemctl start gunicorn.socket
      • sudo systemctl enable gunicorn.socket

      We can confirm that the operation was successful by checking for the socket file.

      Checking for the Gunicorn Socket File

      Check the status of the process to find out whether it started successfully:

      • sudo systemctl status gunicorn.socket

      You should see the following output:

      Output

      Failed to dump process list, ignoring: No such file or directory
      ● gunicorn.socket - gunicorn socket
         Loaded: loaded (/etc/systemd/system/gunicorn.socket; enabled; vendor preset: enabled)
         Active: active (running) since Tue 2019-03-05 19:19:16 UTC; 1h 22min ago
         Listen: /run/gunicorn.sock (Stream)
         CGroup: /system.slice/gunicorn.socket
      
      Mar 05 19:19:16 django systemd[1]: Listening on gunicorn socket.

      Next, check for the existence of the gunicorn.sock file within the /run directory:
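
      • file /run/gunicorn.sock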

      Output

      /run/gunicorn.sock: socket

      If the systemctl status command indicated that an error occurred, or if you do not find the gunicorn.sock file in the directory, it's an indication that the Gunicorn socket was not created correctly. Check the Gunicorn socket's logs by typing:

      • sudo journalctl -u gunicorn.socket

      Take another look at your /etc/systemd/system/gunicorn.socket file to fix any problems before continuing.

      Testing Socket Activation

      Currently, if you've only started the gunicorn.socket unit, the gunicorn.service will not be active, since the socket has not yet received any connections. You can check this by typing:

      • sudo systemctl status gunicorn

      Output

      ● gunicorn.service - gunicorn daemon
         Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
         Active: inactive (dead)

      To test the socket activation mechanism, we can send a connection to the socket through curl by typing:

      • curl --unix-socket /run/gunicorn.sock localhost

      You should see the HTML output from your application in the terminal. This indicates that Gunicorn has started and is able to serve your Django application. You can verify that the Gunicorn service is running by typing:

      • sudo systemctl status gunicorn

      Output

      ● gunicorn.service - gunicorn daemon
         Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
         Active: active (running) since Tue 2019-03-05 20:43:56 UTC; 1s ago
       Main PID: 19074 (gunicorn)
          Tasks: 4 (limit: 4915)
         CGroup: /system.slice/gunicorn.service
                 ├─19074 /home/sammy/envs/polls/bin/python3 /home/sammy/envs/polls/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock mysite.wsgi:application
                 ├─19098 /home/sammy/envs/polls/bin/python3 /home/sammy/envs/polls/bin/gunicorn
      . . .

      Mar 05 20:43:56 django systemd[1]: Started gunicorn daemon.
      Mar 05 20:43:56 django gunicorn[19074]: [2019-03-05 20:43:56 +0000] [19074] [INFO] Starting gunicorn 19.9.0
      . . .
      Mar 05 20:44:15 django gunicorn[19074]: - - [05/Mar/2019:20:44:15 +0000] "GET / HTTP/1.1" 301 0 "-" "curl/7.58.0"

      If the output from curl or the output of systemctl status indicates that a problem occurred, check the logs for additional details:

      • sudo journalctl -u gunicorn

      You can also check your /etc/systemd/system/gunicorn.service file for problems. If you make changes to this file, be sure to reload the daemon to reread the service definition and restart the Gunicorn process:

      • sudo systemctl daemon-reload
      • sudo systemctl restart gunicorn

      Make sure you troubleshoot any issues before continuing on to configuring the Nginx server.

      Step 8 — Configuring Nginx HTTPS and Gunicorn Proxy Passing

      Now that Gunicorn is set up in a more robust fashion, we need to configure Nginx to encrypt connections and hand off traffic to the Gunicorn process.

      If you followed the prerequisites and set up Nginx with Let's Encrypt, you should already have a server block file corresponding to your domain available to you in Nginx's sites-available directory. If not, follow How To Secure Nginx with Let's Encrypt on Ubuntu 18.04 and return to this step.

      Before we edit this example.com server block file, we'll first remove the default server block file that Nginx creates during installation:

      • sudo rm /etc/nginx/sites-enabled/default

      We'll now modify the example.com server block file to pass traffic to Gunicorn instead of the default index.html page configured in the prerequisite step.

      Open the server block file corresponding to your domain in your editor:

      • sudo nano /etc/nginx/sites-available/example.com

      You should see something like the following:

      /etc/nginx/sites-available/example.com

      server {
      
              root /var/www/example.com/html;
              index index.html index.htm index.nginx-debian.html;
      
              server_name example.com www.example.com;
      
              location / {
                      try_files $uri $uri/ =404;
              }
      
          listen [::]:443 ssl ipv6only=on; # managed by Certbot
          listen 443 ssl; # managed by Certbot
          ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
          include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
          ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
      
      }
      
      server {
          if ($host = example.com) {
              return 301 https://$host$request_uri;
          } # managed by Certbot
      
      
              listen 80;
              listen [::]:80;
      
              server_name example.com www.example.com;
          return 404; # managed by Certbot
      
      
      }
      

      This is a combination of the default server block file created in How to Install Nginx on Ubuntu 18.04 as well as additions appended automatically by Let's Encrypt. We are going to delete the contents of this file and write a new configuration that redirects HTTP traffic to HTTPS, and forwards incoming requests to the Gunicorn socket we created in the previous step.

      If you'd like, you can make a backup of this file using cp. Quit your text editor and create a backup called example.com.old:

      • sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/example.com.old

      Now, reopen the file and delete its contents. We'll build the new configuration block by block.

      Begin by pasting in the following block, which redirects HTTP requests at port 80 to HTTPS:

      /etc/nginx/sites-available/example.com

      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name _;
          return 301 https://example.com$request_uri;
      }
      

      Here we listen for HTTP IPv4 and IPv6 requests on port 80 and send a 301 response header to redirect the request to HTTPS port 443 using the example.com domain. This will also redirect direct HTTP requests to the server’s IP address.
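
      Once the full configuration below is in place and Nginx has been restarted later in this step, you can check this redirect from the command line. Assuming example.com already resolves to your server, a HEAD request to port 80 should return a 301 with a Location header pointing at the HTTPS URL:

      • curl -I http://example.com/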

      After this block, append the following block of config code that handles HTTPS requests for the example.com domain:

      /etc/nginx/sites-available/example.com

      . . . 
      server {
          listen [::]:443 ssl ipv6only=on;
          listen 443 ssl;
          server_name example.com www.example.com;
      
          # Let's Encrypt parameters
          ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
          include /etc/letsencrypt/options-ssl-nginx.conf;
          ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
      
          location = /favicon.ico { access_log off; log_not_found off; }
      
          location / {
              proxy_pass         http://unix:/run/gunicorn.sock;
              proxy_redirect     off;
      
              proxy_set_header   Host              $http_host;
              proxy_set_header   X-Real-IP         $remote_addr;
              proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
              proxy_set_header   X-Forwarded-Proto https;
          }
      }
      

      Here, we first listen on port 443 for requests hitting the example.com and www.example.com domains.

      Next, we provide the same Let's Encrypt configuration included in the default server block file, which specifies the location of the SSL certificate and private key, as well as some additional security parameters.

      The location = /favicon.ico line instructs Nginx to ignore any problems with finding a favicon.

      The last location / block instructs Nginx to hand off requests to the Gunicorn socket configured in the previous step. In addition, it adds headers to inform the upstream Django server that a request has been forwarded and to provide it with various request properties.

      After you've pasted in those two configuration blocks, the final file should look something like this:

      /etc/nginx/sites-available/example.com

      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name _;
          return 301 https://example.com$request_uri;
      }
      server {
              listen [::]:443 ssl ipv6only=on;
              listen 443 ssl;
              server_name example.com www.example.com;
      
              # Let's Encrypt parameters
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
              include /etc/letsencrypt/options-ssl-nginx.conf;
              ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
      
              location = /favicon.ico { access_log off; log_not_found off; }
      
              location / {
                proxy_pass         http://unix:/run/gunicorn.sock;
                proxy_redirect     off;
      
                proxy_set_header   Host              $http_host;
                proxy_set_header   X-Real-IP         $remote_addr;
                proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
                proxy_set_header   X-Forwarded-Proto https;
              }
      }
      

      Save and close the file when you are finished.

      Test your Nginx configuration for syntax errors by typing:

      • sudo nginx -t

      If your configuration is error-free, restart Nginx by typing:

      • sudo systemctl restart nginx

      You should now be able to visit your server's domain or IP address to view your application. Your browser should be using a secure HTTPS connection to connect to the Django backend.

      To finish securing our Django project, we need to add a couple of security parameters to its settings.py file. Reopen this file in your editor:

      • nano ~/django-polls/mysite/settings.py

      Scroll to the bottom of the file, and add the following parameters:

      ~/django-polls/mysite/settings.py

      . . .
      
      SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
      SESSION_COOKIE_SECURE = True
      CSRF_COOKIE_SECURE = True
      SECURE_SSL_REDIRECT = True
      

      These settings tell Django that you have enabled HTTPS on your server, and instruct it to use "secure" cookies. To learn more about these settings, consult the SSL/HTTPS section of Security in Django.

      When you're done, save and close the file.

      Finally, restart Gunicorn:

      • sudo systemctl restart gunicorn

      At this point, you have configured Nginx to redirect HTTP requests and hand off these requests to Gunicorn. HTTPS should now be fully enabled for your Django project and app. If you're running into errors, this discussion on troubleshooting Nginx and Gunicorn may help.

      Warning: As stated in Configuring CORS Headers, be sure to change the Origin from the wildcard * domain to your domain name (https://example.com in this guide) before making your app accessible to end users.

      Conclusion

      In this guide, you set up and configured a scalable Django application running on an Ubuntu 18.04 server. This setup can be replicated across multiple servers to create a highly-available architecture. Furthermore, this app and its config can be containerized using Docker or another container runtime to ease deployment and scaling. These containers can then be deployed into a container cluster like Kubernetes. In an upcoming Tutorial series, we will explore how to containerize and modernize this Django polls app so that it can run in a Kubernetes cluster.

      In addition to static files, you may also wish to offload your Django Media files to object storage. To learn how to do this, consult Using Amazon S3 to Store your Django Site's Static and Media Files. You might also consider compressing static files to further optimize their delivery to end users. To do this, you can use a Django plugin like Django compressor.




      How To Display Data from the DigitalOcean API with React


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the last few years, open-source web frameworks have greatly simplified the process of coding an application. React, for example, has only added to the popularity of JavaScript by making the language more accessible to new developers and increasing the productivity of seasoned developers. Created by Facebook, React allows developers to quickly create high-end user interfaces for highly-scalable web-applications by supporting such features as declarative views, state management, and client-side rendering, each of which can greatly reduce the complexity of building an app in JavaScript.

      You can leverage frameworks like React to load and display data from the DigitalOcean API, through which you can manage your Droplets and other products within the DigitalOcean cloud using HTTP requests. Although one can fetch data from an API with many other JavaScript frameworks, React provides useful benefits like lifecycles and local state management that make it particularly well-suited for the job. With React, the data retrieved from the API is added to the local state when the application starts and can go through various lifecycles as components mount and dismount. At any point, you can retrieve the data from your local state and display it accordingly.

      In this tutorial, you will create a simple React application that interacts with the DigitalOcean API v2 to make calls and retrieve information about your Droplets. Your app will display a list containing your current Droplets and their details, like name, region, and technical specifications, and you will use the front-end framework Bootstrap to style your application.

      Once you have finished this tutorial, you will have a basic interface displaying a list of your DigitalOcean Droplets, styled to look like the following:

      The final version of your React Application

      Prerequisites

      Before you begin this guide, you’ll need a DigitalOcean account and at least one Droplet set up, in addition to the following:

      Step 1 — Creating a Basic React Application

      In this first step, you’ll create a basic React application using the Create React App package from npm. This package automatically installs and configures the essential dependencies needed to run React, like the module builder Webpack and the JavaScript compiler Babel. After installing, you’ll run the Create React App package using the package runner npx, which comes pre-installed with Node.js.

      To install Create React App and create the first version of your application, run the following command, replacing my-app with the name you want to give to your application:

      • npx create-react-app my-app

      After the installation is complete, move into the new project directory and start running the application using these commands:

      • cd my-app
      • npm start

      The npm start command starts a local development server provided by Create React App, which disables the command line prompt in your terminal while the server is running. To proceed with the tutorial, open up a new terminal window and navigate back to the project directory before proceeding to the next step.

      You now have the first version of your React application running in development mode, which you can view by opening http://localhost:3000 in a web browser. At this point, your app will only display the welcome screen from Create React App:

      The first version of your React application

      Now that you have installed and created the first version of your React application, you can add a table component to your app that will eventually hold the data from the DigitalOcean API.

      Step 2 — Creating a Component to Show the Droplet Data

      In this step, you will create the first component that displays information about your Droplets. This component will be a table that lists all of your Droplets and their corresponding details.

      The DigitalOcean API documentation states that you can retrieve a list containing all of your Droplets by sending a request to the following endpoint using cURL: https://api.digitalocean.com/v2/droplets. Using the output from this request, you can create a table component containing id, name, region, memory, vcpus, and disk for each Droplet. Later on in this tutorial, you'll insert the data retrieved from the API into the table component.
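
      If you'd like to inspect the raw JSON before wiring it into React, you can query this endpoint directly with cURL, substituting your own Personal Access Token for YOUR_API_KEY:

      • curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/droplets"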

      To define a clear structure for your application, create a new directory called components inside the src directory where you'll store all the code you write. Create a new file called Table.js inside the src/components directory and open it with nano or a text editor of your choice:

      • mkdir src/components
      • nano src/components/Table.js

      Define the table component by adding the following code to the file:

      src/components/Table.js

      import React from 'react';
      
      const Table = () => {
        return (
          <table>
            <thead>
              <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Region</th>
                <th>Memory</th>
                <th>CPUs</th>
                <th>Disk Size</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
              </tr>
            </tbody>
          </table>
        );
      }
      
      export default Table
      

      The code block above imports the React framework and defines a new component called Table, which consists of a table with a heading and a body.

      When you have added these lines of code, save and exit the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Now that you have created the table component, it is time to include this component in your application. You'll do this by importing the component into the entry point of the application, which is in the file src/App.js. Open this file with the following command:

      • nano src/App.js

      Next, remove the boilerplate code that displays the Create React App welcome message in src/App.js, which is highlighted in the following code block.

      src/App.js

      import React, { Component } from 'react';
      import logo from './logo.svg';
      import './App.css';
      
      class App extends Component {
        render() {
          return (
            <div className="App">
              <header className="App-header">
                <img src={logo} className="App-logo" alt="logo" />
                <p>
                  Edit <code>src/App.js</code> and save to reload.
                </p>
                <a
                  className="App-link"
                  href="https://reactjs.org"
                  target="_blank"
                  rel="noopener noreferrer"
                >
                  Learn React
                </a>
              </header>
            </div>
          );
        }
      }
      
      export default App;
      

      After removing the lines that displayed the welcome message, include the table component inside this same file by adding the following highlighted lines:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      
      class App extends Component {
        render() {
          return (
            <div className="App">
              <Table />
            </div>
          );
        }
      }
      
      export default App;
      

      If you access http://localhost:3000 in your web browser again, your application will now display a basic table with table heads:

      The React application with a basic table

      In this step, you have created a table component and included this component into the entry point of your application. Next, you will set up a connection to the DigitalOcean API, which you'll use to retrieve the data that this table will display.

      Step 3 — Securing Your API Credentials

      Setting up a connection to the DigitalOcean API consists of several actions, starting with safely storing your Personal Access Token as an environment variable. This can be done by using dotenv, a package that allows you to store sensitive information in a .env file that your application can later access from the environment.
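
      Once dotenv is installed with the next command, its behavior is straightforward: calling its config function reads the .env file and copies each key/value pair onto process.env. A minimal sketch, assuming the .env file you'll create below:

      require('dotenv').config();           // loads .env into process.env
      console.log(process.env.DO_API_URL);  // https://api.digitalocean.com/v2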

      Use npm to install the dotenv package:

      • npm install dotenv

      After installing dotenv, create an environment file called .env in the root directory of your application by opening it in your editor:

      • nano .env

      Add the following into .env, which contains your Personal Access Token and the URL for the DigitalOcean API:

      .env

      DO_API_URL=https://api.digitalocean.com/v2
      DO_ACCESS_TOKEN=YOUR_API_KEY
      

      To ensure this sensitive data doesn't get committed to a repository, add it to your .gitignore file with the following command:

      • echo ".env" >> .gitignore

      You have now created a safe and simple configuration file for your environment variables, which will provide your application with the information it needs to send requests to the DigitalOcean API. To ensure your API credentials aren't visible on the client side, you will next set up a proxy server to forward requests and responses between your application server and the DigitalOcean API.

      Install the middleware http-proxy-middleware by executing the following command:

      • npm install http-proxy-middleware

      After installing this, the next step is to set up your proxy. Create the setupProxy.js file in the src directory:

      • nano src/setupProxy.js

      Inside this file, add the following code to set up the proxy server:

      src/setupProxy.js

      const proxy = require('http-proxy-middleware')
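      // Note: this default export is how http-proxy-middleware 0.x is used;
      // releases 1.0 and later export createProxyMiddleware instead.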
      
      module.exports = function(app) {
      
        require('dotenv').config()
      
        const apiUrl = process.env.DO_API_URL
        const apiToken = process.env.DO_ACCESS_TOKEN
        const headers  = {
          "Content-Type": "application/json",
          "Authorization": "Bearer " + apiToken
        }
      
        // define http-proxy-middleware
        let DOProxy = proxy({
          target: apiUrl,
          changeOrigin: true,
          pathRewrite: {
            '^/api/' : '/'
          },
          headers: headers,
        })
      
        // define the route and map the proxy
        app.use('/api', DOProxy)
      
      };
      

      In the preceding code block, const apiUrl = sets the URL for the DigitalOcean API as the proxy target, and const apiToken = loads your Personal Access Token into the proxy server. The pathRewrite option strips the /api prefix from incoming requests, and app.use('/api', DOProxy) mounts the proxy at /api rather than / so that it does not interfere with the application server but still matches the DigitalOcean API.
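
      To make the rewrite concrete, here is a rough trace of how a single request travels through this proxy, assuming the Create React App development server is running on its default port 3000:

      Browser:              GET http://localhost:3000/api/droplets
      After pathRewrite:    GET /droplets
      Forwarded to target:  GET https://api.digitalocean.com/v2/droplets
      Header added:         Authorization: Bearer <value of DO_ACCESS_TOKEN>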

      You've now successfully created a proxy server that will send all API requests made from your React application to the DigitalOcean API. This proxy server will make sure your Personal Access Token, which is safely stored as an environment variable, isn't exposed on the client side. Next, you will create the actual requests to retrieve your Droplet data for your application.

      Step 4 — Making API Calls to DigitalOcean

      Now that your display component is ready and the connection details to DigitalOcean are stored and secured through a proxy server, you can start retrieving data from the DigitalOcean API. First, add the following highlighted lines of code to src/App.js just before and after you declare the class App:

      src/App.js

      import React, { Component } from 'react';
      ...
      class App extends Component {
        constructor(props) {
          super(props);
          this.state = {
            droplets: []
          }
        }
      
          render() {
      ...
      

      These lines of code define a constructor method for your class component, which in React initializes the local state by assigning this.state an object or objects. In this case, the objects are your Droplets. From the code block above, you can see that the array containing your Droplets starts out empty, making it possible to fill it with the results from the API call.

      In order to display your current Droplets, you'll need to fetch this information from the DigitalOcean API. Using the JavaScript function Fetch, you'll send a request to the DigitalOcean API and update the state for droplets with the data you retrieve. You can do this using the componentDidMount method by adding the following lines of code after the constructor:

      src/App.js

      class App extends Component {
        constructor(props) {
          super(props);
          this.state = {
            droplets: []
          }
        }
      
        componentDidMount() {
          fetch('http://localhost:3000/api/droplets')
          .then(res => res.json())
          .then(json => json.droplets)
          .then(droplets => this.setState({ 'droplets': droplets }))
        }
      ...
      

      With your Droplet data stored into the state, it's time to retrieve it within the render function of your application and to send this data as a prop to the table component. Add the following highlighted statement to the table component in App.js:

      src/App.js

      ...
      class App extends Component {
        render() {
          return (
            <div className="App">
              <Table droplets={ this.state.droplets } />
            </div>
          );
        }
      }
      ...
      

      You have now created the functionality to retrieve data from the API, but you still need to make this data accessible via a web browser. In the next step, you will accomplish this by displaying your Droplet data in your table component.

      Step 5 — Displaying Droplet Data in Your Table Component

      Now that you have transferred the Droplet data to the table component, you can iterate over this data and render a table row for each Droplet. But since the application makes the request to the API after App.js is mounted, the property value for droplets will be empty at first. Therefore, you also need to add code to make sure droplets isn't empty before you try to display the data. To do this, add the following highlighted lines to the tbody section of Table.js:

      src/components/Table.js

      const Table = ({ droplets }) => {
        return (
          <table>
            <thead>
              <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Region</th>
                <th>Memory</th>
                <th>CPUs</th>
                <th>Disk Size</th>
              </tr>
            </thead>
            <tbody>
              { (droplets.length > 0) ? droplets.map( (droplet, index) => {
                 return (
                  <tr key={ index }>
                    <td>{ droplet.id }</td>
                    <td>{ droplet.name }</td>
                    <td>{ droplet.region.slug}</td>
                    <td>{ droplet.memory }</td>
                    <td>{ droplet.vcpus }</td>
                    <td>{ droplet.disk }</td>
                  </tr>
                )
               }) : <tr><td colSpan="6">Loading...</td></tr> }
            </tbody>
          </table>
        );
      }
      

      With the addition of the preceding code, your application will display a Loading... placeholder message when no Droplet data is present. When the DigitalOcean API does return Droplet data, your application will iterate it over table rows containing columns for each data type and will display the result to your web browser:

      The React Application with Droplet data

      Note: If your web browser displays an error at http://localhost:3000, press CTRL+C in the terminal that is running your development server to stop your application. Run the following command to restart your application:

      • npm start

      In this step, you have modified the table component of your application to display your Droplet data in a web browser and added a placeholder message for when there are no Droplets found. Next, you will use a front-end web framework to style your data to make it more visually appealing and easier to read.

      Step 6 — Styling Your Table Component Using Bootstrap

      Your table is now populated with data, but the information is not displayed in the most appealing manner. To fix this, you can style your application by adding Bootstrap to your project. Bootstrap is an open-source styling and component library that lets you add responsive styling to a project with CSS templates.

      Install Bootstrap with npm using the following command:

      • npm install bootstrap

      After Bootstrap has finished installing, import its CSS file into your project by adding the following highlighted line to src/App.js:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      import 'bootstrap/dist/css/bootstrap.min.css';
      
      class App extends Component {
      ...
      

      Now that you have imported the CSS, apply the Bootstrap styling to your table component by adding the class table to the <table> tag in src/components/Table.js.

      src/components/Table.js

      import React from 'react';
      
      const Table = ({ droplets }) => {
        return (
          <table className="table">
            <thead>
      ...
      

      Next, finish styling your application by placing a header above your table with a title and the DigitalOcean logo. Click on Download Logos in the Brand Assets section of DigitalOcean's Press page to download a set of logos, pick your favorite from the SVG directory (this tutorial uses DO_Logo_icon_blue.svg), and add it to your project by copying the logo file into a new directory called assets within the src directory of your project. After uploading the logo, import it into the header by adding the highlighted lines to src/App.js:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      import 'bootstrap/dist/css/bootstrap.min.css';
      import logo from './assets/DO_Logo_icon_blue.svg';
      
      class App extends Component {
      ...
        render() {
          return (
            <div className="App">
              <nav className="navbar navbar-light bg-light">
                <a className="navbar-brand" href="./">
                  <img src={logo} alt="logo" width="40" /> My Droplets
                </a>
              </nav>
              <Table droplets={ this.state.droplets } />
            </div>
          );
        }
      }
      
      export default App;
      

      In the preceding code block, the classes within the nav tag add a particular styling from Bootstrap to your header.

      Now that you have imported Bootstrap and applied its styling to your application, your data will show up in your web browser with an organized and legible display:

      The final version of your React Application

      Conclusion

      In this article, you've created a basic React application that fetches data from the DigitalOcean API through a secured proxy server and displays it with Bootstrap styling. Now that you are familiar with the React framework, you can apply the concepts you learned here to more complicated applications, such as the one found in How To Build a Modern Web Application to Manage Customer Information with Django and React on Ubuntu 18.04. If you want to find out what other actions are possible with the DigitalOcean API, have a look at the API documentation on DigitalOcean's website.


