      How to Set Up a Scalable Laravel 6 Application using Managed Databases and Object Storage


      Introduction

When scaling web applications horizontally, the first difficulties you’ll typically face are file storage and data persistence. This is mainly because it is hard to maintain consistency of variable data between multiple application nodes; appropriate strategies must be in place to make sure data created in one node is immediately available to the other nodes in a cluster.

A practical way of solving the consistency problem is by using managed databases and object storage systems. The former outsources data persistence to a managed database, and the latter provides a remote storage service where you can keep static files and variable content such as images uploaded by users. Each node can then connect to these services at the application level.

      The following diagram demonstrates how such a setup can be used for horizontal scalability in the context of PHP applications:

      Laravel at scale diagram

      In this guide, we will update an existing Laravel 6 application to prepare it for horizontal scalability by connecting it to a managed MySQL database and setting up an S3-compatible object store to save user-generated files. By the end, you will have a travel list application running on an Nginx + PHP-FPM web server:

      Travellist v1.0

      Note: this guide uses DigitalOcean Managed MySQL and Spaces to demonstrate a scalable application setup using managed databases and object storage. The instructions contained here should work in a similar way for other service providers.

      Prerequisites

      To begin this tutorial, you will first need the following prerequisites:

      • Access to an Ubuntu 18.04 server as a non-root user with sudo privileges, and an active firewall installed on your server. To set these up, please refer to our Initial Server Setup Guide for Ubuntu 18.04.
      • Nginx and PHP-FPM installed and configured on your server, as explained in steps 1 and 3 of How to Install LEMP on Ubuntu 18.04. You should skip the step where MySQL is installed.
      • Composer installed on your server, as explained in steps 1 and 2 of How to Install and Use Composer on Ubuntu 18.04.
      • Admin credentials to a managed MySQL 8 database. For this guide, we’ll be using a DigitalOcean Managed MySQL cluster, but the instructions here should work similarly for other managed database services.
      • A set of API keys with read and write permissions to an S3-compatible object storage service. In this guide, we’ll use DigitalOcean Spaces, but you are free to use a provider of your choice.
      • The s3cmd tool installed and configured to connect to your object storage drive. For instructions on how to set this up for DigitalOcean Spaces, please refer to our product documentation.

      Step 1 — Installing the MySQL 8 Client

      The default Ubuntu apt repositories come with the MySQL 5 client, which is not compatible with the MySQL 8 server we’ll be using in this guide. To install the compatible MySQL client, we’ll need to use the MySQL APT Repository provided by Oracle.

      Begin by navigating to the MySQL APT Repository page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Copy the link address and go back to your terminal window.

This link should point to a .deb package that will set up the MySQL APT Repository on your server. After installing it, you’ll be able to use apt to install the latest releases of MySQL. We’ll use curl to download this file into a temporary location.

Go to your server’s tmp directory:

• cd /tmp

Now download the package with curl, using the URL you copied from the MySQL APT Repository page:

      • curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb

      After the download is finished, you can use dpkg to install the package:

      • sudo dpkg -i mysql-apt-config_0.8.13-1_all.deb

      You will be presented with a screen where you can choose which MySQL version you’d like to select as default, as well as which MySQL components you’re interested in:

      MySQL APT Repository Install

      You don’t need to change anything here, because the default options will install the repositories we need. Select “Ok” and the configuration will be finished.

Next, you’ll need to update your apt cache with:

• sudo apt update

      Now we can finally install the MySQL 8 client with:

      • sudo apt install mysql-client

Once that command finishes, check the software version number to ensure that you have the latest release:

• mysql --version

      You’ll see output like this:

      Output

      mysql Ver 8.0.18 for Linux on x86_64 (MySQL Community Server - GPL)

      In the next step, we’ll use the MySQL client to connect to your managed MySQL server and prepare the database for the application.

      Step 2 — Creating a new MySQL User and Database

At the time of this writing, the native MySQL PHP library mysqlnd doesn’t support caching_sha2_password, the default authentication method for MySQL 8. We’ll need to create a new user with the mysql_native_password authentication method in order to connect our Laravel application to the MySQL 8 server. We’ll also create a dedicated database for our demo application.

To get started, log in to your managed MySQL server using an admin account. Replace the highlighted values with your own MySQL user, host, and port:

      • mysql -u MYSQL_USER -p -h MYSQL_HOST -P MYSQL_PORT

      When prompted, provide the admin user’s password. After logging in, you will have access to the MySQL 8 server command line interface.

      First, we’ll create a new database for the application. Run the following command to create a new database named travellist:

      • CREATE DATABASE travellist;

      Next, we’ll create a new user and set a password, using mysql_native_password as default authentication method for this user. You are encouraged to replace the highlighted values with values of your own, and to use a strong password:

• CREATE USER 'travellist-user'@'%' IDENTIFIED WITH mysql_native_password BY 'MYSQL_PASSWORD';

      Now we need to give this user permission over our application database:

• GRANT ALL ON travellist.* TO 'travellist-user'@'%';

You can now exit the MySQL prompt with:

• exit

      You now have a dedicated database and a compatible user to connect from your Laravel application. In the next step, we’ll get the application code and set up configuration details, so your app can connect to your managed MySQL database.

      In this guide, we’ll use Laravel Migrations and database seeds to set up our application tables. If you need to migrate an existing local database to a DigitalOcean Managed MySQL database, please refer to our documentation on How to Import MySQL Databases into DigitalOcean Managed Databases.
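
Conceptually, a migration runner applies ordered, versioned schema scripts exactly once each, recording what has already run (Laravel stores this in a migrations table). The mechanism can be sketched in a few lines — illustrated here in Python rather than PHP, purely to show the idea; artisan handles all of this for you:

```python
# Minimal sketch of how a migration runner works: each migration is applied
# once, in timestamp order, and recorded so repeated runs are no-ops.
applied = set()  # in Laravel, this bookkeeping lives in the `migrations` table

def migrate(migrations):
    """Apply pending migrations in order; return the names that ran."""
    ran = []
    for name, up in sorted(migrations.items()):
        if name not in applied:
            up()                 # in a real migration: CREATE TABLE ...
            applied.add(name)
            ran.append(name)
    return ran

schema = {}
migrations = {
    "2019_09_19_create_places_table": lambda: schema.update(places=[]),
    "2019_10_14_create_photos_table": lambda: schema.update(photos=[]),
}

print(migrate(migrations))  # both migrations run the first time
print(migrate(migrations))  # nothing left to run on a second pass
```

This idempotency is why you can safely re-run `php artisan migrate` on every deploy.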

      Step 3 — Setting Up the Demo Application

To get started, we’ll fetch the demo Laravel application from its GitHub repository. Feel free to inspect the contents of the application before running the next commands.

      The demo application is a travel bucket list app that was initially developed in our guide on How to Install and Configure Laravel with LEMP on Ubuntu 18.04. The updated app now contains visual improvements including travel photos that can be uploaded by a visitor, and a world map. It also introduces a database migration script and database seeds to create the application tables and populate them with sample data, using artisan commands.

To obtain the application code that is compatible with this tutorial, we’ll download the 1.1 release from the project’s repository on GitHub. We’ll save the downloaded zip file as travellist.zip inside our home directory:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/1.1.zip -o travellist.zip

      Now, unzip the contents of the application and rename its directory with:

      • unzip travellist.zip
      • mv travellist-laravel-demo-1.1 travellist

Navigate to the travellist directory:

• cd travellist

Before going ahead, we’ll need to install a few PHP modules that are required by the Laravel framework, namely php-xml, php-mbstring, and php-bcmath. To install these packages, run:

• sudo apt install unzip php-xml php-mbstring php-bcmath

To install the application dependencies, run:

• composer install

      You will see output similar to this:

      Output

Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 80 installs, 0 updates, 0 removals
  - Installing doctrine/inflector (v1.3.0): Downloading (100%)
  - Installing doctrine/lexer (1.1.0): Downloading (100%)
  - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%)
  - Installing erusev/parsedown (1.7.3): Downloading (100%)
...
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
Discovered Package: beyondcode/laravel-dump-server
Discovered Package: fideloper/proxy
Discovered Package: laravel/tinker
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Package manifest generated successfully.

      The application dependencies are now installed. Next, we’ll configure the application to connect to the managed MySQL Database.

      Creating the .env configuration file and setting the App Key

We’ll now create a .env file containing the variables that will be used to configure the Laravel application on a per-environment basis. The application includes an example file that we can copy and then modify to reflect our environment settings.

Copy the .env.example file to a new file named .env:

• cp .env.example .env

Now we need to set the application key. This key is used to encrypt session data, and should be set to a unique, 32-character random string. We can generate this key automatically with the artisan tool:

• php artisan key:generate
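
In recent Laravel versions, the generated APP_KEY takes the form of the literal prefix base64: followed by 32 random bytes encoded in base64. The format can be sketched as follows — shown in Python only to illustrate the shape of the value; always use artisan to generate the real key:

```python
import base64
import secrets

# Laravel's APP_KEY is a random 32-byte value, base64-encoded and prefixed
# with "base64:" so the framework knows how to decode it before use.
app_key = "base64:" + base64.b64encode(secrets.token_bytes(32)).decode()
print(app_key)
```

Because the key encrypts session data, it must be kept secret and must stay stable across the nodes of a horizontally scaled deployment.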

Let’s edit the environment configuration file to set up the database details. Open the .env file using your command line editor of choice. Here, we will be using nano:

• nano .env

      Look for the database credentials section. The following variables need your attention:

      DB_HOST: your managed MySQL server host.
      DB_PORT: your managed MySQL server port.
      DB_DATABASE: the name of the application database we created in Step 2.
      DB_USERNAME: the database user we created in Step 2.
      DB_PASSWORD: the password for the database user we defined in Step 2.

      Update the highlighted values with your own managed MySQL info and credentials:

      ...
      DB_CONNECTION=mysql
      DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      ...
      

      Save and close the file by typing CTRL+X then Y and ENTER when you’re done editing.

      Now that the application is configured to connect to the MySQL database, we can use Laravel’s command line tool artisan to create the database tables and populate them with sample data.

      Migrating and populating the database

      We’ll now use Laravel Migrations and database seeds to set up the application tables. This will help us determine if our database configuration works as expected.

To execute the migration script that will create the tables used by the application, run:

• php artisan migrate

      You will see output similar to this:

      Output

Migration table created successfully.
Migrating: 2019_09_19_123737_create_places_table
Migrated:  2019_09_19_123737_create_places_table (0.26 seconds)
Migrating: 2019_10_14_124700_create_photos_table
Migrated:  2019_10_14_124700_create_photos_table (0.42 seconds)

To populate the database with sample data, run:

• php artisan db:seed

      You will see output like this:

      Output

Seeding: PlacesTableSeeder
Seeded:  PlacesTableSeeder (0.86 seconds)
Database seeding completed successfully.

      The application tables are now created and populated with sample data.

To finish the application setup, we also need to create a symbolic link to the public storage folder that will host the travel photos we’re using in the application. You can do that using the artisan tool:

• php artisan storage:link

      Output

      The [public/storage] directory has been linked.

This will create a symbolic link inside the public directory pointing to storage/app/public, where we’ll save the travel photos. To check that the link was created and where it points to, you can run:

• ls -la public

      You’ll see output like this:

      Output

total 36
drwxrwxr-x  5 sammy sammy 4096 Oct 25 14:59 .
drwxrwxr-x 12 sammy sammy 4096 Oct 25 14:58 ..
-rw-rw-r--  1 sammy sammy  593 Oct 25 06:29 .htaccess
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 css
-rw-rw-r--  1 sammy sammy    0 Oct 25 06:29 favicon.ico
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 img
-rw-rw-r--  1 sammy sammy 1823 Oct 25 06:29 index.php
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 js
-rw-rw-r--  1 sammy sammy   24 Oct 25 06:29 robots.txt
lrwxrwxrwx  1 sammy sammy   41 Oct 25 14:59 storage -> /home/sammy/travellist/storage/app/public
-rw-rw-r--  1 sammy sammy 1194 Oct 25 06:29 web.config

      Running the test server (optional)

      You can use the artisan serve command to quickly verify that everything is set up correctly within the application, before having to configure a full-featured web server like Nginx to serve the application for the long term.

We’ll use port 8000 to temporarily serve the application for testing. If you have the UFW firewall enabled on your server, you should first allow access to this port with:

• sudo ufw allow 8000

Now, to run the built-in PHP server that Laravel exposes through the artisan tool, run:

      • php artisan serve --host=0.0.0.0 --port=8000

This command will block your terminal until interrupted with CTRL+C. It will use the built-in PHP web server to serve the application for test purposes on all network interfaces, using port 8000.

      Now go to your browser and access the application using the server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You will see the following page:

      Travellist v1.0

If you see this page, it means the application is successfully pulling data about locations and photos from the configured managed database. The image files are still stored on the local disk, but we’ll change this in a later step of this guide.

      When you are finished testing the application, you can stop the serve command by hitting CTRL+C.

Don’t forget to close port 8000 again if you are running UFW on your server:

• sudo ufw deny 8000

      Step 4 — Configuring Nginx to Serve the Application

Although the built-in PHP web server is very useful for development and testing purposes, it is not intended to be used as a long-term solution to serve PHP applications. Using a full-featured web server like Nginx is the recommended way of doing that.

      To get started, we’ll move the application folder to /var/www, which is the usual location for web applications running on Nginx. First, use the mv command to move the application folder with all its contents to /var/www/travellist:

      • sudo mv ~/travellist /var/www/travellist

      Now we need to give the web server user write access to the storage and bootstrap/cache folders, where Laravel stores application-generated files. We’ll set these permissions using setfacl, a command line utility that allows for more robust and fine-grained permission settings in files and folders.

      To include read, write and execution (rwx) permissions to the web server user over the required directories, run:

      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/storage
      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/bootstrap/cache

      The application files are now in order, but we still need to configure Nginx to serve the content. To do this, we’ll create a new virtual host configuration file at /etc/nginx/sites-available:

      • sudo nano /etc/nginx/sites-available/travellist

      The following configuration file contains the recommended settings for Laravel applications on Nginx:

      /etc/nginx/sites-available/travellist

      server {
          listen 80;
          server_name server_domain_or_IP;
          root /var/www/travellist/public;
      
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Content-Type-Options "nosniff";
      
          index index.html index.htm index.php;
      
          charset utf-8;
      
          location / {
              try_files $uri $uri/ /index.php?$query_string;
          }
      
          location = /favicon.ico { access_log off; log_not_found off; }
          location = /robots.txt  { access_log off; log_not_found off; }
      
          error_page 404 /index.php;
      
    location ~ \.php$ {
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
              include fastcgi_params;
          }
      
    location ~ /\.(?!well-known).* {
              deny all;
          }
      }
      

      Copy this content to your /etc/nginx/sites-available/travellist file and adjust the highlighted values to align with your own configuration. Save and close the file when you’re done editing.

      To activate the new virtual host configuration file, create a symbolic link to travellist in sites-enabled:

      • sudo ln -s /etc/nginx/sites-available/travellist /etc/nginx/sites-enabled/

      Note: If you have another virtual host file that was previously configured for the same server_name used in the travellist virtual host, you might need to deactivate the old configuration by removing the corresponding symbolic link inside /etc/nginx/sites-enabled/.

To confirm that the configuration doesn’t contain any syntax errors, you can use:

• sudo nginx -t

      You should see output like this:

      Output

      • nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      • nginx: configuration file /etc/nginx/nginx.conf test is successful

      To apply the changes, reload Nginx with:

      • sudo systemctl reload nginx

      If you reload your browser now, the application images will be broken. That happens because we moved the application directory to a new location inside the server, and for that reason we need to re-create the symbolic link to the application storage folder.

      Remove the old link with:

      • cd /var/www/travellist
      • rm -f public/storage

Now run the artisan command once again to generate the storage link:

• php artisan storage:link

      Now go to your browser and access the application using the server’s domain name or IP address, as defined by the server_name directive in your configuration file:

      http://server_domain_or_IP
      

      Travellist v1.0

      In the next step, we’ll integrate an object storage service into the application. This will replace the current local disk storage used for the travel photos.

      Step 5 — Integrating an S3-Compatible Object Storage into the Application

      We’ll now set up the application to use an S3-compatible object storage service for storing the travel photos exhibited on the index page. Because the application already has a few sample photos stored in the local disk, we’ll also use the s3cmd tool to upload the existing local image files to the remote object storage.

      Setting Up the S3 Driver for Laravel

      Laravel uses league/flysystem, a filesystem abstraction library that enables a Laravel application to use and combine multiple storage solutions, including local disk and cloud services. An additional package is required to use the s3 driver. We can install this package using the composer require command.
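
The value of this abstraction is that every "disk" exposes the same interface, so application code doesn't care where the bytes actually land. A toy sketch of the idea — in Python for illustration only, with hypothetical class names; flysystem's real drivers speak to the local filesystem or the S3 API:

```python
# Toy sketch of the "disk" abstraction a filesystem library provides: every
# backend implements the same interface, so calling code can switch between
# them without changing.
class LocalDisk:
    def __init__(self):
        self.files = {}              # stands in for the local filesystem

    def put(self, path, contents):
        self.files[path] = contents

    def get(self, path):
        return self.files[path]

class ObjectStoreDisk(LocalDisk):
    """Same interface; a real driver would issue S3 API calls instead."""
    pass

disks = {"public": LocalDisk(), "spaces": ObjectStoreDisk()}

def store(disk_name, path, contents):
    disks[disk_name].put(path, contents)

# Switching storage backends is a one-word change at the call site:
store("spaces", "photos/japan.jpg", b"...image bytes...")
print(disks["spaces"].get("photos/japan.jpg"))
```

This mirrors what we'll do later in PhotoController: the upload code stays the same, and only the disk name passed to the store method changes from public to spaces.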

Access the application directory:

• cd /var/www/travellist

Then run:

      • composer require league/flysystem-aws-s3-v3

      You will see output similar to this:

      Output

Using version ^1.0 for league/flysystem-aws-s3-v3
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 8 installs, 0 updates, 0 removals
  - Installing mtdowling/jmespath.php (2.4.0): Loading from cache
  - Installing ralouphie/getallheaders (3.0.3): Loading from cache
  - Installing psr/http-message (1.0.1): Loading from cache
  - Installing guzzlehttp/psr7 (1.6.1): Loading from cache
  - Installing guzzlehttp/promises (v1.3.1): Loading from cache
  - Installing guzzlehttp/guzzle (6.4.1): Downloading (100%)
  - Installing aws/aws-sdk-php (3.112.28): Downloading (100%)
  - Installing league/flysystem-aws-s3-v3 (1.0.23): Loading from cache
...

      Now that the required packages are installed, we can update the application to connect to the object storage. First, we’ll open the .env file again to set up configuration details such as keys, bucket name, and region for your object storage service.

Open the .env file:

• nano .env

      Include the following environment variables, replacing the highlighted values with your object store configuration details:

      /var/www/travellist/.env

      DO_SPACES_KEY=EXAMPLE7UQOTHDTF3GK4
      DO_SPACES_SECRET=exampleb8e1ec97b97bff326955375c5
      DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com
      DO_SPACES_REGION=ams3
      DO_SPACES_BUCKET=sammy-travellist
      

      Save and close the file when you’re done. Now open the config/filesystems.php file:

      • nano config/filesystems.php

      Within this file, we’ll create a new disk entry in the disks array. We’ll name this disk spaces, and we’ll use the environment variables we’ve set in the .env file to configure the new disk. Include the following entry in the disks array:

      config/filesystems.php

      
      'spaces' => [
         'driver' => 's3',
         'key' => env('DO_SPACES_KEY'),
         'secret' => env('DO_SPACES_SECRET'),
         'endpoint' => env('DO_SPACES_ENDPOINT'),
         'region' => env('DO_SPACES_REGION'),
         'bucket' => env('DO_SPACES_BUCKET'),
      ],
      
      

Still in the same file, locate the cloud entry and change it to set the new spaces disk as the default cloud filesystem disk:

      config/filesystems.php

'cloud' => env('FILESYSTEM_CLOUD', 'spaces'),
      

      Save and close the file when you’re done editing. From your controllers, you can now use the Storage::cloud() method as a shortcut to access the default cloud disk. This way, the application stays flexible to use multiple storage solutions, and you can switch between providers on a per-environment basis.

      The application is now configured to use object storage, but we still need to update the code that uploads new photos to the application.

      Let’s first examine the current uploadPhoto route, located in the PhotoController class. Open the file using your text editor:

      • nano app/Http/Controllers/PhotoController.php

      app/Http/Controllers/PhotoController.php

      …
      
      public function uploadPhoto(Request $request)
      {
         $photo = new Photo();
         $place = Place::find($request->input('place'));
      
         if (!$place) {
             //add new place
             $place = new Place();
             $place->name = $request->input('place_name');
             $place->lat = $request->input('place_lat');
             $place->lng = $request->input('place_lng');
         }
      
         $place->visited = 1;
         $place->save();
      
         $photo->place()->associate($place);
         $photo->image = $request->image->store('/', 'public');
         $photo->save();
      
         return redirect()->route('Main');
      }
      
      

      This method accepts a POST request and creates a new photo entry in the photos table. It begins by checking if an existing place was selected in the photo upload form, and if that’s not the case, it will create a new place using the provided information. The place is then set to visited and saved to the database. Following that, an association is created so that the new photo is linked to the designated place. The image file is then stored in the root folder of the public disk. Finally, the photo is saved to the database. The user is then redirected to the main route, which is the index page of the application.

      The highlighted line in this code is what we’re interested in. In that line, the image file is saved to the disk using the store method. The store method is used to save files to any of the disks defined in the filesystem.php configuration file. In this case, it is using the default disk to store uploaded images.

We will change this behavior so that the image is saved to the object store instead of the local disk. In order to do that, we need to replace the public disk with the spaces disk in the store method call. We also need to make sure the uploaded file’s visibility is set to public instead of private.

      The following code contains the full PhotoController class, including the updated uploadPhoto method:

      app/Http/Controllers/PhotoController.php

      <?php
      
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Photo;
use App\Place;
use Illuminate\Support\Facades\Storage;
      
      class PhotoController extends Controller
      {
         public function uploadForm()
         {
             $places = Place::all();
      
             return view('upload_photo', [
                 'places' => $places
             ]);
         }
      
         public function uploadPhoto(Request $request)
         {
             $photo = new Photo();
             $place = Place::find($request->input('place'));
      
             if (!$place) {
                 //add new place
                 $place = new Place();
                 $place->name = $request->input('place_name');
                 $place->lat = $request->input('place_lat');
                 $place->lng = $request->input('place_lng');
             }
      
             $place->visited = 1;
             $place->save();
      
             $photo->place()->associate($place);
             $photo->image = $request->image->store('/', 'spaces');
       Storage::disk('spaces')->setVisibility($photo->image, 'public');
             $photo->save();
      
             return redirect()->route('Main');
         }
      }
      
      
      

      Copy the updated code to your own PhotoController so that it reflects the highlighted changes. Save and close the file when you’re done editing.

      We still need to modify the application’s main view so that it uses the object storage file URL to render the image. Open the travel_list.blade.php template:

      • nano resources/views/travel_list.blade.php

      Now locate the footer section of the page, which currently looks like this:

      resources/views/travel_list.blade.php

      @section('footer')
         <h2>Travel Photos <small>[ <a href="{{ route('Upload.form') }}">Upload Photo</a> ]</small></h2>
         @foreach ($photos as $photo)
             <div class="photo">
          <img src="{{ asset('storage') . '/' . $photo->image }}" />
                 <p>{{ $photo->place->name }}</p>
             </div>
         @endforeach
      
      @endsection
      

      Replace the current image src attribute to use the file URL from the spaces storage disk:

<img src="{{ Storage::disk('spaces')->url($photo->image) }}" />
      

      If you go to your browser now and reload the application page, it will show only broken images. That happens because the image files for those travel photos are still only in the local disk. We need to upload the existing image files to the object storage, so that the photos already stored in the database can be successfully exhibited in the application page.

      Syncing local images with s3cmd

      The s3cmd tool can be used to sync local files with an S3-compatible object storage service. We’ll run a sync command to upload all files inside storage/app/public/photos to the object storage service.
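
At its core, a sync uploads only the local files that are missing from (or differ in) the remote bucket, skipping excluded names. The logic can be sketched as follows — in Python with in-memory dicts standing in for the local folder and the bucket, for illustration only:

```python
# Sketch of what a sync operation does: upload only local files that are
# missing from (or differ in) the remote store, skipping excluded names.
def sync(local, remote, exclude=()):
    uploaded = []
    for name, data in local.items():
        if name in exclude:
            continue
        if remote.get(name) != data:
            remote[name] = data          # a real tool would PUT to S3 here
            uploaded.append(name)
    return uploaded

local = {"bermudas.jpg": b"a", "grindavik.jpg": b"b", ".gitignore": b"*"}
bucket = {"bermudas.jpg": b"a"}          # already present and identical

print(sync(local, bucket, exclude=(".gitignore",)))  # only grindavik.jpg uploads
```

Because only differences are transferred, re-running the sync is cheap, which is why it's safe to use as a repeatable step in a deployment.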

      Access the public app storage directory:

      • cd /var/www/travellist/storage/app/public

      To have a look at the files already stored in your remote disk, you can use the s3cmd ls command:

      • s3cmd ls s3://your_bucket_name

      Now run the sync command to upload existing files in the public storage folder to the object storage:

      • s3cmd sync ./ s3://your_bucket_name --acl-public --exclude=.gitignore

      This will synchronize the current folder (storage/app/public) with the remote object storage’s root dir. You will get output similar to this:

      Output

upload: './bermudas.jpg' -> 's3://sammy-travellist/bermudas.jpg'  [1 of 3]
 2538230 of 2538230   100% in    7s   329.12 kB/s  done
upload: './grindavik.jpg' -> 's3://sammy-travellist/grindavik.jpg'  [2 of 3]
 1295260 of 1295260   100% in    5s   230.45 kB/s  done
upload: './japan.jpg' -> 's3://sammy-travellist/japan.jpg'  [3 of 3]
 8940470 of 8940470   100% in   24s   363.61 kB/s  done
Done. Uploaded 12773960 bytes in 37.1 seconds, 336.68 kB/s.

      Now, if you run s3cmd ls again, you will see that three new files were added to the root folder of your object storage bucket:

      • s3cmd ls s3://your_bucket_name

      Output

2019-10-25 11:49   2538230   s3://sammy-travellist/bermudas.jpg
2019-10-25 11:49   1295260   s3://sammy-travellist/grindavik.jpg
2019-10-25 11:49   8940470   s3://sammy-travellist/japan.jpg

      Go to your browser and reload the application page. All images should be visible now, and if you inspect them using your browser debug tools, you’ll notice that they’re all using URLs from your object storage.

      Testing the Integration

      The demo application is now fully functional, storing files in a remote object storage service, and saving data to a managed MySQL database. We can now upload a few photos to test our setup.

      Access the /upload application route from your browser:

      http://server_domain_or_IP/upload
      

      You will see the following form:

      Travellist  Photo Upload Form

      You can now upload a few photos to test the object storage integration. After choosing an image from your computer, you can select an existing place from the dropdown menu, or you can add a new place by providing its name and geographic coordinates so it can be loaded in the application map.

      Step 6 — Scaling Up a DigitalOcean Managed MySQL Database with Read-Only Nodes (Optional)

Because read-only operations are typically more frequent than write operations on database servers, it is a common practice to scale up a database cluster by setting up multiple read-only nodes. This will distribute the load generated by SELECT operations.

      To demonstrate this setup, we’ll first add 2 read-only nodes to our DigitalOcean Managed MySQL cluster. Then, we’ll configure the Laravel application to use these nodes.

      Access the DigitalOcean Cloud Panel and follow these instructions:

      1. Go to Databases and select your MySQL cluster.
      2. Click Actions and choose Add a read-only node from the drop-down menu.
      3. Configure the node options and hit the Create button. Notice that it might take several minutes for the new node to be ready.
      4. Repeat steps 2 and 3 one more time so that you have 2 read-only nodes.
      5. Note down the hosts of both nodes, as we will need them for our Laravel configuration.

      Once you have your read-only nodes ready, head back to your terminal.

      We’ll now configure our Laravel application to work with multiple database nodes. When we’re finished, queries such as INSERT and UPDATE will be forwarded to your primary cluster node, while all SELECT queries will be redirected to your read-only nodes.

      First, go to the application’s directory on the server and open your .env file using your text editor of choice:

      • cd /var/www/travellist
      • nano .env

      Locate the MySQL database configuration and comment out the DB_HOST line:

      /var/www/travellist/.env

      DB_CONNECTION=mysql
      #DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      

      Save and close the file when you’re done. Now open config/database.php in your text editor:

      • nano config/database.php

      Look for the mysql entry inside the connections array. You should include three new items in this configuration array: read, write, and sticky. The read and write entries will set up the cluster nodes, and the sticky option set to true will reuse write connections so that data written to the database is immediately available in the same request cycle. You can set it to false if you don’t want this behavior.

      /var/www/travellist/config/database.php

      ...
            'mysql' => [
               'read' => [
                 'host' => [
                    'READONLY_NODE1_HOST',
                    'READONLY_NODE2_HOST',
                 ],
               ],
               'write' => [
                 'host' => [
                   'MANAGED_MYSQL_HOST',
                 ],
               ],
               'sticky' => true,
      ...
      
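      Conceptually, the connection routing this configuration enables can be sketched in a few lines. The following is an illustrative Python sketch of the idea only, not Laravel's actual implementation; the host names are the same placeholders used above:

```python
import random

# Placeholder hosts mirroring the read/write configuration above.
READ_HOSTS = ['READONLY_NODE1_HOST', 'READONLY_NODE2_HOST']
WRITE_HOST = 'MANAGED_MYSQL_HOST'

def pick_host(query, wrote_this_request=False):
    """Route SELECT queries to a random read-only node and everything
    else to the primary. With sticky behavior enabled, reads that follow
    a write in the same request cycle also go to the primary."""
    is_select = query.lstrip().upper().startswith('SELECT')
    if is_select and not wrote_this_request:
        return random.choice(READ_HOSTS)
    return WRITE_HOST
```

      This is why setting sticky to true avoids stale reads: data just written on the primary may not have replicated to the read-only nodes yet, so subsequent reads in the same request are kept on the primary.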

      Save and close the file when you are done editing. To test that everything works as expected, we can create a temporary route inside routes/web.php to pull some data from the database and show details about the connection being used. This way we will be able to see how the requests are being load balanced between the read-only nodes.

      Open the routes/web.php file:

      • nano routes/web.php

      Include the following route:

      /var/www/travellist/routes/web.php

      ...
      
      Route::get('/mysql-test', function () {
        $places = App\Place::all();
        $results = DB::select( DB::raw("SHOW VARIABLES LIKE 'server_id'") );
      
        return "Server ID: " . $results[0]->Value;
      });
      

      Now go to your browser and access the /mysql-test application route:

      http://server_domain_or_IP/mysql-test
      

      You’ll see a page like this:

      mysql node test page

      Reload the page a few times and you will notice that the Server ID value changes, indicating that the requests are being randomly distributed between the two read-only nodes.

      Conclusion

      In this guide, we’ve prepared a Laravel 6 application for a highly available and scalable environment. We’ve outsourced the database system to an external managed MySQL service, and we’ve integrated an S3-compatible object storage service into the application to store files uploaded by users. Finally, we’ve seen how to scale up the application’s database by including additional read-only cluster nodes in the app’s configuration file.

      The updated demo application code containing all modifications made in this guide can be found within the 2.1 tag in the application’s repository on GitHub.

      From here, you can set up a Load Balancer to distribute load and scale your application among multiple nodes. You can also leverage this setup to create a containerized environment to run your application on Docker.




      How to Set Up a Scalable Django App with DigitalOcean Managed Databases and Spaces


      Introduction

      Django is a powerful web framework that can help you get your Python application or website off the ground quickly. It includes several convenient features like an object-relational mapper, a Python API, and a customizable administrative interface for your application. It also includes a caching framework and encourages clean app design through its URL Dispatcher and Template system.

      Out of the box, Django includes a minimal web server for testing and local development, but it should be paired with a more robust serving infrastructure for production use cases. Django is often rolled out with an Nginx web server to handle static file requests and HTTPS redirection, and a Gunicorn WSGI server to serve the app.

      In this guide, we will augment this setup by offloading static files like JavaScript and CSS stylesheets to DigitalOcean Spaces, and optionally delivering them using a Content Delivery Network, or CDN, which stores these files closer to end users to reduce transfer times. We’ll also use a DigitalOcean Managed PostgreSQL database as our data store to simplify the data layer and avoid having to manually configure a scalable PostgreSQL database.

      Prerequisites

      Before you begin with this guide, you should have the following available to you:

      Step 1 — Installing Packages from the Ubuntu Repositories

      To begin, we’ll download and install all of the items we need from the Ubuntu repositories. We’ll use the Python package manager pip to install additional components a bit later.

      We need to first update the local apt package index and then download and install the packages.

      In this guide, we’ll use Django with Python 3. To install the necessary libraries, log in to your server and type:

      • sudo apt update
      • sudo apt install python3-pip python3-dev libpq-dev curl postgresql-client

      This will install pip, the Python development files needed to build Gunicorn, the libpq header files needed to build the Psycopg PostgreSQL Python adapter, and the PostgreSQL command-line client.

      Hit Y and then ENTER when prompted to begin downloading and installing the packages.

      Next, we’ll configure the database to work with our Django app.

      Step 2 — Creating the PostgreSQL Database and User

      We’ll now create a database and database user for our Django application.

      To begin, grab the Connection Parameters for your cluster by navigating to Databases from the Cloud Control Panel, and clicking into your database. You should see a Connection Details box containing some parameters for your cluster. Note these down.

      Back on the command line, log in to your cluster using these credentials and the psql PostgreSQL client we just installed:

      • psql -U doadmin -h host -p port -d database

      When prompted, enter the password displayed alongside the doadmin Postgres username, and hit ENTER.

      You will be given a PostgreSQL prompt from which you can manage the database.

      First, create a database for your project called polls:

      • CREATE DATABASE polls;

      Note: Every Postgres statement must end with a semicolon, so make sure that your command ends with one if you are experiencing issues.

      We can now switch to the polls database:

      • \c polls

      Next, create a database user for the project. Make sure to select a secure password:

      • CREATE USER myprojectuser WITH PASSWORD 'password';

      We'll now modify a few of the connection parameters for the user we just created. This will speed up database operations so that the correct values do not have to be queried and set each time a connection is established.

      We are setting the default encoding to UTF-8, which Django expects. We are also setting the default transaction isolation scheme to "read committed", which blocks reads from uncommitted transactions. Lastly, we are setting the timezone. By default, our Django projects will be set to use UTC. These are all recommendations from the Django project itself.

      Enter the following commands at the PostgreSQL prompt:

      • ALTER ROLE myprojectuser SET client_encoding TO 'utf8';
      • ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed';
      • ALTER ROLE myprojectuser SET timezone TO 'UTC';
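      The three statements above all follow a single pattern; a small illustrative helper (not a Django or Postgres API, and the role name is just the example from this step) makes that explicit:

```python
def alter_role_statements(role, settings):
    """Render ALTER ROLE ... SET ... TO '...' statements for a dict of
    per-role connection defaults (illustrative helper only)."""
    return [
        "ALTER ROLE {} SET {} TO '{}';".format(role, key, value)
        for key, value in settings.items()
    ]

statements = alter_role_statements('myprojectuser', {
    'client_encoding': 'utf8',
    'default_transaction_isolation': 'read committed',
    'timezone': 'UTC',
})
```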

      Now we can give our new user access to administer our new database:

      • GRANT ALL PRIVILEGES ON DATABASE polls TO myprojectuser;

      When you are finished, exit out of the PostgreSQL prompt by typing:

      • \q

      Your Django app is now ready to connect to and manage this database.

      In the next step, we'll install virtualenv and create a Python virtual environment for our Django project.

      Step 3 — Creating a Python Virtual Environment for your Project

      Now that we've set up our database to work with our application, we'll create a Python virtual environment that will isolate this project's dependencies from the system's global Python installation.

      To do this, we first need access to the virtualenv command. We can install this with pip.

      Upgrade pip and install the package by typing:

      • sudo -H pip3 install --upgrade pip
      • sudo -H pip3 install virtualenv

      With virtualenv installed, we can create a directory to store our Python virtual environments and make one to use with the Django polls app.

      Create a directory called envs and navigate into it:

      • mkdir ~/envs
      • cd ~/envs

      Within this directory, create a Python virtual environment called polls by typing:

      • virtualenv polls

      This will create a directory called polls within the envs directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for our project.

      Before we install our project's Python requirements, we need to activate the virtual environment. You can do that by typing:

      • source polls/bin/activate

      Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (polls)user@host:~/envs$.

      With your virtual environment active, install Django, Gunicorn, and the psycopg2 PostgreSQL adaptor with the local instance of pip:

      Note: When the virtual environment is activated (when your prompt has (polls) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment's copy of the tool is always named pip, regardless of the Python version.

      • pip install django gunicorn psycopg2-binary

      You should now have all of the software you need to run the Django polls app. In the next step, we'll create a Django project and install this app.

      Step 4 — Creating the Polls Django Application

      We can now set up our sample application. In this tutorial, we'll use the Polls demo application from the Django documentation. It consists of a public site that allows users to view polls and vote in them, and an administrative control panel that allows the admin to modify, create, and delete polls.

      In this guide, we'll skip through the tutorial steps, and simply clone the final application from the DigitalOcean Community django-polls repo.

      If you'd like to complete the steps manually, create a directory called django-polls in your home directory and navigate into it:

      • cd
      • mkdir django-polls
      • cd django-polls

      From there, you can follow the Writing your first Django app tutorial from the official Django documentation. When you're done, skip to Step 5.

      If you just want to clone the finished app, navigate to your home directory and use git to clone the django-polls repo:

      • cd
      • git clone https://github.com/do-community/django-polls.git

      cd into it, and list the directory contents:

      • cd django-polls
      • ls

      You should see the following objects:

      Output

      LICENSE README.md manage.py mysite polls templates

      manage.py is the main command-line utility used to manipulate the app. polls contains the polls app code, and mysite contains project-scope code and settings. templates contains custom template files for the administrative interface. To learn more about the project structure and files, consult Creating a Project from the official Django documentation.

      Before running the app, we need to adjust its default settings and connect it to our database.

      Step 5 — Adjusting the App Settings

      In this step, we'll modify the Django project's default configuration to increase security, connect Django to our database, and collect static files into a local directory.

      Begin by opening the settings file in your text editor:

      • nano ~/django-polls/mysite/settings.py

      Start by locating the ALLOWED_HOSTS directive. This defines a list of the addresses or domain names that you want to use to connect to the Django instance. An incoming request with a Host header not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.

      In the square brackets, list the IP addresses or domain names associated with your Django server. Each item should be listed in quotations with entries separated by a comma. Your list will also include localhost, since you will be proxying connections through a local Nginx instance. If you wish to include requests for an entire domain and any subdomains, prepend a period to the beginning of the entry.

      In the snippet below, there are a few commented out examples that demonstrate what these entries should look like:

      ~/django-polls/mysite/settings.py

      . . .
      
      # The simplest case: just add the domain name(s) and IP addresses of your Django server
      # ALLOWED_HOSTS = [ 'example.com', '203.0.113.5']
      # To respond to 'example.com' and any subdomains, start the domain with a dot
      # ALLOWED_HOSTS = ['.example.com', '203.0.113.5']
      ALLOWED_HOSTS = ['your_server_domain_or_IP', 'second_domain_or_IP', . . ., 'localhost']
      
      . . . 
      
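      To make the matching rules concrete, here is a simplified sketch of how an incoming Host header is validated against ALLOWED_HOSTS, modeled on Django's documented behavior (a leading dot matches the bare domain and all of its subdomains); this is not Django's actual code:

```python
def host_matches(host, allowed_hosts):
    """Simplified host validation: plain entries must match exactly,
    while entries beginning with '.' match the bare domain and any
    subdomain of it."""
    for pattern in allowed_hosts:
        if pattern.startswith('.'):
            # '.example.com' matches 'example.com' and 'www.example.com'
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False
```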

      Next, find the section of the file that configures database access. It will start with DATABASES. The configuration in the file is for a SQLite database. We already created a PostgreSQL database for our project, so we need to adjust these settings.

      We will tell Django to use the psycopg2 database adaptor we installed with pip, instead of the default SQLite engine. We’ll also reuse the Connection Parameters referenced in Step 2. You can always find this information from the Managed Databases section of the DigitalOcean Cloud Control Panel.

      Update the file with your database settings: the database name (polls), the database username, the database user's password, and the database host and port. Be sure to replace the database-specific values with your own information:

      ~/django-polls/mysite/settings.py

      . . .
      
      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.postgresql_psycopg2',
              'NAME': 'polls',
              'USER': 'myprojectuser',
              'PASSWORD': 'password',
              'HOST': 'managed_db_host',
              'PORT': 'managed_db_port',
          }
      }
      
      . . .
      

      Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. This is necessary so that Nginx can handle requests for these items. The following line tells Django to place them in a directory called static in the base project directory:

      ~/django-polls/mysite/settings.py

      . . .
      
      STATIC_URL = '/static/'
      STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
      

      Save and close the file when you are finished.

      At this point, you've configured the Django project's database, security, and static files settings. If you followed the polls tutorial from the start and did not clone the GitHub repo, you can move on to Step 6. If you cloned the GitHub repo, there remains one additional step.

      The Django settings file contains a SECRET_KEY variable that is used to create hashes for various Django objects. It's important that it is set to a unique, unpredictable value. The SECRET_KEY variable has been scrubbed from the GitHub repository, so we'll create a new one using a function built in to the django Python package called get_random_secret_key(). From the command line, open up a Python interpreter:

      • python3

      You should see the following output and prompt:

      Output

      Python 3.6.7 (default, Oct 22 2018, 11:32:17)
      [GCC 8.2.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>>

      Import the get_random_secret_key function from the Django package, then call the function:

      • from django.core.management.utils import get_random_secret_key
      • get_random_secret_key()

      Copy the resulting key to your clipboard.

      Exit the Python interpreter by pressing CTRL+D.
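      If you'd rather script key generation without importing Django, the following standard-library sketch produces a key of the same shape. The character set here mirrors the letters, digits, and punctuation Django's helper draws from, but treat this as an illustrative stand-in rather than Django's own implementation:

```python
import secrets

# 50 characters drawn from lowercase letters, digits, and punctuation,
# matching the shape of keys from get_random_secret_key() (illustrative).
ALLOWED_CHARS = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'

def random_secret_key(length=50):
    """Return a cryptographically random key string."""
    return ''.join(secrets.choice(ALLOWED_CHARS) for _ in range(length))
```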

      Next, open up the settings file in your text editor once again:

      • nano ~/django-polls/mysite/settings.py
      

      Locate the SECRET_KEY variable and paste in the key you just generated:

      ~/django-polls/mysite/settings.py

      . . .
      
      # SECURITY WARNING: keep the secret key used in production secret!
      SECRET_KEY = 'your_secret_key_here'
      
      . . .
      

      Save and close the file.

      We'll now test the app locally using the Django development server to ensure that everything's been correctly configured.

      Step 6 — Testing the App

      Before we run the Django development server, we need to use the manage.py utility to create the database schema and collect static files into the STATIC_ROOT directory.

      Navigate into the project's base directory, and create the initial database schema in our PostgreSQL database using the makemigrations and migrate commands:

      • cd django-polls
      • ./manage.py makemigrations
      • ./manage.py migrate

      makemigrations will create the migrations, or database schema changes, based on the changes made to Django models. migrate will apply these migrations to the database schema. To learn more about migrations in Django, consult Migrations from the official Django documentation.

      Create an administrative user for the project by typing:

      • ./manage.py createsuperuser

      You will have to select a username, provide an email address, and choose and confirm a password.

      We can collect all of the static content into the directory location we configured by typing:

      • ./manage.py collectstatic

      The static files will then be placed in a directory called static within your project directory.

      If you followed the initial server setup guide, you should have a UFW firewall protecting your server. In order to test the development server, we'll have to allow access to the port we'll be using.

      Create an exception for port 8000 by typing:

      • sudo ufw allow 8000

      Testing the App Using the Django Development Server

      Finally, you can test your project by starting the Django development server with this command:

      • ./manage.py runserver 0.0.0.0:8000

      In your web browser, visit your server's domain name or IP address followed by :8000 and the polls path:

      • http://server_domain_or_IP:8000/polls

      You should see the Polls app interface:

      Polls App Interface

      To check out the admin interface, visit your server's domain name or IP address followed by :8000 and the administrative interface's path:

      • http://server_domain_or_IP:8000/admin

      You should see the Polls app admin authentication window:

      Polls Admin Auth Page

      Enter the administrative username and password you created with the createsuperuser command.

      After authenticating, you can access the Polls app's administrative interface:

      Polls Admin Main Interface

      When you are finished exploring, hit CTRL-C in the terminal window to shut down the development server.

      Testing the App Using Gunicorn

      The last thing we want to do before offloading static files is test Gunicorn to make sure that it can serve the application. We can do this by entering our project directory and using gunicorn to load the project's WSGI module:

      • gunicorn --bind 0.0.0.0:8000 mysite.wsgi

      This will start Gunicorn on the same interface that the Django development server was running on. You can go back and test the app again.

      Note: The admin interface will not have any of the styling applied since Gunicorn does not know how to find the static CSS content responsible for this.

      We passed Gunicorn a module by specifying the relative path to Django's wsgi.py file, the entry point to our application. This file defines a callable named application, which the web server uses to communicate with the app. To learn more about the WSGI specification, click here.
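      A minimal WSGI callable illustrates the interface Gunicorn expects; Django's wsgi.py builds an equivalent callable for the whole project via get_wsgi_application(). The standalone sketch below is not the Django app itself, just the smallest possible example of the contract:

```python
def application(environ, start_response):
    """Smallest possible WSGI app: the server calls this once per
    request, passing the request environment and a callback for the
    status line and headers, then iterates over the returned body."""
    body = b'Hello, WSGI!'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```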

      When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

      We'll now offload the application’s static files to DigitalOcean Spaces.

      Step 7 — Offloading Static Files to DigitalOcean Spaces

      At this point, Gunicorn can serve our Django application but not its static files. Usually we'd configure Nginx to serve these files, but in this tutorial we'll offload them to DigitalOcean Spaces using the django-storages plugin. This allows you to easily scale Django by centralizing its static content and freeing up server resources. In addition, you can deliver this static content using the DigitalOcean Spaces CDN.

      For a full guide on offloading Django static files to Object storage, consult How to Set Up Object Storage with Django.

      Installing and Configuring django-storages

      We'll begin by installing the django-storages Python package. The django-storages package provides Django with the S3Boto3Storage storage backend that uses the boto3 library to upload files to any S3-compatible object storage service.

      To start, install the django-storages and boto3 Python packages using pip:

      • pip install django-storages boto3

      Next, open your app's Django settings file again:

      • nano ~/django-polls/mysite/settings.py

      Navigate down to the INSTALLED_APPS section of the file, and append storages to the list of installed apps:

      ~/django-polls/mysite/settings.py

      . . .
      
      INSTALLED_APPS = [
          . . .
          'django.contrib.staticfiles',
          'storages',
      ]
      
      . . .
      

      Scroll further down the file to the STATIC_URL we previously modified. We'll now overwrite these values and append new S3Boto3Storage backend parameters. Delete the code you entered earlier, and add the following blocks, which include access and location information for your Space. Remember to replace the highlighted values here with your own information:

      ~/django-polls/mysite/settings.py

      . . .
      
      # Static files (CSS, JavaScript, Images)
      # https://docs.djangoproject.com/en/2.1/howto/static-files/
      
      AWS_ACCESS_KEY_ID = 'your_spaces_access_key'
      AWS_SECRET_ACCESS_KEY = 'your_spaces_secret_key'
      
      AWS_STORAGE_BUCKET_NAME = 'your_space_name'
      AWS_S3_ENDPOINT_URL = 'spaces_endpoint_URL'
      AWS_S3_OBJECT_PARAMETERS = {
          'CacheControl': 'max-age=86400',
      }
      AWS_LOCATION = 'static'
      AWS_DEFAULT_ACL = 'public-read'
      
      STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
      
      STATIC_URL = '{}/{}/'.format(AWS_S3_ENDPOINT_URL, AWS_LOCATION)
      STATIC_ROOT = 'static/'
      

      We define the following configuration items:

      • AWS_ACCESS_KEY_ID: The Access Key ID for the Space, which you created in the tutorial prerequisites. If you didn’t create a set of Access Keys, consult Sharing Access to Spaces with Access Keys.
      • AWS_SECRET_ACCESS_KEY: The secret key for the DigitalOcean Space.
      • AWS_STORAGE_BUCKET_NAME: Your DigitalOcean Space name.
      • AWS_S3_ENDPOINT_URL: The endpoint URL used to access the object storage service. For DigitalOcean, this will be something like https://nyc3.digitaloceanspaces.com depending on the Space region.
      • AWS_S3_OBJECT_PARAMETERS: Sets the cache control headers on static files.
      • AWS_LOCATION: Defines a directory within the object storage bucket where all static files will be placed.
      • AWS_DEFAULT_ACL: Defines the access control list (ACL) for the static files. Setting it to public-read ensures that the files are publicly accessible to end users.
      • STATICFILES_STORAGE: Sets the storage backend Django will use to offload static files. This backend should work with any S3-compatible backend, including DigitalOcean Spaces.
      • STATIC_URL: Specifies the base URL that Django should use when generating URLs for static files. Here, we combine the endpoint URL and the static files subdirectory to construct a base URL for static files.
      • STATIC_ROOT: Specifies where to collect static files locally before copying them to object storage.
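      With hypothetical values plugged in, the composed STATIC_URL, and the asset URLs Django derives from it, look like this (the Space endpoint below is an example only; substitute your own):

```python
# Example values only; substitute your own endpoint and location.
AWS_S3_ENDPOINT_URL = 'https://nyc3.digitaloceanspaces.com'
AWS_LOCATION = 'static'

# Same composition as in the settings file above.
STATIC_URL = '{}/{}/'.format(AWS_S3_ENDPOINT_URL, AWS_LOCATION)

# A template reference like {% static 'admin/css/base.css' %} resolves to:
asset_url = STATIC_URL + 'admin/css/base.css'
```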

      From now on, when you run collectstatic, Django will upload your app's static files to the Space. When you start Django, it'll begin serving static assets like CSS and Javascript from this Space.

      Before we test that this is all functioning correctly, we need to configure Cross-Origin Resource Sharing (CORS) headers for our Spaces files or access to certain static assets may be denied by your web browser.

      CORS headers tell the web browser that an application running at one domain can access scripts or resources located at another. In this case, we need to allow cross-origin resource sharing for our Django server's domain so that requests for static files in the Space are not denied by the web browser.
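      In effect, the browser compares the page's origin against the Access-Control-Allow-Origin value returned by the Space. A simplified sketch of that decision (not the full CORS algorithm) is:

```python
def origin_allowed(request_origin, allow_origin_header):
    """Return True when a cross-origin fetch would be permitted:
    '*' allows any origin; otherwise the header must match the
    requesting page's origin exactly."""
    return allow_origin_header == '*' or request_origin == allow_origin_header
```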

      To begin, navigate to the Settings page of your Space using the Cloud Control Panel:

      Screenshot of the Settings tab

      In the CORS Configurations section, click Add.

      CORS advanced settings

      Here, under Origin, enter the wildcard origin, *.

      Warning: When you deploy your app into production, be sure to change this value to your exact origin domain (including the http:// or https:// protocol). Leaving this as the wildcard origin is insecure, and we do this here only for testing purposes since setting the origin to http://example.com:8000 (using a nonstandard port) is currently not supported.

      Under Allowed Methods, select GET.

      Click on Add Header, and in text box that appears, enter Access-Control-Allow-Origin.

      Set Access Control Max Age to 600 so that the header we just created expires every 10 minutes.

      Click Save Options.

      From now on, objects in your Space will contain the appropriate Access-Control-Allow-Origin response headers, allowing modern secure web browsers to fetch these files across domains.

      At this point, you can optionally enable the CDN for your Space, which will serve these static files from a distributed network of edge servers. To learn more about CDNs, consult Using a CDN to Speed Up Static Content Delivery. This can significantly improve web performance. If you don't want to enable the CDN for your Space, skip ahead to the next section, Testing Spaces Static File Delivery.

      Enabling CDN (Optional)

      To activate static file delivery via the DigitalOcean Spaces CDN, begin by enabling the CDN for your DigitalOcean Space. To learn how to do this, consult How to Enable the Spaces CDN from the DigitalOcean product documentation.

      Once you've enabled the CDN for your Space, navigate to it using the Cloud Control Panel. You should see a new Endpoints link under your Space name:

      List of Space Endpoints

      These endpoints should contain your Space name.

      Notice the addition of a new Edge endpoint. This endpoint routes requests for Spaces objects through the CDN, serving them from the edge cache as much as possible. Note down this Edge endpoint, as we'll use it to configure the django-storages plugin.

      Next, edit your app's Django settings file once again:

      • nano ~/django-polls/mysite/settings.py

      Navigate down to the Static Files section we recently modified. Add the AWS_S3_CUSTOM_DOMAIN parameter to configure the django-storages plugin CDN endpoint and update the STATIC_URL parameter to use this new CDN endpoint:

      ~/django-polls/mysite/settings.py

      . . .
      
      # Static files (CSS, JavaScript, Images)
      # https://docs.djangoproject.com/en/2.1/howto/static-files/
      
      # Moving static assets to DigitalOcean Spaces as per:
      # https://www.digitalocean.com/community/tutorials/how-to-set-up-object-storage-with-django
      AWS_ACCESS_KEY_ID = 'your_spaces_access_key'
      AWS_SECRET_ACCESS_KEY = 'your_spaces_secret_key'
      
      AWS_STORAGE_BUCKET_NAME = 'your_space_name'
      AWS_S3_ENDPOINT_URL = 'spaces_endpoint_URL'
      AWS_S3_CUSTOM_DOMAIN = 'spaces_edge_endpoint_URL'
      AWS_S3_OBJECT_PARAMETERS = {
          'CacheControl': 'max-age=86400',
      }
      AWS_LOCATION = 'static'
      AWS_DEFAULT_ACL = 'public-read'
      
      STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
      
      STATIC_URL = '{}/{}/'.format(AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
      STATIC_ROOT = 'static/'
      

      Here, replace the spaces_edge_endpoint_URL with the Edge endpoint you just noted down, truncating the https:// prefix. For example, if the Edge endpoint URL is https://example.sfo2.cdn.digitaloceanspaces.com, AWS_S3_CUSTOM_DOMAIN should be set to example.sfo2.cdn.digitaloceanspaces.com.
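      One reliable way to perform that truncation is to take the netloc component of the parsed URL; the endpoint below is the hypothetical example from the text:

```python
from urllib.parse import urlparse

# Hypothetical Edge endpoint; use the one shown in your control panel.
edge_endpoint = 'https://example.sfo2.cdn.digitaloceanspaces.com'

# netloc is the host portion with the scheme stripped, which is the
# value django-storages expects for AWS_S3_CUSTOM_DOMAIN.
AWS_S3_CUSTOM_DOMAIN = urlparse(edge_endpoint).netloc
```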

      When you're done, save and close the file.

      When you start Django, it will now serve static content using the CDN for your DigitalOcean Space.

      Testing Spaces Static File Delivery

      We'll now test that Django is correctly serving static files from our DigitalOcean Space.

      Navigate to your Django app directory:

      • cd ~/django-polls

      From here, run collectstatic to collect and upload static files to your DigitalOcean Space:

      • python manage.py collectstatic

      You should see the following output:

      Output

      You have requested to collect static files at the destination location as specified in your settings. This will overwrite existing files! Are you sure you want to do this? Type 'yes' to continue, or 'no' to cancel:

      Type yes and hit ENTER to confirm.

      You should then see output like the following:

      Output

      121 static files copied.

      This confirms that Django successfully uploaded the polls app static files to your Space. You can navigate to your Space using the Cloud Control Panel, and inspect the files in the static directory.

      Next, we'll verify that Django is rewriting the appropriate URLs.

      Start the Gunicorn server:

      • gunicorn --bind 0.0.0.0:8000 mysite.wsgi

      In your web browser, visit your server's domain name or IP address followed by :8000 and /admin:

      http://server_domain_or_IP:8000/admin
      

      You should once again see the Polls app admin authentication window, this time with correct styling.

      Now, use your browser's developer tools to inspect the page contents and reveal the source file storage locations.

      To do this using Google Chrome, right-click the page, and select Inspect.

      You should see the following window:

      Chrome Dev Tools Window

      From here, click on Sources in the toolbar. In the list of source files in the left-hand pane, you should see /admin/login under your Django server's domain, and static/admin under your Space's CDN endpoint. Within static/admin, you should see both the css and fonts directories.

      This confirms that CSS stylesheets and fonts are correctly being served from your Space's CDN.

      When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

You can disable your active Python virtual environment by entering deactivate:

• deactivate

      Your prompt should return to normal.

      At this point you've successfully offloaded static files from your Django server, and are serving them from object storage. We can now move on to configuring Gunicorn to start automatically as a system service.

      Step 8 — Creating systemd Socket and Service Files for Gunicorn

      In Step 6 we tested that Gunicorn can interact with our Django application, but we should implement a more robust way of starting and stopping the application server. To accomplish this, we'll make systemd service and socket files.

      The Gunicorn socket will be created at boot and will listen for connections. When a connection occurs, systemd will automatically start the Gunicorn process to handle the connection.

      Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:

      • sudo nano /etc/systemd/system/gunicorn.socket

      Inside, we will create a [Unit] section to describe the socket, a [Socket] section to define the socket location, and an [Install] section to make sure the socket is created at the right time. Add the following code to the file:

      /etc/systemd/system/gunicorn.socket

      [Unit]
      Description=gunicorn socket
      
      [Socket]
      ListenStream=/run/gunicorn.sock
      
      [Install]
      WantedBy=sockets.target
      

      Save and close the file when you are finished.

      Next, create and open a systemd service file for Gunicorn with sudo privileges in your text editor. The service filename should match the socket filename with the exception of the extension:

      • sudo nano /etc/systemd/system/gunicorn.service

      Start with the [Unit] section, which specifies metadata and dependencies. We'll put a description of our service here and tell the init system to only start this after the networking target has been reached. Because our service relies on the socket from the socket file, we need to include a Requires directive to indicate that relationship:

      /etc/systemd/system/gunicorn.service

      [Unit]
      Description=gunicorn daemon
      Requires=gunicorn.socket
      After=network.target
      

Next, we'll open up the [Service] section. We'll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We'll give group ownership to the www-data group so that Nginx can communicate easily with Gunicorn.

      We'll then map out the working directory and specify the command to use to start the service. In this case, we'll have to specify the full path to the Gunicorn executable, which is installed within our virtual environment. We will bind the process to the Unix socket we created within the /run directory so that the process can communicate with Nginx. We log all data to standard output so that the journald process can collect the Gunicorn logs. We can also specify any optional Gunicorn tweaks here, like the number of worker processes. Here, we run Gunicorn with 3 worker processes.
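As a rule of thumb, the Gunicorn documentation recommends starting with (2 x CPU cores) + 1 workers, so the 3 workers used here suit a single-core server. A quick sketch of that formula:

```python
# Gunicorn's suggested starting point for worker count: (2 x cores) + 1.
# The 3 workers in the unit file above correspond to a single-core server.
import multiprocessing

def recommended_workers(cores=None):
    """Return the rule-of-thumb worker count for a given core count."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1
```

If you resize your server later, recompute this and update the `--workers` flag accordingly.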

      Add the following Service section to the file. Be sure to replace the username listed here with your own username:

      /etc/systemd/system/gunicorn.service

      [Unit]
      Description=gunicorn daemon
      Requires=gunicorn.socket
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      WorkingDirectory=/home/sammy/django-polls
ExecStart=/home/sammy/envs/polls/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          mysite.wsgi:application
      

      Finally, we'll add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

      /etc/systemd/system/gunicorn.service

      [Unit]
      Description=gunicorn daemon
      Requires=gunicorn.socket
      After=network.target
      
      [Service]
      User=sammy
      Group=www-data
      WorkingDirectory=/home/sammy/django-polls
ExecStart=/home/sammy/envs/polls/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          mysite.wsgi:application
      
      [Install]
      WantedBy=multi-user.target
      

      With that, our systemd service file is complete. Save and close it now.

      We can now start and enable the Gunicorn socket. This will create the socket file at /run/gunicorn.sock now and at boot. When a connection is made to that socket, systemd will automatically start the gunicorn.service to handle it:

      • sudo systemctl start gunicorn.socket
      • sudo systemctl enable gunicorn.socket

      We can confirm that the operation was successful by checking for the socket file.

      Checking for the Gunicorn Socket File

      Check the status of the process to find out whether it started successfully:

      • sudo systemctl status gunicorn.socket

      You should see the following output:

      Output

Failed to dump process list, ignoring: No such file or directory
● gunicorn.socket - gunicorn socket
   Loaded: loaded (/etc/systemd/system/gunicorn.socket; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-05 19:19:16 UTC; 1h 22min ago
   Listen: /run/gunicorn.sock (Stream)
   CGroup: /system.slice/gunicorn.socket

Mar 05 19:19:16 django systemd[1]: Listening on gunicorn socket.

Next, check for the existence of the gunicorn.sock file within the /run directory:

• file /run/gunicorn.sock

      Output

      /run/gunicorn.sock: socket

      If the systemctl status command indicated that an error occurred, or if you do not find the gunicorn.sock file in the directory, it's an indication that the Gunicorn socket was not created correctly. Check the Gunicorn socket's logs by typing:

      • sudo journalctl -u gunicorn.socket

      Take another look at your /etc/systemd/system/gunicorn.socket file to fix any problems before continuing.
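If you'd rather script this check, the `file` test boils down to a stat call. A minimal Python sketch (illustrative; `file` itself does more detailed type detection):

```python
# Programmatic version of the socket-file check: stat the path and confirm
# it is a Unix domain socket, which is what `file /run/gunicorn.sock`
# reports as "socket".
import os
import stat

def is_unix_socket(path):
    """Return True if path exists and is a Unix domain socket."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISSOCK(mode)
```

Running `is_unix_socket("/run/gunicorn.sock")` on the server should return True once the socket unit is active.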

      Testing Socket Activation

      Currently, if you've only started the gunicorn.socket unit, the gunicorn.service will not be active, since the socket has not yet received any connections. You can check this by typing:

      • sudo systemctl status gunicorn

      Output

● gunicorn.service - gunicorn daemon
   Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

      To test the socket activation mechanism, we can send a connection to the socket through curl by typing:

      • curl --unix-socket /run/gunicorn.sock localhost

      You should see the HTML output from your application in the terminal. This indicates that Gunicorn has started and is able to serve your Django application. You can verify that the Gunicorn service is running by typing:

      • sudo systemctl status gunicorn

      Output

● gunicorn.service - gunicorn daemon
   Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-05 20:43:56 UTC; 1s ago
 Main PID: 19074 (gunicorn)
    Tasks: 4 (limit: 4915)
   CGroup: /system.slice/gunicorn.service
           ├─19074 /home/sammy/envs/polls/bin/python3 /home/sammy/envs/polls/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock mysite.wsgi:application
           ├─19098 /home/sammy/envs/polls/bin/python3 /home/sammy/envs/polls/bin/gunicorn
           . . .

Mar 05 20:43:56 django systemd[1]: Started gunicorn daemon.
Mar 05 20:43:56 django gunicorn[19074]: [2019-03-05 20:43:56 +0000] [19074] [INFO] Starting gunicorn 19.9.0
. . .
Mar 05 20:44:15 django gunicorn[19074]: - - [05/Mar/2019:20:44:15 +0000] "GET / HTTP/1.1" 301 0 "-" "curl/7.58.0"
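The curl check can also be scripted in plain Python: an HTTP request over a Unix domain socket is just bytes written to a connected socket. The sketch below talks to a throwaway in-process server rather than the real /run/gunicorn.sock, so it is self-contained; point `unix_http_get` at the real socket path on your server to replicate the curl test:

```python
# Reproduce `curl --unix-socket /run/gunicorn.sock localhost` in Python:
# send a GET request over a Unix domain socket and read the raw reply.
# serve_once stands in for Gunicorn so the example is self-contained.
import socket
import threading

def serve_once(path, body=b"<h1>ok</h1>"):
    """Accept one connection on a Unix socket and answer with minimal HTTP."""
    server = socket.socket(socket.AF_UNIX)
    server.bind(path)
    server.listen(1)

    def handler():
        conn, _ = server.accept()
        conn.recv(4096)  # read (and discard) the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                     % (len(body), body))
        conn.close()
        server.close()

    threading.Thread(target=handler, daemon=True).start()

def unix_http_get(path):
    """What curl does with --unix-socket: GET / and return the raw reply."""
    client = socket.socket(socket.AF_UNIX)
    client.connect(path)
    client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
    reply = b""
    while True:
        chunk = client.recv(65536)
        if not chunk:
            break
        reply += chunk
    client.close()
    return reply
```

Note that connecting to the real Gunicorn socket this way also triggers systemd's socket activation, exactly as curl does.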

      If the output from curl or the output of systemctl status indicates that a problem occurred, check the logs for additional details:

      • sudo journalctl -u gunicorn

      You can also check your /etc/systemd/system/gunicorn.service file for problems. If you make changes to this file, be sure to reload the daemon to reread the service definition and restart the Gunicorn process:

      • sudo systemctl daemon-reload
      • sudo systemctl restart gunicorn

      Make sure you troubleshoot any issues before continuing on to configuring the Nginx server.

Step 9 — Configuring Nginx HTTPS and Gunicorn Proxy Passing

      Now that Gunicorn is set up in a more robust fashion, we need to configure Nginx to encrypt connections and hand off traffic to the Gunicorn process.

If you followed the prerequisites and set up Nginx with Let's Encrypt, you should already have a server block file corresponding to your domain available to you in Nginx's sites-available directory. If not, follow How To Secure Nginx with Let's Encrypt on Ubuntu 18.04 and return to this step.

Before we edit this example.com server block file, we’ll first remove the default server block file that ships with a fresh Nginx installation:

      • sudo rm /etc/nginx/sites-enabled/default

      We'll now modify the example.com server block file to pass traffic to Gunicorn instead of the default index.html page configured in the prerequisite step.

      Open the server block file corresponding to your domain in your editor:

      • sudo nano /etc/nginx/sites-available/example.com

      You should see something like the following:

      /etc/nginx/sites-available/example.com

      server {
      
              root /var/www/example.com/html;
              index index.html index.htm index.nginx-debian.html;
      
              server_name example.com www.example.com;
      
              location / {
                      try_files $uri $uri/ =404;
              }
      
          listen [::]:443 ssl ipv6only=on; # managed by Certbot
          listen 443 ssl; # managed by Certbot
          ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
          include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
          ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
      
      }
      
      server {
          if ($host = example.com) {
              return 301 https://$host$request_uri;
          } # managed by Certbot
      
      
              listen 80;
              listen [::]:80;
      
              server_name example.com www.example.com;
          return 404; # managed by Certbot
      
      
      }
      

      This is a combination of the default server block file created in How to Install Nginx on Ubuntu 18.04 as well as additions appended automatically by Let's Encrypt. We are going to delete the contents of this file and write a new configuration that redirects HTTP traffic to HTTPS, and forwards incoming requests to the Gunicorn socket we created in the previous step.

      If you'd like, you can make a backup of this file using cp. Quit your text editor and create a backup called example.com.old:

      • sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/example.com.old

      Now, reopen the file and delete its contents. We'll build the new configuration block by block.

      Begin by pasting in the following block, which redirects HTTP requests at port 80 to HTTPS:

      /etc/nginx/sites-available/example.com

      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name _;
          return 301 https://example.com$request_uri;
      }
      

Here we listen for HTTP IPv4 and IPv6 requests on port 80 and send a 301 response to redirect the request to HTTPS port 443 using the example.com domain. Because this is the default_server with a catch-all server_name, it also redirects HTTP requests made directly to the server's IP address.
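As a plain-Python illustration (not part of the deployment itself), the redirect amounts to answering 301 with a Location header built from the canonical HTTPS origin plus the original request URI:

```python
# Illustration of the redirect server block above: whatever host or IP the
# HTTP request used, Nginx answers 301 with a Location on the canonical
# HTTPS domain, preserving $request_uri.
def http_redirect(request_uri, canonical="https://example.com"):
    """Mirror `return 301 https://example.com$request_uri;`."""
    return 301, {"Location": canonical + request_uri}
```

So a request for http://server_ip/admin/ comes back as a 301 pointing at https://example.com/admin/, and the browser retries over TLS.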

      After this block, append the following block of config code that handles HTTPS requests for the example.com domain:

      /etc/nginx/sites-available/example.com

      . . . 
      server {
          listen [::]:443 ssl ipv6only=on;
          listen 443 ssl;
          server_name example.com www.example.com;
      
          # Let's Encrypt parameters
          ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
          include /etc/letsencrypt/options-ssl-nginx.conf;
          ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
      
          location = /favicon.ico { access_log off; log_not_found off; }
      
          location / {
              proxy_pass         http://unix:/run/gunicorn.sock;
              proxy_redirect     off;
      
              proxy_set_header   Host              $http_host;
              proxy_set_header   X-Real-IP         $remote_addr;
              proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
              proxy_set_header   X-Forwarded-Proto https;
          }
      }
      

      Here, we first listen on port 443 for requests hitting the example.com and www.example.com domains.

      Next, we provide the same Let's Encrypt configuration included in the default server block file, which specifies the location of the SSL certificate and private key, as well as some additional security parameters.

      The location = /favicon.ico line instructs Nginx to ignore any problems with finding a favicon.

The last location / block instructs Nginx to hand off requests to the Gunicorn socket configured in Step 8. In addition, it adds headers to inform the upstream Django server that a request has been forwarded and to provide it with various request properties.

      After you've pasted in those two configuration blocks, the final file should look something like this:

      /etc/nginx/sites-available/example.com

      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name _;
          return 301 https://example.com$request_uri;
      }
      server {
              listen [::]:443 ssl ipv6only=on;
              listen 443 ssl;
              server_name example.com www.example.com;
      
              # Let's Encrypt parameters
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
              include /etc/letsencrypt/options-ssl-nginx.conf;
              ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
      
              location = /favicon.ico { access_log off; log_not_found off; }
      
              location / {
                proxy_pass         http://unix:/run/gunicorn.sock;
                proxy_redirect     off;
      
                proxy_set_header   Host              $http_host;
                proxy_set_header   X-Real-IP         $remote_addr;
                proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
                proxy_set_header   X-Forwarded-Proto https;
              }
      }
      

      Save and close the file when you are finished.

Test your Nginx configuration for syntax errors by typing:

• sudo nginx -t

      If your configuration is error-free, restart Nginx by typing:

      • sudo systemctl restart nginx

      You should now be able to visit your server's domain or IP address to view your application. Your browser should be using a secure HTTPS connection to connect to the Django backend.

      To completely secure our Django project, we need to add a couple of security parameters to its settings.py file. Reopen this file in your editor:

      • nano ~/django-polls/mysite/settings.py

      Scroll to the bottom of the file, and add the following parameters:

      ~/django-polls/mysite/settings.py

      . . .
      
      SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
      SESSION_COOKIE_SECURE = True
      CSRF_COOKIE_SECURE = True
      SECURE_SSL_REDIRECT = True
      

      These settings tell Django that you have enabled HTTPS on your server, and instruct it to use "secure" cookies. To learn more about these settings, consult the SSL/HTTPS section of Security in Django.
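To make the first of those settings concrete, here is a simplified sketch of what SECURE_PROXY_SSL_HEADER does (Django's actual implementation lives in `HttpRequest.is_secure()` and includes additional safeguards): because the proxied request reaches Gunicorn over a plain Unix socket, Django can only learn the original connection was HTTPS from the header Nginx sets.

```python
# Simplified sketch of the SECURE_PROXY_SSL_HEADER mechanism: Django treats
# a request as secure when the configured header carries the expected
# value, since the proxied request itself arrives over a plain socket.
# This mirrors, but is not, Django's actual implementation.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

def request_is_secure(meta, setting=SECURE_PROXY_SSL_HEADER):
    """Return True when the trusted proxy header marks the request as HTTPS."""
    header, expected = setting
    return meta.get(header) == expected
```

This is also why the setting is only safe behind a proxy you control: Nginx must overwrite X-Forwarded-Proto (as our config does) so a client cannot spoof it.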

      When you're done, save and close the file.

      Finally, restart Gunicorn:

      • sudo systemctl restart gunicorn

      At this point, you have configured Nginx to redirect HTTP requests and hand off these requests to Gunicorn. HTTPS should now be fully enabled for your Django project and app. If you're running into errors, this discussion on troubleshooting Nginx and Gunicorn may help.

      Warning: As stated in Configuring CORS Headers, be sure to change the Origin from the wildcard * domain to your domain name (https://example.com in this guide) before making your app accessible to end users.

      Conclusion

In this guide, you set up and configured a scalable Django application running on an Ubuntu 18.04 server. This setup can be replicated across multiple servers to create a highly available architecture. Furthermore, this app and its config can be containerized using Docker or another container runtime to ease deployment and scaling. These containers can then be deployed into a container cluster like Kubernetes. In an upcoming tutorial series, we will explore how to containerize and modernize this Django polls app so that it can run in a Kubernetes cluster.

      In addition to static files, you may also wish to offload your Django Media files to object storage. To learn how to do this, consult Using Amazon S3 to Store your Django Site's Static and Media Files. You might also consider compressing static files to further optimize their delivery to end users. To do this, you can use a Django plugin like Django compressor.
