      How To Set Up Laravel, Nginx, and MySQL With Docker Compose on Ubuntu 20.04


      The author selected The FreeBSD Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the past few years, Docker has become a frequently used solution for deploying applications thanks to how it simplifies running and deploying applications in ephemeral containers. When you are using a LEMP application stack, for example, with PHP, Nginx, MySQL and the Laravel framework, Docker can significantly streamline the setup process.

      Docker Compose has further simplified the development process by allowing developers to define their infrastructure, including application services, networks, and volumes, in a single file. Docker Compose offers an efficient alternative to running multiple docker container create and docker container run commands.

      In this tutorial, you will build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all inside Docker containers. You will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

      Prerequisites

      Before you start, you will need:

      Step 1 — Downloading Laravel and Installing Dependencies

      As a first step, you will get the latest version of Laravel and install the dependencies for the project, including Composer, the application-level package manager for PHP. We will install these dependencies with Docker to avoid having to install Composer globally.

      First, check that you are in your home directory and clone the latest Laravel release to a directory called laravel-app:

      • cd ~
      • git clone https://github.com/laravel/laravel.git laravel-app

      Move into the laravel-app directory:

      • cd ~/laravel-app

      Next, use Docker’s composer image to mount the directories that you will need for your Laravel project and avoid the overhead of installing Composer globally:

      • docker run --rm -v $(pwd):/app composer install

      Using the -v and --rm flags with docker run creates an ephemeral container that is bind-mounted to your current directory and removed when the command finishes. The bind mount makes the contents of your ~/laravel-app directory available inside the container, and it also ensures that the vendor folder Composer creates inside the container is written back to your current directory.
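
      As a quick sanity check, you can confirm that Composer created the vendor directory on your host. The same throwaway-container pattern works for any other Composer command you might need later; a minimal sketch, assuming you run it from ~/laravel-app:

      # Confirm that the vendor directory was written back to the host
      ls ~/laravel-app/vendor

      # The same ephemeral container can run any other Composer command,
      # for example listing the installed packages:
      docker run --rm -v $(pwd):/app composer show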

      As a final step, set permissions on the project directory so that it is owned by your non-root user:

      • sudo chown -R sammy:sammy ~/laravel-app

      This will be important when you write the Dockerfile for your application image in Step 4, as it will allow you to work with your application code and run processes in your container as a non-root user.

      With your application code in place, you can move on to defining your services with Docker Compose.

      Step 2 — Creating the Docker Compose File

      Building your applications with Docker Compose simplifies the process of setting up and versioning your infrastructure. To set up your Laravel application, you will write a docker-compose file that defines your web server, database, and application services.

      Open the file:

      • nano ~/laravel-app/docker-compose.yml

      In the docker-compose file, you will define three services: app, webserver, and db. Add the following code to the file, being sure to replace the your_mysql_root_password placeholder, defined as an environment variable under the db service, with a strong password of your choice:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      

      The services defined here include:

      • app: This service definition contains the Laravel application and runs a custom Docker image, digitalocean.com/php, that you will define in Step 4. It also sets the working_dir in the container to /var/www.
      • webserver: This service definition pulls the nginx:alpine image from Docker and exposes ports 80 and 443.
      • db: This service definition pulls the mysql:5.7.22 image from Docker and defines a few environmental variables, including a database called laravel for your application and the root password for the database. You are free to name the database whatever you would like, and you should replace your_mysql_root_password with your own strong password. This service definition also maps port 3306 on the host to port 3306 on the container.

      Each container_name property defines a name for the container, which corresponds to the name of the service. If you don’t define this property, Docker assigns each container a generated name that combines a random adjective with the name of a famous scientist or hacker, separated by an underscore.

      To facilitate communication between containers, the services are connected to a bridge network called app-network. A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate with each other. The bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. This creates a greater level of security for applications, ensuring that only related services can communicate with one another. It also means that you can define multiple networks and services connecting to related functions: front-end application services can use a frontend network, for example, and back-end services can use a backend network.
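
      If you would like to see this network once the stack is running (Step 8), Docker’s network commands show how the containers are attached. Note that Compose prefixes the network name with the project directory, so the exact name below assumes the project lives in a folder called laravel-app:

      # List the networks Docker knows about
      docker network ls

      # Inspect the bridge network created by Compose and see which
      # containers are attached to it (name assumes the laravel-app folder)
      docker network inspect laravel-app_app-network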

      Next, you’ll look at how to add volumes and bind mounts to your service definitions to persist your application data.

      Step 3 — Persisting Data

      Docker has powerful and convenient features for persisting data. In your application, you will make use of volumes and bind mounts to persist the database, as well as your application and configuration files. Volumes offer flexibility for backups and persistence beyond a container’s lifecycle, while bind mounts facilitate code changes during development, making changes to your host files or directories immediately available in your containers. Your setup will make use of both.

      Warning: By using bind mounts, you make it possible to change the host filesystem through processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability with security implications, and could impact non-Docker processes on the host system. Use bind mounts with care.

      In the docker-compose file, define a volume called dbdata under the db service definition to persist the MySQL database:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
          networks:
            - app-network
        ...
      

      The named volume dbdata persists the contents of the /var/lib/mysql folder present inside the container. This allows you to stop and restart the db service without losing data.

      At the bottom of the file, add the definition for the dbdata volume:

      ~/laravel-app/docker-compose.yml

      ...
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      With this definition in place, you will be able to use this volume across services.
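
      Once the containers are running (Step 8), you can inspect the named volume to see where Docker stores the MySQL data on the host. As with networks, Compose prefixes the volume name with the project directory, so the name below assumes a project folder called laravel-app:

      # List volumes and inspect the one Compose created for the MySQL data
      docker volume ls
      docker volume inspect laravel-app_dbdata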

      Next, add a bind mount to the db service for the MySQL configuration files you will create in Step 7:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
            - ./mysql/my.cnf:/etc/mysql/my.cnf
        ...
      

      This bind mount binds ~/laravel-app/mysql/my.cnf to /etc/mysql/my.cnf in the container.

      Next, add bind mounts to the webserver service. There will be two: one for your application code and another for the Nginx configuration definition that you will create in Step 6:

      ~/laravel-app/docker-compose.yml

      #Nginx Service
      webserver:
        ...
        volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
        networks:
            - app-network
      

      The first bind mount binds the application code in the ~/laravel-app directory to the /var/www directory inside the container. The configuration file that you will add to ~/laravel-app/nginx/conf.d/ will also be mounted to /etc/nginx/conf.d/ in the container, allowing you to add or modify the configuration directory’s contents as needed.

      Finally, add the following bind mounts to the app service for the application code and configuration files:

      ~/laravel-app/docker-compose.yml

      #PHP Service
      app:
        ...
        volumes:
             - ./:/var/www
             - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
            - app-network
      

      The app service is bind-mounting the ~/laravel-app folder, which contains the application code, to the /var/www folder in the container. This will speed up the development process, since any changes made to your local application directory will be instantly reflected inside the container. You are also binding your PHP configuration file, ~/laravel-app/php/local.ini, to /usr/local/etc/php/conf.d/local.ini inside the container. You will create the local PHP configuration file in Step 5.
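
      Once the containers are up (Step 8), you can verify that the bind mount behaves as described by creating a file on the host and checking for it inside the app container. This is only a sanity check, and the test file can be removed afterwards:

      # Create a test file on the host...
      touch ~/laravel-app/bindmount-test.txt

      # ...confirm it is visible inside the app container...
      docker-compose exec app ls -l /var/www/bindmount-test.txt

      # ...and clean up (removing it on the host also removes it in the container)
      rm ~/laravel-app/bindmount-test.txt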

      Your docker-compose file will now look like this:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          volumes:
            - ./:/var/www
            - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - dbdata:/var/lib/mysql/
            - ./mysql/my.cnf:/etc/mysql/my.cnf
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      Save the file and exit your editor when you are finished making changes.

      With your docker-compose file written, you can now build the custom image for your application.

      Step 4 — Creating the Dockerfile

      Docker allows you to specify the environment inside of individual containers with a Dockerfile. A Dockerfile enables you to create custom images that you can use to install the software required by your application and configure settings based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

      Your Dockerfile will be located in your ~/laravel-app directory. Create the file:

      • nano ~/laravel-app/Dockerfile

      This Dockerfile will set the base image and specify the necessary commands and instructions to build the Laravel application image. Add the following code to the file:

      ~/laravel-app/Dockerfile

      FROM php:7.4-fpm
      
      # Copy composer.lock and composer.json
      COPY composer.lock composer.json /var/www/
      
      # Set working directory
      WORKDIR /var/www
      
      # Install dependencies
      RUN apt-get update && apt-get install -y \
          build-essential \
          libpng-dev \
          libjpeg62-turbo-dev \
          libfreetype6-dev \
          locales \
          zip \
          jpegoptim optipng pngquant gifsicle \
          vim \
          unzip \
          git \
          curl \
          libzip-dev
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install extensions
      RUN docker-php-ext-install pdo_mysql zip exif pcntl
      RUN docker-php-ext-configure gd --with-freetype --with-jpeg
      RUN docker-php-ext-install gd
      
      # Install composer
      RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      
      # Add user for laravel application
      RUN groupadd -g 1000 www
      RUN useradd -u 1000 -ms /bin/bash -g www www
      
      # Copy existing application directory contents
      COPY . /var/www
      
      # Copy existing application directory permissions
      COPY --chown=www:www . /var/www
      
      # Change current user to www
      USER www
      
      # Expose port 9000 and start php-fpm server
      EXPOSE 9000
      CMD ["php-fpm"]
      

      First, the Dockerfile creates an image on top of the php:7.4-fpm Docker image. This is a Debian-based image that has the PHP FastCGI implementation PHP-FPM installed. The file also installs the prerequisite packages for Laravel — the pdo_mysql, zip, exif, pcntl, and gd PHP extensions — along with Composer. The mbstring extension that Laravel requires already ships with the php:7.4-fpm base image, so it does not need to be installed again.

      The RUN directive specifies the commands to update, install, and configure settings inside the container, including creating a dedicated user and group called www. The WORKDIR instruction specifies the /var/www directory as the working directory for the application.

      Creating a dedicated user and group with restricted permissions mitigates the risk inherent in running Docker containers, which run as root by default. Instead of running this container as root, you’ve created the www user, which has read and write access to the /var/www folder thanks to the COPY instruction used with the --chown flag, which copies the application folder with the appropriate ownership.

      Finally, the EXPOSE command exposes a port in the container, 9000, for the php-fpm server. CMD specifies the command that should run once the container is created. Here, CMD specifies "php-fpm", which will start the server.

      Save the file and exit your editor when you are finished making changes.
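
      You don’t need to build this image by hand, since Compose will build it from the Dockerfile when you bring the stack up in Step 8. If you’d like to check that the Dockerfile builds cleanly on its own first, a direct build works too:

      cd ~/laravel-app
      # Build the application image by itself to verify the Dockerfile
      docker build -t digitalocean.com/php .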

      You can now move on to defining your PHP configuration.

      Step 5 — Configuring PHP

      Now that you have defined your infrastructure in the docker-compose file, you can configure the PHP service to act as a PHP processor for incoming requests from Nginx.

      To configure PHP, you will create the local.ini file inside the php folder. This is the file that you bind-mounted to /usr/local/etc/php/conf.d/local.ini inside the container in Step 2. Creating this file will allow you to override the default php.ini file that PHP reads when it starts.

      Create the php directory:

      • mkdir ~/laravel-app/php

      Next, open the local.ini file:

      • nano ~/laravel-app/php/local.ini

      To demonstrate how to configure PHP, you’ll add the following code to set size limitations for uploaded files:

      ~/laravel-app/php/local.ini

      upload_max_filesize=40M
      post_max_size=40M
      

      The upload_max_filesize and post_max_size directives set the maximum allowed size for uploaded files, and demonstrate how you can set php.ini configurations from your local.ini file. You can put any PHP-specific configuration that you want to override in the local.ini file.

      Save the file and exit your editor.
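
      After the containers are running (Step 8), you can confirm that PHP picked up these overrides by querying the PHP configuration inside the app container:

      # Show the effective values for the overridden directives
      docker-compose exec app php -i | grep -E 'upload_max_filesize|post_max_size'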

      With your PHP local.ini file in place, you can move on to configuring Nginx.

      Step 6 — Configuring Nginx

      With the PHP service configured, you can modify the Nginx service to use PHP-FPM as the FastCGI server to serve dynamic content. The FastCGI server is based on a binary protocol for interfacing interactive programs with a web server. For more information, please refer to this article on Understanding and Implementing FastCGI Proxying in Nginx.

      To configure Nginx, you will create an app.conf file with the service configuration in the ~/laravel-app/nginx/conf.d/ folder.

      First, create the nginx/conf.d/ directory:

      • mkdir -p ~/laravel-app/nginx/conf.d

      Next, create the app.conf configuration file:

      • nano ~/laravel-app/nginx/conf.d/app.conf

      Add the following code to the file to specify your Nginx configuration:

      ~/laravel-app/nginx/conf.d/app.conf

      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      The server block defines the configuration for the Nginx web server with the following directives:

      • listen: This directive defines the port on which the server will listen to incoming requests.
      • error_log and access_log: These directives define the files for writing logs.
      • root: This directive sets the root folder path, forming the complete path to any requested file on the local file system.

      In the php location block, the fastcgi_pass directive specifies that the app service is listening on a TCP socket on port 9000. This makes the PHP-FPM server listen over the network rather than on a Unix socket. Though a Unix socket skips the network stack and therefore offers a slight speed advantage over a TCP socket, it only works when the communicating processes share the same host and filesystem. A TCP socket, by contrast, allows you to connect to distributed services. Because your application code runs in a separate container from your webserver container, a TCP socket makes the most sense for your configuration.

      Save the file and exit your editor when you are finished making changes.

      Thanks to the bind mount you created in Step 2, any changes you make inside the nginx/conf.d/ folder will be directly reflected inside the webserver container.
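
      After the webserver container is running (Step 8), you can take advantage of this by validating and reloading the configuration whenever you edit it, without rebuilding anything:

      # Check the Nginx configuration for syntax errors
      docker-compose exec webserver nginx -t

      # Reload Nginx so that configuration changes take effect
      docker-compose exec webserver nginx -s reload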

      Next, you’ll look at and configure your MySQL settings.

      Step 7 — Configuring MySQL

      With PHP and Nginx configured, you can enable MySQL to act as the database for your application.

      To configure MySQL, you will create the my.cnf file in the mysql folder. This is the file that you bind-mounted to /etc/mysql/my.cnf inside the container in Step 2. This bind mount allows you to override the my.cnf settings as and when required.

      To demonstrate how this works, you’ll add settings to the my.cnf file that enable the general query log and specify the log file.

      First, create the mysql directory:

      • mkdir ~/laravel-app/mysql

      Next, make the my.cnf file:

      • nano ~/laravel-app/mysql/my.cnf

      In the file, add the following code to enable the query log and set the log file location:

      ~/laravel-app/mysql/my.cnf

      [mysqld]
      general_log = 1
      general_log_file = /var/lib/mysql/general.log
      

      This configuration enables the general query log by setting general_log to 1. The general_log_file setting specifies where the log will be stored.

      Save the file and exit your editor.
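
      Because the general log lives inside the dbdata volume, you can follow it once the db container is running (Step 8), which is handy for watching the queries Laravel issues:

      # Stream the MySQL general query log from inside the db container
      docker-compose exec db tail -f /var/lib/mysql/general.log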

      Your next step will be to start the containers.

      Step 8 — Modifying Environment Settings and Running the Containers

      Now that you have defined all of your services in your docker-compose file and created the configuration files for these services, you can start the containers. As a final step, though, you will make a copy of the .env.example file that Laravel includes by default and name the copy .env, which is the file Laravel expects to define its environment:

      • cp .env.example .env

      You can now modify the .env file on the app container to include specific details about your setup.

      Open the file using nano or your text editor of choice:

      • nano .env

      Find the block that specifies DB_CONNECTION and update it to reflect the specifics of your setup. You will modify the following fields:

      • DB_HOST will be your db database container.
      • DB_DATABASE will be the laravel database.
      • DB_USERNAME will be the username you will use for your database. In this case, you will use laraveluser.
      • DB_PASSWORD will be the secure password you would like to use for this user account.

      /var/www/.env

      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=laravel
      DB_USERNAME=laraveluser
      DB_PASSWORD=your_laravel_db_password
      

      Save your changes and exit your editor.

      With all of your services defined in your docker-compose file, use the following single command to start all of the containers, create the volumes, and set up and connect the networks:

      • docker-compose up -d

      When you run docker-compose up for the first time, it will download all of the necessary Docker images, which might take a while. Once the images are downloaded and stored in your local machine, Compose will create your containers. The -d flag daemonizes the process, running your containers in the background.

      Once the process is complete, use the following command to list all of the running containers:

      • docker ps

      You will see the following output with details about your app, webserver, and db containers:

      Output

      CONTAINER ID        NAMES               IMAGE                  STATUS              PORTS
      c31b7b3251e0        db                  mysql:5.7.22           Up 2 seconds        0.0.0.0:3306->3306/tcp
      ed5a69704580        app                 digitalocean.com/php   Up 2 seconds        9000/tcp
      5ce4ee31d7c0        webserver           nginx:alpine           Up 2 seconds        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

      The CONTAINER ID in this output is a unique identifier for each container, while NAMES lists the service name associated with each. You can use both of these identifiers to access the containers. IMAGE defines the image name for each container, while STATUS provides information about the container’s state: whether it’s running, restarting, or stopped.

      You’ll now use docker-compose exec to set the application key for the Laravel application. The docker-compose exec command allows you to run specific commands in containers.

      The following command will generate a key and copy it to your .env file, ensuring that your user sessions and encrypted data remain secure:

      • docker-compose exec app php artisan key:generate

      You now have the environment settings required to run your application. To cache these settings into a file, which will boost your application’s load speed, run:

      • docker-compose exec app php artisan config:cache

      Your configuration settings will be loaded into /var/www/bootstrap/cache/config.php on the container.
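
      Keep in mind that once the configuration is cached, Laravel reads the cached values instead of the .env file. If you change environment settings later, refresh the cache; a minimal sketch:

      # Clear the cached configuration and rebuild it from the current .env
      docker-compose exec app php artisan config:clear
      docker-compose exec app php artisan config:cache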

      As a final step, visit http://your_server_ip in the browser. You will see the following home page for your Laravel application:

      Laravel Home Page
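
      If you are working from a terminal without a desktop browser, you can perform the same check from the command line; a 200 OK response indicates that Nginx is passing requests through to PHP-FPM (replace your_server_ip with your server’s IP address):

      curl -I http://your_server_ip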

      With your containers running and your configuration information in place, you can move on to configuring your user information for the laravel database on the db container.

      Step 9 — Creating a User for MySQL

      The default MySQL installation only creates the root administrative account, which has unlimited privileges on the database server. In general, it’s better to avoid using the root administrative account when interacting with the database. Instead, let’s create a dedicated database user for your application’s Laravel database.

      To create a new user, execute an interactive bash shell on the db container with docker-compose exec:

      • docker-compose exec db bash

      Inside the container, log into the MySQL root administrative account:

      • mysql -u root -p

      You will be prompted for the password you set for the MySQL root account during installation in your docker-compose file.

      Start by checking for the database called laravel, which you defined in your docker-compose file. Run the show databases command to check for existing databases:

      • show databases;

      You will see the laravel database listed in the output:

      Output

      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | laravel            |
      | mysql              |
      | performance_schema |
      | sys                |
      +--------------------+
      5 rows in set (0.00 sec)

      Next, create the user account that will be allowed to access this database. Your username will be laraveluser, though you can replace this with another name if you’d prefer. Just be sure that your username and password here match the details you set in your .env file in the previous step:

      • GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password';

      Flush the privileges to notify the MySQL server of the changes:

      • FLUSH PRIVILEGES;

      Exit MySQL:

      • EXIT;

      Finally, exit the container:

      • exit

      You have configured the user account for your Laravel application database and are ready to migrate your data and work with the Tinker console.
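
      As an alternative to the interactive session above, the same grant can be issued in a single non-interactive command from the host. This is a convenience sketch that assumes the same user name and password as in your .env file:

      # Create the application user and reload privileges in one step
      docker-compose exec db mysql -u root -p -e "GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password'; FLUSH PRIVILEGES;"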

      Step 10 — Migrating Data and Working with the Tinker Console

      With your application running, you can migrate your data and experiment with the tinker command, which will initiate a PsySH console with Laravel preloaded. PsySH is a runtime developer console and interactive debugger for PHP, and Tinker is a REPL specifically for Laravel. Using the tinker command will allow you to interact with your Laravel application from the command line in an interactive shell.

      First, test the connection to MySQL by running the Laravel artisan migrate command, which creates a migrations table in the database from inside the container:

      • docker-compose exec app php artisan migrate

      This command will migrate the default Laravel tables. The output confirming the migration will look like this:

      Output

      Migration table created successfully.
      Migrating: 2014_10_12_000000_create_users_table
      Migrated:  2014_10_12_000000_create_users_table
      Migrating: 2014_10_12_100000_create_password_resets_table
      Migrated:  2014_10_12_100000_create_password_resets_table

      Once the migration is complete, you can run a query to check if you are properly connected to the database using the tinker command:

      • docker-compose exec app php artisan tinker

      Test the MySQL connection by getting the data you just migrated:

      • DB::table('migrations')->get();

      You will see output that looks like this:

      Output

      => Illuminate\Support\Collection {#2856
           all: [
             {#2862
               +"id": 1,
               +"migration": "2014_10_12_000000_create_users_table",
               +"batch": 1,
             },
             {#2865
               +"id": 2,
               +"migration": "2014_10_12_100000_create_password_resets_table",
               +"batch": 1,
             },
           ],
         }

      You can use tinker to interact with your databases and to experiment with services and models.

      With your Laravel application in place, you are ready for further development and experimentation.

      Conclusion

      You now have a LEMP stack application running on your server, which you’ve tested by accessing the Laravel welcome page and creating MySQL database migrations. Key to the simplicity of this installation is Docker Compose, which allows you to create a group of Docker containers, defined in a single file, with a single command.




      How To Set Up WordPress Multisite with Nginx and LEMP on Ubuntu 20.04


      Introduction

      The WordPress multisite feature is a unique way to host and manage a suite of sites in one place, and is useful for projects or positions that call for the operation of several WordPress sites under one host. Multisite offers the ability to create multiple WordPress websites from a single installation of WordPress, with each site having a separate theme, set of plugins, and collection of content (typically posts and pages). This feature helps to reduce the overhead of maintaining and updating several installations of WordPress, while allowing you to host multiple sites which may be unrelated to one another.

      In this tutorial, you’ll set up a WordPress multisite on an Ubuntu 20.04 Droplet using subdomains. The WordPress sites that you’ll create will have a subdomain web address like http://wp-site.yourdomain.com, but your subdomain address can be mapped to an external domain like http://wp-site.net so that each site looks independent to users visiting your multisite suite of addresses.

      Prerequisites

      This tutorial requires you to have a basic knowledge of WordPress multisite. The following articles might be helpful in deepening your understanding of multisites:

      Step 1 — Installing WordPress

      For this tutorial, you’ll need to have access to a WordPress Droplet running the LEMP stack on Ubuntu 20.04. You can create a WordPress installation on a Droplet in the following ways:

      You can also follow this tutorial using a WordPress installation on a hosting provider, but keep in mind that the steps of this tutorial will reflect a WordPress multisite setup on a DigitalOcean Droplet.

      Before installing WordPress, you’ll need to set up DNS records for each intended WordPress site in your multisite installation. For this tutorial, create the following domain names:

      • Site 1:

        Domain: examplewp.com (Primary domain)

        This is the site that is created when WordPress is installed.

      • Site 2:

        External Domain: shoppingsite.com

        Subdomain: shoppingsite.example.com

      • Site 3:

        External Domain: companysite.org

        Subdomain: companysite.example.com

      The first domain is the primary domain name through which WordPress will be referenced. Make sure to set up DNS for all three domains to point to the IP address of the Droplet hosting WordPress.

      While installing, if you’ve chosen to use a Droplet with a LEMP stack, be sure to follow the instructions for setting up your database and users. For DigitalOcean WordPress 1-Click installations, this has already been configured for you.

      After installing WordPress on your Droplet, let’s assign file ownership to the user www-data. This is essential for media uploads and for core/plugin/theme updates to work in WordPress.

      Log in to your Droplet via the command line, then execute the following command, replacing the highlighted path with the full path to your WordPress installation:

      • chown -R www-data:www-data /var/www/wordpress/

      This command will grant read and write access to the www-data user for media uploads and necessary updates.

      In the next step, we’ll further configure the primary domain, then configure our multisite installation from there.

      Step 2 — Setting Up DNS Wildcard Records

      Let’s continue by adding a DNS wildcard record for the primary domain. Adding a wildcard record allows more WordPress sites to be added to your multisite installation at any time, without needing individual A records. Alternatively, you can choose to add a new A record for each subdomain.

      Log in to your DigitalOcean control panel and navigate to the Networking section. Edit the primary domain and create a wildcard A record for this domain pointing to the Droplet’s IP address. A wildcard record is created by entering an asterisk (*) in the hostname input box as shown in the following screenshot.

      DNS Control Panel - wildcard record

      If you host your domain’s DNS on a hosting provider, set the wildcard record using the registrar’s site.

      After setting the wildcard record, DNS queries for any random-sub-domain.examplewp.com should return the IP address of your Droplet.
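
      You can verify the wildcard record from your terminal before moving on; once DNS has propagated, any made-up subdomain should resolve to your Droplet’s IP address. The check below uses dig (part of the dnsutils package) and examplewp.com stands in for your own domain:

      # Both of these should return the Droplet's IP address
      dig +short examplewp.com
      dig +short random-sub-domain.examplewp.com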

      Step 3 — Enabling Multisite and Creating Additional Sites

      In this section, let’s enable the WordPress multisite feature and create the two additional sites mentioned in Step 1.

      To begin this process, a PHP constant has to be defined in the wp-config.php file to enable the Network Setup page.

      You can edit the wp-config.php file via the command line while logged into your WordPress Droplet. Open the file using your command line editor of choice. Here, we’ll use nano:

      • nano /var/www/wordpress/wp-config.php

      Add the following code before the comment /* That's all, stop editing! Happy blogging. */ or similar text:

      /var/www/wordpress/wp-config.php

          /* Multisite settings */
          define( 'WP_ALLOW_MULTISITE', true );
      

      Save and close the file. If you’re using nano, you can do that with CTRL+X, then Y and ENTER to confirm.

      Next, log in to the WordPress admin panel and navigate to Tools -> Network Setup. Choose the Subdomains option, modify the Network Title as desired, and then click Install.

      WordPress Network Setup

      You will be presented with two blocks of code to be added in the wp-config.php and .htaccess files. Because we are using Nginx, we won’t need to use the suggested .htaccess code, so you can ignore that for now.

      Copy the wp-config.php code which looks similar to the following:

          define('MULTISITE', true);
          define('SUBDOMAIN_INSTALL', true);
          define('DOMAIN_CURRENT_SITE', 'examplewp.com');
          define('PATH_CURRENT_SITE', '/');
          define('SITE_ID_CURRENT_SITE', 1);
          define('BLOG_ID_CURRENT_SITE', 1);
      

      Next, open the wp-config.php file:

      • nano /var/www/wordpress/wp-config.php

      Include the code snippet you just copied, placing it before the comment that says /* That's all, stop editing! Happy blogging. */ (or a variation of text that advises not to write below it). Save the file when you’re done.
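
      If you want to double-check that both blocks of constants ended up in the file, a quick grep against the installation path used earlier will list them:

      grep -E "MULTISITE|SUBDOMAIN_INSTALL|DOMAIN_CURRENT_SITE" /var/www/wordpress/wp-config.php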

      Log out of the WordPress admin panel, and log in again. From the admin toolbar on the top left, navigate to My Sites > Network Admin > Sites.

      WordPress Toolbar

      Click the Add New button to open the Add New Site form. The following screenshot shows the filled-in details for the shopping site in our example. The Site Address entered will form the subdomain of this site.

      Creating a new WordPress site

      Click Add Site and the created site will be accessible via http://shoppingsite.examplewp.com.

      Repeat these steps to create the second site (companysite.examplewp.com in our example).

      Each of the following three WordPress sites can now have its own content, theme, and active set of plugins:

      • examplewp.com
      • shoppingsite.examplewp.com
      • companysite.examplewp.com

      Step 4 — Setting Up Domain Mapping

      In this step, let’s enable multisite to use a separate domain name for each WordPress site by downloading and enabling the WordPress MU Domain Mapping plugin. This third-party plugin allows users of WordPress multisite to map their blog/site to another domain.

      To download this plugin, visit My Sites -> Network Admin -> Plugins from your dashboard and select Add New on your primary domain to find the WordPress MU Domain Mapping plugin.

      Plugins

      Install the plugin, then click the Network Activate link under the WordPress MU Domain Mapping plugin. Go to Settings -> Domain Mapping and make changes to the Domain Options as follows:

      • Uncheck Remote Login
      • Check Permanent Redirect
      • Uncheck Redirect administration pages to site’s original domain

      Domain mapping options

      Click Save once done. These settings redirect all requests for subdomains (like companysite.examplewp.com) to their respective external domains (like companysite.org) including the administration pages (/wp-admin).

      Next, let’s map a domain name to each site based on its site ID. There are many ways to find the ID of a site, but for easier administration you’ll create a simple WordPress must-use plugin that displays an additional ID column on the Sites page.

      Log in to your WordPress Droplet via SSH and create an mu-plugins directory:

      • mkdir /var/www/wordpress/wp-content/mu-plugins

      Create a PHP file inside this directory:

      • nano /var/www/wordpress/wp-content/mu-plugins/wpms_blogid.php
      

      Next, copy the following content to your wpms_blogid.php file:

      /var/www/wordpress/wp-content/mu-plugins/wpms_blogid.php

          <?php
          add_filter( 'wpmu_blogs_columns', 'do_get_id' );
          add_action( 'manage_sites_custom_column', 'do_add_columns', 10, 2 );
          add_action( 'manage_blogs_custom_column', 'do_add_columns', 10, 2 );
      
          function do_add_columns( $column_name, $blog_id ) {
              if ( 'blog_id' === $column_name )
                  echo $blog_id;
              return $column_name;
          }
      
          function do_get_id( $columns ) {
              $columns['blog_id'] = 'ID';
              return $columns;
          }
      

      The Sites -> All Sites section should now show an additional ID column.


      Note down the ID values for each site and go to the Settings -> Domains page. Enter the site ID followed by the external domain for the site. For example, since companysite has an ID of 3, on this page, the Site ID should be 3, and the domain should be companysite.org.

      Mapping a site ID to a domain

      You may add a “www” prefix if you wish to set the site URL as www.companysite.org. Repeat these steps for the other domains. Click Save at the bottom of the page.

      Each site will now have its own domain name instead of a subdomain; i.e., entering http://companysite.org in your browser will open the My Online Company site. You can check this now by visiting your additional domain names that are mapped. You should see the site title change in the upper left corner of the page.

      Now each site can be maintained separately through its own WordPress admin panel:

      - `http://examplewp.com/wp-admin/`
      - `http://shoppingsite.com/wp-admin/`
      - `http://companysite.org/wp-admin/`
      

      Updates to the core/plugins/themes and installation of plugins/themes should be done from the network admin page of the primary domain:

      - `http://examplewp.com/wp-admin/network/`
      

      Conclusion

      In this tutorial, you learned how to set up the multisite feature on a WordPress website running on Ubuntu 20.04 with Nginx (LEMP stack). The multisite feature enables you to host multiple WordPress sites on one WordPress installation on your Droplet. You also learned how to set up domain mapping, which allows each of your subsites to be reached through a custom domain name.

      For an alternate process to install the WordPress multisite feature, visit our tutorial, Setting up WordPress Multisite on Apache.




      How To Deploy a Static HTML Website with Ansible on Ubuntu 20.04 (Nginx)



      Part of the Series:
      How To Write Ansible Playbooks

      Ansible is a modern configuration management tool that doesn’t require the use of an agent software on remote nodes, using only SSH and Python to communicate and execute commands on managed servers. This series will walk you through the main Ansible features that you can use to write playbooks for server automation. At the end, we’ll see a practical example of how to create a playbook to automate setting up a remote Nginx web server and deploy a static HTML website to it.

      If you were following along with all parts of this series, at this point you should be familiar with installing system packages, applying templates, and using handlers in Ansible playbooks. In this part of the series, you’ll use what you’ve seen so far to create a playbook that automates setting up a remote Nginx server to host a static HTML website on Ubuntu 20.04.

      Start by creating a new directory on your Ansible control node where you’ll set up the Ansible files and a demo static HTML website to be deployed to your remote server. This could be in any location of your choice within your home folder. In this example we’ll use ~/ansible-nginx-demo.

      • mkdir ~/ansible-nginx-demo
      • cd ~/ansible-nginx-demo

      Next, copy your existing inventory file into the new directory. In this example, we’ll use the same inventory you set up at the beginning of this series:

      • cp ~/ansible-practice/inventory .

      This will copy a file named inventory from a folder named ansible-practice in your home directory, and save it to the current directory.

      Obtaining the Demo Website

      For this demonstration, we’ll use a static HTML website that is the subject of our How To Code in HTML series. Start by downloading the demo website files by running the following command:

      • curl -L https://github.com/do-community/html_demo_site/archive/refs/heads/main.zip -o html_demo.zip

      You’ll need unzip to unpack the contents of this download. To make sure you have this tool installed, run:

      • sudo apt install unzip

      Then, unpack the demo website files with:

      • unzip html_demo.zip

      This will create a new directory called html_demo_site-main in your current working directory. You can check the contents of the directory with an ls -la command:

      • ls -la html_demo_site-main

      Output

      total 28
      drwxrwxr-x 3 sammy sammy 4096 sep 18 2020 .
      drwxrwxr-x 5 sammy sammy 4096 mrt 25 15:03 ..
      -rw-rw-r-- 1 sammy sammy 1289 sep 18 2020 about.html
      drwxrwxr-x 2 sammy sammy 4096 sep 18 2020 images
      -rw-rw-r-- 1 sammy sammy 2455 sep 18 2020 index.html
      -rw-rw-r-- 1 sammy sammy 1079 sep 18 2020 LICENSE
      -rw-rw-r-- 1 sammy sammy  675 sep 18 2020 README.md

      Creating a Template for Nginx’s Configuration

      You’ll now set up the Nginx template that is necessary to configure the remote web server. Create a new folder called files within your ansible-nginx-demo directory to hold non-playbook files:

      • mkdir files

      Then, open a new file called nginx.conf.j2 inside that folder:

      • nano files/nginx.conf.j2

      This template file contains an Nginx server block configuration for a static HTML website. It uses three variables: document_root, app_root, and server_name. We’ll define these variables later on when creating the playbook. Copy the following content to your template file:

      ~/ansible-nginx-demo/files/nginx.conf.j2

      server {
        listen 80;
      
        root {{ document_root }}/{{ app_root }};
        index index.html index.htm;
      
        server_name {{ server_name }};
      
        location / {
         default_type "text/html";
         try_files $uri.html $uri $uri/ =404;
        }
      }
      

      Save and close the file when you’re done.

      Creating a New Ansible Playbook

      Next, we’ll create a new Ansible playbook and set up the variables that we’ve used in the previous section of this guide. Open a new file named playbook.yml:

      • nano playbook.yml

      This playbook starts with the hosts definition set to all and a become directive that tells Ansible to run all tasks as the root user by default (the same as manually running commands with sudo). Within this playbook’s vars section, we’ll create three variables: server_name, document_root, and app_root. These variables are used in the Nginx configuration template to set up the domain name or IP address that this web server will respond to, and the full path to where the website files are located on the server. For this demo, we’ll use the ansible_default_ipv4.address fact variable because it contains the remote server’s public IP address, but you can replace this value with your server’s hostname in case it has a domain name properly configured within a DNS service to point to this server:

      ~/ansible-nginx-demo/playbook.yml

      ---
      - hosts: all
        become: yes
        vars:
          server_name: "{{ ansible_default_ipv4.address }}"
          document_root: /var/www/html
          app_root: html_demo_site-main
        tasks:
      

      You can keep this file open for now. The next sections will walk you through all tasks that you’ll need to include in this playbook to make it fully functional.
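
      If you’re unsure what value the ansible_default_ipv4.address fact will hold for your server, you can query it with an ad hoc command before running the playbook, using the same inventory and remote user as in the rest of this series:

      # Gather only the default IPv4 fact from the hosts in your inventory
      ansible all -i inventory -u sammy -m setup -a "filter=ansible_default_ipv4"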

      Installing Required Packages

      The following task will update the apt cache and then install the nginx package on remote nodes:

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Update apt cache and install Nginx
            apt:
              name: nginx
              state: latest
              update_cache: yes
      

      Uploading Website Files to Remote Nodes

      The next task will use the copy built-in module to upload the website files to the remote document root. We’ll use the document_root variable to set the destination on the server where the application folder should be created.

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Copy website files to the server's document root
            copy:
              src: "{{ app_root }}"
              dest: "{{ document_root }}"
              mode: preserve
      

      Applying and Enabling the Custom Nginx Configuration

      We’ll now apply the Nginx template that will configure the web server to host your static HTML file. After the configuration file is set at /etc/nginx/sites-available, we’ll create a symbolic link to that file inside /etc/nginx/sites-enabled and notify the Nginx service so that it restarts afterward. The entire process will require two separate tasks:

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Apply Nginx template
            template:
              src: files/nginx.conf.j2
              dest: /etc/nginx/sites-available/default
            notify: Restart Nginx
      
          - name: Enable new site
            file:
              src: /etc/nginx/sites-available/default
              dest: /etc/nginx/sites-enabled/default
              state: link
            notify: Restart Nginx
      

      Allowing Port 80 on UFW

      Next, include the task that enables tcp access on port 80:

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Allow all access to tcp port 80
            ufw:
              rule: allow
              port: '80'
              proto: tcp
      . . .
      

      Creating a Handler for the Nginx Service

      To finish this playbook, the only thing left to do is to set up the Restart Nginx handler:

      ~/ansible-nginx-demo/playbook.yml

      . . .
        handlers:
          - name: Restart Nginx
            service:
              name: nginx
              state: restarted  
      

      Running the Finished Playbook

      Once you’re finished including all the required tasks in your playbook file, it will look like this:

      ~/ansible-nginx-demo/playbook.yml

      ---
      - hosts: all
        become: yes
        vars:
          server_name: "{{ ansible_default_ipv4.address }}"
          document_root: /var/www/html
          app_root: html_demo_site-main
        tasks:
          - name: Update apt cache and install Nginx
            apt:
              name: nginx
              state: latest
              update_cache: yes
      
          - name: Copy website files to the server's document root
            copy:
              src: "{{ app_root }}"
              dest: "{{ document_root }}"
              mode: preserve
      
          - name: Apply Nginx template
            template:
              src: files/nginx.conf.j2
              dest: /etc/nginx/sites-available/default
            notify: Restart Nginx
      
          - name: Enable new site
            file:
              src: /etc/nginx/sites-available/default
              dest: /etc/nginx/sites-enabled/default
              state: link
            notify: Restart Nginx
      
          - name: Allow all access to tcp port 80
            ufw:
              rule: allow
              port: '80'
              proto: tcp
      
        handlers:
          - name: Restart Nginx
            service:
              name: nginx
              state: restarted
      

      To execute this playbook on the server(s) that you set up in your inventory file, run ansible-playbook with the same connection arguments you’ve used when running a connection test within the introduction of this series. Here, we’ll be using an inventory file named inventory and the sammy user to connect to the remote server. Because the playbook requires sudo to run, we’re also including the -K argument to provide the remote user’s sudo password when prompted by Ansible:

      • ansible-playbook -i inventory playbook.yml -u sammy -K

      You’ll see output like this:

      Output

      BECOME password:

      PLAY [all] **********************************************************************************************

      TASK [Gathering Facts] **********************************************************************************
      ok: [203.0.113.10]

      TASK [Update apt cache and install Nginx] ***************************************************************
      ok: [203.0.113.10]

      TASK [Copy website files to the server's document root] *************************************************
      changed: [203.0.113.10]

      TASK [Apply Nginx template] *****************************************************************************
      changed: [203.0.113.10]

      TASK [Enable new site] **********************************************************************************
      ok: [203.0.113.10]

      TASK [Allow all access to tcp port 80] ******************************************************************
      ok: [203.0.113.10]

      RUNNING HANDLER [Restart Nginx] *************************************************************************
      changed: [203.0.113.10]

      PLAY RECAP **********************************************************************************************
      203.0.113.10               : ok=7    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      Once the playbook is finished, if you go to your browser and access your server’s hostname or IP address you should now see the following page:

      HTML Demo Site Deployed by Ansible

      Congratulations, you have successfully automated the deployment of a static HTML website to a remote Nginx server, using Ansible.

      If you make changes to any of the files in the demo website, you can run the playbook again and the copy task will make sure any file changes are reflected in the remote host. Because Ansible has an idempotent behavior, running the playbook multiple times will not trigger changes that were already made to the system.
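
      One way to preview what a playbook run would do, without touching the server, is Ansible’s check mode:

      # Dry run: report changes without applying them
      ansible-playbook -i inventory playbook.yml -u sammy -K --check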


