      How To Set Up Multiple WordPress Sites Using Multisite [Quickstart]


      Introduction

      WordPress is a robust content management platform, currently powering over 36% of the web. With the multisite feature, WordPress administrators can create multiple sites on one server, using a single WordPress installation, right within their dashboard.

      In this quickstart, learn how to set up multiple WordPress sites on one server or WordPress Droplet using the multisite feature. If you’d prefer a more in-depth walkthrough of a WordPress multisite setup, see How To Set Up WordPress Multisite with Nginx and LEMP on Ubuntu 20.04.

      Prerequisites

      The steps in this tutorial require the user to have sudo privileges. Follow our Initial Server Setup with Ubuntu 20.04 to create an administrative system user with sudo privileges.

      Before working with WordPress, you’ll need to have it installed on your virtual private server. This quickstart uses an Ubuntu 20.04 Droplet with the LAMP stack installed and a user with sudo privileges. You can follow this tutorial with other installations, but keep in mind that the steps may vary depending on your setup.

      If you’ve chosen to install WordPress by following that tutorial, be sure to stop at the end of its step 5 and continue with step 1 of this quickstart.

      Step 1 — Configuring Your WordPress Installation

      With WordPress installed, we need to make changes in a few of its configuration files.

      To begin, let’s modify the WordPress configuration to activate the multisite feature. After logging in to your WordPress server, execute the following command:

      • sudo nano /var/www/html/wp-config.php

      Add the following line above the /* That’s all, stop editing! Happy blogging. */ (or similar text) comment in the wp-config.php file:

      /var/www/html/wp-config.php

      /* Multisite */
      define('WP_ALLOW_MULTISITE', true);
      

      Then, save the file and exit. You can do so by pressing CTRL+S to save, followed by CTRL+X to exit.

      Once done, the WordPress online installation page will be waiting. Access the page by adding /wp-admin/install.php to your site’s domain or IP address (e.g. example.com/wp-admin/install.php) and fill out the short online form.

      Step 2 — Setting Up Multiple WordPress Sites

      Go into your WordPress dashboard and open the Network Setup page under the Tools section:

      [Image: the Network Setup page]

      Once you have filled out the required fields, go through the directions on the next page:

      [Image: the directions shown on the next page]

      Create a directory for your new sites:

      • sudo mkdir /var/www/html/wp-content/blogs.dir

      Next, you’ll need to alter your WordPress configuration. Make sure to place the following content above the line saying /* That’s all, stop editing! Happy blogging. */ or similar:

      • sudo nano /var/www/html/wp-config.php

      /var/www/html/wp-config.php

      define('MULTISITE', true);
      define('SUBDOMAIN_INSTALL', false);
      $base = '/';
      define('DOMAIN_CURRENT_SITE', 'your_ip_address_here');
      define('PATH_CURRENT_SITE', '/');
      define('SITE_ID_CURRENT_SITE', 1);
      define('BLOG_ID_CURRENT_SITE', 1);
      

      After making all of the necessary changes, log into WordPress once more.

      Step 3 — Setting Up Your New WordPress Site

      After logging into your site again, you will notice that the header bar now has a section called “My Sites” instead of simply displaying your blog’s name:

      [Image: the My Sites header bar]

      You can now create new sites by going to My Sites at the top, clicking on Network Admin, and clicking on Sites:

      [Image: creating a new site from the Network Admin dashboard]





      How To Run a PHP Job Multiple Times in a Minute with Crontab on Ubuntu 20.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      In Linux, you can use the versatile crontab tool to process long-running tasks in the background at specific times. While the cron daemon is great for running repetitive tasks, it has one limitation: you can only schedule tasks at a minimum interval of 1 minute.

      However, in many applications, to avoid a poor user experience it’s preferable for jobs to execute more frequently. For instance, if you’re using the job-queue model to schedule file-processing tasks on your website, a significant wait will have a negative impact on end users.

      Another scenario is an application that uses the job-queue model either to send text messages or emails to clients once they’ve completed a certain task in an application (for example, sending money to a recipient). If users have to wait a minute before the delivery of a confirmation message, they might think that the transaction failed and try to repeat the same transaction.

      To overcome these challenges, you can program a PHP script that loops and processes tasks repeatedly for 60 seconds while it waits for the cron daemon to call it again after a minute. Once the cron daemon calls the PHP script for the first time, the script can execute tasks in a time period that matches the logic of your application without keeping the user waiting.
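      Before building the full script later in this guide, here is a minimal sketch of that pattern, assuming the script is triggered once per minute by cron. The doSomeWork() function is only an illustrative placeholder and is not part of this tutorial’s code:

      <?php
      // Minimal sketch of the loop-inside-cron pattern (illustrative only).
      // cron calls this script once per minute; the script then keeps working
      // in short intervals until the minute is almost over.

      function doSomeWork()
      {
          // Placeholder for your own task-processing logic, for example
          // reading queued jobs from a database and marking them as done.
      }

      $loop_expiry_time = time() + 60; // stop shortly before cron calls the script again

      while (time() < $loop_expiry_time) {
          doSomeWork();
          sleep(5); // wait 5 seconds before looking for more work
      }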

      In this guide, you will create a sample cron_jobs database on an Ubuntu 20.04 server. Then, you’ll set up a tasks table and a script that executes the jobs in your table in intervals of 5 seconds using the PHP while(...){...} loop and sleep() functions.

      Prerequisites

      To complete this tutorial, you will need the following:

      • An Ubuntu 20.04 server set up with a non-root user that has sudo privileges.
      • A LAMP stack (Apache, MySQL, and PHP) installed on the server, since the script in this guide is served over HTTP and stores its jobs in MySQL.

      Step 1 — Setting Up a Database

      In this step, you’ll create a sample database and table. First, SSH to your server and log in to MySQL as root:

      • sudo mysql -u root -p

      Enter your root password for the MySQL server and press ENTER to proceed. Then, run the following command to create a cron_jobs database:

      • CREATE DATABASE cron_jobs;

      Create a non-root user for the database. You’ll need the credentials of this user to connect to the cron_jobs database from PHP. Remember to replace EXAMPLE_PASSWORD with a strong value:

      • CREATE USER 'cron_jobs_user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'EXAMPLE_PASSWORD';
      • GRANT ALL PRIVILEGES ON cron_jobs.* TO 'cron_jobs_user'@'localhost';
      • FLUSH PRIVILEGES;

      Next, switch to the cron_jobs database:

      • USE cron_jobs;

      Output

      Database changed

      Once you’ve selected the database, create a tasks table. In this table, you’ll insert some tasks that will be automatically executed by a cron job. Since the minimum time interval for running a cron job is 1 minute, you’ll later code a PHP script that overrides this setting and instead executes the jobs in intervals of 5 seconds.

      For now, create your tasks table:

      • CREATE TABLE tasks (
      • task_id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      • task_name VARCHAR(50),
      • queued_at DATETIME,
      • completed_at DATETIME,
      • is_processed CHAR(1)
      • ) ENGINE = InnoDB;

      Insert three records into the tasks table. Use the MySQL NOW() function in the queued_at column to record the current date and time when the tasks are queued. For the completed_at column, use the MySQL CURDATE() function to set a default time of 00:00:00. Later, as tasks complete, your script will update this column:

      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 1', NOW(), CURDATE(), 'N');
      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 2', NOW(), CURDATE(), 'N');
      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 3', NOW(), CURDATE(), 'N');

      Confirm the output after running each INSERT command:

      Output

      Query OK, 1 row affected (0.00 sec) ...

      Make sure the data is in place by running a SELECT statement against the tasks table:

      • SELECT task_id, task_name, queued_at, completed_at, is_processed FROM tasks;

      You will find a list of all tasks:

      Output

      +---------+-----------+---------------------+---------------------+--------------+
      | task_id | task_name | queued_at           | completed_at        | is_processed |
      +---------+-----------+---------------------+---------------------+--------------+
      |       1 | TASK 1    | 2021-03-06 06:27:19 | 2021-03-06 00:00:00 | N            |
      |       2 | TASK 2    | 2021-03-06 06:27:28 | 2021-03-06 00:00:00 | N            |
      |       3 | TASK 3    | 2021-03-06 06:27:36 | 2021-03-06 00:00:00 | N            |
      +---------+-----------+---------------------+---------------------+--------------+
      3 rows in set (0.00 sec)

      The time in the completed_at column is set to 00:00:00; this column will update once the tasks are processed by the PHP script that you will create next.

      Exit from the MySQL command-line interface:

      • QUIT;

      Output

      Bye

      Your cron_jobs database and tasks table are now in place, and you can create a PHP script that processes the jobs.

      Step 2 — Creating a PHP Script that Runs Tasks After 5 Seconds

      In this step, you’ll create a script that uses a combination of the PHP while(...){...} loop and the sleep() function to run tasks every 5 seconds.

      Open a new /var/www/html/tasks.php file in the root directory of your web server using nano:

      • sudo nano /var/www/html/tasks.php

      Next, create a new try { block after a <?php tag and declare the database variables that you created in Step 1. Remember to replace EXAMPLE_PASSWORD with the actual password for your database user:

      /var/www/html/tasks.php

      <?php
      try {
          $db_name = 'cron_jobs';
          $db_user = 'cron_jobs_user';
          $db_password = 'EXAMPLE_PASSWORD';
          $db_host = 'localhost';
      

      Next, declare a new PDO (PHP Data Object) class and set the attribute ERRMODE_EXCEPTION to catch any PDO errors. Also, set ATTR_EMULATE_PREPARES to false so that PDO uses the MySQL server’s native prepared statements instead of emulating them. Prepared statements allow you to send SQL queries and data separately, which enhances security and reduces the chances of an SQL injection attack:

      /var/www/html/tasks.php

      
          $pdo = new PDO('mysql:host=' . $db_host . ';dbname=' . $db_name, $db_user, $db_password);
          $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);  
          $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);       
      

      Then, create a new variable named $loop_expiry_time and set it to the current time plus 60 seconds. Next, open a new PHP while(time() < $loop_expiry_time) { statement. The idea here is to create a loop that runs until the current time (time()) reaches the value of $loop_expiry_time:

      /var/www/html/tasks.php

             
          $loop_expiry_time = time() + 60;
      
          while (time() < $loop_expiry_time) { 
      

      Next, declare a prepared SQL statement that retrieves unprocessed jobs from the tasks table:

      /var/www/html/tasks.php

         
              $data = [];
              $sql  = "select 
                       task_id
                       from tasks
                       where is_processed = :is_processed
                       ";
      

      Execute the SQL command and fetch all rows from the tasks table that have the is_processed column set to N; these are the rows that have not yet been processed:

      /var/www/html/tasks.php

        
              $data['is_processed'] = 'N';
      
              $stmt = $pdo->prepare($sql);
              $stmt->execute($data);
      

      Next, loop through the retrieved rows using a PHP while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {...} statement and create another SQL statement. This time, the SQL command updates the is_processed and completed_at columns for each task processed. This ensures that you don’t process tasks more than once:

      /var/www/html/tasks.php

        
              while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { 
                  $data_update   = [];         
                  $sql_update    = "update tasks set 
                                    is_processed  = :is_processed,
                                    completed_at  = :completed_at
                                    where task_id = :task_id                                 
                                    ";
      
                  $data_update   = [                
                                   'is_processed' => 'Y',                          
                                   'completed_at' => date("Y-m-d H:i:s"),
                                   'task_id'      => $row['task_id']                         
                                   ];
                  $stmt = $pdo->prepare($sql_update);
                  $stmt->execute($data_update);
              }
      

      Note: If you have a large queue to be processed (for example, 100,000 records per second), you might consider queueing jobs in a Redis Server since it is faster than MySQL when it comes to implementing the job-queue model. Nevertheless, this guide will process a smaller dataset.

      Before you close the first PHP while (time() < $loop_expiry_time) { statement, include a sleep(5); statement to pause job execution for 5 seconds and free up your server resources.

      You may change the 5-second period depending on your business logic and how fast you want tasks to execute. For instance, if you would like the tasks to be processed 3 times in a minute, set this value to 20 seconds.

      Remember to catch any PDO error messages inside a } catch (PDOException $ex) { echo $ex->getMessage(); } block:

      /var/www/html/tasks.php

                       sleep(5); 
      
              }       
      
      } catch (PDOException $ex) {
          echo $ex->getMessage(); 
      }
      

      Your complete tasks.php file will be as follows:

      /var/www/html/tasks.php

      <?php
      try {
          $db_name = 'cron_jobs';
          $db_user = 'cron_jobs_user';
          $db_password = 'EXAMPLE_PASSWORD';
          $db_host = 'localhost';
      
          $pdo = new PDO('mysql:host=' . $db_host . ';dbname=' . $db_name, $db_user, $db_password);
          $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);  
          $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);               
      
          $loop_expiry_time = time() + 60;
      
          while (time() < $loop_expiry_time) { 
              $data = [];
              $sql  = "select 
                       task_id
                       from tasks
                       where is_processed = :is_processed
                       ";
      
              $data['is_processed'] = 'N';
      
              $stmt = $pdo->prepare($sql);
              $stmt->execute($data);
      
              while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { 
                  $data_update   = [];         
                  $sql_update    = "update tasks set 
                                    is_processed  = :is_processed,
                                    completed_at  = :completed_at
                                    where task_id = :task_id                                 
                                    ";
      
                  $data_update   = [                
                                   'is_processed' => 'Y',                          
                                   'completed_at' => date("Y-m-d H:i:s"),
                                   'task_id'      => $row['task_id']                         
                                   ];
                  $stmt = $pdo->prepare($sql_update);
                  $stmt->execute($data_update);
              }   
      
              sleep(5); 
      
          }
      
      } catch (PDOException $ex) {
          echo $ex->getMessage(); 
      }
      

      Save the file by pressing CTRL + X, Y then ENTER.

      Once you’ve completed coding the logic in the /var/www/html/tasks.php file, you’ll schedule the cron daemon to execute the file every minute in the next step.

      Step 3 — Scheduling the PHP Script to Run After 1 Minute

      In Linux, you can schedule jobs to run automatically at stipulated times by adding entries to the crontab file. In this step, you will instruct the cron daemon to run your /var/www/html/tasks.php script every minute. So, open the /etc/crontab file using nano:

      • sudo nano /etc/crontab

      Then add the following line toward the end of the file to fetch http://localhost/tasks.php every minute:

      /etc/crontab

      ...
      * * * * * root /usr/bin/wget -O - http://localhost/tasks.php
      

      Save and close the file.

      This guide assumes that you have a basic knowledge of how cron jobs work. Consider reading our guide on How to Use Cron to Automate Tasks on Ubuntu.

      As indicated earlier, although the cron daemon runs the tasks.php file every minute, once the file is executed for the first time, it will loop through the open tasks for another 60 seconds. By the time the loop expires, the cron daemon will execute the file again and the process will continue.
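      If you want to verify that the cron daemon is actually invoking the script, one option (assuming the default rsyslog setup on Ubuntu 20.04, where cron activity is written to the system log) is to search the log for CRON entries:

      • sudo grep CRON /var/log/syslog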

      After you update and close the /etc/crontab file, the cron daemon should begin executing the tasks that you inserted into the tasks table. To confirm whether everything is working as expected, you’ll query your cron_jobs database next.

      Step 4 — Confirming Job Execution

      In this step, you will open your database one more time to check whether the tasks.php file is processing queued jobs when the cron daemon executes it automatically.

      Log back in to your MySQL server as root:

      • sudo mysql -u root -p

      Enter your MySQL server’s root password and press ENTER to proceed. Then, switch to the cron_jobs database:

      • USE cron_jobs;

      Output

      Database changed

      Run a SELECT statement against the tasks table:

      • SELECT task_id, task_name, queued_at, completed_at, is_processed FROM tasks;

      You will receive output similar to the following. The completed_at column shows that the tasks were processed at intervals of 5 seconds. Also, the tasks have been marked as completed, since the is_processed column is now set to Y, which means YES.

      Output

      +---------+-----------+---------------------+---------------------+--------------+
      | task_id | task_name | queued_at           | completed_at        | is_processed |
      +---------+-----------+---------------------+---------------------+--------------+
      |       1 | TASK 1    | 2021-03-06 06:27:19 | 2021-03-06 06:30:01 | Y            |
      |       2 | TASK 2    | 2021-03-06 06:27:28 | 2021-03-06 06:30:06 | Y            |
      |       3 | TASK 3    | 2021-03-06 06:27:36 | 2021-03-06 06:30:11 | Y            |
      +---------+-----------+---------------------+---------------------+--------------+
      3 rows in set (0.00 sec)

      This confirms that your PHP script is working as expected; you have run tasks at a shorter time interval by overriding the limitation of the 1-minute period set by the cron daemon.

      Conclusion

      In this guide, you set up a sample database on an Ubuntu 20.04 server, created jobs in a table, and ran them at intervals of 5 seconds using the PHP while(...){...} loop and sleep() function. Use the logic in this tutorial the next time you implement a job-queue-based application where tasks need to run multiple times within a 1-minute period.

      For more PHP tutorials, check out our PHP topic page.




      How To Deploy Multiple Environments in Your Terraform Project Without Duplicating Code


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Terraform offers advanced features that become increasingly useful as your project grows in size and complexity. It’s possible to alleviate the cost of maintaining complex infrastructure definitions for multiple environments by structuring your code to minimize repetitions and introducing tool-assisted workflows for easier testing and deployment.

      Terraform associates a state with a backend, which determines where and how state is stored and retrieved. Every state has only one backend and is tied to an infrastructure configuration. Certain backends, such as local or s3, may contain multiple states. In that case, each pairing of state and infrastructure configuration within the backend describes a workspace. Workspaces allow you to deploy multiple distinct instances of the same infrastructure configuration without storing them in separate backends.
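      As an illustration, a backend is declared in a terraform block. The following sketch, in which the bucket and key names are placeholders rather than values used elsewhere in this tutorial, configures an s3 backend; that single backend can then hold a separate state for each workspace:

      terraform {
        backend "s3" {
          bucket = "example-terraform-state"   # placeholder bucket name
          key    = "project/terraform.tfstate" # path of the state object inside the bucket
          region = "us-east-1"                 # region the s3 backend requires
        }
      }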

      In this tutorial, you’ll first deploy multiple infrastructure instances using different workspaces. You’ll then deploy a stateful resource, which, in this tutorial, will be a DigitalOcean Volume. Finally, you’ll reference pre-made modules from the Terraform Registry, which you can use to supplement your own.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions for this in the How to Generate a Personal Access Token tutorial.
      • Terraform installed on your local machine and a project set up with the DO provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-advanced, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Deploying Multiple Infrastructure Instances Using Workspaces

      Multiple workspaces are useful when you want to deploy or test a modified version of your main infrastructure without creating a separate project and setting up authentication keys again. Once you have developed and tested a feature using the separate state, you can incorporate the new code into the main workspace and possibly delete the additional state. When you init a Terraform project, regardless of backend, Terraform creates a workspace called default. It is always present and you can never delete it.

      However, multiple workspaces are not a suitable solution for creating multiple environments, such as staging and production. This is because workspaces only track the state; they do not store the code or its modifications.

      Since workspaces do not track the actual code, you should manage the code separation between multiple workspaces at the version control (VCS) level by matching them to their infrastructure variants. How you can achieve this depends on the VCS tool itself; in Git, for example, branches would be a fitting abstraction. To make it easier to manage the code for multiple environments, you can break them up into reusable modules, so that you avoid repeating similar code for each environment, as sketched below.
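      As a rough sketch of that idea (the module path, file layout, and variable names here are hypothetical and not part of this tutorial’s project), each environment’s configuration can stay small by calling one shared module with environment-specific inputs:

      # environments/staging/main.tf (hypothetical layout)
      module "web" {
        source       = "../../modules/web" # shared module reused by every environment
        droplet_name = "web-staging"
        droplet_size = "s-1vcpu-1gb"
      }

      # environments/production/main.tf (hypothetical layout)
      module "web" {
        source       = "../../modules/web"
        droplet_name = "web-production"
        droplet_size = "s-2vcpu-4gb"
      }

      In such a layout, the shared module would declare droplet_name and droplet_size as input variables, so the per-environment files contain only the values that differ.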

      Deploying Resources in Workspaces

      You’ll now create a project that deploys a Droplet, which you’ll apply from multiple workspaces.

      You’ll store the Droplet definition in a file called droplets.tf.

      Assuming you’re in the terraform-advanced directory, create and open it for editing by running:

      • nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This definition will create a Droplet running Ubuntu 18.04 with one CPU core and 1 GB RAM in the fra1 region. Its name will contain the name of the current workspace it is deployed from. When you’re done, save and close the file.
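      If you want the same code to produce slightly different resources per workspace, you can also branch on terraform.workspace directly. For example, in a hypothetical variation that is not part of the configuration used in this tutorial, you could give a production workspace a larger Droplet:

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"

        # Use a larger size only in a workspace named "production" (hypothetical variation)
        size   = terraform.workspace == "production" ? "s-2vcpu-4gb" : "s-1vcpu-1gb"
      }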

      Apply the project for Terraform to run its actions with:

      • terraform apply -var "do_token=${DO_PAT}"

      Your output will be similar to the following:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-default"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.
      ...

      Enter yes when prompted to deploy the Droplet in the default workspace.

      The name of the Droplet will be web-default, because the workspace you start with is called default. You can list the workspaces to confirm that it’s the only one available:

      • terraform workspace list

      You’ll receive the following output:

      Output

      * default

      The asterisk (*) means that you currently have that workspace selected.

      Create and switch to a new workspace called testing, which you’ll use to deploy a different Droplet, by running workspace new:

      • terraform workspace new testing

      You’ll have output similar to:

      Output

      Created and switched to workspace "testing"! You're now on a new, empty workspace. Workspaces isolate their state, so if you run "terraform plan" Terraform will not see any existing state for this configuration.

      You plan the deployment of the Droplet again by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the previous run:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-testing"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.
      ...

      Notice that Terraform plans to deploy a Droplet called web-testing, which it has named differently from web-default. This is because the default and testing workspaces have separate states and have no knowledge of each other’s resources—even though they stem from the same code.
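      With the default local backend, you can see this separation on disk: the default workspace keeps its state in terraform.tfstate in the project directory, while every other workspace gets its own state file under terraform.tfstate.d. After applying in both workspaces, the layout would look roughly like this (other project files omitted):

      terraform-advanced/
      ├── droplets.tf
      ├── terraform.tfstate            # state for the default workspace
      └── terraform.tfstate.d/
          └── testing/
              └── terraform.tfstate    # state for the testing workspace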

      To confirm that you’re in the testing workspace, output the current one with workspace show:

      • terraform workspace show

      The output will be the name of the current workspace:

      Output

      testing

      To delete a workspace, you first need to destroy all its deployed resources. Then, if it’s active, you need to switch to another one using workspace select. Since the testing workspace here is empty, you can switch to default right away:

      • terraform workspace select default

      You’ll receive output of Terraform confirming the switch:

      Output

      Switched to workspace "default".

      You can then delete it by running workspace delete:

      • terraform workspace delete testing

      Terraform will then perform the deletion:

      Output

      Deleted workspace "testing"!

      You can destroy the Droplet you’ve deployed in the default workspace by running:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted to finish the process.

      In this section, you’ve worked in multiple Terraform workspaces. In the next section, you’ll deploy a stateful resource.

      Deploying Stateful Resources

      Stateless resources do not store data, so you can create and replace them quickly, because they are not unique. Stateful resources, on the other hand, contain data that is unique or not simply re-creatable; therefore, they require persistent data storage.

      Since you may end up destroying such resources, or multiple resources may require their data, it’s best to store the data in a separate entity, such as DigitalOcean Volumes.

      Volumes are objects that you can attach to Droplets (servers), but are separate from them, and provide additional storage space. In this step, you’ll define the Volume and connect it to a Droplet in droplets.tf.

      Open it for editing:

      • nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      resource "digitalocean_volume" "volume" {
        region                  = "fra1"
        name                    = "new-volume"
        size                    = 10
        initial_filesystem_type = "ext4"
        description             = "New Volume for Droplet"
      }
      
      resource "digitalocean_volume_attachment" "volume_attachment" {
        droplet_id = digitalocean_droplet.web.id
        volume_id  = digitalocean_volume.volume.id
      }
      

      Here you define two new resources: the Volume itself and a Volume attachment. The Volume will be 10 GB, formatted as ext4, called new-volume, and located in the same region as the Droplet. Since the Volume and the Droplet are separate entities, you define a Volume attachment object to connect them. volume_attachment takes the Droplet and Volume IDs and instructs the DigitalOcean cloud to make the Volume available to the Droplet as a disk device.

      When you’re done, save and close the file.

      Plan this configuration by running:

      • terraform plan -var "do_token=${DO_PAT}"

      Terraform will plan the following actions:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-default"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

        # digitalocean_volume.volume will be created
        + resource "digitalocean_volume" "volume" {
            + description             = "New Volume for Droplet"
            + droplet_ids             = (known after apply)
            + filesystem_label        = (known after apply)
            + filesystem_type         = (known after apply)
            + id                      = (known after apply)
            + initial_filesystem_type = "ext4"
            + name                    = "new-volume"
            + region                  = "fra1"
            + size                    = 10
            + urn                     = (known after apply)
          }

        # digitalocean_volume_attachment.volume_attachment will be created
        + resource "digitalocean_volume_attachment" "volume_attachment" {
            + droplet_id = (known after apply)
            + id         = (known after apply)
            + volume_id  = (known after apply)
          }

      Plan: 3 to add, 0 to change, 0 to destroy.
      ...

      The output details that Terraform would create a Droplet, a Volume, and a Volume attachment, which connects the Volume to the Droplet.

      You’ve now defined and connected a Volume (a stateful resource) to a Droplet. In the next section, you’ll review public, pre-made Terraform modules that you can incorporate in your project.

      Referencing Pre-made Modules

      Aside from creating your own custom modules for your projects, you can also use pre-made modules and providers from other developers, which are publicly available on the Terraform Registry.

      In the Modules section you can search the database of available modules and filter by provider to find the module with the functionality you need. Once you’ve found one, you can read its description, which lists the inputs and outputs the module provides, as well as its external module and provider dependencies.

      [Image: the SSH key module on the Terraform Registry]

      You’ll now add the DigitalOcean SSH key module to your project. You’ll store the code separately from existing definitions in a file called ssh-key.tf. Create and open it for editing by running:

      • nano ssh-key.tf

      Add the following lines:

      ssh-key.tf

      module "ssh-key" {
        source         = "clouddrove/ssh-key/digitalocean"
        key_path       = "~/.ssh/id_rsa.pub"
        key_name       = "new-ssh-key"
        enable_ssh_key = true
      }
      

      This code defines an instance of the clouddrove/ssh-key/digitalocean module from the registry and sets some of the parameters it offers. It should add a public SSH key to your account by reading it from ~/.ssh/id_rsa.pub.

      When you’re done, save and close the file.

      Before you plan this code, you must download the referenced module by running:

      • terraform init

      You’ll receive output similar to the following:

      Output

      Initializing modules...
      Downloading clouddrove/ssh-key/digitalocean 0.13.0 for ssh-key...
      - ssh-key in .terraform/modules/ssh-key

      Initializing the backend...

      Initializing provider plugins...
      - Using previously-installed digitalocean/digitalocean v1.22.2

      Terraform has been successfully initialized!
      ...

      You can now plan the code for the changes:

      • terraform plan -var "do_token=${DO_PAT}"

      You’ll receive output similar to this:

      Output

      Refreshing Terraform state in-memory prior to plan...
      The refreshed state will be used to calculate this plan, but will not be
      persisted to local or remote state storage.

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

      ...

        # module.ssh-key.digitalocean_ssh_key.default[0] will be created
        + resource "digitalocean_ssh_key" "default" {
            + fingerprint = (known after apply)
            + id          = (known after apply)
            + name        = "devops"
            + public_key  = "ssh-rsa ... demo@clouddrove"
          }

      Plan: 4 to add, 0 to change, 0 to destroy.
      ...

      The output shows that you would create the SSH key resource, which means that you downloaded and invoked the module from your code.

      Conclusion

      Bigger projects can make use of some advanced features Terraform offers to help reduce complexity and make maintenance easier. Workspaces allow you to test new additions to your code without touching the stable main deployments. You can also couple workspaces with a version control system to track code changes. Using pre-made modules can also shorten development time, but may incur additional expenses or time in the future if the module becomes obsolete.

      For further resources on using Terraform, check out our How To Manage Infrastructure With Terraform series.


