
      How To Deploy a Static HTML Website with Ansible on Ubuntu 20.04 (Nginx)



      Part of the Series:
      How To Write Ansible Playbooks

      Ansible is a modern configuration management tool that doesn’t require the use of agent software on remote nodes, using only SSH and Python to communicate and execute commands on managed servers. This series will walk you through the main Ansible features that you can use to write playbooks for server automation. At the end, we’ll see a practical example of how to create a playbook to automate setting up a remote Nginx web server and deploying a static HTML website to it.

      If you were following along with all parts of this series, at this point you should be familiar with installing system packages, applying templates, and using handlers in Ansible playbooks. In this part of the series, you’ll use what you’ve seen so far to create a playbook that automates setting up a remote Nginx server to host a static HTML website on Ubuntu 20.04.

      Start by creating a new directory on your Ansible control node where you’ll set up the Ansible files and a demo static HTML website to be deployed to your remote server. This could be in any location of your choice within your home folder. In this example we’ll use ~/ansible-nginx-demo.

      • mkdir ~/ansible-nginx-demo
      • cd ~/ansible-nginx-demo

      Next, copy your existing inventory file into the new directory. In this example, we’ll use the same inventory you set up at the beginning of this series:

      • cp ~/ansible-practice/inventory .

      This will copy a file named inventory from a folder named ansible-practice in your home directory, and save it to the current directory.

      Obtaining the Demo Website

      For this demonstration, we’ll use a static HTML website that is the subject of our How To Code in HTML series. Start by downloading the demo website files by running the following command:

      • curl -L https://github.com/do-community/html_demo_site/archive/refs/heads/main.zip -o html_demo.zip

      You’ll need unzip to unpack the contents of this download. To make sure you have this tool installed, run:
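
      • sudo apt install unzip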

      Then, unpack the demo website files with:
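
      • unzip html_demo.zip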

      This will create a new directory called html_demo_site-main in your current working directory. You can check the contents of the directory with an ls -la command:

      • ls -la html_demo_site-main

      Output

      total 28
      drwxrwxr-x 3 sammy sammy 4096 Sep 18  2020 .
      drwxrwxr-x 5 sammy sammy 4096 Mar 25 15:03 ..
      -rw-rw-r-- 1 sammy sammy 1289 Sep 18  2020 about.html
      drwxrwxr-x 2 sammy sammy 4096 Sep 18  2020 images
      -rw-rw-r-- 1 sammy sammy 2455 Sep 18  2020 index.html
      -rw-rw-r-- 1 sammy sammy 1079 Sep 18  2020 LICENSE
      -rw-rw-r-- 1 sammy sammy  675 Sep 18  2020 README.md

      Creating a Template for Nginx’s Configuration

      You’ll now set up the Nginx template that is necessary to configure the remote web server. Create a new folder called files within your ansible-nginx-demo directory to hold non-playbook files:
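
      • mkdir files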

      Then, open a new file called nginx.conf.j2:
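
      • nano files/nginx.conf.j2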

      This template file contains an Nginx server block configuration for a static HTML website. It uses three variables: document_root, app_root, and server_name. We’ll define these variables later on when creating the playbook. Copy the following content to your template file:

      ~/ansible-nginx-demo/files/nginx.conf.j2

      server {
        listen 80;
      
        root {{ document_root }}/{{ app_root }};
        index index.html index.htm;
      
        server_name {{ server_name }};
      
        location / {
         default_type "text/html";
         try_files $uri.html $uri $uri/ =404;
        }
      }
      

      Save and close the file when you’re done.

      Creating a New Ansible Playbook

      Next, we’ll create a new Ansible playbook and set up the variables that we’ve used in the previous section of this guide. Open a new file named playbook.yml:
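
      • nano playbook.yml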

      This playbook starts with the hosts definition set to all and a become directive that tells Ansible to run all tasks as the root user by default (the same as manually running commands with sudo). Within this playbook’s vars section, we’ll create three variables: server_name, document_root, and app_root. These variables are used in the Nginx configuration template to set up the domain name or IP address that this web server will respond to, and the full path to where the website files are located on the server. For this demo, we’ll use the ansible_default_ipv4.address fact variable because it contains the remote server’s public IP address, but you can replace this value with your server’s hostname if it has a domain name properly configured within a DNS service to point to this server:

      ~/ansible-nginx-demo/playbook.yml

      ---
      - hosts: all
        become: yes
        vars:
          server_name: "{{ ansible_default_ipv4.address }}"
          document_root: /var/www/html
          app_root: html_demo_site-main
        tasks:
      

      You can keep this file open for now. The next sections will walk you through all tasks that you’ll need to include in this playbook to make it fully functional.

      Installing Required Packages

      The following task will update the apt cache and then install the nginx package on remote nodes:

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Update apt cache and install Nginx
            apt:
              name: nginx
              state: latest
              update_cache: yes
      

      Uploading Website Files to Remote Nodes

      The next task uses Ansible’s built-in copy module to upload the website files to the remote document root. We’ll use the document_root variable to set the destination on the server where the application folder should be created.

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Copy website files to the server's document root
            copy:
              src: "{{ app_root }}"
              dest: "{{ document_root }}"
              mode: preserve
      

      Applying and Enabling the Custom Nginx Configuration

      We’ll now apply the Nginx template that will configure the web server to host your static HTML website. After the configuration file is placed in /etc/nginx/sites-available, we’ll create a symbolic link to that file inside /etc/nginx/sites-enabled and notify the Nginx service for a subsequent restart. The entire process will require two separate tasks:

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Apply Nginx template
            template:
              src: files/nginx.conf.j2
              dest: /etc/nginx/sites-available/default
            notify: Restart Nginx
      
          - name: Enable new site
            file:
              src: /etc/nginx/sites-available/default
              dest: /etc/nginx/sites-enabled/default
              state: link
            notify: Restart Nginx
      

      Allowing Port 80 on UFW

      Next, include the task that allows TCP access on port 80:

      ~/ansible-nginx-demo/playbook.yml

      . . .
          - name: Allow all access to tcp port 80
            ufw:
              rule: allow
              port: '80'
              proto: tcp
      . . .
      

      Creating a Handler for the Nginx Service

      To finish this playbook, the only thing left to do is to set up the Restart Nginx handler:

      ~/ansible-nginx-demo/playbook.yml

      . . .
        handlers:
          - name: Restart Nginx
            service:
              name: nginx
              state: restarted  
      

      Running the Finished Playbook

      Once you’re finished including all the required tasks in your playbook file, it will look like this:

      ~/ansible-nginx-demo/playbook.yml

      ---
      - hosts: all
        become: yes
        vars:
          server_name: "{{ ansible_default_ipv4.address }}"
          document_root: /var/www/html
          app_root: html_demo_site-main
        tasks:
          - name: Update apt cache and install Nginx
            apt:
              name: nginx
              state: latest
              update_cache: yes
      
          - name: Copy website files to the server's document root
            copy:
              src: "{{ app_root }}"
              dest: "{{ document_root }}"
              mode: preserve
      
          - name: Apply Nginx template
            template:
              src: files/nginx.conf.j2
              dest: /etc/nginx/sites-available/default
            notify: Restart Nginx
      
          - name: Enable new site
            file:
              src: /etc/nginx/sites-available/default
              dest: /etc/nginx/sites-enabled/default
              state: link
            notify: Restart Nginx
      
          - name: Allow all access to tcp port 80
            ufw:
              rule: allow
              port: '80'
              proto: tcp
      
        handlers:
          - name: Restart Nginx
            service:
              name: nginx
              state: restarted
      

      To execute this playbook on the server(s) that you set up in your inventory file, run ansible-playbook with the same connection arguments you used when running a connection test in the introduction of this series. Here, we’ll be using an inventory file named inventory and the sammy user to connect to the remote server. Because the playbook requires sudo to run, we also include the -K option, which makes Ansible prompt you for the remote user’s sudo password:

      • ansible-playbook -i inventory playbook.yml -u sammy -K

      You’ll see output like this:

      Output

      BECOME password:

      PLAY [all] **********************************************************************************************

      TASK [Gathering Facts] **********************************************************************************
      ok: [203.0.113.10]

      TASK [Update apt cache and install Nginx] ***************************************************************
      ok: [203.0.113.10]

      TASK [Copy website files to the server's document root] *************************************************
      changed: [203.0.113.10]

      TASK [Apply Nginx template] *****************************************************************************
      changed: [203.0.113.10]

      TASK [Enable new site] **********************************************************************************
      ok: [203.0.113.10]

      TASK [Allow all access to tcp port 80] ******************************************************************
      ok: [203.0.113.10]

      RUNNING HANDLER [Restart Nginx] *************************************************************************
      changed: [203.0.113.10]

      PLAY RECAP **********************************************************************************************
      203.0.113.10               : ok=7    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      Once the playbook is finished, if you go to your browser and access your server’s hostname or IP address, you should see the following page:

      HTML Demo Site Deployed by Ansible

      Congratulations, you have successfully automated the deployment of a static HTML website to a remote Nginx server, using Ansible.

      If you make changes to any of the files in the demo website, you can run the playbook again and the copy task will make sure any file changes are reflected on the remote host. Because Ansible behaves idempotently, running the playbook multiple times won’t re-apply changes that were already made to the system.
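      For instance, if you run ansible-playbook -i inventory playbook.yml -u sammy -K a second time without modifying any of the website files, the play recap should report changed=0.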




      How To Deploy a Gatsby Application to DigitalOcean App Platform


      The author selected /dev/color to receive a donation as part of the Write for DOnations program.

      Introduction

      In this tutorial, you will deploy a Gatsby application to DigitalOcean’s App Platform. App Platform is a Platform as a Service that builds, deploys, and manages apps automatically. When combined with the speed of a static site generator like Gatsby, this provides a scalable JAMStack solution that doesn’t require server-side programming.

      In this tutorial, you will create a sample Gatsby app on your local machine, push your code to GitHub, then deploy to App Platform.

      Prerequisites

      Step 1 — Creating a Gatsby Project

      In this section, you are going to create a sample Gatsby application, which you will later deploy to App Platform.

      First, clone the default Gatsby starter from GitHub. You can do that with the following command in your terminal:

      • git clone https://github.com/gatsbyjs/gatsby-starter-default

      The Gatsby starter site provides you with the boilerplate code you need to start coding your application. For more information on creating a Gatsby app, check out How To Set Up Your First Gatsby Website.

      When you are finished with cloning the repo, cd into the gatsby-starter-default directory:

      • cd gatsby-starter-default

      Then install the Node dependencies:
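
      • npm install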

      After you’ve downloaded the app and installed the dependencies, open the following file in a text editor:
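
      • nano gatsby-config.js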

      You have just opened Gatsby’s config file. Here you can change metadata about your site.

      Go to the title key and change Gatsby Default Starter to Save the Whales, as shown in the following highlighted line:

      gatsby-starter-default/gatsby-config.js

      module.exports = {
        siteMetadata: {
          title: `Save the Whales`,
          description: `Kick off your next, great Gatsby project with this default starter. This barebones starter ships with the main Gatsby configuration files you might need.`,
          author: `@gatsbyjs`,
        },
        plugins: [
          `gatsby-plugin-react-helmet`,
          {
            resolve: `gatsby-source-filesystem`,
            options: {
              name: `images`,
              path: `${__dirname}/src/images`,
            },
          },
          `gatsby-transformer-sharp`,
          `gatsby-plugin-sharp`,
          {
            resolve: `gatsby-plugin-manifest`,
            options: {
              name: `gatsby-starter-default`,
              short_name: `starter`,
              start_url: `/`,
              background_color: `#663399`,
              theme_color: `#663399`,
              display: `minimal-ui`,
              icon: `src/images/gatsby-icon.png`, // This path is relative to the root of the site.
            },
          },
          // this (optional) plugin enables Progressive Web App + Offline functionality
          // To learn more, visit: https://gatsby.dev/offline
          // `gatsby-plugin-offline`,
        ],
      }
      

      Close and save the file. Now open the index file in your favorite text editor:
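
      • nano src/pages/index.js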

      To continue with the “Save the Whales” theme, replace Hi people with Adopt a whale today, change Welcome to your new Gatsby site. to Whales are our friends., and delete the last <p> tag:

      gatsby-starter-default/src/pages/index.js

      import React from "react"
      import { Link } from "gatsby"
      import { StaticImage } from "gatsby-plugin-image"
      
      import Layout from "../components/layout"
      import SEO from "../components/seo"
      
      const IndexPage = () => (
        <Layout>
          <SEO title="Home" />
          <h1>Adopt a whale today</h1>
          <p>Whales are our friends.</p>
          <StaticImage
            src="https://www.digitalocean.com/community/tutorials/images/gatsby-astronaut.png"
            width={300}
            quality={95}
            formats={["AUTO", "WEBP", "AVIF"]}
            alt="A Gatsby astronaut"
            style={{ marginBottom: `1.45rem` }}
          />
          <Link to="/page-2/">Go to page 2</Link> <br />
          <Link to="/using-typescript/">Go to "Using TypeScript"</Link>
        </Layout>
      )
      
      export default IndexPage
      

      Save and close the file. You are going to swap out the Gatsby astronaut image with a GIF of a whale. Before you add the GIF, you will first need to create a GIF directory and download it.

      From the root of your project, create a gifs directory inside src:
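
      • mkdir src/gifs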

      Now navigate into your newly created gifs folder:
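
      • cd src/gifs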

      Download a whales GIF from Giphy:

      • wget https://media.giphy.com/media/lqdJsUDvJnHBgM82HB/giphy.gif

      Wget is a utility that allows you to download files from the internet. Giphy is a website that hosts GIFs.

      Next, change the name from giphy.gif to whales.gif:
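
      • mv giphy.gif whales.gif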

      After you have changed the name of the GIF, move back to the root folder of the project and open up the index file again:

      • cd ../..
      • nano src/pages/index.js

      Now you will add the GIF to your site’s homepage. Delete the StaticImage import and element, then replace with the following highlighted lines:

      gatsby-starter-default/src/pages/index.js

      import React from "react"
      import { Link } from "gatsby"
      
      import whaleGIF from "../gifs/whales.gif"
      import Layout from "../components/layout"
      import SEO from "../components/seo"
      
      const IndexPage = () => (
        <Layout>
          <SEO title="Home" />
          <h1>Adopt a whale today</h1>
          <p>Whales are our friends.</p>
          <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
              <img src={whaleGIF} alt="Picture of Whale from BBC America" />
          </div>
          <Link to="/page-2/">Go to page 2</Link> <br />
          <Link to="/using-typescript/">Go to "Using TypeScript"</Link>
        </Layout>
      )

      export default IndexPage
      

      Here you imported the whales GIF and included it in an <img> tag inside the <div> element. The alt attribute informs the reader where the GIF originated.

      Close and save the index file.

      Now you will run your site locally to make sure it works. From the root of your project, run the development server:
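
      • npm run develop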

      After your site has finished building, enter localhost:8000 in your browser’s address bar. You will find the following rendered in your browser:

      Front page of a Save the Whales website

      In this section, you created a sample Gatsby app. In the next section, you are going to push your code to GitHub so that it is accessible to App Platform.

      Step 2 — Pushing Your Code to GitHub

      In this section of the tutorial, you are going to commit your code to git and push it up to GitHub. From there, DigitalOcean’s App Platform will be able to access the code for your website.

      Go to the root of your project and create a new git repository:
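
      • git init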

      Next, add any modified files to git:
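
      • git add .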

      Finally, commit all of your changes to git with the following command:

      • git commit -m "Initial Commit"

      This will commit this version of your app to git version control. The -m flag takes a string argument and uses it as a message describing the commit.

      Note: If you have not set up git before on this machine, you may receive the following output:

      *** Please tell me who you are.
      
      Run
      
        git config --global user.email "you@example.com"
        git config --global user.name "Your Name"
      
      to set your account's default identity.
      Omit --global to set the identity only in this repository.
      

      Run the two git config commands to provide this information before moving on. If you would like to learn more about git, check out our How To Contribute to Open Source: Getting Started with Git tutorial.

      You will receive output like the following:

      Output

      [master 1e3317b] Initial Commit
       3 files changed, 7 insertions(+), 13 deletions(-)
       create mode 100644 src/gifs/whales.gif

      Once you have committed the file, go to GitHub and log in. After you log in, create a new repository called gatsby-digital-ocean-app-platform. You can make the repository either private or public:

      Creating a new github repo

      After you’ve created a new repo, go back to the command line and add the remote repo address:

      • git remote set-url origin https://github.com/your_name/gatsby-digital-ocean-app-platform

      Make sure to change your_name to your username on GitHub.

      Next, declare that you want to push to the main branch with the following:
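
      • git branch -M main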

      Finally, push your code to your newly created repo:
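
      • git push -u origin main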

      Once you enter your credentials, you will receive output similar to the following:

      Output

      Counting objects: 3466, done.
      Compressing objects: 100% (1501/1501), done.
      Writing objects: 100% (3466/3466), 28.22 MiB | 32.25 MiB/s, done.
      Total 3466 (delta 1939), reused 3445 (delta 1926)
      remote: Resolving deltas: 100% (1939/1939), done.
      To https://github.com/your_name/gatsby-digital-ocean-app-platform
       * [new branch]      main -> main
      Branch 'main' set up to track remote branch 'main' from 'origin'.

      You will now be able to access your code in your GitHub account.

      In this section you pushed your code to a remote GitHub repository. In the next section, you will deploy your Gatsby app from GitHub to App Platform.

      Step 3 — Deploying Your Gatsby App on DigitalOcean App Platform

      In this step, you are going to deploy your app onto DigitalOcean App Platform. If you haven’t done so already, create a DigitalOcean account.

      Open your DigitalOcean control panel, select the Create button at the top of the screen, then select Apps from the dropdown menu:

      Go to drop down menu and select Apps

      After you have selected Apps, you are going to retrieve your repository from GitHub. Click on the GitHub icon and give DigitalOcean permission to access your repositories. It is a best practice to only select the repository that you want deployed.

      Choose the repo you want deployed

      You’ll be redirected back to DigitalOcean. Go to the Repository field and select the project and branch you want to deploy, then click Next:

      Selecting your GitHub repository on the DigitalOcean website

      Note: Below Branch there is a pre-checked box that says Autodeploy code changes. This means if you push any changes to your GitHub repository, DigitalOcean will automatically deploy those changes.

      On the next page you’ll be asked to configure your app. In your case, all of the presets are correct, so you can click on Next:

      Configuring your app

      When you’ve finished configuring your app, give it a name like save-the-whales:

      Name your app

      Once you select your name and click Next, you will go to the payment plan page. Since your app is a static site, you can choose the Starter plan, which is free:

      Choose starter plan

      Now click the Launch Starter App button. After waiting a couple of minutes, your app will be deployed.

      Navigate to the URL listed beneath the title of your app. You will find your Gatsby app successfully deployed.

      Conclusion

      In this tutorial, you created a Gatsby site with GIFs and deployed the site onto DigitalOcean App Platform. DigitalOcean App Platform is a convenient way to deploy and share your Gatsby projects. If you would like to learn more about this product, check out the official documentation for App Platform.




      How To Deploy Multiple Environments in Your Terraform Project Without Duplicating Code


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Terraform offers advanced features that become increasingly useful as your project grows in size and complexity. It’s possible to alleviate the cost of maintaining complex infrastructure definitions for multiple environments by structuring your code to minimize repetitions and introducing tool-assisted workflows for easier testing and deployment.

      Terraform associates a state with a backend, which determines where and how state is stored and retrieved. Every state has only one backend and is tied to an infrastructure configuration. Certain backends, such as local or s3, may contain multiple states. In that case, each pairing of a state with the infrastructure configuration describes a workspace. Workspaces allow you to deploy multiple distinct instances of the same infrastructure configuration without storing them in separate backends.

      In this tutorial, you’ll first deploy multiple infrastructure instances using different workspaces. You’ll then deploy a stateful resource, which, in this tutorial, will be a DigitalOcean Volume. Finally, you’ll reference pre-made modules from the Terraform Registry, which you can use to supplement your own.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions for this in the How to Generate a Personal Access Token tutorial.
      • Terraform installed on your local machine and a project set up with the DO provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-advanced, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Deploying Multiple Infrastructure Instances Using Workspaces

      Multiple workspaces are useful when you want to deploy or test a modified version of your main infrastructure without creating a separate project and setting up authentication keys again. Once you have developed and tested a feature using the separate state, you can incorporate the new code into the main workspace and possibly delete the additional state. When you init a Terraform project, regardless of backend, Terraform creates a workspace called default. It is always present and you can never delete it.

      However, multiple workspaces are not a suitable solution for creating multiple environments, such as staging and production. This is because workspaces only track the state; they do not store the code or its modifications.

      Since workspaces do not track the actual code, you should manage the code separation between multiple workspaces at the version control (VCS) level by matching them to their infrastructure variants. How you can achieve this depends on the VCS tool itself; for example, in Git, branches would be a fitting abstraction. To make it easier to manage the code for multiple environments, you can break them up into reusable modules, so that you avoid repeating similar code for each environment.

      Deploying Resources in Workspaces

      You’ll now create a project that deploys a Droplet, which you’ll apply from multiple workspaces.

      You’ll store the Droplet definition in a file called droplets.tf.

      Assuming you’re in the terraform-advanced directory, create and open it for editing by running:
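
      • nano droplets.tf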

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This definition will create a Droplet running Ubuntu 18.04 with one CPU core and 1 GB RAM in the fra1 region. Its name will contain the name of the current workspace it is deployed from. When you’re done, save and close the file.

      Apply the project for Terraform to run its actions with:

      • terraform apply -var "do_token=${DO_PAT}"

      Your output will be similar to the following:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-default"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.

      ...

      Enter yes when prompted to deploy the Droplet in the default workspace.

      The name of the Droplet will be web-default, because the workspace you start with is called default. You can list the workspaces to confirm that it’s the only one available:
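
      • terraform workspace list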

      You’ll receive the following output:

      Output

      * default

      The asterisk (*) means that you currently have that workspace selected.

      Create and switch to a new workspace called testing, which you’ll use to deploy a different Droplet, by running workspace new:

      • terraform workspace new testing

      You’ll have output similar to:

      Output

      Created and switched to workspace "testing"!

      You're now on a new, empty workspace. Workspaces isolate their state,
      so if you run "terraform plan" Terraform will not see any existing state
      for this configuration.

      Plan the deployment of the Droplet again by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the previous run:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-testing"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.

      ...

      Notice that Terraform plans to deploy a Droplet called web-testing, which it has named differently from web-default. This is because the default and testing workspaces have separate states and have no knowledge of each other’s resources—even though they stem from the same code.

      To confirm that you’re in the testing workspace, output the current workspace with workspace show:
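
      • terraform workspace show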

      The output will be the name of the current workspace:

      Output

      testing

      To delete a workspace, you first need to destroy all its deployed resources. Then, if it’s active, you need to switch to another one using workspace select. Since the testing workspace here is empty, you can switch to default right away:

      • terraform workspace select default

      You’ll receive output from Terraform confirming the switch:

      Output

      Switched to workspace "default".

      You can then delete it by running workspace delete:

      • terraform workspace delete testing

      Terraform will then perform the deletion:

      Output

      Deleted workspace "testing"!

      You can destroy the Droplet you’ve deployed in the default workspace by running:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted to finish the process.

      In this section, you’ve worked in multiple Terraform workspaces. In the next section, you’ll deploy a stateful resource.

      Deploying Stateful Resources

      Stateless resources do not store data, so you can create and replace them quickly, because they are not unique. Stateful resources, on the other hand, contain data that is unique or not simply re-creatable; therefore, they require persistent data storage.

      Since you may end up destroying such resources, or multiple resources may require their data, it’s best to store the data in a separate entity, such as DigitalOcean Volumes.

      Volumes are objects that you can attach to Droplets (servers), but are separate from them, and provide additional storage space. In this step, you’ll define the Volume and connect it to a Droplet in droplets.tf.

      Open it for editing:
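
      • nano droplets.tf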

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      resource "digitalocean_volume" "volume" {
        region                  = "fra1"
        name                    = "new-volume"
        size                    = 10
        initial_filesystem_type = "ext4"
        description             = "New Volume for Droplet"
      }
      
      resource "digitalocean_volume_attachment" "volume_attachment" {
        droplet_id = digitalocean_droplet.web.id
        volume_id  = digitalocean_volume.volume.id
      }
      

      Here you define two new resources, the Volume itself and a Volume attachment. The Volume will be 10GB, formatted as ext4, called new-volume, and located in the same region as the Droplet. To connect the Volume to the Droplet, since they are separate entities, you define a Volume attachment object. volume_attachment takes the Droplet and Volume IDs and instructs the DigitalOcean cloud to make the Volume available to the Droplet as a disk device.

      When you’re done, save and close the file.

      Plan this configuration by running:

      • terraform plan -var "do_token=${DO_PAT}"

      Terraform will plan the following actions:

      Output

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + ipv6_address_private = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-default"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

        # digitalocean_volume.volume will be created
        + resource "digitalocean_volume" "volume" {
            + description             = "New Volume for Droplet"
            + droplet_ids             = (known after apply)
            + filesystem_label        = (known after apply)
            + filesystem_type         = (known after apply)
            + id                      = (known after apply)
            + initial_filesystem_type = "ext4"
            + name                    = "new-volume"
            + region                  = "fra1"
            + size                    = 10
            + urn                     = (known after apply)
          }

        # digitalocean_volume_attachment.volume_attachment will be created
        + resource "digitalocean_volume_attachment" "volume_attachment" {
            + droplet_id = (known after apply)
            + id         = (known after apply)
            + volume_id  = (known after apply)
          }

      Plan: 3 to add, 0 to change, 0 to destroy.

      ...

      The output details that Terraform would create a Droplet, a Volume, and a Volume attachment, which connects the Volume to the Droplet.

      You’ve now defined and connected a Volume (a stateful resource) to a Droplet. In the next section, you’ll review public, pre-made Terraform modules that you can incorporate in your project.

      Referencing Pre-made Modules

      Aside from creating your own custom modules for your projects, you can also use pre-made modules and providers from other developers, which are publicly available at Terraform Registry.

      In the modules section you can search the database of available modules and sort by provider in order to find the module with the functionality you need. Once you’ve found one, you can read its description, which lists the inputs and outputs the module provides, as well as its external module and provider dependencies.

      Terraform Registry - SSH key Module

      You’ll now add the DigitalOcean SSH key module to your project. You’ll store the code separate from existing definitions in a file called ssh-key.tf. Create and open it for editing by running:
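
      • nano ssh-key.tf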

      Add the following lines:

      ssh-key.tf

      module "ssh-key" {
        source         = "clouddrove/ssh-key/digitalocean"
        key_path       = "~/.ssh/id_rsa.pub"
        key_name       = "new-ssh-key"
        enable_ssh_key = true
      }
      

      This code defines an instance of the clouddrove/ssh-key/digitalocean module from the registry and sets some of the parameters it offers. It should add a public SSH key to your account by reading it from ~/.ssh/id_rsa.pub.

      When you’re done, save and close the file.

      Before you plan this code, you must download the referenced module by running:
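
      • terraform init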

      You’ll receive output similar to the following:

      Output

      Initializing modules...
      Downloading clouddrove/ssh-key/digitalocean 0.13.0 for ssh-key...
      - ssh-key in .terraform/modules/ssh-key

      Initializing the backend...

      Initializing provider plugins...
      - Using previously-installed digitalocean/digitalocean v1.22.2

      Terraform has been successfully initialized!
      ...

      You can now plan the code for the changes:

      • terraform plan -var "do_token=${DO_PAT}"

      You’ll receive output similar to this:

      Output

      Refreshing Terraform state in-memory prior to plan...
      The refreshed state will be used to calculate this plan, but will not be
      persisted to local or remote state storage.

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

      ...

        # module.ssh-key.digitalocean_ssh_key.default[0] will be created
        + resource "digitalocean_ssh_key" "default" {
            + fingerprint = (known after apply)
            + id          = (known after apply)
            + name        = "devops"
            + public_key  = "ssh-rsa ... demo@clouddrove"
          }

      Plan: 4 to add, 0 to change, 0 to destroy.

      ...

      The output shows that you would create the SSH key resource, which means that you downloaded and invoked the module from your code.

      Conclusion

      Bigger projects can make use of some advanced features Terraform offers to help reduce complexity and make maintenance easier. Workspaces allow you to test new additions to your code without touching the stable main deployments. You can also couple workspaces with a version control system to track code changes. Using pre-made modules can also shorten development time, but may incur additional expenses or time in the future if the module becomes obsolete.

      For further resources on using Terraform, check out our How To Manage Infrastructure With Terraform series.


