
      How To Use Terraform With Your Team


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      When multiple people are working on the same Terraform project from different locations simultaneously, it is important to handle the infrastructure code and project state correctly so that team members do not overwrite each other's work. The solution is to store the state remotely instead of locally: a remote backend is available to all members of your team, and it allows them to lock the state while they’re working.

      One such remote backend is pg, which stores the state in a PostgreSQL database. During the course of this tutorial, you’ll use it with a DigitalOcean Managed Database to ensure data availability.

      Terraform also supports the official, managed cloud offering by HashiCorp called Terraform Cloud—a proprietary app that syncs your team’s work in one place and offers a user interface for configuration and management.

      In this tutorial, you’ll create an organization in Terraform Cloud to which you’ll connect your project. You’ll then use your organization to set up workspaces and resources. You will store your state in the managed cloud so it is always available. You’ll also set up the pg backend with an accompanying managed PostgreSQL database.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions to create this in How to Generate a Personal Access Token.
      • Terraform installed on your local machine and a project set up with the DigitalOcean provider. Complete Step 1 of the How To Use Terraform with DigitalOcean tutorial.
      • If you would like to use a pg backend, you will need a Managed PostgreSQL database cluster created and accessible. For more information, visit the Quickstart guide. You can use a separate database for this tutorial.
      • If you would like to use HashiCorp’s managed cloud, you will need an account with Terraform Cloud. You can create one on their sign-up page.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Storing State in Terraform Cloud

      In this step, you’ll create a project that deploys a Droplet, but instead of storing the state locally, you’ll use Terraform Cloud as the backend with the remote provider. This entails creating the organization and workspace in Terraform Cloud, writing the infrastructure code, and planning it.

      Creating an Organization

      Terraform Cloud allows you to have multiple organizations, which house your workspaces and modules. Paid-plan organizations can have multiple teams with access-level control features, while the free plan you’ll use provides only one team per organization. You can invite team members to join the organization.

      Start off by heading over to Terraform Cloud and logging in. If you haven’t yet created an organization, it will prompt you to do so.

      Terraform Cloud - Create a new organization

      Enter an organization name of your choosing and remember that it must be unique among all names in Terraform Cloud. You’ll receive an error if the name already exists. The email address should already be filled in with the address of your account. Once you’re finished, click the Create organization button to continue.

      It will then ask you to select the type of workspace.

      Terraform Cloud - Choosing a workspace type

      Since you’ll interface with Terraform Cloud using the command line, click the CLI-driven workflow option. Then, input a name for your workspace.

      Terraform Cloud - Setting workspace name

      Type in a workspace name of your choosing (we’ll call it sammy), then click Create workspace to finalize the organization creation process. It will then direct you to a workspace settings page.

      Terraform Cloud - Workspace settings

      You’ve now created your workspace, which is a part of your organization. Since you just created it, your workspace contains no infrastructure code. In the central part of the interface, Terraform Cloud gives you starting instructions for connecting to this workspace.

      Before connecting to it, you’ll need to configure the version of Terraform that the cloud will use to execute your commands. To set it, click the Settings dropdown in the upper-right corner and select General from the list. When the page opens, navigate to the Terraform Version dropdown and select 0.13.1 (for this tutorial).

      Terraform Cloud - Setting Terraform Version

      Then, click the Save settings button to save the changes.

      To connect your project to your organization and workspace, you’ll first need to log in using the command line. Before you run the login command, navigate to the tokens page to create a new API token, which will provide access to your account.

      Terraform Cloud - Create API token

      The default description is fine, so click Create API token to create it.

      Terraform Cloud - Created API token

      Click the token value, or the icon after it, to copy the API token. You’ll use this token to connect your project to your Terraform Cloud account.

      In the command line, run the following command to log in:

      • terraform login

      You’ll receive the following output:

      Output

      Terraform will request an API token for app.terraform.io using your browser.

      If login is successful, Terraform will store the token in plain text in
      the following file for use by subsequent commands:
          /home/sammy/.terraform.d/credentials.tfrc.json

      Do you want to proceed?
        Only 'yes' will be accepted to confirm.
      ...

      Terraform is warning you that the token will be stored locally. Enter yes when it prompts you:

      Output

      ---------------------------------------------------------------------------------
      Open the following URL to access the tokens page for app.terraform.io:
          https://app.terraform.io/app/settings/tokens?source=terraform-login
      ---------------------------------------------------------------------------------

      Generate a token using your browser, and copy-paste it into this prompt.

      Terraform will store the token in plain text in the following file
      for use by subsequent commands:
          /home/sammy/.terraform.d/credentials.tfrc.json

      Token for app.terraform.io:
        Enter a value:

      Paste in the token you’ve copied and confirm with ENTER. Terraform will show a success message:

      Output

      Retrieved token for user your_username

      ---------------------------------------------------------------------------------
      Success! Terraform has obtained and saved an API token.

      The new API token will be used for any future Terraform command that must make
      authenticated requests to app.terraform.io.

      You’ve configured your local Terraform installation to access your Terraform Cloud account. You’ll now create a project that deploys a Droplet and configure it to use Terraform Cloud for storing its state.

      Setting Up the Project

      First, create a directory named terraform-team-remote where you’ll store the project:

      • mkdir ~/terraform-team-remote

      Navigate to it:

      • cd ~/terraform-team-remote

      To set up your project, you’ll need to:

      • define and configure the remote provider, which interfaces with Terraform Cloud.
      • require the digitalocean provider to be able to deploy DigitalOcean resources.
      • define and initialize variables that you’ll use.

      You’ll store the provider and module requirements specifications in a file named provider.tf. Create and open it for editing by running:

      • nano provider.tf

      Add the following lines:

      ~/terraform-team-remote/provider.tf

      terraform {
        required_version = "0.13.1"
      
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = ">1.22.2"
          }
        }
      
        backend "remote" {
          hostname = "app.terraform.io"
          organization = "your_organization_name"
      
          workspaces {
            name = "your_workspace_name"
          }
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      Here, you first specify your Terraform version. Then, you specify the digitalocean provider as required and set the backend to remote.

      Its hostname is set to app.terraform.io, which is the address of Terraform Cloud. For organization and workspaces.name, replace the placeholder values with the names of the organization and workspace you created earlier.

      Next, you define a variable called do_token, which you pass to the digitalocean provider created after it. You’ve now configured your project to connect to your organization, so save and close the file.

      Initialize your project with the following command:

      • terraform init

      The output will be similar to this:

      Output

      Initializing the backend...

      Successfully configured the backend "remote"! Terraform will automatically
      use this backend unless the backend configuration changes.

      Initializing provider plugins...
      - Finding digitalocean/digitalocean versions matching "> 1.22.2"...
      - Installing digitalocean/digitalocean v2.3.0...
      - Installed digitalocean/digitalocean v2.3.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

      Partner and community providers are signed by their developers.
      If you'd like to know more about provider signing, you can read about it here:
      https://www.terraform.io/docs/plugins/signing.html

      Terraform has been successfully initialized!
      ...

      Next, define the Droplet in a file called droplets.tf. Create and open it for editing by running:

      • nano droplets.tf

      Add the following lines:

      ~/terraform-team-remote/droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-1"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This code will deploy a Droplet called web-1 in the fra1 region, running Ubuntu 18.04 with 1 GB of RAM and one CPU core. That is all you need to define, so save and close the file.

      What’s left to define are the variable values. The remote backend does not support passing values to variables through the command line, so you’ll have to pass them in using variable files or set them in Terraform Cloud. Terraform automatically reads variable values from files whose names end in .auto.tfvars. Create and open a file called vars.auto.tfvars for editing, in which you’ll define the do_token variable:

      • nano vars.auto.tfvars

      Add the following line, replacing your_do_token with your DigitalOcean API token:

      vars.auto.tfvars

      do_token = "your_do_token"
      

      When you’re done, save and close the file. Terraform will automatically read this file when planning actions.
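      If you would rather not type the token into a file by hand, one option is to generate the file from an environment variable. This is a sketch, assuming the DO_PAT environment variable holds your DigitalOcean token (your_do_token is a placeholder):

      ```shell
      # Hypothetical helper: write vars.auto.tfvars from an environment variable
      # so the token never has to be typed directly into a tracked file.
      export DO_PAT="your_do_token"
      printf 'do_token = "%s"\n' "${DO_PAT}" > vars.auto.tfvars
      cat vars.auto.tfvars
      ```

      Either way, consider adding vars.auto.tfvars to your .gitignore so the token stays out of version control.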

      Your project is now complete and set up to use Terraform Cloud as its backend. You’ll now plan and apply the Droplet and review how that reflects in the Cloud app.

      Applying the Configuration

      Since you haven’t yet planned or applied your project, the workspace in Terraform Cloud is currently empty. Apply the project by running the following command:

      • terraform apply

      You’ll notice that the output is different from when you use local as your backend:

      Output

      Running apply in the remote backend. Output will stream here. Pressing Ctrl-C
      will cancel the remote apply if it's still pending. If the apply started it
      will stop streaming the logs, but will not stop the apply running remotely.

      Preparing the remote apply...

      To view this run in a browser, visit:
      https://app.terraform.io/app/sammy-shark/sammy/runs/run-QnAh2HDwx6zWbNV1

      Waiting for the plan to start...

      Terraform v0.13.1

      Configuring remote state backend...
      Initializing Terraform configuration...
      Refreshing Terraform state in-memory prior to plan...
      The refreshed state will be used to calculate this plan, but will not be
      persisted to local or remote state storage.

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-1"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.
      ...

      When using the remote backend, Terraform is not planning or applying configuration from the local machine. Instead, it delegates those tasks to the cloud, and only streams the output to the console in real time.

      Enter yes when prompted. Terraform will soon finish applying the configuration, and you can navigate to the workspace on the Terraform Cloud website to find that it has applied a new action.

      Terraform Cloud - New Run Applied

      You can now destroy the deployed resources by running the following:

      • terraform destroy

      In this section, you’ve connected your project to Terraform Cloud. You’ll now use another backend, pg, which stores the state in a PostgreSQL database.

      Storing State in a Managed PostgreSQL Database

      In this section, you’ll set up a project that deploys a Droplet, much like in the previous step. This time, however, you’ll store the state in a DigitalOcean Managed PostgreSQL database using the pg backend. This backend supports state locking, so the state can never be overwritten by two or more changes happening at the same time.

      Start by creating a directory named terraform-team-pg in which you’ll store the project:

      • mkdir ~/terraform-team-pg

      Navigate to it:

      • cd ~/terraform-team-pg

      Like the previous section, you’ll first require the digitalocean provider and then pass in the connection string for the database. Create and open provider.tf for editing:

      • nano provider.tf

      Add the following lines:

      ~/terraform-team-pg/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = ">1.22.2"
          }
        }
      
        backend "pg" {
          conn_str = "your_db_connection_string"
        }
      }
      
      variable "do_token" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      Here you require the digitalocean provider and define the pg backend, which accepts a connection string. Then, you define the do_token variable and pass it to the instance of the digitalocean provider.

      Remember to replace your_db_connection_string with the connection string for your managed database from your DigitalOcean Control Panel. Then save and close the file.
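      The connection string follows the standard PostgreSQL URI format. As a sketch, with placeholder values standing in for the details shown in your cluster’s Connection Details pane (the user, password, host, port, and database name here are all hypothetical), it is assembled like this:

      ```shell
      # Placeholder connection details; copy the real values from your
      # cluster's Connection Details pane in the DigitalOcean Control Panel.
      DB_USER="doadmin"
      DB_PASS="your_password"
      DB_HOST="your-cluster-do-user-000000-0.b.db.ondigitalocean.com"
      DB_PORT="25060"
      DB_NAME="defaultdb"

      # The pg backend expects a URI of the form postgres://user:pass@host:port/dbname.
      # DigitalOcean Managed Databases require SSL, hence sslmode=require.
      CONN_STR="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=require"
      echo "${CONN_STR}"
      ```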

      Warning: To continue, make sure that the IP address of the machine from which you’re running Terraform is on the allowlist in the Settings section of your database.

      Initialize the project by running:

      • terraform init

      The output will be similar to the following:

      Output

      Initializing the backend...

      Successfully configured the backend "pg"! Terraform will automatically
      use this backend unless the backend configuration changes.

      Error: No existing workspaces.

      Use the "terraform workspace" command to create and select a new workspace.
      If the backend already contains existing workspaces, you may need to update
      the backend configuration.

      Terraform successfully initialized the backend, meaning it connected to the database. However, it complains about a missing workspace, since it does not create one during initialization. To resolve this, create a workspace named default and switch to it by running:

      • terraform workspace new default

      The output will be the following:

      Output

      Created and switched to workspace "default"!

      You're now on a new, empty workspace. Workspaces isolate their state,
      so if you run "terraform plan" Terraform will not see any existing state
      for this configuration.

      To finish the initialization process, run terraform init again:

      • terraform init

      You’ll receive output showing it has successfully completed:

      Output

      Initializing the backend...

      Initializing provider plugins...
      - Finding digitalocean/digitalocean versions matching "> 1.22.2"...
      - Installing digitalocean/digitalocean v2.3.0...
      - Installed digitalocean/digitalocean v2.3.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

      Partner and community providers are signed by their developers.
      If you'd like to know more about provider signing, you can read about it here:
      https://www.terraform.io/docs/plugins/signing.html

      Terraform has been successfully initialized!

      Since the Droplet definition is the same as in the previous project, you can copy it over by running:

      • cp ../terraform-team-remote/droplets.tf .

      You’ll need your DigitalOcean token in an environment variable. Create one, replacing your_do_token with your token:

      • export DO_PAT="your_do_token"

      To check that the connection to the database is working, try planning the configuration:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the following:

      Output

      Refreshing Terraform state in-memory prior to plan...
      The refreshed state will be used to calculate this plan, but will not be
      persisted to local or remote state storage.

      ------------------------------------------------------------------------

      An execution plan has been generated and is shown below.
      Resource actions are indicated with the following symbols:
        + create

      Terraform will perform the following actions:

        # digitalocean_droplet.web will be created
        + resource "digitalocean_droplet" "web" {
            + backups              = false
            + created_at           = (known after apply)
            + disk                 = (known after apply)
            + id                   = (known after apply)
            + image                = "ubuntu-18-04-x64"
            + ipv4_address         = (known after apply)
            + ipv4_address_private = (known after apply)
            + ipv6                 = false
            + ipv6_address         = (known after apply)
            + locked               = (known after apply)
            + memory               = (known after apply)
            + monitoring           = false
            + name                 = "web-1"
            + price_hourly         = (known after apply)
            + price_monthly        = (known after apply)
            + private_networking   = (known after apply)
            + region               = "fra1"
            + resize_disk          = true
            + size                 = "s-1vcpu-1gb"
            + status               = (known after apply)
            + urn                  = (known after apply)
            + vcpus                = (known after apply)
            + volume_ids           = (known after apply)
            + vpc_uuid             = (known after apply)
          }

      Plan: 1 to add, 0 to change, 0 to destroy.
      ...

      Terraform reported no errors and planned out the actions as usual. It successfully connected to your PostgreSQL database and stored its state. Multiple people can now work on this simultaneously with the project remaining synchronized.

      Conclusion

      In this tutorial, you’ve used two different backends: Terraform Cloud, which is HashiCorp’s managed cloud offering for Terraform; and pg, which allows you to store the project’s state in a PostgreSQL database. You used a managed PostgreSQL database from DigitalOcean, which you can provision and use with Terraform within minutes.

      For more information about the features of Terraform Cloud, visit the official docs.

      To learn more about using Terraform, check out our series on How To Manage Infrastructure with Terraform.




      IT Professionals Day 2020: 5 Reasons to Thank Your IT Team


      Today is IT Professionals Day. This annual day of recognition was first celebrated in 2015 as a way to thank the people who keep the wheels of IT turning. And while we advocate thanking your IT team regularly, this year in particular our IT teams have taken on a lion’s share of work in helping many companies go remote, protecting us from ongoing cyberthreats and accelerating digital transformation initiatives.

      Preparing networks for remote work on a moment’s notice and ensuring applications remain resilient and performant is no easy task, but thanks to IT pros’ skills and agility, businesses have been able to continue on, many without missing a beat.

      In addition to recognizing their essential work throughout the COVID-19 pandemic, we’ve assembled five reasons to thank your IT team.

      1. IT infrastructure managers go above and beyond around the clock.

      On average, IT pros are interrupted 6.24 times a month during their personal hours with issues related to server and/or cloud infrastructure.

      2. IT professionals are on the front lines of enterprise cybersecurity.

      IT professionals are presented with 26.08 server and cloud-related alerts daily on average, including monitoring triggers, patches, updates, vulnerabilities and other issues.

      3. Our IT teams are always there for us when things go wrong.

      54% of IT pros say that they are only contacted by their coworkers when things go wrong, and 58% worry that senior leaders only view their function as “keeping the lights on.” But we too often fail to see all they do to help companies grow.

      4. IT teams are key in helping our companies grow and innovate.

      IT pros are hungry to move beyond routine activities and do the work that makes a greater impact for all of us. 47% want to spend more time on designing and implementing new solutions, and 59% of tech pros are frustrated by time spent on monitoring and OS maintenance. A whopping 83% say their departments should be viewed as centers for innovation.

      5. IT pros continually educate themselves to keep pace with this ever-evolving field.

      As such, 53.7% of IT pros say they feel like they’ve held 2-5 different roles since starting their careers in IT. In order to meet shifting demands, 37% of pros participate in training to learn new skills monthly, while 11% do so weekly.

      Data derived from INAP’s annual State of IT Infrastructure Management report. The survey was conducted among 500 IT senior leaders and infrastructure managers in the United States and Canada.

      Drop your IT team a note to let them know you appreciate their hard work and dedication. Or make a quick video to say, “thanks.”

      In the video below, INAP leadership thank our 100-plus essential data center workers. The terrific performance from these IT pros has enabled our thousands of global customers to successfully operate their businesses through a period of immense challenge.

      Laura Vietmeyer






      Cut the Bikeshedding! Use MVPs to Keep Your Team Aligned and Moving Faster, Without Writing a Line of Code


      This tech talk will be streaming live on Tues, Jun 16, 2020 12:00 PM – 1:00 PM ET.
      RSVP for free on GotoWebinar here

      About the Talk

      Case studies of how the DigitalOcean team has used “no-code” MVPs to accelerate delivery timelines and reduce costs. The techniques shared will work just as well for a one-person company as for a 1,000-person company.

      What you’ll learn

      How to use scrappy, “no-code” MVPs to learn and test hypotheses, while keeping your team aligned and excited.

      This talk is designed for

      • Product development teams
      • Product managers
      • Startup founders

      About the Presenters

      Antonio Rosales (@a_webtone) and John Gannon (@johnmgannon) have worked together on the DigitalOcean Marketplace as an Engineering Manager/Product Manager tandem, from inception through its scaleup to more than 150 open source and commercial apps.

      How to Join

      This tech talk is free and open to everyone. Join the live event on Tue, Jun 16, 2020 12:00 PM – 1:00 PM ET by registering on GotoWebinar here. Antonio and John will be answering questions at the end.

      If you can’t make the live event, the recording and transcript will be published here as soon as it’s available.




