
      A Beginner's Guide to Terraform


      Written and updated by Linode

      Terraform by HashiCorp is an orchestration tool that allows you to represent your Linode instances and other resources with declarative code inside configuration files, instead of manually creating those resources via the Linode Manager or API. This practice is referred to as Infrastructure as Code, and Terraform is a popular example of this methodology. The basic workflow when using Terraform is:

      1. Write configuration files on your computer in which you declare the elements of your infrastructure that you want to create.

      2. Tell Terraform to analyze your configurations and then create the corresponding infrastructure.

      Terraform’s primary job is to create, modify, and destroy servers and other resources. Terraform generally does not configure your servers’ software. Configuring your software can be performed with scripts that you upload to and execute on your new servers, or via configuration management tools or container deployments.

      The Linode Provider

      Terraform is a general orchestration tool that can interface with a number of different cloud platforms. These integrations are referred to as providers. The Terraform provider for Linode was officially released in October 2018.

      Note

      The Linode provider can be used to create Linode instances, Images, domain records, Block Storage Volumes, StackScripts, and other resources. Terraform’s official Linode provider documentation details each resource that can be managed.

      Infrastructure as Code

      Terraform’s representation of your resources in configuration files is referred to as Infrastructure as Code (IaC). The benefits of this methodology and of using Terraform include:

      • Version control of your infrastructure. Because your resources are declared in code, you can track changes to that code over time in version control systems like Git.

      • Minimization of human error. Terraform’s analysis of your configuration files will produce the same results every time it creates your declared resources. As well, telling Terraform to repeatedly apply the same configuration will not result in extra resource creation, as Terraform tracks the changes it makes over time.

      • Better collaboration among team members. Terraform’s backends allow multiple team members to safely work on the same Terraform configuration simultaneously.

      HashiCorp Configuration Language

      Terraform’s configuration files can be written in either the HashiCorp Configuration Language (HCL), or in JSON. HCL is a configuration language authored by HashiCorp for use with its products, and it is designed to be human readable and machine friendly. It is recommended that you use HCL over JSON for your Terraform deployments.

      The next sections will illustrate core Terraform concepts with examples written in HCL. For a more complete review of HCL syntax, see Introduction to HashiCorp Configuration Language (HCL).

      Resources

      Here’s a simple example of a complete Terraform configuration in HCL:

      example.tf
      
      provider "linode" {
          token = "your-linode-api-token"
      }
      
      resource "linode_instance" "example_instance" {
          label = "example_instance_label"
          image = "linode/ubuntu18.04"
          region = "us-central"
          type = "g6-standard-1"
          authorized_keys = ["ssh-rsa AAAA...Gw== user@example.local"]
          root_pass = "your-root-password"
      }

      Note

      The SSH key in this example was truncated for brevity.

      This example Terraform file, with the Terraform file extension .tf, represents the creation of a single Linode instance labeled example_instance_label. This example file is prefixed with a mandatory provider block, which sets up the Linode provider and which you must list somewhere in your configuration.

      The provider block is followed by a resource declaration. Resource declarations correspond with the components of your Linode infrastructure: Linode instances, Block Storage Volumes, etc.

      Resources can accept arguments. region and type are required arguments for the linode_instance resource. A root password must be assigned to every Linode, but the root_pass Terraform argument is optional; if it is not specified, a random password will be generated.

      Note

      The example_instance string that follows the linode_instance resource type declaration is Terraform’s name for the resource. You cannot declare more than one Terraform resource with the same name and resource type.

      The label argument specifies the label for the Linode instance in the Linode Manager. This name is independent of Terraform’s name for the resource (though you can assign the same value to both). The Terraform name is only recorded in Terraform’s state and is not communicated to the Linode API. Labels for Linode instances in the same Linode account must be unique.

      Dependencies

      Terraform resources can depend on each other. When one resource depends on another, it will be created after the resource it depends on, even if it is listed before the other resource in your configuration file.

      The following snippet expands on the previous example. It declares a new domain with an A record that targets the Linode instance’s IP address:

      example.tf
      
      provider "linode" {
          # ...
      }
      
      resource "linode_instance" "example_instance" {
          # ...
      }
      
      resource "linode_domain" "example_domain" {
          domain = "example.com"
          soa_email = "example@example.com"
      }
      
      resource "linode_domain_record" "example_domain_record" {
          domain_id = "${linode_domain.example_domain.id}"
          name = "www"
          record_type = "A"
          target = "${linode_instance.example_instance.ip_address}"
      }

      The domain record’s domain_id and target arguments use HCL’s interpolation syntax to retrieve the ID of the domain resource and the IP of the Linode instance, respectively. Terraform creates an implicit dependency on the example_instance and example_domain resources for the example_domain_record resource. As a result, the domain record will not be created until after the Linode instance and the domain are created.
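      Implicit dependencies cover most situations, but Terraform also accepts an explicit depends_on argument for cases where one resource must wait on another without referencing any of its attributes. As a sketch, the domain record above could declare its dependency on the instance explicitly:

```
resource "linode_domain_record" "example_domain_record" {
    domain_id = "${linode_domain.example_domain.id}"
    name = "www"
    record_type = "A"
    target = "${linode_instance.example_instance.ip_address}"

    # Redundant here, because the target interpolation already
    # creates this dependency implicitly:
    depends_on = ["linode_instance.example_instance"]
}
```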


      Input Variables

      The previous example hard-coded sensitive data in your configuration, including your API token and root password. To avoid this practice, Terraform allows you to provide the values for your resource arguments in input variables. These variables are declared and referenced in your Terraform configuration (using interpolation syntax), and the values for those variables are assigned in a separate file.

      Input variables can also be used for non-sensitive data. The following example files will employ variables for the sensitive token and root_pass arguments and the non-sensitive authorized_keys and region arguments:

      example.tf
      
      provider "linode" {
          token = "${var.token}"
      }
      
      resource "linode_instance" "example_instance" {
          label = "example_instance_label"
          image = "linode/ubuntu18.04"
          region = "${var.region}"
          type = "g6-standard-1"
          authorized_keys = ["${var.ssh_key}"]
          root_pass = "${var.root_pass}"
      }
      
      variable "token" {}
      variable "root_pass" {}
      variable "ssh_key" {}
      variable "region" {
        default = "us-southeast"
      }
      terraform.tfvars
      
      token = "your-linode-api-token"
      root_pass = "your-root-password"
      ssh_key = "ssh-rsa AAAA...Gw== user@example.local"

      Note

      Place all of your Terraform project’s files in the same directory. Terraform will automatically load input variable values from any file named terraform.tfvars or ending in .auto.tfvars.

      The region variable is not assigned a specific value, so it will use the default value provided in the variable’s declaration. See Introduction to HashiCorp Configuration Language for more detailed information about input variables.
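      Variable values can also be supplied at the command line or through the environment, which is convenient for values you would rather not write to disk. A sketch of both approaches, reusing the variable names declared above:

```
# Override the region for this run only:
terraform plan -var 'region=us-east'

# Supply the API token via an environment variable; Terraform reads
# any environment variable named TF_VAR_<name>:
export TF_VAR_token="your-linode-api-token"
terraform plan
```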

      Terraform CLI

      You interact with Terraform via its command line interface. After you have created the configuration files in your Terraform project, you need to run the init command from the project’s directory:

      terraform init
      

      This command will download the Linode provider plugin and take other actions needed to initialize your project. It is safe to run this command more than once, but you generally will only need to run it again if you are adding another provider to your project.

      Plan and Apply

      After you have declared your resources in your configuration files, you create them by running Terraform’s apply command from your project’s directory. However, you should always verify that Terraform will create the resources as you expect them to be created before making any actual changes to your infrastructure. To do this, you can first run the plan command:

      terraform plan
      

      This command will generate a report detailing what actions Terraform will take to set up your Linode resources.

      If you are satisfied with this report, run apply:

      terraform apply
      

      This command will ask you to confirm that you want to proceed. When Terraform has finished applying your configuration, it will show a report of what actions were taken.

      State

      When Terraform analyzes and applies your configuration, it creates an internal representation of the infrastructure it created and uses it to track the changes made. This state information is recorded in JSON in a local file named terraform.tfstate by default, but it can also be stored in other backends.

      Caution

      Your sensitive infrastructure data (like passwords and tokens) is visible in plain-text in your terraform.tfstate file. Review Secrets Management with Terraform for guidance on how to secure these secrets.

      Other Commands

      Other useful commands are available, like terraform show, which reports a human-readable version of your Terraform state. A full list of Terraform commands is available in the official Terraform documentation.

      Provisioners

      In addition to resource declarations, Terraform configurations can include provisioners. You declare provisioners to run scripts and commands in your local development environment or on your Terraform-managed servers. These actions are performed when you apply your Terraform configuration.

      The following example uploads a setup script to a newly created Linode instance and then executes it. This pattern can be used to bootstrap the new instance or enroll it in configuration management:

      example.tf
      
      resource "linode_instance" "example_instance" {
        # ...
      
        provisioner "file" {
            source      = "setup_script.sh"
            destination = "/tmp/setup_script.sh"
        }
      
        provisioner "remote-exec" {
          inline = [
            "chmod +x /tmp/setup_script.sh",
            "/tmp/setup_script.sh",
          ]
        }
      }

      Most provisioners are declared inside of a resource declaration. When multiple provisioners are declared inside a resource, they are executed in the order they are listed. For a full list of provisioners, review the official Terraform documentation.

      Note

      Linode StackScripts can also be used to set up a new Linode instance. A distinction between using StackScripts and the file and remote-exec provisioners is that those provisioners will run and complete synchronously before Terraform continues to apply your plan, while a StackScript will run in parallel while Terraform creates the rest of your remaining resources. As a result, Terraform might complete its application before a StackScript has finished running.

      Modules

      Terraform allows you to organize your configurations into reusable structures called modules. This is useful if you need to create multiple instances of the same cluster of servers. Review Create a Terraform Module for more information on authoring and using modules.
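      As a sketch of what using a module looks like, the following block instantiates a hypothetical local module (the ./modules/webserver path and its region input are assumptions for illustration):

```
module "webserver" {
    source = "./modules/webserver"

    # Values for input variables declared by the module:
    region = "us-central"
}
```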

      Backends

      By default, Terraform maintains its state in your project’s directory. Terraform also supports storing your state in non-local backends. The benefits of using a non-local backend include:

      • Better collaboration with your team. Backends let you share the same state as other team members that have access to the backend.

      • Better security. The state information stored in and retrieved from backends is only kept in memory on your computer.

      • Remote operations. When working with a large infrastructure, terraform apply can take a long time to complete. Some backends allow you to run the apply remotely, instead of on your computer.

      The kinds of backends available are listed in Terraform’s official documentation.
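      For example, a project that keeps its state in Amazon S3 would include a backend block like the following (the bucket and key names are placeholders):

```
terraform {
    backend "s3" {
        bucket = "my-terraform-state"
        key    = "example-project/terraform.tfstate"
        region = "us-east-1"
    }
}
```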

      Importing

      It is possible to import Linode infrastructure that was created outside of Terraform into your Terraform plan. Review Import Existing Infrastructure to Terraform for instructions on this subject.
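      The terraform import command takes a resource address from your configuration and the ID of the existing resource. As a sketch (the instance ID here is hypothetical):

```
terraform import linode_instance.example_instance 1234567
```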

      Next Steps

      To get started with installing Terraform and creating your first projects, read through our Use Terraform to Provision Linode Environments guide.


      This guide is published under a CC BY-ND 4.0 license.




      A Beginner's Guide to Salt


      Written and updated by Linode


      Salt (also referred to as SaltStack) is a Python-based configuration management and orchestration system. Salt uses a master/client model in which a dedicated Salt master server manages one or more Salt minion servers. Two of Salt’s primary jobs are:

      • Remote execution of commands on your minion servers.

      • Configuration management of the software and services that run on your minions.

      This guide will introduce the core concepts that Salt employs to fulfill these jobs.

      Masters and Minions

      The Salt master is a server that acts as a command-and-control center for its minions, and it is where Salt’s remote execution commands are run from. For example, this command reports the current disk usage for each of the minions that the master controls:

      salt '*' disk.usage
      

      Many other commands are available. For example, this command installs NGINX on the minion named webserver1:

      salt 'webserver1' pkg.install nginx
      

      Salt minions are your servers that actually run your applications and services. Each minion has an ID assigned to it (which can be automatically generated from the minion’s hostname), and the Salt master can refer to this ID to target commands to specific minions.

      Note

      When using Salt, you should configure and manage your minion servers from the master as much as possible, instead of logging into them directly via SSH or another protocol.

      To enable all of these functions, the Salt master server runs a daemon named salt-master, and the Salt minion servers run a daemon named salt-minion.

      Authentication

      Communication between the master and minions is performed over the ZeroMQ transport protocol, and all communication is encrypted with public/private keypairs. A keypair is generated by a minion when Salt is first installed on it, after which the minion will send its public key to the master. You will need to accept the minion’s key from the master; communication can then proceed between the two.
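      Key management is performed on the master with the salt-key command. A typical exchange looks like this (the minion ID matches the examples in this guide):

```
# List all public keys known to the master, grouped by status
# (accepted, unaccepted, rejected):
salt-key -L

# Accept the pending key from a specific minion:
salt-key -a webserver1
```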

      Remote Execution

      Salt offers a very wide array of remote execution modules. An execution module is a collection of related functions that you can run on your minions from the master. For example:

      salt 'webserver1' npm.install gulp
      

      In this command npm is the module and install is the function. This command installs the Gulp Node.js package via the Node Package Manager (NPM). Other functions in the npm module handle uninstalling NPM packages, listing installed NPM packages, and related tasks.

      The execution modules that Salt makes available represent system administration tasks that you would otherwise perform in a shell, including but not limited to:

      • Installing and managing software packages

      • Starting, stopping, and restarting services

      • Managing files and directories

      • Gathering system information, such as disk usage

      cmd.run

      The cmd.run function is used to run arbitrary commands on your minions from the master:

      salt '*' cmd.run 'ls -l /etc'
      

      This command returns a long-format listing of the /etc directory from each minion.

      Note

      Where possible, it’s better to use execution modules than to “shell out” with cmd.run.

      States, Formulas, and the Top File

      The previous section described how to use remote execution to perform specific actions on a minion. With remote execution, you could administer a minion by entering a series of such commands.

      Salt offers another way to configure a minion in which you declare the state that a minion should be in. This kind of configuration is called a Salt state, and the methodology is referred to generally as configuration management.

      The distinction between the two styles is subtle; to illustrate, here’s how installing NGINX is interpreted in each methodology:

      • Remote execution: “run the package manager now and install NGINX.”

      • Configuration management: “NGINX should be installed on this minion; take whatever action is needed to make that true.”

      Salt states are defined in state files. Once you have recorded your states, you then apply them to a minion. Salt analyzes the state file and determines what it needs to do to make sure that the minion satisfies the state’s declarations.

      Note

      This sometimes results in the same command that would be run via remote execution, but sometimes it doesn’t. In the NGINX example, if Salt sees that NGINX was already installed previously, it won’t invoke the package manager again when the state is applied.

      Anatomy of a State

      Here’s an example state file which ensures that: rsync and curl are installed; NGINX is installed; and NGINX is run and enabled to run at boot:

      /srv/salt/webserver_setup.sls
      
      network_utilities:
        pkg.installed:
          - pkgs:
            - rsync
            - curl
      
      nginx_pkg:
        pkg.installed:
          - name: nginx
      
      nginx_service:
        service.running:
          - name: nginx
          - enable: True
          - require:
            - pkg: nginx_pkg

      State files end with the extension .sls (SaLt State). State files can have one or more state declarations, which are the top-level sections of the file (network_utilities, nginx_pkg, and nginx_service in the above example). State declaration IDs are arbitrary, so you can name them however you prefer.

      Note

      If you name a state declaration’s ID the same as the relevant package, then you do not need to specify the - name option, as it will be inferred from the ID. For example, this snippet also installs NGINX:

      nginx:
        pkg.installed

      The same name/ID inference convention is true for other Salt modules.

      State declarations contain state modules. State modules are distinct from execution modules but often perform similar jobs. For example, a pkg state module exists with functions analogous to the pkg execution module, as with the pkg.installed state function and the pkg.install execution function. As with execution modules, Salt provides a wide array of state modules for you to use.

      Note

      State declarations are not necessarily applied in the order they appear in a state file, but you can specify that a declaration depends on another one using the require option. This is the case in the above example; Salt will not attempt to run and enable NGINX until it is installed.

      State files are really just collections of dictionaries, lists, strings, and numbers that are then interpreted by Salt. By default, Salt uses the YAML syntax for representing states.
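      To make the “states are just data” idea concrete, here is the nginx_service declaration from the example above written out as the equivalent Python data structure, a sketch of what Salt works with after parsing the YAML:

```python
# After YAML parsing, a state declaration is nested dictionaries and
# lists: an ID maps to a state function, which maps to a list of
# single-key option dictionaries.
state = {
    "nginx_service": {
        "service.running": [
            {"name": "nginx"},
            {"enable": True},
            {"require": [{"pkg": "nginx_pkg"}]},
        ]
    }
}

# The lookup Salt performs when resolving the declaration's options:
options = state["nginx_service"]["service.running"]
```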

      State files are often kept on the Salt master’s filesystem, but they can also be stored in other fileserver locations, like a Git repository (for example, one hosted on GitHub).

      Applying a State to a Minion

      To apply a state to a minion, use the state.apply function from the master:

      salt 'webserver1' state.apply webserver_setup

      This command applies the example webserver_setup.sls state to a minion named webserver1. The .sls suffix is omitted when applying a state. All of the state declarations in the state file are applied.

      Salt Formulas

      Formulas are just collections of states that together configure an application or system component on a minion. Formulas are usually organized across several different .sls files. Splitting a formula’s states up across different files can make it easier to organize your work. State declarations can include and reference declarations across other files.

      Formulas that are sufficiently generic are often shared on GitHub to be used by others. The SaltStack organization maintains a collection of popular formulas. Salt’s documentation has a guide on using a formula hosted on GitHub.

      The definition of what constitutes a formula is somewhat loose, and the specific structure of a formula is not mandated by Salt.

      The Top File

      In addition to manually applying states to minions, Salt provides a way for you to automatically map which states should be applied to different minions. This map is called the top file.

      Here’s a simple top file:

      /srv/salt/top.sls
      
      base:
        '*':
          - universal_setup
      
        'webserver1':
          - webserver_setup

      base refers to the Salt environment. You can specify more than one environment corresponding to different phases of your work; for example: development, QA, production, etc. base is the default.

      Groups of minions are specified under the environment, and states are listed for each set of minions. The above example top file says that a universal_setup state should be applied to all minions ('*'), and the webserver_setup state should be applied to the webserver1 minion.

      If you run the state.apply function with no arguments, then Salt will inspect the top file and apply all states within it according to the mapping you’ve created:

      salt '*' state.apply
      

      Note

      This action is colloquially known as a highstate.

      Benefits of States and Configuration Management

      Defining your configurations in states eases system administration:

      • Setting up states minimizes human error, as you will not need to enter commands manually one-by-one.

      • Applying a state to a minion multiple times generally does not result in any changes beyond the first application. Salt understands when a state has already been implemented on a minion and will not perform unnecessary actions.

      • If you update a state file and apply it to a minion, Salt will detect and only apply the changes, which makes updating your systems more efficient.

      • A state can be reused and applied to more than one minion, which will result in identical configurations across different servers.

      • State files can be entered into a version control system, which helps you track changes to your systems over time.

      Targeting Minions

      You can match against your minions’ IDs using shell-style globbing. This works at either the command line or in the top file.

      These examples would apply the webserver_setup state to all minions whose IDs begin with webserver (e.g. webserver1, webserver2, etc.):

      salt 'webserver*' state.apply webserver_setup

      base:
        'webserver*':
          - webserver_setup

      Regular expressions and lists can also be used to match against minion IDs.

      Grains

      Salt’s grains system provides access to information that is generated by and stored on a minion. Examples include a minion’s operating system, domain name, IP address, and so on. You can also specify custom grain data on a minion, as outlined in Salt’s documentation.

      You can use grain data to target minions from the command line. This command installs httpd on all minions running CentOS:

      salt -G 'os:CentOS' pkg.install httpd
      

      You can also use grains in a top file:

      
      base:
        'os:CentOS':
          - match: grain
          - centos_setup

      Grain information generally isn’t very dynamic, but it can change occasionally, and Salt will refresh its grain data when it does. To view your minions’ grain data:

      salt '*' grains.items
      

      Storing Data and Secrets in Pillar

      Salt’s pillar feature takes data defined on the Salt master and distributes it to minions. A primary use for pillar is to store secrets, such as account credentials. Pillar is also a useful place to store non-secret data that you wouldn’t want to record directly in your state files.

      Note

      Let’s say that you want to create system users on a minion and assign different shells to each of them. If you were to code this information into a state file, you would need a new declaration for each user. If you store the data in pillar instead, you can then just create one state declaration and inject the pillar data into it using Salt’s Jinja templating feature.

      Note

      Salt Pillar is sometimes confused with Salt Grains, as they both keep data that is used in states and remote execution. The data that grains maintains originates from the minions, while the data in pillar originates on the master (or another backend) and is delivered to the minions.

      Anatomy of Pillar Data

      Pillar data is kept in .sls files which are written in the same YAML syntax as states:

      /srv/pillar/user_info.sls
      
      users:
        joe:
          shell: /bin/zsh
        amy:
          shell: /bin/bash
        sam:
          shell: /bin/fish

      As with state files, a top file (separate from your states’ top file) maps pillar data to minions:

      /srv/pillar/top.sls
      
      base:
        'webserver1':
          - user_info

      Jinja Templates

      To inject pillar data into your states, use Jinja’s template syntax. While Salt uses the YAML syntax for state and pillar files, the files are first interpreted as Jinja templates (by default).

      This example state file uses the pillar data from the previous section to create system users and set the shell for each:

      /srv/salt/user_setup.sls
      
      {% for user_name, user_info in pillar['users'].items() %}
      {{ user_name }}:
        user.present:
          - shell: {{ user_info['shell'] }}
      {% endfor %}

      Salt will compile the state file into something that looks like this before it is applied to the minion:

      
      joe:
        user.present:
          - shell: /bin/zsh
      
      amy:
        user.present:
          - shell: /bin/bash
      
      sam:
        user.present:
          - shell: /bin/fish

      You can also use Jinja to interact with grain data in your states. This example state will install Apache and adjust the name for the package according to the operating system:

      /srv/salt/webserver_setup.sls
      
      install_apache:
        pkg.installed:
          {% if grains['os'] == 'CentOS' %}
          - name: httpd
          {% else %}
          - name: apache2
          {% endif %}


      Beacons

      The beacon system is a way of monitoring a variety of system processes on Salt minions. There are a number of beacon modules available.

      Beacons can trigger reactors which can then help implement a change or troubleshoot an issue. For example, if a service’s response times out, the reactor system can restart the service.
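      As a sketch, beacons are enabled in a minion’s configuration file; this hypothetical entry watches the nginx service (the exact option schema varies by Salt version and beacon module):

```
beacons:
  service:
    - services:
        nginx:
          # Only emit an event when the service's status changes:
          onchangeonly: True
```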

      Getting Started with Salt

      Now that you’re familiar with some of Salt’s basic terminology and components, move on to our guide Getting Started with Salt – Basic Installation and Setup to set up a configuration to start running commands and provisioning minion servers.

      The SaltStack documentation also contains a page of best practices to be mindful of when working with Salt. You should review this page and implement those practices into your own workflow whenever possible.




