      How To Gather Infrastructure Metrics with Metricbeat on Ubuntu 18.04


      The author selected the Computer History Museum to receive a donation as part of the Write for DOnations program.

      Introduction

Metricbeat is a lightweight data shipper and one of several Beats that send various types of server data to an Elastic Stack server. Once installed on your servers, it periodically collects system-wide and per-process CPU and memory statistics and sends the data directly to your Elasticsearch deployment. Metricbeat replaced the earlier Topbeat in version 5.0 of the Elastic Stack.

      Other Beats currently available from Elastic are:

      • Filebeat: collects and ships log files.
      • Packetbeat: collects and analyzes network data.
      • Winlogbeat: collects Windows event logs.
      • Auditbeat: collects Linux audit framework data and monitors file integrity.
      • Heartbeat: monitors services for their availability with active probing.

      In this tutorial, you will use Metricbeat to forward local system metrics like CPU/memory/disk usage and network utilization from an Ubuntu 18.04 server to another server of the same kind with the Elastic Stack installed. With this shipper, you will gather the basic metrics that you need to get the current state of your server.

      Prerequisites

To follow this tutorial, you will need:

• One Ubuntu 18.04 server with the Elastic Stack installed, set up by following the tutorial How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04. This guide refers to it as the Elastic Stack server.
• A second Ubuntu 18.04 server with a non-root sudo user, from which you will collect metrics. This guide refers to it as the second Ubuntu server.

Note: When installing the Elastic Stack, you must use the same version across the entire stack. In this tutorial, you will use the latest versions of the entire stack, which are, at the time of this writing, Elasticsearch 6.6.2, Kibana 6.6.2, Logstash 6.6.2, and Metricbeat 6.6.2.

      Step 1 — Configuring Elasticsearch to Listen for Traffic on an External IP

      The tutorial How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04 restricted Elasticsearch access to the localhost only. In practice, this is rare, since you will often need to monitor many hosts. In this step, you will configure the Elastic Stack components to interact with the external IP address.

      Log in to your Elastic Stack server as your non-root user:

      • ssh sammy@Elastic_Stack_server_ip

      Use your preferred text editor to edit Elasticsearch’s main configuration file, elasticsearch.yml. This tutorial will use nano:

      • sudo nano /etc/elasticsearch/elasticsearch.yml

      Find the following section and modify it so that Elasticsearch listens on all interfaces:

      /etc/elasticsearch/elasticsearch.yml

      . . .
      network.host: 0.0.0.0
      . . .
      

      The address 0.0.0.0 is assigned specific meanings in a number of contexts. In this case, 0.0.0.0 means “any IPv4 address at all.”

      Save and close elasticsearch.yml by pressing CTRL+X, followed by Y and then ENTER if you’re using nano. Then, restart the Elasticsearch service with systemctl to apply new settings:

      • sudo systemctl restart elasticsearch

      Now, allow access to the Elasticsearch port from your second Ubuntu server. You will use ufw for this:

      • sudo ufw allow from second_ubuntu_server_ip/32 to any port 9200

      Repeat this command for each of your servers if you have more than two. If your servers are on the same network, you can allow access using one rule for all hosts on the network. To do this, you need to replace the prefix /32 with a lower value, for example /24. You can find more examples of UFW setups in the UFW Essentials: Common Firewall Rules and Commands tutorial.
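
For example, if your servers all live on the hypothetical 203.0.113.0/24 network, a single rule would cover all of them:

• sudo ufw allow from 203.0.113.0/24 to any port 9200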

      Next, test the connection. Log in to your second Ubuntu server as your non-root user:

      • ssh sammy@second_ubuntu_server_ip

      Use the telnet command to test the connection to the Elastic Stack server. This command enables communication with another host using the Telnet protocol and can check the availability of a port on a remote system.

      • telnet Elastic_Stack_server_ip 9200

      You’ll receive the following output:

      Output

Trying Elastic_Stack_server_ip...
Connected to Elastic_Stack_server_ip.
Escape character is '^]'.

Close the Telnet connection by pressing CTRL+] to reach the Telnet prompt, then type quit and press ENTER (or press CTRL+d) to exit the utility.
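
If telnet is not available, curl offers a similar check; Elasticsearch answers requests on this port with a small JSON document describing the node:

• curl http://Elastic_Stack_server_ip:9200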

      Now you are ready to send metrics to your Elastic Stack server.

      Step 2 — Installing and Configuring Metricbeat on the Elastic Stack Server

      In the next two steps, you will first install Metricbeat on the Elastic Stack server and import all the needed data, then install and configure the client on the second Ubuntu server.

      Log into your Elastic Stack server as your non-root user:

      • ssh sammy@Elastic_Stack_server_ip

      Since you previously set up the Elasticsearch repositories in the prerequisite, you only need to install Metricbeat:

      • sudo apt install metricbeat

Once Metricbeat is finished installing, load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Each index is identified by a name, which Elasticsearch uses to refer to the index when performing various operations. Your Elasticsearch server will automatically apply the index template when you create a new index.

      To load the template, use the following command:

      • sudo metricbeat setup --template -E 'output.elasticsearch.hosts=["localhost:9200"]'

      You will see the following output:

      Output

      Loaded index template
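
If you'd like to confirm that the template is now stored in Elasticsearch, you can query the _template API (the wildcard matches any Metricbeat version):

• curl -XGET 'http://localhost:9200/_template/metricbeat-*?pretty'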

      Metricbeat comes packaged with example Kibana dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

To load the dashboards, use the following command:

      • sudo metricbeat setup -e -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

      You will see output that looks like this:

      Output

. . .
2019-02-15T09:51:32.096Z INFO instance/beat.go:281 Setup Beat: metricbeat; Version: 6.6.2
2019-02-15T09:51:32.136Z INFO add_cloud_metadata/add_cloud_metadata.go:323 add_cloud_metadata: hosting provider type detected as digitalocean, metadata={"instance_id":"133130541","provider":"digitalocean","region":"fra1"}
2019-02-15T09:51:32.137Z INFO elasticsearch/client.go:165 Elasticsearch url: http://localhost:9200
2019-02-15T09:51:32.137Z INFO [publisher] pipeline/module.go:110 Beat name: elastic
2019-02-15T09:51:32.138Z INFO elasticsearch/client.go:165 Elasticsearch url: http://localhost:9200
2019-02-15T09:51:32.140Z INFO elasticsearch/client.go:721 Connected to Elasticsearch version 6.6.2
2019-02-15T09:51:32.148Z INFO template/load.go:130 Template already exists and will not be overwritten.
2019-02-15T09:51:32.148Z INFO instance/beat.go:894 Template successfully loaded.
Loaded index template
Loading dashboards (Kibana must be running and reachable)
2019-02-15T09:51:32.149Z INFO elasticsearch/client.go:165 Elasticsearch url: http://localhost:9200
2019-02-15T09:51:32.150Z INFO elasticsearch/client.go:721 Connected to Elasticsearch version 6.6.2
2019-02-15T09:51:32.151Z INFO kibana/client.go:118 Kibana url: http://localhost:5601
2019-02-15T09:51:56.209Z INFO instance/beat.go:741 Kibana dashboards successfully loaded.
Loaded dashboards

      Now you can start and enable Metricbeat:

      • sudo systemctl start metricbeat
      • sudo systemctl enable metricbeat

      Metricbeat will begin shipping your system stats into Elasticsearch.

      To verify that Elasticsearch is indeed receiving this data, query the Metricbeat index with this command:

      • curl -XGET 'http://localhost:9200/metricbeat-*/_search?pretty'

      You will see an output that looks similar to this:

      Output

      ... { "took" : 3, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 108, "max_score" : 1.0, "hits" : [ { "_index" : "metricbeat-6.6.2-2019.02.15", "_type" : "doc", "_id" : "A4mU8GgBKrpxEYMLjJZt", "_score" : 1.0, "_source" : { "@timestamp" : "2019-02-15T09:54:52.481Z", "metricset" : { "name" : "network", "module" : "system", "rtt" : 125 }, "event" : { "dataset" : "system.network", "duration" : 125260 }, "system" : { "network" : { "in" : { "packets" : 59728, "errors" : 0, "dropped" : 0, "bytes" : 736491211 }, "out" : { "dropped" : 0, "packets" : 31630, "bytes" : 8283069, "errors" : 0 }, "name" : "eth0" } }, "beat" : { "version" : "6.6.2", "name" : "elastic", "hostname" : "elastic" }, ...

      The line "total" : 108, indicates that Metricbeat has found 108 search results for this specific metric. If your output shows 0 total hits, you will need to review your setup for errors. If you received the expected output, continue to the next step, in which you will install Metricbeat on the second Ubuntu server.

      Step 3 — Installing and Configuring Metricbeat on the Second Ubuntu Server

      Perform this step on all Ubuntu servers from which you want to send metrics to your Elastic Stack server.

      Log into your second Ubuntu server as your non-root user:

      • ssh sammy@second_ubuntu_server_ip

      The Elastic Stack components are not available in Ubuntu’s default package repositories. However, you can install them with APT after adding Elastic’s package source list.

      All of the Elastic Stack’s packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Your package manager will trust packages that have been authenticated using the key. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Metricbeat.

      To begin, run the following command to import the Elasticsearch public GPG key into APT:

      • wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

      Next, add the Elastic source list to the sources.list.d directory, where APT will look for new sources:

      • echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

Next, update your package lists so APT will read the new Elastic source:

• sudo apt update

Then install Metricbeat with this command:

      • sudo apt install metricbeat

      Once Metricbeat is finished installing, configure it to connect to Elasticsearch. Open its configuration file, metricbeat.yml:

      • sudo nano /etc/metricbeat/metricbeat.yml

      Note: Metricbeat's configuration file is in YAML format, which means that indentation is very important! Be sure that you do not add any extra spaces as you edit this file.

      Metricbeat supports numerous outputs, but you’ll usually only send events directly to Elasticsearch or to Logstash for additional processing. Find the following section and update the IP address:

      /etc/metricbeat/metricbeat.yml

      #-------------------------- Elasticsearch output ------------------------------
      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["Elastic_Stack_server_ip:9200"]
      
      ...
      

      Save and close the file.
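
If you later decide to route events through Logstash for additional processing instead, the change is a sketch like the following: comment out the Elasticsearch output and enable the Logstash output (this assumes a Logstash Beats input listening on its default port, 5044):

/etc/metricbeat/metricbeat.yml

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["Elastic_Stack_server_ip:9200"]

output.logstash:
  hosts: ["Elastic_Stack_server_ip:5044"]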

      You can extend the functionality of Metricbeat with modules. In this tutorial, you will use the system module, which allows you to monitor your server's stats like CPU/memory/disk usage and network utilization.

      In this case, the system module is enabled by default. You can see a list of enabled and disabled modules by running:

      • sudo metricbeat modules list

      You will see a list similar to the following:

      Output

Enabled:
system

Disabled:
aerospike
apache
ceph
couchbase
docker
dropwizard
elasticsearch
envoyproxy
etcd
golang
graphite
haproxy
http
jolokia
kafka
kibana
kubernetes
kvm
logstash
memcached
mongodb
munin
mysql
nginx
php_fpm
postgresql
prometheus
rabbitmq
redis
traefik
uwsgi
vsphere
windows
zookeeper

You can see the parameters of the module in the /etc/metricbeat/modules.d/system.yml configuration file. For this tutorial, you do not need to change anything in the configuration. The default metricsets are cpu, load, memory, network, process, and process_summary. Each module has one or more metricsets; a metricset is the part of the module that fetches and structures the data. Rather than collecting each metric as a separate event, metricsets retrieve a list of multiple related metrics in a single request to the remote system.
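
For reference, the default system module configuration looks similar to the following sketch (your file may differ slightly between versions):

/etc/metricbeat/modules.d/system.yml

- module: system
  period: 10s
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
    - process_summary
. . .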

      Now you can start and enable Metricbeat:

      • sudo systemctl start metricbeat
      • sudo systemctl enable metricbeat

      You need to repeat this step on all servers where you want to collect metrics. After that, you can proceed to the next step in which you will see how to navigate through some of Kibana's dashboards.

      Step 4 — Exploring Kibana Dashboards

      In this step, you will take a look at Kibana, the web interface that you installed in the Prerequisites section.

      In a web browser, go to the FQDN or public IP address of your Elastic Stack server. After entering the login credentials you defined in Step 2 of the Elastic Stack tutorial, you will see the Kibana homepage:

      Kibana Homepage

Click the Discover link in the left-hand navigation bar. On the Discover page, select the predefined metricbeat-* index pattern to see Metricbeat data. By default, this will show you all of the log data over the last 15 minutes. You will find a histogram and some metric details:

      Discover page

      Here, you can search and browse through your metrics and also customize your dashboard. At this point, though, there won't be much in there because you are only gathering system stats from your servers.

Use the left-hand panel to navigate to the Dashboard page and search for the Metricbeat System dashboard, one of the sample dashboards that ships with Metricbeat's system module.

      For example, you can view brief information about all your hosts:

Metricbeat System Overview dashboard

      You can also click on the host name and view the detailed information:

Host detail dashboard

      Kibana has many other features, such as graphing and filtering, so feel free to explore.

      Conclusion

      In this tutorial, you've installed Metricbeat and configured the Elastic Stack to collect and analyze system metrics. Metricbeat comes with internal modules that collect metrics from services like Apache, Nginx, Docker, MySQL, PostgreSQL, and more. Now you can collect and analyze the metrics of your applications by simply turning on the modules you need.
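
For example, enabling another module takes a single command; the sketch below assumes an Nginx server whose status you want to collect, followed by a restart to pick up the change:

• sudo metricbeat modules enable nginx
• sudo systemctl restart metricbeat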

      If you want to understand more about server monitoring, check out An Introduction to Metrics, Monitoring, and Alerting and Putting Monitoring and Alerting into Practice.




      Import Existing Infrastructure to Terraform



Terraform is an orchestration tool that uses declarative code to build, change, and version infrastructure made up of server instances and services. You can use Linode's official Terraform provider to interact with Linode services and to bring existing Linode infrastructure under Terraform management. This guide describes how to import existing Linode infrastructure into Terraform using the official Linode provider plugin.

      Before You Begin

      1. Terraform and the Linode Terraform provider should be installed in your development environment. You should also have a basic understanding of Terraform resources. To install and learn about Terraform, read our Use Terraform to Provision Linode Environments guide.

      2. To use Terraform you must have a valid API access token. For more information on creating a Linode API access token, visit our Getting Started with the Linode API guide.

      3. This guide uses the Linode CLI to retrieve information about the Linode infrastructure you will import to Terraform. For more information on the setup, installation, and usage of the Linode CLI, check out the Using the Linode CLI guide.

      Terraform’s Import Command

      Throughout this guide the terraform import command will be used to import Linode resources. At the time of writing this guide, the import command does not generate a Terraform resource configuration. Instead, it imports your existing resources into Terraform’s state.

      State is Terraform’s stored JSON mapping of your current Linode resources to their configurations. You can access and use the information provided by the state to manually create a corresponding resource configuration file and manage your existing Linode infrastructure with Terraform.
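
For example, once you have imported a resource, you can list everything Terraform currently tracks in state with the state list subcommand; for the hypothetical Linode imported later in this guide, it prints that resource's address:

    terraform state list

    linode_instance.example_label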

Additionally, there is currently no way to import more than one resource at a time, so all resources must be imported individually.

      Caution

      When importing your infrastructure to Terraform, failure to accurately provide your Linode service’s ID information can result in the unwanted alteration or destruction of the service. Please follow the instructions provided in this guide carefully. It might be beneficial to use multiple Terraform Workspaces to manage separate testing and production infrastructures.

      Import a Linode to Terraform

      Retrieve Your Linode’s ID

      1. Using the Linode CLI, retrieve a list of all your Linode instances and find the ID of the Linode you would like to manage under Terraform:

        linode-cli linodes list --json --pretty
        
          
        [
          {
            "id": 11426126,
            "image": "linode/debian9",
            "ipv4": [
            "192.0.2.2"
            ],
            "label": "terraform-import",
            "region": "us-east",
            "status": "running",
            "type": "g6-standard-1"
          }
        ]
        
        

        This command will return a list of your existing Linodes in JSON format. From the list, find the Linode you would like to import and copy down its corresponding id. In this example, the Linode’s ID is 11426126. You will use your Linode’s ID to import your Linode to Terraform.

      Create An Empty Resource Configuration

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the Linode instance you will import in the next section. Your file can be named anything you like, but it must end in .tf. Add a Linode provider block with your API access token and an empty linode_instance resource configuration block in the file:

        Note

        The example resource block defines example_label as the label. This can be changed to any value you prefer. This label is used to reference your Linode resource configuration within Terraform, and does not have to be the same label originally assigned to the Linode when it was created outside of Terraform.

        linode_import.tf
        
        provider "linode" {
            token = "your_API_access_token"
        }
        
        resource "linode_instance" "example_label" {}

      Import Your Linode to Terraform

1. Run the import command, supplying the linode_instance resource's label and the Linode's ID that was retrieved in the Retrieve Your Linode's ID section:

        terraform import linode_instance.example_label linodeID
        

        You should see a similar output:

          
        linode_instance.example_label: Importing from ID "11426126"...
        linode_instance.example_label: Import complete!
          Imported linode_instance (ID: 11426126)
        linode_instance.example_label: Refreshing state... (ID: 11426126)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        

        This command will create a terraform.tfstate file with information about your Linode. You will use this information to fill out your resource configuration.

      2. To view the information created by terraform import, run the show command. This command will display a list of key-value pairs representing information about the imported Linode instance.

        terraform show
        

        You should see an output similar to the following:

          
        linode_instance.example_label:
          id = 11426126
          alerts.# = 1
          alerts.0.cpu = 90
          alerts.0.io = 10000
          alerts.0.network_in = 10
          alerts.0.network_out = 10
          alerts.0.transfer_quota = 80
          backups.# = 1
          boot_config_label = My Debian 9 Disk Profile
          config.# = 1
          config.0.comments =
          config.0.devices.# = 1
          config.0.devices.0.sda.# = 1
          config.0.devices.0.sda.0.disk_id = 24170011
          config.0.devices.0.sda.0.disk_label = Debian 9 Disk
          config.0.devices.0.sda.0.volume_id = 0
          config.0.devices.0.sdb.# = 1
          config.0.devices.0.sdb.0.disk_id = 24170012
          config.0.devices.0.sdb.0.disk_label = 512 MB Swap Image
          config.0.devices.0.sdb.0.volume_id = 0
          config.0.devices.0.sdc.# = 0
          config.0.devices.0.sdd.# = 0
          config.0.devices.0.sde.# = 0
          config.0.devices.0.sdf.# = 0
          config.0.devices.0.sdg.# = 0
          config.0.devices.0.sdh.# = 0
          config.0.helpers.# = 1
          config.0.helpers.0.devtmpfs_automount = true
          config.0.helpers.0.distro = true
          config.0.helpers.0.modules_dep = true
          config.0.helpers.0.network = true
          config.0.helpers.0.updatedb_disabled = true
          config.0.kernel = linode/grub2
          config.0.label = My Debian 9 Disk Profile
          config.0.memory_limit = 0
          config.0.root_device = /dev/root
          config.0.run_level = default
          config.0.virt_mode = paravirt
          disk.# = 2
          disk.0.authorized_keys.# = 0
          disk.0.filesystem = ext4
          disk.0.id = 24170011
          disk.0.image =
          disk.0.label = Debian 9 Disk
          disk.0.read_only = false
          disk.0.root_pass =
          disk.0.size = 50688
          disk.0.stackscript_data.% = 0
          disk.0.stackscript_id = 0
          disk.1.authorized_keys.# = 0
          disk.1.filesystem = swap
          disk.1.id = 24170012
          disk.1.image =
          disk.1.label = 512 MB Swap Image
          disk.1.read_only = false
          disk.1.root_pass =
          disk.1.size = 512
          disk.1.stackscript_data.% = 0
          disk.1.stackscript_id = 0
          group = Terraform
          ip_address = 192.0.2.2
          ipv4.# = 1
          ipv4.1835604989 = 192.0.2.2
          ipv6 = 2600:3c03::f03c:91ff:fef6:3ebe/64
          label = terraform-import
          private_ip = false
          region = us-east
          specs.# = 1
          specs.0.disk = 51200
          specs.0.memory = 2048
          specs.0.transfer = 2000
          specs.0.vcpus = 1
          status = running
          swap_size = 512
          type = g6-standard-1
          watchdog_enabled = true
        
        

        You will use this information in the next section.

        Note

        There is a current bug in the Linode Terraform provider that causes the Linode’s root_device configuration to display an import value of /dev/root, instead of /dev/sda. This is visible in the example output above: config.0.root_device = /dev/root. However, the correct disk, /dev/sda, is in fact targeted. For this reason, when running the terraform plan or the terraform apply commands, the output will display config.0.root_device: "/dev/root" => "/dev/sda".

        You can follow the corresponding GitHub issue for more details.

      Fill In Your Linode’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for the linode_instance resource block. In the example below, the necessary values were collected from the output of the terraform show command applied in Step 2 of the Import Your Linode to Terraform section. The file’s comments indicate the corresponding keys used to determine the values for the linode_instance configuration block.

        linode_instance_import.tf
        
        provider "linode" {
            token = "a12b3c4e..."
        }
        
        resource "linode_instance" "example_label" {
            label = "terraform-import" #label
            region = "us-east"         #region
            type = "g6-standard-1"     #type
            config {
                label = "My Debian 9 Disk Profile"     #config.0.label
                kernel = "linode/grub2"                #config.0.kernel
                root_device = "/dev/sda"               #config.0.root_device
                devices {
                    sda = {
                        disk_label = "Debian 9 Disk"    #config.0.devices.0.sda.0.disk_label
                    }
                    sdb = {
                        disk_label = "512 MB Swap Image" #config.0.devices.0.sdb.0.disk_label
                    }
                }
            }
            disk {
                label = "Debian 9 Disk"      #disk.0.label
                size = "50688"               #disk.0.size
            }
            disk {
                label = "512 MB Swap Image"  #disk.1.label
                size = "512"                 #disk.1.size
            }
        }
            

        Note

        If your Linode uses more than two disks (for instance, if you have attached a Block Storage Volume), you will need to add those disks to your Linode resource configuration block. In order to add a disk, you must add the disk to the devices stanza and create an additional disk stanza.
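
  For example, a hypothetical third disk might be sketched as follows (the device name, label, and size are illustrative):

    devices {
        sda = { ... }
        sdb = { ... }
        sdc = {
            disk_label = "Extra Data Disk"   #hypothetical third disk
        }
    }
    ...
    disk {
        label = "Extra Data Disk"   #hypothetical third disk
        size = "1024"
    }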

        Note

        If you have more than one configuration profile, you must choose which profile to boot from with the boot_config_label argument. For example:

        resource "linode_instance" "example_label" {
            boot_config_label = "My Debian 9 Disk Profile"
        ...
        
      2. To check for errors in your configuration, run the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with a terraform apply. Running terraform plan is a good way to determine if the configuration you provided is exact enough for Terraform to take over the management of your Linode.

        Note

        Running terraform plan will display any changes that will be applied to your existing infrastructure based on your configuration file(s). However, you will not be notified about the addition and removal of disks with terraform plan. For this reason, it is vital that the values you include in your linode_instance resource configuration block match the values generated from running the terraform show command.

3. Once you have verified the configurations you provided in the linode_instance resource block, you are ready to begin managing your Linode instance with Terraform. Any changes or updates can be made by updating your linode_instance_import.tf file, then verifying the changes with the terraform plan command, and then finally applying the changes with the terraform apply command.

        For more available configuration options, visit the Linode Instance Terraform documentation.

      Import a Domain to Terraform

      Retrieve Your Domain’s ID

      1. Using the Linode CLI, retrieve a list of all your domains to find the ID of the domain you would like to manage under Terraform:

        linode-cli domains list --json --pretty
        

        You should see output like the following:

          
        [
          {
            "domain": "import-example.com",
            "id": 1157521,
            "soa_email": "[email protected]",
            "status": "active",
            "type": "master"
          }
        ]
        
        

        Find the domain you would like to import and copy down the ID. You will need this ID to import your domain to Terraform.

      Create an Empty Resource Configuration

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the domain you will import in the next section. Your file can be named anything you like, but must end in .tf. Add a Linode provider block with your API access token and an empty linode_domain resource configuration block to the file:

        domain_import.tf
        
        provider "linode" {
            token = "Your API Token"
        }
        
        resource "linode_domain" "example_label" {}

      Import Your Domain to Terraform

      1. Run the import command, supplying the linode_domain resource’s label, and the domain ID that was retrieved in the Retrieve Your Domain’s ID section:

        terraform import linode_domain.example_label domainID
        

        You should see output similar to the following:

          
        linode_domain.example_label: Importing from ID "1157521"...
        linode_domain.example_label: Import complete!
          Imported linode_domain (ID: 1157521)
        linode_domain.example_label: Refreshing state... (ID: 1157521)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        

        This command will create a terraform.tfstate file with information about your domain. You will use this information to fill out your resource configuration.

      2. To view the information created by terraform import, run the show command. This command will display a list of key-value pairs representing information about the imported domain:

        terraform show
        

        You should see output like the following:

          
        linode_domain.example_label:
          id = 1157521
          description =
          domain = import-example.com
          expire_sec = 0
          group =
          master_ips.# = 0
          refresh_sec = 0
          retry_sec = 0
          soa_email = [email protected]
          status = active
          ttl_sec = 0
          type = master
        
        

      Fill In Your Domain’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for the linode_domain resource block. The necessary values for the example resource configuration file were collected from the output of the terraform show command applied in Step 2 of the Import Your Domain to Terraform section.

        linode_domain_example.tf
        
        provider "linode" {
            token = "1a2b3c..."
        }
        
        resource "linode_domain" "example_label" {
            domain = "import-example.com"
            soa_email = "[email protected]"
            type = "master"
        }
            

        Note

If your Domain type is slave, then you'll need to include a master_ips argument set to the IP addresses of the master DNS servers for your domain.
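
  For example, a hypothetical slave configuration might look like the following sketch (the master IP address is illustrative):

    resource "linode_domain" "example_label" {
        domain = "import-example.com"
        type = "slave"
        master_ips = ["192.0.2.1"]    #hypothetical master DNS IP
    }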

      2. Check for errors in your configuration by running the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with the terraform apply command. Running terraform plan should result in Terraform displaying that no changes are to be made.

3. Once you have verified the configurations you provided in the linode_domain block, you are ready to begin managing your domain with Terraform. Any changes or updates can be made by updating your linode_domain_example.tf file, then verifying the changes with the terraform plan command, and then finally applying the changes with the terraform apply command.

        For more available configuration options, visit the Linode Domain Terraform documentation.

      Import a Block Storage Volume to Terraform

      Retrieve Your Block Storage Volume’s ID

      1. Using the Linode CLI, retrieve a list of all your volumes to find the ID of the Block Storage Volume you would like to manage under Terraform:

        linode-cli volumes list --json --pretty
        

        You should see output similar to the following:

          
        [
          {
            "id": 17045,
            "label": "import-example",
            "linode_id": 11426126,
            "region": "us-east",
            "size": 20,
            "status": "active"
          }
        ]
        
        

        Find the Block Storage Volume you would like to import and copy down the ID. You will use this ID to import your volume to Terraform.

      Create an Empty Resource Configuration

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the Block Storage Volume you will import in the next section. Your file can be named anything you like, but must end in .tf. Add a Linode provider block with your API access token and an empty linode_volume resource configuration block to the file:

        linode_volume_example.tf
        
        provider "linode" {
            token = "Your API Token"
        }
        
        resource "linode_volume" "example_label" {}

      Import Your Volume to Terraform

      1. Run the import command, supplying the linode_volume resource’s label, and the volume ID that was retrieved in the Retrieve Your Block Storage Volume’s ID section:

        terraform import linode_volume.example_label volumeID
        

        You should see output similar to the following:

          
        linode_volume.example_label: Importing from ID "17045"...
        linode_volume.example_label: Import complete!
          Imported linode_volume (ID: 17045)
        linode_volume.example_label: Refreshing state... (ID: 17045)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        

        This command will create a terraform.tfstate file with information about your Volume. You will use this information to fill out your resource configuration.

      2. To view the information created by terraform import, run the show command. This command will display a list of key-value pairs representing information about the imported Volume:

        terraform show
        

        You should see output like the following:

          
        linode_volume.example_label:
          id = 17045
          filesystem_path = /dev/disk/by-id/scsi-0Linode_Volume_import-example
          label = import-example
          linode_id = 11426126
          region = us-east
          size = "20"
          status = active
        
        

      Fill In Your Volume’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for the linode_volume resource block. The necessary values for the example resource configuration file were collected from the output of the terraform show command applied in Step 2 of the Import Your Volume to Terraform section:

        linode_volume_example.tf
        
        provider "linode" {
            token = "1a2b3c..."
        }
        
        resource "linode_volume" "example_label" {
            label = "import-example"
            region = "us-east"
            size = "20"
        }
            

        Note

        Though it is not required, it’s a good idea to include a configuration for the size of the volume so that it can be managed more easily should you ever choose to expand the Volume. It is not possible to reduce the size of a volume.
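
  For example, to expand the hypothetical 20 GiB Volume above, you could raise size in the resource block and re-run terraform apply (a sketch; review the planned change with terraform plan first):

    resource "linode_volume" "example_label" {
        label = "import-example"
        region = "us-east"
        size = "40"    #expanded from 20; a Volume cannot be reduced
    }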

      2. Check for errors in your configuration by running the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with the terraform apply command. Running terraform plan should result in Terraform displaying that no changes are to be made.

3. Once you have verified the configurations you provided in the linode_volume block, you are ready to begin managing your Block Storage Volume with Terraform. Any changes or updates can be made by updating your linode_volume_example.tf file, then verifying the changes with the terraform plan command, and then finally applying the changes with the terraform apply command.

        For more optional configuration options, visit the Linode Volume Terraform documentation.

      Import a NodeBalancer to Terraform

      Configuring Linode NodeBalancers with Terraform requires three separate resource configuration blocks: one to create the NodeBalancer, a second for the NodeBalancer Configuration, and a third for the NodeBalancer Nodes.

      Retrieve Your NodeBalancer, NodeBalancer Config, NodeBalancer Node IDs

      1. Using the Linode CLI, retrieve a list of all your NodeBalancers to find the ID of the NodeBalancer you would like to manage under Terraform:

        linode-cli nodebalancers list --json --pretty
        

        You should see output similar to the following:

          
        [
          {
            "client_conn_throttle": 0,
            "hostname": "nb-192-0-2-3.newark.nodebalancer.linode.com",
            "id": 40721,
            "ipv4": "192.0.2.3",
            "ipv6": "2600:3c03:1::68ed:945f",
            "label": "terraform-example",
            "region": "us-east"
          }
        ]
        
        

        Find the NodeBalancer you would like to import and copy down the ID. You will use this ID to import your NodeBalancer to Terraform.

      2. Retrieve your NodeBalancer configuration by supplying the ID of the NodeBalancer you retrieved in the previous step:

        linode-cli nodebalancers configs-list 40721 --json --pretty
        

        You should see output similar to the following:

          
        [
          {
            "algorithm": "roundrobin",
            "check_passive": true,
            "cipher_suite": "recommended",
            "id": 35876,
            "port": 80,
            "protocol": "http",
            "ssl_commonname": "",
            "ssl_fingerprint": "",
            "stickiness": "table"
          }
        ]
        
        

Copy down the ID of your NodeBalancer configuration; you will use it to import your NodeBalancer configuration to Terraform.

      3. Retrieve a list of Nodes corresponding to your NodeBalancer to find the label and address of your NodeBalancer Nodes. Supply the ID of your NodeBalancer as the first argument and the ID of your NodeBalancer configuration as the second:

        linode-cli nodebalancers nodes-list 40721 35876 --json --pretty
        

        You should see output like the following:

          
        [
          {
            "address": "192.168.214.37:80",
            "id": 327539,
            "label": "terraform-import",
            "mode": "accept",
            "status": "UP",
            "weight": 100
          }
        ]
        
        

        If you are importing a NodeBalancer, chances are your output lists more than one Node. Copy down the IDs of each Node. You will use them to import your Nodes to Terraform.

      Create Empty Resource Configurations

      1. Ensure you are in your Terraform project directory. Create a Terraform configuration file to manage the NodeBalancer you will import in the next section. Your file can be named anything you like, but must end in .tf.

        Add a Linode provider block with your API access token and empty linode_nodebalancer, linode_nodebalancer_config, and linode_nodebalancer_node resource configuration blocks to the file. Be sure to give the resources appropriate labels. These labels will be used to reference the resources locally within Terraform:

        linode_nodebalancer_example.tf
        
        provider "linode" {
            token = "Your API Token"
        }
        
        resource "linode_nodebalancer" "example_nodebalancer_label" {}
        
        resource "linode_nodebalancer_config" "example_nodebalancer_config_label" {}
        
        resource "linode_nodebalancer_node" "example_nodebalancer_node_label" {}

        If you have more than one NodeBalancer Configuration, you will need to supply multiple linode_nodebalancer_config resource blocks with different labels. The same is true for each NodeBalancer Node requiring an additional linode_nodebalancer_node block.
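
  For example, a NodeBalancer with hypothetical HTTP and HTTPS configurations, each with one Node, would start from empty blocks like these:

    resource "linode_nodebalancer_config" "http_config_label" {}
    resource "linode_nodebalancer_config" "https_config_label" {}

    resource "linode_nodebalancer_node" "http_node_label" {}
    resource "linode_nodebalancer_node" "https_node_label" {}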

      Import Your NodeBalancer, NodeBalancer Configuration, and NodeBalancer Nodes to Terraform

1. Run the import command for your NodeBalancer, supplying your local label and the ID of your NodeBalancer as the last parameter:

        terraform import linode_nodebalancer.example_nodebalancer_label nodebalancerID
        

        You should see output similar to the following:

          
        linode_nodebalancer.example_nodebalancer_label: Importing from ID "40721"...
        linode_nodebalancer.example_nodebalancer_label: Import complete!
          Imported linode_nodebalancer (ID: 40721)
        linode_nodebalancer.example_nodebalancer_label: Refreshing state... (ID: 40721)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        
2. Run the import command for your NodeBalancer configuration, supplying your local label and, as the last argument, the ID of your NodeBalancer and the ID of your NodeBalancer configuration separated by a comma:

        terraform import linode_nodebalancer_config.example_nodebalancer_config_label nodebalancerID,nodebalancerconfigID
        

        You should see output similar to the following:

          
        linode_nodebalancer_config.example_nodebalancer_config_label: Importing from ID "40721,35876"...
        linode_nodebalancer_config.example_nodebalancer_config_label: Import complete!
          Imported linode_nodebalancer_config (ID: 35876)
        linode_nodebalancer_config.example_nodebalancer_config_label: Refreshing state... (ID: 35876)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        
3. Run the import command for your NodeBalancer Nodes, supplying your local label and, as the last argument, the ID of your NodeBalancer, the ID of your NodeBalancer configuration, and the ID of your NodeBalancer Node, separated by commas:

        terraform import linode_nodebalancer_node.example_nodebalancer_node_label nodebalancerID,nodebalancerconfigID,nodebalancernodeID
        

        You should see output like the following:

          
        linode_nodebalancer_node.example_nodebalancer_node_label: Importing from ID "40721,35876,327539"...
        linode_nodebalancer_node.example_nodebalancer_node_label: Import complete!
          Imported linode_nodebalancer_node (ID: 327539)
        linode_nodebalancer_node.example_nodebalancer_node_label: Refreshing state... (ID: 327539)
        
        Import successful!
        
        The resources that were imported are shown above. These resources are now in
        your Terraform state and will henceforth be managed by Terraform.
        
        
      4. Running terraform import creates a terraform.tfstate file with information about your NodeBalancer. You will use this information to fill out your resource configuration. To view the information created by terraform import, run the show command:

        terraform show
        

        You should see output like the following:

          
        linode_nodebalancer.example_nodebalancer_label:
          id = 40721
          client_conn_throttle = 0
          created = 2018-11-16T20:21:03Z
          hostname = nb-192-0-2-3.newark.nodebalancer.linode.com
          ipv4 = 192.0.2.3
          ipv6 = 2600:3c03:1::68ed:945f
          label = terraform-example
          region = us-east
          transfer.% = 3
          transfer.in = 0.013627052307128906
          transfer.out = 0.0015048980712890625
          transfer.total = 0.015131950378417969
          updated = 2018-11-16T20:21:03Z
        
        linode_nodebalancer_config.example_nodebalancer_config_label:
          id = 35876
          algorithm = roundrobin
          check = none
          check_attempts = 2
          check_body =
          check_interval = 5
          check_passive = true
          check_path =
          check_timeout = 3
          cipher_suite = recommended
          node_status.% = 2
          node_status.down = 0
          node_status.up = 1
          nodebalancer_id = 40721
          port = 80
          protocol = http
          ssl_commonname =
          ssl_fingerprint =
          ssl_key =
          stickiness = table
        
        linode_nodebalancer_node.example_nodebalancer_node_label:
          id = 327539
          address = 192.168.214.37:80
          config_id = 35876
          label = terraform-import
          mode = accept
          nodebalancer_id = 40721
          status = UP
          weight = 100
        
        

      Fill In Your NodeBalancer’s Configuration Data

      As mentioned in the Terraform’s Import Command section, you must manually create your resource configurations when importing existing infrastructure.

      1. Fill in the configuration values for all three NodeBalancer resource configuration blocks. The necessary values for the example resource configuration file were collected from the output of the terraform show command applied in Step 4 of the Import Your NodeBalancer, NodeBalancer Configuration, and NodeBalancer Nodes to Terraform section:
      linode_nodebalancer_example.tf
      
      provider "linode" {
          token = "1a2b3c..."
      }
      
      resource "linode_nodebalancer" "nodebalancer_import" {
          label = "terraform-example"
          region = "us-east"
      }
      
      resource "linode_nodebalancer_config" "nodebalancer_config_import" {
          nodebalancer_id = "40721"
      }
      
      resource "linode_nodebalancer_node" "nodebalancer_node_import" {
          label = "terraform-import"
          address = "192.168.214.37:80"
          nodebalancer_id = "40721"
          config_id = "35876"
      }

2. Check for errors in your configuration by running the plan command:

        terraform plan
        

        terraform plan shows you the changes that would take place if you were to apply the configurations with the terraform apply command. Running terraform plan should result in Terraform displaying that no changes are to be made.

3. Once you have verified the configurations you provided in all three NodeBalancer configuration blocks, you are ready to begin managing your NodeBalancers with Terraform. Any changes or updates can be made by updating your linode_nodebalancer_example.tf file, then verifying the changes with the terraform plan command, and finally, applying the changes with the terraform apply command.

        For more available configuration options, visit the Linode NodeBalancer, Linode NodeBalancer Config, and Linode NodeBalancer Node Terraform documentation.

      Next Steps

      You can follow a process similar to what has been outlined in this guide to begin importing other pieces of your Linode infrastructure such as images, SSH keys, access tokens, and StackScripts. Check out the links in the More Information section below for helpful information.
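
The pattern is the same for those resource types. As a sketch, importing a hypothetical SSH key into an empty linode_sshkey resource block follows the familiar import-then-show workflow (the ID is illustrative):

    terraform import linode_sshkey.example_label sshkeyID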

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      Immutable Infrastructure



      What is Immutable Infrastructure?

      Within a Continuous Delivery model it is crucial to automate a repeatable and reliable process for software deployment. The more you scale, the more complicated this task can become. You need granular control over all components of your stack across many servers and the ability to test how components will interact within their deployed environment.

An immutable server infrastructure provides a level of control and testability that maintains a healthy and stable environment for all components, one that never deviates from a source definition. The key guideline behind an immutable infrastructure is that you never modify a running server. If a change is required, you instead completely replace the server with a new instance that contains the update or change. The new server instance is created from an origin image that is built upon, or from a restored image of a previously defined server state. You can version control and tag your images for easy rollback and distribution. The image contains all of the application code, runtime dependencies, and configuration: in essence, the state needed for the software to run as expected.

The immutable infrastructure approach to server management is a response to more traditional methods that rely on configuration management tools or one-off changes to maintain, update, and patch running server instances. With time, this method alone can lead to the slow drift of a server's state away from its original definition, which can become difficult and time-consuming to manage and debug (creating what is known as a snowflake server). Configuration synchronization can keep servers up to date, but any element that is not controlled by the configuration management tool can potentially introduce a point of drift. The immutable server, as a concept, developed naturally as a result of the Phoenix Server pattern, which asserts that servers should be destroyed frequently and then rebuilt from a base image. The concept of immutability goes one step further and restricts a production server from ever being adjusted.

      Create an Immutable Server Image

      The foundation of a successful immutable infrastructure is the server image. Below is a high-level outline of the steps involved in creating a source of truth production image that can be reliably deployed across many servers:

      1. Create an origin image to boot a server instance on a Linode. This will include baseline components like the running Linux distribution and installed packages.

2. Use a configuration management or automation tool, like Chef or Jenkins, to bring the server to the state needed to host your application code.

      3. Create a new server image from the configured server instance.

      4. Create a new test server instance with the new server image that includes all application code, configuration and dependencies.

      5. Run predefined automated tests to test the new server image.

      6. If the tests pass, deploy the new image to production.

      7. Destroy the previous production server and archive the destroyed server image.

      Docker Containers and Immutable Infrastructure

      Docker Containers were designed to be immutable. Docker comes with many utilities built in to help manage container images. If you change a container’s image definition, then you have created a new image. A docker commit will create a new image while still leaving the original image unchanged. A docker tag command lets you easily tag your Docker image commit. Other useful metadata can be added to Docker images to help identify image inheritance.
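
For example, the following commands (with illustrative container and repository names) capture a running container's state as a new image and tag that image for versioned rollback:

    docker commit my_container myrepo/myapp:1.0.1
    docker tag myrepo/myapp:1.0.1 myrepo/myapp:latest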

      Another benefit to using Docker containers to implement your immutable infrastructure, is that it helps manage data persistence or stateful components, like an application’s database. Stateful components cannot simply be destroyed and redeployed using a server image. With a Docker container, you can take advantage of their volumes feature. The Docker volume will exist outside the lifecycle of a given container, allowing you to destroy a container at will and spin up a new one with the persisted data.
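
A minimal sketch of this pattern, using illustrative names; the volume outlives the first container, so its replacement starts with the same persisted data:

    docker volume create app_data
    docker run -d --name app_v1 -v app_data:/var/lib/app myrepo/myapp:1.0.0
    docker rm -f app_v1
    docker run -d --name app_v2 -v app_data:/var/lib/app myrepo/myapp:1.0.1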

      For more information on Docker, see our An Introduction to Docker guide. You can also read How to Deploy Microservices with Docker to learn about building large-scale applications with containers.

      Pros and Cons to an Immutable Infrastructure

There are many benefits to implementing an immutable infrastructure in your CI/CD pipeline, but there are also some initial drawbacks that are important to understand. Whether you adopt this pattern can depend on your current infrastructure (if one already exists), your team's expertise, and your willingness to learn and implement new tooling. This information will help you determine if this is a model that makes sense for your project or organization.

      Pros

      • Rollbacks are simpler since old server images are version controlled.
      • Changes to the server must be defined and automated providing more granular control over all server instances.
      • You can ensure consistent development and test environments across your organization.
      • It’s easier to implement and test microservices for a large-scale application.
      • Prevents snowflake servers.
      • Portability, especially when using Docker containers.

      Cons

      • Higher initial overhead to learn new tooling and implement the infrastructure.
      • Small quick fixes require a full redeploy.
      • Possible increase in resource usage and cost depending on how often servers are destroyed and redeployed in a given time period.

      Immutable infrastructure is an idea that was popularized by Chad Fowler in 2013 when he pronounced, “Trash your servers and burn your code”. Since then, many tools have been developed to make the Phoenix Pattern with an Immutable Infrastructure easier to implement.

      Here are some popular tools:

• Linode Images allow you to take snapshots of your disks and then deploy them to any Linode under your account.
      • Packer helps you create multiple machine images from a single source configuration.
      • Terraform is used to manage change within your deployment stack and maintain Infrastructure as Code.
      • Docker can be used to create and manage images and isolate application services.
      • Docker Swarm helps you scale up the power of Docker by creating a cluster of Docker hosts.
      • SaltStack is a configuration management platform designed to control a number of minion servers from a single master server.
• Linode Block Storage can easily store and persist data across Linodes.
      • Jenkins is an open-source automation server that allows you to build pipelines for build, testing, and deployment automation.
