

      How To Build a Hashicorp Vault Server Using Packer and Terraform on DigitalOcean


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

Vault, by HashiCorp, is an open-source tool for securely storing secrets and sensitive data in dynamic cloud environments. It provides strong data encryption, identity-based access using custom policies, and secret leasing and revocation, as well as a detailed audit log that is recorded at all times. Vault also features an HTTP API, making it an ideal choice for storing credentials in scattered service-oriented deployments, such as Kubernetes.

Packer and Terraform, both also developed by HashiCorp, can be used together to create and deploy images of Vault. Within this workflow, developers use Packer to build immutable images for different platforms from a single configuration file, which specifies what the image should contain. Terraform then deploys as many customized instances of the created images as needed.

      In this tutorial, you’ll use Packer to create an immutable snapshot of the system with Vault installed, and orchestrate its deployment using Terraform. In the end, you’ll have an automated system for deploying Vault in place, allowing you to focus on working with Vault itself, and not on the underlying installation and provisioning process.

      Prerequisites

      • Packer installed on your local machine. For instructions, visit the official documentation.
      • Terraform installed on your local machine. Visit the official documentation for a guide.
      • A personal access token (API key) with read and write permissions for your DigitalOcean account. To learn how to create one, visit How to Create a Personal Access Token from the docs.
      • An SSH key you’ll use to authenticate with the deployed Vault Droplets, available on your local machine and added to your DigitalOcean account. You’ll also need its fingerprint, which you can copy from the Security page of your account once you’ve added it. See the DigitalOcean documentation for detailed instructions or the How To Set Up SSH Keys tutorial.
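If the key is already added to your account but you don’t have the fingerprint handy, you can also compute it locally. This is a minimal example, assuming your public key is stored at ~/.ssh/id_rsa.pub (adjust the path to match your key); DigitalOcean displays key fingerprints in MD5 format:

• ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub

Copy the colon-separated hex string that follows the MD5: prefix.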

      Step 1 — Creating a Packer Template

      In this step, you will write a Packer configuration file, called a template, that will instruct Packer on how to build an image that contains Vault pre-installed. You’ll be writing the configuration in JSON format, a commonly used human-readable configuration file format.

      For the purposes of this tutorial, you’ll store all files under ~/vault-orchestration. Create the directory by running the following command:

      • mkdir ~/vault-orchestration

Navigate to it:

• cd ~/vault-orchestration

You’ll store config files for Packer and Terraform separately, in different subdirectories. Create them using the following command:

• mkdir packer terraform

Because you’ll first be working with Packer, navigate to its directory:

• cd packer

      Using Template Variables

      Storing private data and application secrets in a separate variables file is the ideal way of keeping them out of your template. When building the image, Packer will substitute the referenced variables with their values. Hard coding secret values into your template is a security risk, especially if it’s going to be shared with team members or put up on public sites, such as GitHub.

You’ll store them in the packer subdirectory, in a file called variables.json. Create it using your favorite text editor (this tutorial uses nano):

• nano variables.json

      Add the following lines:

      ~/vault-orchestration/packer/variables.json

      {
          "do_token": "your_do_api_key",
          "base_system_image": "ubuntu-18-04-x64",
          "region": "nyc3",
          "size": "s-1vcpu-1gb"
      }
      

      The variables file consists of a JSON dictionary, which maps variable names to their values. You’ll use these variables in the template you are about to create. If you wish, you can edit the base image, region, and Droplet size values according to the developer docs.

Remember to replace your_do_api_key with the API key you created as part of the prerequisites, then save and close the file.

      Creating Builders and Provisioners

      With the variables file ready, you’ll now create the Packer template itself.

You’ll store the Packer template for Vault in a file named template.json. Create it using your text editor:

• nano template.json

      Add the following lines:

      ~/vault-orchestration/packer/template.json

      {
           "builders": [{
               "type": "digitalocean",
               "api_token": "{{user `do_token`}}",
               "image": "{{user `base_system_image`}}",
               "region": "{{user `region`}}",
               "size": "{{user `size`}}",
               "ssh_username": "root"
           }],
           "provisioners": [{
               "type": "shell",
               "inline": [
                   "sleep 30",
                   "sudo apt-get update",
                   "sudo apt-get install unzip -y",
                   "curl -L https://releases.hashicorp.com/vault/1.3.2/vault_1.3.2_linux_amd64.zip -o vault.zip",
                   "unzip vault.zip",
                   "sudo chown root:root vault",
                   "mv vault /usr/local/bin/",
                   "rm -f vault.zip"
               ]
          }]
      }
      

In the template, you define arrays of builders and provisioners. Builders tell Packer how to build the system image (according to their type) and where to store it, while provisioners contain sets of actions Packer should perform on the system before turning it into an immutable image, such as installing or configuring software. Without any provisioners, you would end up with an untouched base system image. Both builders and provisioners expose parameters for further workflow customization.

You first define a single builder of the type digitalocean, which means that when ordered to build an image, Packer will use the provided parameters to create a temporary Droplet of the defined size using the provided API key, with the specified base system image and in the specified region. The format for fetching a variable is {{user `variable_name`}}, where variable_name is the name of the variable.

Once the temporary Droplet is running, Packer will connect to it over SSH using the specified username and sequentially execute all defined provisioners, before creating a DigitalOcean Snapshot from the Droplet and deleting it.

The single provisioner you define is of type shell, which executes the given commands on the target system. Commands can be specified either inline, as an array of strings, or in separate script files if inserting them into the template becomes unwieldy due to size. The commands in the template wait 30 seconds for the system to boot up, then download and unpack Vault 1.3.2. Check the official Vault download page and replace the link in the commands with a newer version for Linux, if available.
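If the list of inline commands grows, the shell provisioner can also run a separate script file instead. The following is a minimal sketch of that alternative form, assuming a script named install-vault.sh sits next to the template (this tutorial keeps the inline commands):

     "provisioners": [{
         "type": "shell",
         "script": "install-vault.sh"
     }]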

      When you’re done, save and close the file.

      To verify the validity of your template, run the following command:

      • packer validate -var-file=variables.json template.json

      Packer accepts a path to the variables file via the -var-file argument.

      You’ll see the following output:

      Output

      Template validated successfully.

      If you get an error, Packer will specify exactly where it occurred, so you’ll be able to correct it.

      You now have a working template that produces an image with Vault installed, with your API key and other parameters defined in a separate file. You’re now ready to invoke Packer and build the snapshot.

      Step 2 — Building the Snapshot

      In this step, you’ll build a DigitalOcean Snapshot from your template using the Packer build command.

      To build your snapshot, run the following command:

      • packer build -var-file=variables.json template.json

      This command will take some time to finish. You’ll see a lot of output, which will look like this:

      Output

digitalocean: output will be in this color.

==> digitalocean: Creating temporary ssh key for droplet...
==> digitalocean: Creating droplet...
==> digitalocean: Waiting for droplet to become active...
==> digitalocean: Using ssh communicator to connect: ...
==> digitalocean: Waiting for SSH to become available...
==> digitalocean: Connected to SSH!
==> digitalocean: Provisioning with shell script: /tmp/packer-shell035430322
...
==> digitalocean:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
==> digitalocean:                                  Dload  Upload   Total   Spent    Left  Speed
    digitalocean: Archive:  vault.zip
==> digitalocean: 100 45.5M  100 45.5M    0     0   154M      0 --:--:-- --:--:-- --:--:--  153M
    digitalocean:   inflating: vault
==> digitalocean: Gracefully shutting down droplet...
==> digitalocean: Creating snapshot: packer-1581537927
==> digitalocean: Waiting for snapshot to complete...
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' finished.

==> Builds finished. The artifacts of successful builds are:
--> digitalocean: A snapshot was created: 'packer-1581537927' (ID: 58230938) in regions '...'

Packer logs all the steps it took while building your template. The last line contains the name of the snapshot (such as packer-1581537927) and its ID in parentheses. Note the snapshot ID, because you’ll need it in the next step.

      If the build process fails due to API errors, wait a few minutes and then retry.
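If you misplace the snapshot ID later, you can look it up again in the DigitalOcean control panel or, if you have doctl installed, list your snapshots from the terminal:

• doctl compute snapshot list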

      You’ve built a DigitalOcean Snapshot according to your template. The snapshot has Vault pre-installed, and you can now deploy Droplets with it as their system image. In the next step, you’ll write Terraform configuration for automating such deployments.

      Step 3 — Writing Terraform Configuration

In this step, you’ll write Terraform configuration to automate Droplet deployments of the Vault snapshot you just built using Packer.

      Before writing actual Terraform configuration for deploying Vault from the previously built snapshot, you’ll first need to configure the DigitalOcean provider for it. Navigate to the terraform subdirectory by running:

      • cd ~/vault-orchestration/terraform

Then, create a file named do-provider.tf, where you’ll store the provider:

• nano do-provider.tf

      Add the following lines:

      ~/vault-orchestration/terraform/do-provider.tf

      variable "do_token" {
      }
      
      variable "ssh_fingerprint" {
      }
      
      variable "instance_count" {
        default = "1"
      }
      
      variable "do_snapshot_id" {
      }
      
      variable "do_name" {
        default = "vault"
      }
      
      variable "do_region" {
      }
      
      variable "do_size" {
      }
      
      variable "do_private_networking" {
        default = true
      }
      
      provider "digitalocean" {
        token = var.do_token
      }
      

      This file declares parameter variables and provides the digitalocean provider with an API key. You’ll later use these variables in your Terraform template, but you’ll first need to specify their values. For that purpose, Terraform supports specifying variable values in a variable definitions file similarly to Packer. The filename must end in either .tfvars or .tfvars.json. You’ll later pass that file to Terraform using the -var-file argument.

      Save and close the file.

Create a variable definitions file called definitions.tfvars using your text editor:

• nano definitions.tfvars

      Add the following lines:

~/vault-orchestration/terraform/definitions.tfvars

      do_token         = "your_do_api_key"
      ssh_fingerprint  = "your_ssh_key_fingerprint"
      do_snapshot_id   = your_do_snapshot_id
      do_name          = "vault"
      do_region        = "nyc3"
      do_size          = "s-1vcpu-1gb"
      instance_count   = 1
      

      Remember to replace your_do_api_key, your_ssh_key_fingerprint, and your_do_snapshot_id with your account API key, the fingerprint of your SSH key, and the snapshot ID you noted from the previous step, respectively. The do_region and do_size parameters must have the same values as in the Packer variables file. If you want to deploy multiple instances at once, adjust instance_count to your desired value.

      When finished, save and close the file.

      For more information on the DigitalOcean Terraform provider, visit the official docs.

You’ll store the Vault snapshot deployment configuration in a file named deployment.tf, under the terraform directory. Create it using your text editor:

• nano deployment.tf

      Add the following lines:

      ~/vault-orchestration/terraform/deployment.tf

      resource "digitalocean_droplet" "vault" {
        count              = var.instance_count
        image              = var.do_snapshot_id
        name               = var.do_name
        region             = var.do_region
        size               = var.do_size
        private_networking = var.do_private_networking
        ssh_keys = [
          var.ssh_fingerprint
        ]
      }
      
      output "instance_ip_addr" {
        value = {
          for instance in digitalocean_droplet.vault:
          instance.id => instance.ipv4_address
        }
        description = "The IP addresses of the deployed instances, paired with their IDs."
      }
      

Here you define a single resource of the type digitalocean_droplet named vault. You then set its parameters according to the variable values and add an SSH key (referenced by its fingerprint) from your DigitalOcean account to the Droplet resource. Finally, you output the IP addresses of all newly deployed instances to the console.

      Save and close the file.
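After you apply this configuration in Step 4, you can print the declared output again at any time, without re-applying, by running:

• terraform output instance_ip_addr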

Before doing anything else with your deployment configuration, you’ll need to initialize the directory as a Terraform project:

• terraform init

      You’ll see the following output:

      Output

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.digitalocean: version = "~> 1.14"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

      When initializing a directory as a project, Terraform reads the available configuration files and downloads plugins deemed necessary, as logged in the output.
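The output above also suggests pinning the provider version so that future terraform init runs don’t silently upgrade to a new major release. If you’d like to follow that suggestion, you can extend the provider block in do-provider.tf (optional; the constraint string is taken from the output above):

~/vault-orchestration/terraform/do-provider.tf

provider "digitalocean" {
  token   = var.do_token
  version = "~> 1.14"
}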

      You now have Terraform configuration for deploying your Vault snapshot ready. You can now move on to validating it and deploying it on a Droplet.

      Step 4 — Deploying Vault Using Terraform

      In this section, you’ll verify your Terraform configuration using the validate command. Once it verifies successfully, you’ll apply it and deploy a Droplet as a result.

Run the following command to test the validity of your configuration:

• terraform validate

      You’ll see the following output:

      Output

      Success! The configuration is valid.

Next, run the plan command to see what Terraform will do when it provisions the infrastructure according to your configuration:

      • terraform plan -var-file="definitions.tfvars"

      Terraform accepts a variable definitions file via the -var-file parameter.
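If you want apply to perform exactly the actions shown in a plan, you can optionally save the plan to a file and apply that file later; the vault.tfplan name here is only an example, and the rest of this tutorial uses the plain commands:

• terraform plan -var-file="definitions.tfvars" -out=vault.tfplan
• terraform apply vault.tfplan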

      The output will look similar to:

      Output

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.vault[0] will be created
  + resource "digitalocean_droplet" "vault" {
  ...
  }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

The green + at the beginning of the resource "digitalocean_droplet" "vault" line means that Terraform will create a new Droplet named vault, using the parameters that follow. This is correct, so you can now execute the plan by running terraform apply:

      • terraform apply -var-file="definitions.tfvars"

      Enter yes when prompted. After a few minutes, the Droplet will finish provisioning and you’ll see output similar to this:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + digitalocean_droplet.vault-droplet
...
Plan: 1 to add, 0 to change, 0 to destroy.
...
digitalocean_droplet.vault-droplet: Creating...
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

instance_ip_addr = {
  "181254240" = "your_new_server_ip"
}

In the output, Terraform logs the actions it has performed (in this case, creating a Droplet) and displays the Droplet’s public IP address at the end. You’ll use it to connect to your new Droplet in the next step.

      You have created a new Droplet from the snapshot containing Vault and are now ready to verify it.

      Step 5 — Verifying Your Deployed Droplet

      In this step, you’ll access your new Droplet using SSH and verify that Vault was installed correctly.

If you are on Windows, you can use software such as KiTTY or PuTTY to connect to the Droplet with an SSH key.

On Linux and macOS machines, you can use the already available ssh command to connect:

• ssh root@your_new_server_ip

Answer yes when prompted. Once you are logged in, run Vault by executing:

• vault

      You’ll see its “help” output, which looks like this:

      Output

Usage: vault <command> [args]

Common commands:
    read        Read data and retrieves secrets
    write       Write data, configuration, and secrets
    delete      Delete secrets and configuration
    list        List data or secrets
    login       Authenticate locally
    agent       Start a Vault agent
    server      Start a Vault server
    status      Print seal and HA status
    unwrap      Unwrap a wrapped secret

Other commands:
    audit          Interact with audit devices
    auth           Interact with auth methods
    debug          Runs the debug command
    kv             Interact with Vault's Key-Value storage
    lease          Interact with leases
    namespace      Interact with namespaces
    operator       Perform operator-specific tasks
    path-help      Retrieve API help for paths
    plugin         Interact with Vault plugins and catalog
    policy         Interact with policies
    print          Prints runtime configurations
    secrets        Interact with secrets engines
    ssh            Initiate an SSH session
    token          Interact with tokens

      You can quit the connection by typing exit.

      You have now verified that your newly deployed Droplet was created from the snapshot you made, and that Vault is installed correctly.

      Conclusion

You now have an automated system for deploying HashiCorp Vault on DigitalOcean Droplets using Terraform and Packer, and you can deploy as many Vault servers as you need. To start using Vault, you’ll need to initialize it and further configure it. For instructions on how to do that, visit the official docs.
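When you no longer need the deployed Droplets, you can remove everything this configuration created by running the following command from the terraform directory and confirming with yes:

• terraform destroy -var-file="definitions.tfvars"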

      For more tutorials using Terraform, check out our Terraform content page.




      How to Use the Linode Packer Builder



      What is Packer?

Packer is a HashiCorp-maintained open-source tool that is used to create machine images. A machine image provides the operating system, applications, application configurations, and data files that a virtual machine instance will run once it’s deployed. Using a single source configuration, you can generate identical machine images. Packer can be used in conjunction with common configuration management tools like Chef, Puppet, or Ansible to install software on your Linode and include those configurations in your image.

In this guide you will complete the following steps:

• Install Packer on your computer and verify the download.
• Create a Packer template that uses the Linode builder and, optionally, an Ansible provisioner.
• Build a private Linode image from your template.
• Deploy a Linode using your new image.

      Before You Begin

      1. Ensure you have access to cURL on your computer.

      2. Generate a Linode API v4 access token with permission to read and write Linodes. You can follow the Get an Access Token section of the Getting Started with the Linode API guide if you do not already have one.

        Note

        The example cURL commands in this guide will refer to a $TOKEN environment variable. For example:

        curl -H "Authorization: Bearer $TOKEN" 
            https://api.linode.com/v4/images
        

        To set this variable up in your terminal, run:

        export TOKEN='<your-Linode-APIv4-token>'
        

        If you do not do this, you will need to alter these commands so that your API token is inserted wherever $TOKEN appears.

      3. Create an SSH authentication key-pair if your computer does not already have one. Your SSH public key will be added to your image via an Ansible module.

      4. Install Ansible on your computer and familiarize yourself with basic Ansible concepts (optional). Using the Getting Started With Ansible – Basic Installation and Setup guide, follow the steps in the Install Ansible section.

      The Linode Packer Builder

      In Packer’s ecosystem, builders are responsible for deploying machine instances and generating redeployable images from them. The Linode Packer builder can be used to create a Linode image that can be redeployed to other Linodes. You can share your image template across your team to ensure everyone is using a uniform development and testing environment. This process will help your team maintain an immutable infrastructure within your continuous delivery pipeline.

      The Linode Packer builder works in the following way:

      • You create a template to define the type of image you want Packer to build.
      • Packer uses the template to build the image on a temporary Linode.
      • A snapshot of the built image is taken and stored as a private Linode image.
      • The temporary Linode is deleted.
      • You can then reuse the private Linode image as desired, for example, by using your image to create Linode instances with Terraform.

      Install Packer

      The following instructions will install Packer on Ubuntu 18.04 from a downloaded binary. For more installation methods, including installing on other operating systems or compiling from source, see Packer’s official documentation.

      1. Make a Packer project directory in your home directory and then navigate to it:

        mkdir ~/packer
        cd ~/packer
        
      2. Download the precompiled binary for your system from the Packer website. Example wget commands are listed using the latest version available at time of publishing (1.4.4). You should inspect the links on the download page to see if a newer version is available and update the wget commands to use those URLs instead:

        • The 64-bit Linux .zip archive

          wget https://releases.hashicorp.com/packer/1.4.4/packer_1.4.4_linux_amd64.zip
          
        • The SHA256 checksums file

          wget https://releases.hashicorp.com/packer/1.4.4/packer_1.4.4_SHA256SUMS
          
        • The checksum signature file

          wget https://releases.hashicorp.com/packer/1.4.4/packer_1.4.4_SHA256SUMS.sig
          

      Verify the Download

      1. Import the HashiCorp Security GPG key (listed on the HashiCorp Security page under Secure Communications):

        gpg --recv-keys 51852D87348FFC4C
        

        The output should show that the key was imported:

          
        gpg: keybox '/home/user/.gnupg/pubring.kbx' created
        gpg: key 51852D87348FFC4C: 17 signatures not checked due to missing keys
        gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key 51852D87348FFC4C: public key "HashiCorp Security <security@hashicorp.com>" imported
        gpg: no ultimately trusted keys found
        gpg: Total number processed: 1
        gpg:               imported: 1
        
        
      2. Verify the checksum file’s GPG signature:

        gpg --verify packer*.sig packer*SHA256SUMS
        

The output should contain the Good signature from "HashiCorp Security <security@hashicorp.com>" confirmation message:

          
        gpg: Signature made Tue 01 Oct 2019 06:30:17 PM UTC
        gpg:                using RSA key 91A6E7F85D05C65630BEF18951852D87348FFC4C
gpg: Good signature from "HashiCorp Security <security@hashicorp.com>" [unknown]
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE  F189 5185 2D87 348F FC4C
              
        
      3. Verify that the fingerprint output matches the fingerprint listed in the Secure Communications section of the HashiCorp Security page.

      4. Verify the .zip archive’s checksum:

        sha256sum -c packer*SHA256SUMS 2>&1 | grep OK
        

        The output should show the file’s name as given in the packer*SHA256SUMS file:

          
        packer_1.4.4_linux_amd64.zip: OK
              
        

      Configure the Packer Environment

      1. Unzip packer_*_linux_amd64.zip to your ~/packer directory:

        unzip packer_*_linux_amd64.zip
        

        Note

        If you receive an error that indicates unzip is missing from your system, install the unzip package and try again.

Edit your ~/.profile shell configuration file to include the ~/packer directory in your PATH. Then, reload the Bash profile:

        echo 'export PATH="$PATH:$HOME/packer"' >> ~/.profile
        source ~/.profile
        

        Note

        If you use a different shell, your shell configuration may have a different file name.

      3. Verify Packer can run by calling it with no options or arguments:

        packer
        
          
Usage: packer [--version] [--help] <command> [<args>]
        
        Available commands are:
            build       build image(s) from template
            console     creates a console for testing variable interpolation
            fix         fixes templates from old versions of packer
            inspect     see components of a template
            validate    check that a template is valid
            version     Prints the Packer version
            
        

      Use the Linode Packer Builder

      Now that Packer is installed on your local system, you can create a Packer template. A template is a JSON formatted file that contains the configurations needed to build a machine image.

In this section you will create a template that uses the Linode Packer builder to create an image using Debian 9 as its base distribution. The template will also configure your system image with a new limited user account and a public SSH key from your local computer. The additional system configuration will be completed using Packer’s Ansible provisioner and an example Ansible Playbook. A Packer provisioner is a built-in or third-party integration that further configures a machine instance after it boots and before the machine’s snapshot is taken.

      Note

      The steps in this section will incur charges related to deploying a 1GB Nanode. The Linode will only be deployed for the duration of the time needed to create and snapshot your image and will then be deleted. See our Billing and Payments guide for details about hourly billing.

      Access Linode and Private Images

      The Linode Packer Builder requires a Linode Image ID to deploy a disk from. This guide’s example will use the image linode/debian9, but you can replace it with any other image you prefer. To list the official Linode images and your account’s private images, you can curl the Linode API:

      curl -H "Authorization: Bearer $TOKEN" 
          https://api.linode.com/v4/images
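The response comes back as a single JSON document. If you have jq installed, you can, for example, list just the image IDs:

curl -H "Authorization: Bearer $TOKEN" \
    https://api.linode.com/v4/images | jq '.data[].id'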
      

      Create Your Template

      Note

      The Packer builder does not manage images. Once it creates an image, it will be stored on your Linode account and can be accessed and used as needed from the Linode Cloud Manager, via Linode’s API v4, or using third-party tools like Terraform. Linode Images are limited to 2GB per Image and 3 Images per account.

      Create a file named example.json with the following content:

      ~/packer/example.json
      
      {
        "variables": {
          "my_linode_token": ""
        },
        "builders": [{
          "type": "linode",
          "image": "linode/debian9",
          "linode_token": "{{user `my_linode_token` }}",
          "region": "us-east",
          "instance_type": "g6-nanode-1",
          "instance_label": "temp-linode-packer",
          "image_label": "my-private-packer-image",
          "image_description": "My private packer image",
          "ssh_username": "root"
        }],
        "provisioners": [
          {
            "type": "ansible",
            "playbook_file": "./limited_user_account.yml"
          }
        ]
      }

      If you would rather not use a provisioner in your Packer template, you can use the example file below:

      ~/packer/example.json
      
      {
        "variables": {
          "my_linode_token": ""
        },
        "builders": [{
          "type": "linode",
          "image": "linode/debian9",
          "linode_token": "{{user `my_linode_token` }}",
          "region": "us-east",
          "instance_type": "g6-nanode-1",
          "instance_label": "temp-linode-packer",
          "image_label": "my-private-packer-image",
          "image_description": "My private packer image",
          "ssh_username": "root"
        }]
      }

      There are three sections to the Packer template file:

• variables: This section allows you to further configure your template with command-line variables, environment variables, Vault, or variable files. In the section that follows, you will use a command-line variable to pass your Linode account’s API token to the template; an environment-variable alternative is sketched just after this list.
• builders: The builder section contains the definition for the machine image that will be created. In the example template, you use a single builder, the Linode builder. The builder uses the linode/debian9 image as its base and will assign the image a label of my-private-packer-image. It will deploy a 1GB Nanode, take a snapshot, and create a reusable Linode image. Refer to Packer’s official documentation for a complete Linode builder configuration reference.

        Note

        You can use multiple builders in a single template file. This process is known as a parallel build which allows you to create multiple images for multiple platforms from a single template.
• provisioners: (optional) With a provisioner you can further configure your system by completing common system administration tasks, like adding users, installing and configuring software, and more. The example uses Packer’s built-in Ansible provisioner and executes the tasks defined in the local limited_user_account.yml playbook. This means your Linode image will also contain anything the playbook executed on your Nanode. Packer supports several other provisioners, like Chef, Salt, and shell scripts.
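If you’d rather not pass the token on the command line at all, Packer can also read it from an environment variable. This is a minimal sketch, assuming you export a LINODE_TOKEN variable in your shell (the variable name is your choice; the env function is part of Packer’s template syntax):

  "variables": {
    "my_linode_token": "{{env `LINODE_TOKEN`}}"
  }

With that default in place, packer build example.json picks up the token automatically, and an explicit -var value still overrides it.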

      Create your Ansible Playbook (Optional)

In the previous section you created a Packer template that makes use of an Ansible Playbook to add system configurations to your image. Prior to building your image, you will need to create the referenced limited_user_account.yml Playbook. You will complete those steps in this section. If you chose not to use the Ansible provisioner, you can skip this section.

      1. The example Ansible Playbook makes use of Ansible’s user module. This module requires that a hashed value be used for its password parameter. Use the mkpasswd utility to generate a hashed password that you will use in the next step.

        mkpasswd --method=sha-512
        

        You will be prompted to enter a plain-text password and the utility will return a hash of the password.

          
        Password:
        $6$aISRzCJH4$nNJ/9ywhnH/raHuVCRu/unE7lX.L9ragpWgvD0rknlkbAw0pkLAwkZqlY.ahjj/AAIKo071LUB0BONl.YMsbb0
                  
        
      2. In your packer directory, create a file with the following content. Ensure you replace the value of the password parameter with your own hashed password:

        ~/packer/limited_user_account.yml
        
        ---
        - hosts: all
          remote_user: root
          vars:
            NORMAL_USER_NAME: 'my-user-name'
          tasks:
            - name: "Create a secondary, non-root user"
              user: name={{ NORMAL_USER_NAME }}
                    password='$6$eebkauNy4h$peyyL1MTN7F4JKG44R27TTmbXlloDUsjPir/ATJue2bL0u8FBk0VuUvrpsMq6rSSOCm8VSip0QHN8bDaD/M/k/'
                    shell=/bin/bash
            - name: Add remote authorized key to allow future passwordless logins
              authorized_key: user={{ NORMAL_USER_NAME }} key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
            - name: Add normal user to sudoers
              lineinfile: dest=/etc/sudoers
                          regexp="{{ NORMAL_USER_NAME }} ALL"
                          line="{{ NORMAL_USER_NAME }}"
• This Playbook will create a limited user account named my-user-name. You can replace my-user-name, the value of the variable NORMAL_USER_NAME, with any system username you’d like to create. It will then add a public SSH key stored on your local computer. If the public key you’d like to use is stored in a location other than ~/.ssh/id_rsa.pub, you can update that value. Finally, the Playbook adds the new system user to the sudoers file.
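Before building, you can optionally ask Ansible to check the playbook’s syntax; it will warn that no inventory was supplied, which is expected here:

  ansible-playbook --syntax-check limited_user_account.yml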

      Create your Linode Image

You should now have your completed template file and, optionally, your Ansible Playbook file. You can now validate the template and, finally, build your image.

      1. Validate the template before building your image. Replace the value of my_linode_token with your own Linode API v4 token.

        packer validate -var 'my_linode_token=myL0ngT0kenStr1ng' example.json
        

        If successful, you will see the following:

          
        Template validated successfully.
              
        

        Note

        To learn how to securely store and use your API v4 token, see the Vault Variables section of Packer’s documentation.
      2. You can now build your final image. Replace the value of my_linode_token with your own Linode API v4 token. This process may take a few minutes to complete.

        packer build -var 'my_linode_token=myL0ngT0kenStr1ng' example.json
        
          
        linode output will be in this color.
        
        ==> linode: Running builder ...
        ==> linode: Creating temporary SSH key for instance...
        ==> linode: Creating Linode...
        ==> linode: Using ssh communicator to connect: 192.0.2.0
        ==> linode: Waiting for SSH to become available...
        ==> linode: Connected to SSH!
        ==> linode: Provisioning with Ansible...
        ==> linode: Executing Ansible: ansible-playbook --extra-vars packer_build_name=linode packer_builder_type=linode -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible136766862 /home/user/packer/limited_user_account.yml -e ansible_ssh_private_key_file=/tmp/ansible-key642969643
            linode:
            linode: PLAY [all] *********************************************************************
            linode:
            linode: TASK [Gathering Facts] *********************************************************
            linode: ok: [default]
            linode:
            linode: TASK [Create a secondary, non-root user] ***************************************
            linode: changed: [default]
            linode:
            linode: TASK [Add remote authorized key to allow future passwordless logins] ***********
            linode: changed: [default]
            linode:
            linode: TASK [Add normal user to sudoers] **********************************************
            linode: changed: [default]
            linode:
            linode: PLAY RECAP *********************************************************************
            linode: default                    : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
            linode:
        ==> linode: Shutting down Linode...
        ==> linode: Creating image...
        Build 'linode' finished.
        
        ==> Builds finished. The artifacts of successful builds are:
        --> linode: Linode image: my-private-packer-image (private/7550080)
              
        

        The output will provide you with your new private image’s ID. In the example output the image ID is private/7550080. This image is now available on your Linode account to use as you desire. As an example, in the next section you will use this newly created image to deploy a new 1 GB Nanode using Linode’s API v4.

      Deploy a Linode with your New Image

1. Issue the following curl command to deploy a 1GB Nanode to the us-east data center on your Linode account, using your new image. Ensure you replace private/7550080 with your own Linode image’s ID and assign your own root_pass and label.

        curl -H "Content-Type: application/json" 
          -H "Authorization: Bearer $TOKEN" 
          -X POST -d '{
            "image": "private/7550080",
            "root_pass": "[email protected]",
            "booted": true,
            "label": "my-example-label",
            "type": "g6-nanode-1",
            "region": "us-east"
          }' 
          https://api.linode.com/v4/linode/instances
        

        You should receive a similar response from the API:

          
        {"id": 17882092, "created": "2019-10-23T22:47:47", "group": "", "specs": {"gpus": 0, "transfer": 1000, "memory": 1024, "disk": 25600, "vcpus": 1}, "label": "my-example-linode", "updated": "2019-10-23T22:47:47", "watchdog_enabled": true, "image": null, "ipv4": ["192.0.2.0"], "ipv6": "2600:3c03::f03c:92ff:fe98:6d9a/64", "status": "provisioning", "tags": [], "region": "us-east", "backups": {"enabled": false, "schedule": {"window": null, "day": null}}, "hypervisor": "kvm", "type": "g6-nanode-1", "alerts": {"cpu": 90, "network_in": 10, "transfer_quota": 80, "io": 10000, "network_out": 10}}%
            
        
      2. If you used the Ansible provisioner, once your Linode is deployed, you should be able to SSH into your newly deployed Linode using the limited user account you created with the Ansible playbook and your public SSH key. Your Linode’s IPv4 address will be available in the API response returned after creating the Linode.

ssh my-user-name@192.0.2.0
        

      Next Steps

If you’d like to learn how to use Terraform to deploy Linodes using your Packer-created image, you can follow our Terraform guides to get started.
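As a rough illustration of where that leads, a Terraform resource that deploys this guide’s example image might look like the following minimal sketch (not a complete configuration; it assumes the linode provider is already configured and uses placeholder values):

resource "linode_instance" "from_packer_image" {
  label     = "my-packer-linode"
  image     = "private/7550080"
  region    = "us-east"
  type      = "g6-nanode-1"
  root_pass = "a-secure-root-password"
}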


      This guide is published under a CC BY-ND 4.0 license.


