
      How To Deploy and Manage Your DNS using DNSControl on Ubuntu 18.04


      The author selected the Electronic Frontier Foundation Inc to receive a donation as part of the Write for DOnations program.

      Introduction

      DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.

      Using DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a programmable format. This allows you to deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. Another common usage of DNSControl is to quickly migrate your DNS to a different provider; for example, in the event of a DDoS attack or system outage.

      In this tutorial, you’ll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you’re finished, you’ll be able to manage and test your DNS configuration in a safe, offline environment, and then automatically deploy it to production.

      Prerequisites

      Before you begin this guide you’ll need the following:

      • One Ubuntu 18.04 server set up by following the Initial Server Setup with Ubuntu 18.04, including a sudo non-root user and a firewall enabled to block non-essential ports. your-server-ip refers to the IP address of the server where you’re hosting your website or domain.
      • A fully registered domain name with DNS hosted by a supported provider. This tutorial will use example.com throughout and DigitalOcean as the service provider.
      • A DigitalOcean API key (Personal Access Token) with read and write permissions. To create one, visit How to Create a Personal Access Token.

      Once you have these ready, log in to your server as your non-root user to begin.

      Step 1 — Installing DNSControl

      DNSControl is written in Go, so you’ll start this step by installing Go on your server and setting your GOPATH.

      Go is available within Ubuntu’s default software repositories, making it possible to install using conventional package management tools.

      Begin by updating the local package index to reflect any new upstream changes:
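
      • sudo apt update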

      Then, install the golang-go package:

      • sudo apt install golang-go

      Once you confirm the installation, apt will download and install Go and all of its required dependencies.

      Next, you'll configure the required path environment variables for Go. If you would like to know more about this, you can read this tutorial on Understanding the GOPATH. Start by editing the ~/.profile file:
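
      • nano ~/.profile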

      Add the following lines to the very end of your file:

      ~/.profile

      ...
      export GOPATH="$HOME/go"
      export PATH="$PATH:$GOPATH/bin"
      

      Once you have added these lines to the bottom of the file, save and close it. Then reload your profile by either logging out and back in, or sourcing the file again:
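
      • source ~/.profile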

      Now that you've installed and configured Go, you can install DNSControl.

      The go get command can be used to fetch a copy of the code, automatically compile it and install it into your Go directory:

      • go get github.com/StackExchange/dnscontrol

      Once this is complete, you can check the installed version to make sure that everything is working:
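
      • dnscontrol version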

      Your output will look similar to the following:

      Output

      dnscontrol 0.2.8-dev

      If you see a dnscontrol: command not found error, double-check your Go path setup.

      Now that you've installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to make changes to your DNS records.

      Step 2 — Configuring DNSControl

      In this step, you'll create the required configuration directories for DNSControl, and connect it to your DNS provider so that it can begin to make live changes to your DNS records.

      Firstly, create a new directory in which you can store your DNSControl configuration, and then move into it:

      • mkdir ~/dnscontrol
      • cd ~/dnscontrol

      Note: This tutorial will focus on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version history, integration with CI/CD for testing, the ability to seamlessly roll back deployments, and so on.

      If you plan to use DNSControl to write BIND zone files, you should also create the zones directory:
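
      • mkdir ~/dnscontrol/zones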

      BIND zone files are a raw, standardized method for storing DNS zones/records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are useful if you want to import them to a custom or self-hosted DNS server, or for auditing purposes.

      However, if you just want to use DNSControl to push DNS changes to a managed provider, the zones directory will not be needed.

      Next, you need to configure the creds.json file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json differs slightly depending on the DNS provider that you are using. Please see the Service Providers list in the official DNSControl documentation to find the configuration for your own provider.

      Create the file creds.json in the ~/dnscontrol directory:

      • cd ~/dnscontrol
      • nano creds.json

      Add the sample creds.json configuration for your DNS provider to the file. If you're using DigitalOcean as your DNS provider, you can use the following:

      ~/dnscontrol/creds.json

      {
        "digitalocean": {
          "token": "your-digitalocean-oauth-token"
        }
      }
      

      This file tells DNSControl to which DNS providers you want it to connect.

      You'll need to provide some form of authentication for your DNS provider. This is usually an API key or OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.

      Warning: This token will grant access to your DNS provider account, so you should protect it as you would a password. Also, ensure that if you're using a version control system, either the file containing the token is excluded (e.g. using .gitignore), or is securely encrypted in some way.

      If you're using DigitalOcean as your DNS provider, the required OAuth token is the Personal Access Token that you generated in your DigitalOcean account settings as part of the prerequisites.

      If you have multiple different DNS providers—for example, for multiple domain names, or delegated DNS zones—you can define these all in the same creds.json file.
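
      For example, a creds.json that defines credentials for two providers might look like the following sketch. The entry names here (digitalocean and digitalocean_secondary) are arbitrary labels that you reference from your DNS configuration, and the exact fields each provider requires are listed in the Service Providers documentation; the second entry is purely illustrative:

      {
        "digitalocean": {
          "token": "your-digitalocean-oauth-token"
        },
        "digitalocean_secondary": {
          "token": "another-digitalocean-oauth-token"
        }
      }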

      You've set up the initial DNSControl configuration directories, and configured creds.json to allow DNSControl to authenticate to your DNS provider and make changes. Next you'll create the configuration for your DNS zones.

      Step 3 — Creating a DNS Configuration File

      In this step, you'll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.

      dnsconfig.js is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.

      To begin, create the DNS configuration file in the ~/dnscontrol directory:

      • cd ~/dnscontrol
      • nano dnsconfig.js

      Then, add the following sample configuration to the file:

      ~/dnscontrol/dnsconfig.js

      // Providers:
      
      var REG_NONE = NewRegistrar('none', 'NONE');
      var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
      
      // Domains:
      
      D('example.com', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
          A('@', 'your-server-ip')
      );
      

      This sample file defines a domain name or DNS zone at a particular provider, which in this case is example.com hosted by DigitalOcean. An example A record is also defined for the zone root (@), pointing to the IP of the server that you're hosting your domain/website on.

      There are three main functions that make up a basic DNSControl configuration file:

      • NewRegistrar(name, type, metadata): defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE.

      • NewDnsProvider(name, type, metadata): defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.

      • D(name, registrar, modifiers): defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.

      You should configure NewRegistrar(), NewDnsProvider(), and D() accordingly using the Service Providers list in the official DNSControl documentation.

      If you're using DigitalOcean as your DNS provider and only need to make DNS changes (rather than also manage authoritative nameservers), the sample in the preceding code block is already correct.

      Once complete, save and close the file.

      In this step, you set up a DNS configuration file for DNSControl, with the relevant providers defined. Next, you'll populate the file with some useful DNS records.

      Step 4 — Populating Your DNS Configuration File

      Next, you can populate the DNS configuration file with useful DNS records for your website or service, using the DNSControl syntax.

      Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as a function parameter (domain modifier) to the D() function, as shown briefly in Step 3.

      A domain modifier exists for each of the standard DNS record types, including A, AAAA, MX, TXT, NS, CAA, and so on. A full list of available record types is available in the Domain Modifiers section of the DNSControl documentation.

      Modifiers for individual records are also available (record modifiers). Currently these are primarily used for setting the TTL (time to live) of individual records. A full list of available record modifiers is available in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.

      The syntax for setting DNS records varies slightly for each record type. Following are some examples for the most common record types:

      • A records:

        • Purpose: To point to an IPv4 address.
        • Syntax: A('name', 'address', optional record modifiers)
        • Example: A('@', 'your-server-ip', TTL(30))
      • AAAA records:

        • Purpose: To point to an IPv6 address.
        • Syntax: AAAA('name', 'address', optional record modifiers)
        • Example: AAAA('@', '2001:db8::1') (record modifier left out, so default TTL will be used)
      • CNAME records:

        • Purpose: To make your domain/subdomain an alias of another.
        • Syntax: CNAME('name', 'target', optional record modifiers)
        • Example: CNAME('subdomain1', 'example.org.') (note that a trailing . must be included if there are any dots in the value)
      • MX records:

        • Purpose: To direct email to specific servers/addresses.
        • Syntax: MX('name', 'priority', 'target', optional record modifiers)
        • Example: MX('@', 10, 'mail.example.net') (note that a trailing . must be included if there are any dots in the value)
      • TXT records:

        • Purpose: To add arbitrary plain text, often used for configurations without their own dedicated record type.
        • Syntax: TXT('name', 'content', optional record modifiers)
        • Example: TXT('@', 'This is a TXT record.')
      • CAA records:

        • Purpose: To restrict and report on Certificate Authorities (CAs) who can issue TLS certificates for your domain/subdomains.
        • Syntax: CAA('name', 'tag', 'value', optional record modifiers)
        • Example: CAA('@', 'issue', 'letsencrypt.org')

      In order to begin adding DNS records for your domain or delegated DNS zone, edit your DNS configuration file:

      • cd ~/dnscontrol
      • nano dnsconfig.js

      Next, you can begin populating the parameters for the existing D() function using the syntax described in the previous list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,) must be used in-between each record.

      For reference, the code block here contains a full sample configuration for a basic, initial DNS setup:

      ~/dnscontrol/dnsconfig.js

      ...
      
      D('example.com', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
          A('@', 'your-server-ip'),
          A('www', 'your-server-ip'),
          A('mail', 'your-server-ip'),
          AAAA('@', '2001:db8::1'),
          AAAA('www', '2001:db8::1'),
          AAAA('mail', '2001:db8::1'),
          MX('@', 10, 'mail.example.com.'),
          TXT('@', 'v=spf1 -all'),
          TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@example.com; aspf=s; adkim=s;')
      );
      

      Once you have completed your initial DNS configuration, save and close the file.

      In this step, you set up the initial DNS configuration file, containing your DNS records. Next, you will test the configuration and deploy it.

      Step 5 — Testing and Deploying Your DNS Configuration

      In this step, you will run a local syntax check on your DNS configuration, and then deploy the changes to the live DNS server/provider.

      Firstly, move into your dnscontrol directory:
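
      • cd ~/dnscontrol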

      Next, use the preview function of DNSControl to check the syntax of your file, and output what changes it will make (without actually making them):
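
      • dnscontrol preview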

      If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes that it will make. This should look similar to the following:

      Output

      ******************** Domain: example.com
      ----- Getting nameservers from: digitalocean
      ----- DNS Provider: digitalocean...8 corrections
      #1: CREATE A example.com your-server-ip ttl=300
      #2: CREATE A www.example.com your-server-ip ttl=300
      #3: CREATE A mail.example.com your-server-ip ttl=300
      #4: CREATE AAAA example.com 2001:db8::1 ttl=300
      #5: CREATE TXT _dmarc.example.com "v=DMARC1; p=reject; rua=mailto:abuse@example.com; aspf=s; adkim=s;" ttl=300
      #6: CREATE AAAA www.example.com 2001:db8::1 ttl=300
      #7: CREATE AAAA mail.example.com 2001:db8::1 ttl=300
      #8: CREATE MX example.com 10 mail.example.com. ttl=300
      ----- Registrar: none...0 corrections
      Done. 8 corrections.

      If there is an error in your output, DNSControl will provide details on what the error is and where it is located within your file.

      Warning: The next command will make live changes to your DNS records and possibly other settings. Please ensure that you are prepared for this, including taking a backup of your existing DNS configuration, as well as ensuring that you have the means to roll back if needed.

      Finally, you can push out the changes to your live DNS provider:
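
      • dnscontrol push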

      You'll see an output similar to the following:

      Output

      ******************** Domain: example.com
      ----- Getting nameservers from: digitalocean
      ----- DNS Provider: digitalocean...8 corrections
      #1: CREATE TXT _dmarc.example.com "v=DMARC1; p=reject; rua=mailto:abuse@example.com; aspf=s; adkim=s;" ttl=300 SUCCESS!
      #2: CREATE A example.com your-server-ip ttl=300 SUCCESS!
      #3: CREATE AAAA example.com 2001:db8::1 ttl=300 SUCCESS!
      #4: CREATE AAAA www.example.com 2001:db8::1 ttl=300 SUCCESS!
      #5: CREATE AAAA mail.example.com 2001:db8::1 ttl=300 SUCCESS!
      #6: CREATE A www.example.com your-server-ip ttl=300 SUCCESS!
      #7: CREATE A mail.example.com your-server-ip ttl=300 SUCCESS!
      #8: CREATE MX example.com 10 mail.example.com. ttl=300 SUCCESS!
      ----- Registrar: none...0 corrections
      Done. 8 corrections.

      Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you'll see the changes.

      A screenshot of the DigitalOcean control panel, showing some of the DNS changes that DNSControl has made.

      You can also check the record creation by running a DNS query for your domain/delegated zone. You'll see that the records have been updated accordingly:
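
      For example, you can query the zone's A record with dig (any DNS lookup tool will work; the record type and name here are just an illustration):

      • dig +short example.com A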

      You'll see output showing the IP address and relevant DNS record from your zone that was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again.

      In this final step, you ran a local syntax check of the DNS configuration file, then deployed it to your live DNS provider, and tested that the changes were made successfully.

      Conclusion

      In this article you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.

      If you wish to explore this subject further, DNSControl is designed to be integrated into your CI/CD pipeline, allowing you to run in-depth tests and have more control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, allowing you to deploy servers and add them to DNS completely automatically.





      How to use the Linode Ansible Module to Deploy Linodes


      Updated by Linode. Contributed by Linode.

      Ansible is a popular open-source tool that can be used to automate common IT tasks, like cloud provisioning and configuration management. With Ansible’s 2.8 release, you can deploy Linode instances using our latest API (v4). Ansible’s linode_v4 module adds the functionality needed to deploy and manage Linodes via the command line or in your Ansible Playbooks, while the dynamic inventory plugin for Linode helps you source your Ansible inventory directly from the Linode API (v4).

      In this guide you will learn how to:

      • Deploy and manage Linodes using Ansible and the linode_v4 module.
      • Create an Ansible inventory for your Linode infrastructure using the dynamic inventory plugin for Linode.

      Caution

      This guide’s example instructions will create a 1 GB Nanode billable resource on your Linode account. If you do not want to keep using the Nanode that you create, be sure to delete the resource when you have finished the guide.

      If you remove the resource afterward, you will only be billed for the hour(s) that the resources were present on your account.

      Before You Begin


      Configure Ansible

      The Ansible configuration file is used to adjust Ansible’s default system settings. Ansible will search for a configuration file in the directories listed below, in the order specified, and apply the first configuration values it finds:

      • ANSIBLE_CONFIG environment variable pointing to a configuration file location. If passed, it will override the default Ansible configuration file.
      • ansible.cfg file in the current directory
      • ~/.ansible.cfg in the home directory
      • /etc/ansible/ansible.cfg

      In this section, you will create an Ansible configuration file and add options to disable host key checking and to whitelist the Linode inventory plugin. The Ansible configuration file will be located in a development directory that you create; however, it could exist in any of the locations listed above. See Ansible’s official documentation for a full list of available configuration settings.

      Caution

      When storing your Ansible configuration file, ensure that its corresponding directory does not have world-writable permissions. This could pose a security risk that allows malicious users to use Ansible to exploit your local system and remote infrastructure. At minimum, the directory should restrict access to particular users and groups. For example, you can create an ansible group, only add privileged users to the ansible group, and update the Ansible configuration file’s directory to have 764 permissions. See the Linux Users and Groups guide for more information on permissions.
      1. In your home directory, create a directory to hold all of your Ansible related files and move into the directory:

        mkdir development && cd development
        
      2. Create the Ansible configuration file, ansible.cfg, in the development directory and add the host_key_checking, VAULT_PASSWORD_FILE, and enable_plugins options.

        ~/development/ansible.cfg
        
        [defaults]
        host_key_checking = False
        VAULT_PASSWORD_FILE = ./vault-pass
        [inventory]
        enable_plugins = linode
              
        • host_key_checking = False will allow Ansible to SSH into hosts without having to accept the remote server’s host key. This will disable host key checking globally.
        • VAULT_PASSWORD_FILE = ./vault-pass is used to specify a Vault password file to use whenever Ansible Vault requires a password. Ansible Vault offers several options for password management. To learn more about password management, read Ansible’s Providing Vault Passwords documentation.
        • enable_plugins = linode enables the Linode dynamic inventory plugin.

      Create a Linode Instance

      You can now begin creating Linode instances using Ansible. In this section, you will create an Ansible Playbook that can deploy Linodes.

      Create your Linode Playbook

      1. Ensure you are in the development directory that you created in the Configure Ansible section:

        cd ~/development
        
      2. Using your preferred text editor, create the Create Linode Playbook file and include the following values:

        ~/development/linode_create.yml
        
        - name: Create Linode
          hosts: localhost
          vars_files:
              - ./group_vars/example_group/vars
          tasks:
          - name: Create a new Linode.
            linode_v4:
              label: "{{ label }}{{ 100 |random }}"
              access_token: "{{ token }}"
              type: g6-nanode-1
              region: us-east
              image: linode/debian9
              root_pass: "{{ password }}"
              authorized_keys: "{{ ssh_keys }}"
              group: example_group
              tags: example_group
              state: present
            register: my_linode
            
        • The linode_create.yml Playbook contains the Create Linode play, which will be executed on hosts: localhost. This means the Ansible playbook will execute on the local system and use it as a vehicle to deploy the remote Linode instances.
        • The vars_files key provides the location of a local file that contains variable values to populate in the play. The value of any variables defined in the vars file will substitute any Jinja template variables used in the Playbook. Jinja template variables are any variables between curly brackets, like: {{ my_var }}.
        • The Create a new Linode task calls the linode_v4 module and provides all required module parameters as arguments, plus additional arguments to configure the Linode’s deployment. For details on each parameter, see the linode_v4 Module Parameters section.

          Note

          Usage of groups is deprecated, but still supported by Linode’s API v4. The Linode dynamic inventory module requires groups to generate an Ansible inventory and will be used later in this guide.
        • The register keyword defines a variable name, my_linode, that will store linode_v4 module return data. For instance, you could reference the my_linode variable later in your Playbook to complete other actions using data about your Linode. This keyword is not required to deploy a Linode instance, but represents a common way to declare and use variables in Ansible Playbooks. The task in the snippet below will use Ansible’s debug module and the my_linode variable to print out a message with the Linode instance’s ID and IPv4 address during Playbook execution.

          ...
          - name: Print info about my Linode instance
            debug:
              msg: "ID is {{ my_linode.instance.id }} IP is {{ my_linode.instance.ipv4 }}"

      Create the Variables File

      In the previous section, you created the Create Linode Playbook to deploy Linode instances and made use of Jinja template variables. In this section, you will create the variables file to provide values to those template variables.

      1. Create the directory to store your Playbook’s variable files. The directory is structured to group your variable files by inventory group. This directory structure supports the use of file level encryption that Ansible Vault can detect and parse. Although it is not relevant to this guide’s example, it will be used as a best practice.

        mkdir -p ~/development/group_vars/example_group/
        
      2. Create the variables file and populate it with the example variables. You can replace the values with your own.

        ~/development/group_vars/example_group/vars
        
        ssh_keys: >
                ['ssh-rsa AAAAB3N..5bYqyRaQ== user@mycomputer', '~/.ssh/id_rsa.pub']
        label: simple-linode-
            
        • The ssh_keys example passes a list of two public SSH keys. The first provides the string value of the key, while the second provides a local public key file location.

          Configure your SSH Agent

          If your SSH Keys are passphrase-protected, you should add the keys to your SSH agent so that Ansible does not hang when running Playbooks on the remote Linode. The following instructions are for Linux systems:

          1. Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

            eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
            

            If you start a new terminal, you will need to run the commands in this step again before having access to the keys stored in your SSH agent.

        • label provides a label prefix that will be concatenated with a random number. This occurs when the Create Linode Playbook’s Jinja templating for the label argument is parsed (label: "{{ label }}{{ 100 |random }}").

      Encrypt Sensitive Variables with Ansible Vault

      Ansible Vault allows you to encrypt sensitive data, like passwords or tokens, to keep them from being exposed in your Ansible Playbooks or Roles. You will take advantage of this functionality to keep your Linode instance’s password and access_token encrypted within the variables file.

      Note

      Ansible Vault can also encrypt entire files containing sensitive values. View Ansible’s documentation on Vault for more information.
      1. Create your Ansible Vault password file and add your password to the file. Remember the location of the password file was configured in the ansible.cfg file in the Configure Ansible section of this guide.

        ~/development/vault-pass
        
        My.ANS1BLEvault-c00lPassw0rd
            
      2. Encrypt the value of your Linode’s root user password using Ansible Vault. Replace My.c00lPassw0rd with your own strong password that conforms to the root_pass parameter’s constraints.

        ansible-vault encrypt_string 'My.c00lPassw0rd' --name 'password'
        

        You will see a similar output:

          
              password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        Encryption successful
      3. Copy the generated output and add it to your vars file.

      4. Encrypt the value of your access token. Replace the value of 86210...1e1c6bd with your own access token.

        ansible-vault encrypt_string '86210...1e1c6bd' --name 'token'
        
      5. Copy the generated output and append it to the bottom of your vars file.

        The final vars file should resemble the example below:

        ~/development/group_vars/example_group/vars
        
        ssh_keys: >
                ['ssh-rsa AAAAB3N..5bYqyRaQ== user@mycomputer', '~/.ssh/id_rsa.pub']
        label: simple-linode-
        password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        token: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  65363565316233613963653465613661316134333164623962643834383632646439306566623061
                  3938393939373039373135663239633162336530373738300a316661373731623538306164363434
                  31656434356431353734666633656534343237333662613036653137396235353833313430626534
                  3330323437653835660a303865636365303532373864613632323930343265343665393432326231
                  61313635653463333630636631336539643430326662373137303166303739616262643338373834
                  34613532353031333731336339396233623533326130376431346462633832353432316163373833
                  35316333626530643736636332323161353139306533633961376432623161626132353933373661
                  36663135323664663130
            

      Run the Ansible Playbook

      You are now ready to run the Create Linode Playbook. When you run the Playbook, a 1 GB Nanode will be deployed in the Newark data center. Note: you should run Ansible commands from the directory where your ansible.cfg file is located.

      1. Run your playbook to create your Linode instances.

        ansible-playbook ~/development/linode_create.yml
        

        You will see a similar output:

        PLAY [Create Linode] *********************************************************************
        
        TASK [Gathering Facts] *******************************************************************
        ok: [localhost]
        
        TASK [Create a new Linode.] **************************************************************
        changed: [localhost]
        
        PLAY RECAP *******************************************************************************
        localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
        

      linode_v4 Module Parameters

      Parameter Data type/Status Usage
      access_token string, required Your Linode API v4 access token. The token should have permission to read and write Linodes. The token can also be specified by exposing the LINODE_ACCESS_TOKEN environment variable.
      authorized_keys list A list of SSH public keys or SSH public key file locations on your local system, for example, ['averylongstring','~/.ssh/id_rsa.pub']. The public key will be stored in the /root/.ssh/authorized_keys file on your Linode. Ansible will use the public key to SSH into your Linodes as the root user and execute your Playbooks.
      group string, deprecated The Linode instance’s group. Please note, group labelling is deprecated but still supported. The encouraged method for marking instances is to use tags. This parameter must be provided to use the Linode dynamic inventory module.
      image string The Image ID to deploy the Linode disk from. Official Linode Images start with linode/, while your private images start with private/. For example, use linode/ubuntu18.04 to deploy a Linode instance with the Ubuntu 18.04 image. This is a required parameter only when creating Linode instances.

      To view a list of all available Linode images, issue the following command:

      curl https://api.linode.com/v4/images.

      label string, required The Linode instance label. The label is used by the module as the main determiner for idempotence and must be a unique value.

      Linode labels have the following constraints:

      • Must start with an alpha character.
      • May only consist of alphanumeric characters, dashes (-), underscores (_) or periods (.).
      • Cannot have two dashes (--), underscores (__), or periods (..) in a row.

      region string The region where the Linode will be located. This is a required parameter only when creating Linode instances.

      To view a list of all available regions, issue the following command:

      curl https://api.linode.com/v4/regions.

      root_pass string The password for the root user. If not specified, will be generated. This generated password will be available in the task success JSON.

      The root password must conform to the following constraints:

      • May only use alphanumerics, punctuation, spaces, and tabs.
      • Must contain at least two of the following character classes: upper-case letters, lower-case letters, digits, punctuation.

      state string, required The desired instance state. The accepted values are absent and present.
      tags list The user-defined labels attached to Linodes. Tags are used for grouping Linodes in a way that is relevant to the user.
      type string The Linode instance’s plan type. The plan type determines your Linode’s hardware resources and its pricing.

      To view a list of all available Linode types including pricing and specifications for each type, issue the following command:

      curl https://api.linode.com/v4/linode/types.

      The Linode Dynamic Inventory Plugin

      Ansible uses inventories to manage different hosts that make up your infrastructure. This allows you to execute tasks on specific parts of your infrastructure. By default, Ansible will look in /etc/ansible/hosts for an inventory, however, you can designate a different location for your inventory file and use multiple inventory files that represent your infrastructure. To support infrastructures that shift over time, Ansible offers the ability to track inventory from dynamic sources, like cloud providers. The Ansible dynamic inventory plugin for Linode can be used to source your inventory from Linode’s API v4. In this section, you will use the Linode plugin to source your Ansible deployed Linode inventory.

      Note

      The dynamic inventory plugin for Linode was enabled in the Ansible configuration file created in the Configure Ansible section of this guide.

      Configure the Plugin

      1. Configure the Ansible dynamic inventory plugin for Linode by creating a file named linode.yml.

        ~/development/linode.yml
        
        plugin: linode
        regions:
          - us-east
        groups:
          - example_group
        types:
          - g6-nanode-1
        • The configuration file will create an inventory for any Linodes on your account that are in the us-east region, part of the example_group group, and of type g6-nanode-1. Any Linodes that are not part of the example_group group, but that fulfill the us-east region and g6-nanode-1 type criteria, will be displayed as ungrouped. All other Linodes will be excluded from the dynamic inventory. For more information on all supported parameters, see the Plugin Parameters section.

      Run the Inventory Plugin

      1. Export your Linode API v4 access token to the shell environment. LINODE_ACCESS_TOKEN must be used as the environment variable name. Replace mytoken with your own access token.

        export LINODE_ACCESS_TOKEN='mytoken'
        
      2. Run the Linode dynamic inventory plugin.

        ansible-inventory -i ~/development/linode.yml --graph
        

        You should see a similar output. The output may vary depending on the Linodes already deployed to your account and the parameter values you pass.

        @all:
        |--@example_group:
        |  |--simple-linode-29
        

        For a more detailed output including all Linode instance configurations, issue the following command:

        ansible-inventory -i ~/development/linode.yml --graph --vars
        
      3. Before you can communicate with your Linode instances using the dynamic inventory plugin, you will need to add your Linode’s IPv4 address and label to your /etc/hosts file.

        The Linode Dynamic Inventory Plugin assumes that the Linodes in your account have labels that correspond to hostnames that are in your resolver search path, /etc/hosts. This means you will have to create an entry in your /etc/hosts file to map the Linode’s IPv4 address to its hostname.

        Note

        A pull request currently exists to support using a public IP, private IP or hostname. This change will enable the inventory plugin to be used with infrastructure that does not have DNS hostnames or hostnames that match Linode labels.

        To add your deployed Linode instance to the /etc/hosts file:

        • Retrieve your Linode instance’s IPv4 address:

          ansible-inventory -i ~/development/linode.yml --graph --vars | grep -E 'ipv4|simple-linode'
          

          Your output will resemble the following:

          |  |--simple-linode-36
          |  |  |--{ipv4 = [u'192.0.2.0']}
          |  |  |--{label = simple-linode-36}
          
        • Open the /etc/hosts file and add your Linode’s IPv4 address and label:

          /etc/hosts
          
          127.0.0.1       localhost
          192.0.2.0 simple-linode-29
                    
      4. Verify that you can communicate with your grouped inventory by pinging the Linodes. The ping command will use the dynamic inventory plugin configuration file to target example_group. The -u root option will run the command as root on the Linode hosts.

        ansible -m ping example_group -i ~/development/linode.yml -u root
        

        You should see a similar output:

        simple-linode-29 | SUCCESS => {
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python"
            },
            "changed": false,
            "ping": "pong"
        }
        

      Plugin Parameters

      Parameter Data type/Status Usage
      access_token string, required Your Linode API v4 access token. The token should have permission to read and write Linodes. The token can also be specified by exposing the LINODE_ACCESS_TOKEN environment variable.
      plugin string, required The plugin name. The value must always be linode in order to use the dynamic inventory plugin for Linode.
      regions list The Linode region with which to populate the inventory. For example, us-east is a possible value for this parameter.

      To view a list of all available regions, issue the following command:

      curl https://api.linode.com/v4/regions.

      types list The Linode type with which to populate the inventory. For example, g6-nanode-1 is a possible value for this parameter.

      To view a list of all available Linode types including pricing and specifications for each type, issue the following command:

      curl https://api.linode.com/v4/linode/types.

      groups list The Linode group with which to populate the inventory. Please note, group labelling is deprecated but still supported. The encouraged method for marking instances is to use tags. This parameter must be provided to use the Linode dynamic inventory module.

      Delete Your Resources

      1. To delete the Linode instance created in this guide, create a Delete Linode Playbook with the content shown in the example below. Replace the value of label with your Linode’s label:

        ~/development/linode_delete.yml
        
        - name: Delete Linode
          hosts: localhost
          vars_files:
            - ./group_vars/example_group/vars
          tasks:
          - name: Delete your Linode Instance.
            linode_v4:
              label: simple-linode-29
              state: absent
              
      2. Run the Delete Linode Playbook:

        ansible-playbook ~/development/linode_delete.yml
        

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      How to Deploy a Resilient Go Application to DigitalOcean Kubernetes


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker is a containerization tool used to provide applications with a filesystem holding everything they need to run, ensuring that the software will have a consistent run-time environment and will behave the same way regardless of where it is deployed. Kubernetes is a cloud platform for automating the deployment, scaling, and management of containerized applications.

      By leveraging Docker, you can deploy an application on any system that supports Docker with the confidence that it will always work as intended. Kubernetes, meanwhile, allows you to deploy your application across multiple nodes in a cluster. Additionally, it handles key tasks such as bringing up new containers should any of your containers crash. Together, these tools streamline the process of deploying an application, allowing you to focus on development.

      In this tutorial, you will build an example application written in Go and get it up and running locally on your development machine. Then you’ll containerize the application with Docker, deploy it to a Kubernetes cluster, and create a load balancer that will serve as the public-facing entry point to your application.

      Prerequisites

      Before you begin this tutorial, you will need the following:

      • A development server or local machine from which you will deploy the application. Although the instructions in this guide will largely work for most operating systems, this tutorial assumes that you have access to an Ubuntu 18.04 system configured with a non-root user with sudo privileges, as described in our Initial Server Setup for Ubuntu 18.04 tutorial.
      • The docker command-line tool installed on your development machine. To install this, follow Steps 1 and 2 of our tutorial on How to Install and Use Docker on Ubuntu 18.04.
      • The kubectl command-line tool installed on your development machine. To install this, follow this guide from the official Kubernetes documentation.
      • A free account on Docker Hub to which you will push your Docker image. To set this up, visit the Docker Hub website, click the Get Started button at the top-right of the page, and follow the registration instructions.
      • A Kubernetes cluster. You can provision a DigitalOcean Kubernetes cluster by following our Kubernetes Quickstart guide. You can still complete this tutorial if you provision your cluster from another cloud provider. Wherever you procure your cluster, be sure to set up a configuration file and ensure that you can connect to the cluster from your development server.

      Step 1 — Building a Sample Web Application in Go

      In this step, you will build a sample application written in Go. Once you containerize this app with Docker, it will serve My Awesome Go App in response to requests to your server’s IP address at port 3000.

      Get started by updating your server’s package lists if you haven’t done so recently:
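
      • sudo apt update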

      Then install Go by running:
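
      • sudo apt install golang-go

      This installs Go from Ubuntu's default repositories; any other supported method of installing Go will also work.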

      Next, make sure you're in your home directory and create a new directory which will contain all of your project files:
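
      • cd ~
      • mkdir go-app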

      Then navigate to this new directory:
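
      • cd go-app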

      Use nano or your preferred text editor to create a file named main.go which will contain the code for your Go application:
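
      • nano main.go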

      The first line in any Go source file is always a package statement that defines which code bundle the file belongs to. For executable files like this one, the package statement must point to the main package:

      go-app/main.go

      package main
      

      Following that, add an import statement where you can list all the libraries the application will need. Here, include fmt, which handles formatted text input and output, and net/http, which provides HTTP client and server implementations:

      go-app/main.go

      package main
      
      import (
        "fmt"
        "net/http"
      )
      

      Next, define a homePage function which will take in two arguments: http.ResponseWriter and a pointer to http.Request. In Go, a ResponseWriter interface is used to construct an HTTP response, while http.Request is an object representing an incoming request. Thus, this block reads incoming HTTP requests and then constructs a response:

      go-app/main.go

      . . .
      
      import (
        "fmt"
        "net/http"
      )
      
      func homePage(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "My Awesome Go App")
      }
      

      After this, add a setupRoutes function which will map incoming requests to their intended HTTP handler functions. In the body of this setupRoutes function, add a mapping of the / route to your newly defined homePage function. This tells the application to print the My Awesome Go App message even for requests made to unknown endpoints:

      go-app/main.go

      . . .
      
      func homePage(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "My Awesome Go App")
      }
      
      func setupRoutes() {
        http.HandleFunc("/", homePage)
      }
      

      And finally, add the following main function. This will print out a string indicating that your application has started. It will then call the setupRoutes function before listening and serving your Go application on port 3000.

      go-app/main.go

      . . .
      
      func setupRoutes() {
        http.HandleFunc("/", homePage)
      }
      
      func main() {
        fmt.Println("Go Web App Started on Port 3000")
        setupRoutes()
        http.ListenAndServe(":3000", nil)
      }
      

      After adding these lines, this is how the final file will look:

      go-app/main.go

      package main
      
      import (
        "fmt"
        "net/http"
      )
      
      func homePage(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "My Awesome Go App")
      }
      
      func setupRoutes() {
        http.HandleFunc("/", homePage)
      }
      
      func main() {
        fmt.Println("Go Web App Started on Port 3000")
        setupRoutes()
        http.ListenAndServe(":3000", nil)
      }
      

      Save and close this file. If you created this file using nano, do so by pressing CTRL + X, Y, then ENTER.

      Next, run the application using the following go run command. This will compile the code in your main.go file and run it locally on your development machine:
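
      • go run main.go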

      Output

      Go Web App Started on Port 3000

      This output confirms that the application is working as expected. It will run indefinitely, however, so close it by pressing CTRL + C.

      Throughout this guide, you will use this sample application to experiment with Docker and Kubernetes. To that end, continue reading to learn how to containerize your application with Docker.

      Step 2 — Dockerizing Your Go Application

      In its current state, the Go application you just created is only running on your development server. In this step, you'll make this new application portable by containerizing it with Docker. This will allow it to run on any machine that supports Docker containers. You will build a Docker image and push it to a central public repository on Docker Hub. This way, your Kubernetes cluster can pull the image back down and deploy it as a container within the cluster.

      The first step towards containerizing your application is to create a special script called a Dockerfile. A Dockerfile typically contains a list of instructions and arguments that run in sequential order so as to automatically perform certain actions on a base image or create a new one.

      Note: In this step, you will configure a simple Docker container that will build and run your Go application in a single stage. If, in the future, you want to reduce the size of the container where your Go applications will run in production, you may want to look into multi-stage builds.

      Create a new file named Dockerfile:
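
      • nano Dockerfile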

      At the top of the file, specify the base image needed for the Go app:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      

      Then create an app directory within the container that will hold the application's source files:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      

      Below that, add the following line which copies everything in the root directory into the app directory:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      

      Next, add the following line which changes the working directory to app, meaning that all the following commands in this Dockerfile will be run from that location:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      

      Add a line instructing Docker to run the go build -o main command, which compiles the binary executable of the Go app:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      RUN go build -o main .
      

      Then add the final line, which will run the binary executable:

      go-app/Dockerfile

      FROM golang:1.12.0-alpine3.9
      RUN mkdir /app
      ADD . /app
      WORKDIR /app
      RUN go build -o main .
      CMD ["/app/main"]
      

      Save and close the file after adding these lines.

      Now that you have this Dockerfile in the root of your project, you can create a Docker image based off of it using the following docker build command. This command includes the -t flag which, when passed the value go-web-app, will name the Docker image go-web-app and tag it.

      Note: In Docker, tags allow you to convey information specific to a given image, such as its version number. The following command doesn't provide a specific tag, so Docker will tag the image with its default tag: latest. If you want to give an image a custom tag, you would append the image name with a colon and the tag of your choice, like so:

      • docker build -t sammy/image_name:tag_name .

      Tagging an image like this can give you greater control over your images. For example, you could deploy an image tagged v1.1 to production, but deploy another tagged v1.2 to your pre-production or testing environment.

      The final argument you'll pass is the path: .. This specifies that you wish to build the Docker image from the contents of the current working directory. Also, be sure to update sammy to your Docker Hub username:

      • docker build -t sammy/go-web-app .

      This build command will read all of the lines in your Dockerfile, execute them in order, and then cache them, allowing future builds to run much faster:

      Output

      . . .
      Successfully built 521679ff78e5
      Successfully tagged go-web-app:latest

      Once this command finishes building, you will be able to see your image by running the docker images command:
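
      • docker images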

      Output

      REPOSITORY          TAG       IMAGE ID        CREATED         SIZE
      sammy/go-web-app    latest    4ee6cf7a8ab4    3 seconds ago   355MB

      Next, use the following command to create and start a container based on the image you just built. This command includes the -it flag, which specifies that the container will run in interactive mode. It also has the -p flag, which maps port 3000, the port on which the Go application runs on your development machine, to port 3000 in your Docker container:

      • docker run -it -p 3000:3000 sammy/go-web-app

      Output

      Go Web App Started on Port 3000

      If there is nothing else running on that port, you'll be able to see the application in action by opening up a browser and navigating to the following URL:

      http://your_server_ip:3000
      

      Note: If you're following this tutorial from your local machine instead of a server, visit the application by instead going to the following URL:

      http://localhost:3000
      

      Your containerized Go App

      After checking that the application works as expected in your browser, stop it by pressing CTRL + C in your terminal.

      When you deploy your containerized application to your Kubernetes cluster, you'll need to be able to pull the image from a centralized location. To that end, you can push your newly created image to your Docker Hub image repository.

      Run the following command to log in to Docker Hub from your terminal:
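
      • docker login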

      This will prompt you for your Docker Hub username and password. After entering them correctly, you will see Login Succeeded in the command's output.

      After logging in, push your new image up to Docker Hub using the docker push command, like so:

      • docker push sammy/go-web-app

      Once this command has successfully completed, you will be able to open up your Docker Hub account and see your Docker image there.

      Now that you've pushed your image to a central location, you're ready to deploy it to your Kubernetes cluster. First, though, we will walk through a brief process that will make it much less tedious to run kubectl commands.

      Step 3 — Improving Usability for kubectl

      By this point, you've created a functioning Go application and containerized it with Docker. However, the application still isn't publicly accessible. To resolve this, you will deploy your new Docker image to your Kubernetes cluster using the kubectl command line tool. Before doing this, though, let's make a small change to the Kubernetes configuration file that will help to make running kubectl commands less laborious.

      By default, when you run commands with the kubectl command-line tool, you have to specify the path of the cluster configuration file using the --kubeconfig flag. However, if your configuration file is named config and is stored in a directory named ~/.kube, kubectl will know where to look for the configuration file and will be able to pick it up without the --kubeconfig flag pointing to it.

      To that end, if you haven't already done so, create a new directory called ~/.kube:
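
      • mkdir -p ~/.kube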

      Then move your cluster configuration file to this directory, and rename it config in the process:

      • mv clusterconfig.yaml ~/.kube/config
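
      If you'd like to double-check that kubectl is now reading the file from its new location, you can print the configuration it has loaded using the config view subcommand; this check is optional:

      • kubectl config view

      The output should mirror the contents of ~/.kube/config, with certificate data redacted.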

      Moving forward, you won't need to specify the location of your cluster's configuration file when you run kubectl, as the command will be able to find it now that it's in the default location. Test out this behavior by running the following get nodes command:

      • kubectl get nodes

      This will display all of the nodes that reside within your Kubernetes cluster. In the context of Kubernetes, a node is a server or a worker machine on which one or more pods can be deployed:

      Output

      NAME                                        STATUS    ROLES     AGE    VERSION
      k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfd   Ready     <none>    1m     v1.13.5
      k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfi   Ready     <none>    1m     v1.13.5
      k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfv   Ready     <none>    1m     v1.13.5

      With that, you're ready to move on and deploy your application to your Kubernetes cluster. You will do this by creating two Kubernetes objects: one that will deploy the application to some pods in your cluster and another that will create a load balancer, providing an access point to your application.

      Step 4 — Creating a Deployment

      RESTful resources make up all of the persistent entities within a Kubernetes system, and in this context they're commonly referred to as Kubernetes objects. It's helpful to think of Kubernetes objects as the work orders you submit to Kubernetes: you list what resources you need and how they should work, and then Kubernetes will constantly work to ensure that they exist in your cluster.

      One kind of Kubernetes object, known as a deployment, is a set of identical, indistinguishable pods. In Kubernetes, a pod is a grouping of one or more containers which are able to communicate over the same shared network and interact with the same shared storage. A deployment runs more than one replica of the parent application at a time and automatically replaces any instances that fail, ensuring that your application is always available to serve user requests.

      In this step, you'll create a Kubernetes object description file, also known as a manifest, for a deployment. This manifest will contain all of the configuration details needed to deploy your Go app to your cluster.

      Begin by creating a deployment manifest in the root directory of your project: go-app/. For small projects such as this one, keeping manifests in the root directory minimizes the complexity. For larger projects, however, it may be beneficial to store your manifests in a separate subdirectory to keep everything organized.

      Create a new file called deployment.yml:

      Different versions of the Kubernetes API contain different object definitions, so at the top of this file you must define the apiVersion you're using to create this object. For the purpose of this tutorial, you will be using the apps/v1 grouping as it contains many of the core Kubernetes object definitions that you'll need in order to create a deployment. Add a field below apiVersion describing the kind of Kubernetes object you're creating. In this case, you're creating a Deployment:

      go-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      

      Then define the metadata for your deployment. A metadata field is required for every Kubernetes object as it contains information such as the unique name of the object. This name is useful as it allows you to distinguish different deployments from one another and identify them using names that are human-readable:

      go-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: go-web-app
      

      Next, you'll build out the spec block of your deployment.yml. A spec field is a requirement for every Kubernetes object, but its precise format differs for each type of object. In the case of a deployment, it can contain information such as the number of replicas you want to run. In Kubernetes, a replica is a copy of a pod; specifying a replica count tells Kubernetes how many instances of that pod to keep running in your cluster. Here, set the number of replicas to 5:

      go-app/deployment.yml

      . . .
      metadata:
        name: go-web-app
      spec:
        replicas: 5
      

      Next, create a selector block nested under the spec block. This will serve as a label selector for your pods. Kubernetes uses label selectors to define how the deployment finds the pods which it must manage.

      Within this selector block, define matchLabels and add the name label. Essentially, the matchLabels field tells Kubernetes what pods the deployment applies to. In this example, the deployment will apply to any pods with the name go-web-app:

      go-app/deployment.yml

      . . .
      spec:
        replicas: 5
        selector:
          matchLabels:
            name: go-web-app
      

      After this, add a template block. Every deployment creates a set of pods using the labels specified in a template block. The first subfield in this block is metadata which contains the labels that will be applied to all of the pods in this deployment. These labels are key/value pairs that are used as identifying attributes of Kubernetes objects. When you define your service later on, you can specify that you want all the pods with this name label to be grouped under that service. Set this name label to go-web-app:

      go-app/deployment.yml

      . . .
      spec:
        replicas: 5
        selector:
          matchLabels:
            name: go-web-app
        template:
          metadata:
            labels:
              name: go-web-app
      

      The second part of this template block is the spec block. This is different from the spec block you added previously, as this one applies only to the pods created by the template block, rather than the whole deployment.

      Within this spec block, add a containers field and once again define a name attribute. This name field defines the name of any containers created by this particular deployment. Below that, define the image you want to pull down and deploy. Be sure to change sammy to your own Docker Hub username:

      go-app/deployment.yml

      . . .
        template:
          metadata:
            labels:
              name: go-web-app
          spec:
            containers:
            - name: application
              image: sammy/go-web-app
      

      Following that, add an imagePullPolicy field set to IfNotPresent, which will direct the deployment to only pull an image if it has not already done so before. Lastly, add a ports block. There, define the containerPort, which should match the port number that your Go application listens on. In this case, the port number is 3000:

      go-app/deployment.yml

      . . .
          spec:
            containers:
            - name: application
              image: sammy/go-web-app
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
      

      The full version of your deployment.yml will look like this:

      go-app/deployment.yml

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: go-web-app
      spec:
        replicas: 5
        selector:
          matchLabels:
            name: go-web-app
        template:
          metadata:
            labels:
              name: go-web-app
          spec:
            containers:
            - name: application
              image: sammy/go-web-app
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
      

      Save and close the file.

      Next, apply your new deployment with the following command:

      • kubectl apply -f deployment.yml

      Note: For more information on all of the configuration available to you for deployments, please check out the official Kubernetes documentation here: Kubernetes Deployments
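
      If you'd like to confirm that the deployment was created and that its pods are starting up, you could check on them with kubectl. For example, the following commands list the deployment and the pods carrying the name: go-web-app label defined in your manifest:

      • kubectl get deployments
      • kubectl get pods -l name=go-web-app

      Once the rollout finishes, the deployment should report 5/5 replicas ready, matching the replicas value in deployment.yml.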

      In the next step, you'll create another kind of Kubernetes object which will manage how you access the pods that exist in your new deployment. This service will create a load balancer which will then expose a single IP address, and requests to this IP address will be distributed to the replicas in your deployment. This service will also handle port forwarding rules so that you can access your application over HTTP.

      Step 5 — Creating a Service

      Now that you have a successful Kubernetes deployment, you're ready to expose your application to the outside world. In order to do this, you'll need to define another kind of Kubernetes object: a service. This service will expose the same port on all of your cluster's nodes. Your nodes will then forward any incoming traffic on that port to the pods running your application.

      Note: For clarity, we will define this service object in a separate file. However, it is possible to group multiple resource manifests in the same YAML file, as long as they're separated by ---. See this page from the Kubernetes documentation for more details.

      Create a new file called service.yml:

      Start this file off by again defining the apiVersion and kind fields in a similar fashion to your deployment.yml file. This time, point the apiVersion field to v1, the core Kubernetes API group, which contains the Service object definition:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      

      Next, add the name of your service in a metadata block as you did in deployment.yml. This could be anything you like, but for clarity we will call it go-web-service:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      

      Next, create a spec block. This spec block will be different from the one included in your deployment, as it will contain the type of this service, as well as the port forwarding configuration and the selector.

      Add a field defining this service's type and set it to LoadBalancer. This will automatically provision a load balancer that will act as the main entry point to your application.

      Warning: The method for creating a load balancer outlined in this step will only work for Kubernetes clusters provisioned from cloud providers that also support external load balancers. Additionally, be advised that provisioning a load balancer from a cloud provider will incur additional costs. If this is a concern for you, you may want to look into exposing an external IP address using an Ingress.

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      spec:
        type: LoadBalancer
      

      Then add a ports block where you'll define how you want your apps to be accessed. Nested within this block, add the following fields:

      • name, pointing to http
      • port, pointing to port 80
      • targetPort, pointing to port 3000

      This will take incoming HTTP requests on port 80 and forward them to the targetPort of 3000. This targetPort is the same port on which your Go application is running:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          targetPort: 3000
      

      Lastly, add a selector block as you did in the deployment.yml file. This selector block is important, as it maps any deployed pods named go-web-app to this service:

      go-app/service.yml

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: go-web-service
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          targetPort: 3000
        selector:
          name: go-web-app
      

      After adding these lines, save and close the file. Following that, apply this service to your Kubernetes cluster by once again using the kubectl apply command like so:

      • kubectl apply -f service.yml

      This command will apply the new Kubernetes service as well as create a load balancer. This load balancer will serve as the public-facing entry point to your application running within the cluster.

      To view the application, you will need the new load balancer's IP address. Find it by running the following command:

      • kubectl get services

      Output

      NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
      go-web-service   LoadBalancer   10.245.107.189   203.0.113.20   80:30533/TCP   10m
      kubernetes       ClusterIP      10.245.0.1       <none>         443/TCP        3h4m

      You may have more than one service running, but find the one labeled go-web-service. Find the EXTERNAL-IP column and copy the IP address associated with go-web-service. In this example output, this IP address is 203.0.113.20. Then, paste the IP address into the URL bar of your browser to view the application running on your Kubernetes cluster.

      Note: When Kubernetes creates a load balancer in this manner, it does so asynchronously. Consequently, the kubectl get services command's output may show the EXTERNAL-IP address of the LoadBalancer remaining in a <pending> state for some time after running the kubectl apply command. If this is the case, wait a few minutes and try re-running the command to ensure that the load balancer was created and is functioning as expected.
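
      Alternatively, rather than re-running the command manually, you could leave it running with the --watch flag, which prints an updated line whenever the service changes; press CTRL + C to stop watching once the external IP appears:

      • kubectl get services --watch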

      The load balancer will take in the request on port 80 and forward it to one of the pods running within your cluster.

      Your working Go App!
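
      If you'd rather test from a terminal than a browser, you could also send a request to the load balancer with curl, substituting the EXTERNAL-IP value from your own output for the example address used here:

      • curl http://203.0.113.20

      The response should show the same content your application serves on port 3000, this time returned through the load balancer on port 80.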

      With that, you've created a Kubernetes service coupled with a load balancer, giving you a single, stable entry point to your application.

      Conclusion

      In this tutorial, you've built a Go application, containerized it with Docker, and then deployed it to a Kubernetes cluster. You then created a load balancer that provides a resilient entry point to this application, ensuring that it will remain highly available even if one of the nodes in your cluster fails. You can use this tutorial to deploy your own Go application to a Kubernetes cluster, or continue learning other Kubernetes and Docker concepts with the sample application you created in Step 1.

      Moving forward, you could map your load balancer's IP address to a domain name that you control so that you can access the application through a human-readable web address rather than the load balancer IP. Additionally, you may want to explore other Kubernetes tutorials to build on what you've set up here.

      Finally, if you'd like to learn more about Go, we encourage you to check out our series on How To Code in Go.


