
      How To Set Up Ansible Inventories


      Introduction

      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers, with a minimalist design intended to get users up and running quickly. Ansible uses an inventory file to keep track of which hosts are part of your infrastructure, and how to reach them for running commands and playbooks.

      There are multiple ways in which you can set up your Ansible inventory file, depending on your environment and project needs. In this guide, we’ll demonstrate how to create inventory files and organize servers into groups and subgroups, how to set up host variables, and how to use patterns to control the execution of Ansible commands and playbooks per host and per group.

      Prerequisites

      In order to follow this guide, you’ll need:

      • One Ansible control node: an Ubuntu 20.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 20.04.
      • Two or more Ansible Hosts: two or more remote Ubuntu 20.04 servers.

      Step 1 — Creating a Custom Inventory File

      Upon installation, Ansible creates an inventory file that is typically located at /etc/ansible/hosts. This is the default location used by Ansible when a custom inventory file is not provided with the -i option during a playbook or command execution.

      Even though you can use this file without problems, using per-project inventory files is a good practice to avoid mixing servers when executing commands and playbooks. Having per-project inventory files will also facilitate sharing your provisioning setup with collaborators, provided you include the inventory file within the project’s code repository.
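      To avoid passing -i on every invocation, you can also point Ansible at a project’s inventory through an ansible.cfg file placed in the project directory. This is a minimal sketch; the relative path assumes the directory layout used in this guide:

```ini
[defaults]
inventory = ./inventory
```

      With this file in place, commands run from the project directory will use ./inventory by default.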

      To get started, access your home folder and create a new directory to hold your Ansible files:

      • cd ~
      • mkdir ansible

      Move to that directory and open a new inventory file using your text editor of choice. Here, we’ll use nano:

      • cd ansible
      • nano inventory
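      If you prefer not to use an interactive editor, the directory and a starter inventory can also be created non-interactively; this sketch uses the same example addresses as the rest of this guide:

```shell
# Create the project directory and write the inventory file in one step
mkdir -p ~/ansible
cat > ~/ansible/inventory <<'EOF'
203.0.113.111
203.0.113.112
203.0.113.113
server_hostname
EOF
```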

      A list of your nodes, with one server per line, is enough for setting up a functional inventory file. Hostnames and IP addresses are interchangeable:

      ~/ansible/inventory

      203.0.113.111
      203.0.113.112
      203.0.113.113
      server_hostname
      

      Once you have an inventory file set up, you can use the ansible-inventory command to validate and obtain information about your Ansible inventory:

      • ansible-inventory -i inventory --list

      Output

      {
          "_meta": {
              "hostvars": {}
          },
          "all": {
              "children": [
                  "ungrouped"
              ]
          },
          "ungrouped": {
              "hosts": [
                  "203.0.113.111",
                  "203.0.113.112",
                  "203.0.113.113",
                  "server_hostname"
              ]
          }
      }

      Even though we haven’t set up any groups within our inventory, the output shows two distinct groups that are automatically inferred by Ansible: all and ungrouped. As the name suggests, all is used to refer to all servers from your inventory file, no matter how they are organized. The ungrouped group is used to refer to servers that aren’t listed within a group.

      Running Commands and Playbooks with Custom Inventories

      To run Ansible commands with a custom inventory file, use the -i option as follows:

      • ansible all -i inventory -m ping

      This would execute the ping module on all hosts listed in your custom inventory file.

      Similarly, this is how you execute Ansible playbooks with a custom inventory file:

      • ansible-playbook -i inventory playbook.yml

      Note: For more information on how to connect to nodes and how to run commands and playbooks, please refer to our How to Use Ansible guide.

      So far, we’ve seen how to create a basic inventory and how to use it for running commands and playbooks. In the next step, we’ll see how to organize nodes into groups and subgroups.

      Step 2 — Organizing Servers Into Groups and Subgroups

      Within the inventory file, you can organize your servers into different groups and subgroups. Beyond helping to keep your hosts in order, this practice will enable you to use group variables, a feature that can greatly facilitate managing multiple staging environments with Ansible.

      A host can be part of multiple groups. The following inventory file in INI format demonstrates a setup with four groups: webservers, dbservers, development, and production. You’ll notice that the servers are grouped by two different qualities: their purpose (web and database), and how they’re being used (development and production).

      ~/ansible/inventory

      [webservers]
      203.0.113.111
      203.0.113.112
      
      [dbservers]
      203.0.113.113
      server_hostname
      
      [development]
      203.0.113.111
      203.0.113.113
      
      [production]
      203.0.113.112
      server_hostname
      

      If you were to run the ansible-inventory command again with this inventory file, you would see the following arrangement:

      Output

      {
          "_meta": {
              "hostvars": {}
          },
          "all": {
              "children": [
                  "dbservers",
                  "development",
                  "production",
                  "ungrouped",
                  "webservers"
              ]
          },
          "dbservers": {
              "hosts": [
                  "203.0.113.113",
                  "server_hostname"
              ]
          },
          "development": {
              "hosts": [
                  "203.0.113.111",
                  "203.0.113.113"
              ]
          },
          "production": {
              "hosts": [
                  "203.0.113.112",
                  "server_hostname"
              ]
          },
          "webservers": {
              "hosts": [
                  "203.0.113.111",
                  "203.0.113.112"
              ]
          }
      }
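      Ansible also accepts inventories written in YAML. The same four groups could be expressed as follows (a sketch; the file name inventory.yml is an assumption):

```yaml
all:
  children:
    webservers:
      hosts:
        203.0.113.111:
        203.0.113.112:
    dbservers:
      hosts:
        203.0.113.113:
        server_hostname:
    development:
      hosts:
        203.0.113.111:
        203.0.113.113:
    production:
      hosts:
        203.0.113.112:
        server_hostname:
```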

      It is also possible to aggregate multiple groups as children under a “parent” group. The “parent” is then called a metagroup. The following example demonstrates another way to organize the previous inventory using metagroups to achieve a comparable, yet more granular arrangement:

      ~/ansible/inventory

      [web_dev]
      203.0.113.111
      
      [web_prod]
      203.0.113.112
      
      [db_dev]
      203.0.113.113
      
      [db_prod]
      server_hostname
      
      [webservers:children]
      web_dev
      web_prod
      
      [dbservers:children]
      db_dev
      db_prod
      
      [development:children]
      web_dev
      db_dev
      
      [production:children]
      web_prod
      db_prod
      

      The more servers you have, the more it makes sense to break groups down or create alternative arrangements so that you can target smaller groups of servers as needed.
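      When many hosts follow a naming scheme, Ansible’s INI inventory also supports numeric and alphabetic ranges, which keeps large groups short. The hostnames below are hypothetical:

```ini
[webservers]
www[01:50].example.com

[dbservers]
db-[a:f].example.com
```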

      Step 3 — Setting Up Host Aliases

      You can use aliases to name servers in a way that facilitates referencing those servers later, when running commands and playbooks.

      To use an alias, include a variable named ansible_host after the alias name, containing the corresponding IP address or hostname of the server that should respond to that alias:

      ~/ansible/inventory

      server1 ansible_host=203.0.113.111
      server2 ansible_host=203.0.113.112
      server3 ansible_host=203.0.113.113
      server4 ansible_host=server_hostname
      

      If you were to run the ansible-inventory command with this inventory file, you would see output similar to this:

      Output

      {
          "_meta": {
              "hostvars": {
                  "server1": {
                      "ansible_host": "203.0.113.111"
                  },
                  "server2": {
                      "ansible_host": "203.0.113.112"
                  },
                  "server3": {
                      "ansible_host": "203.0.113.113"
                  },
                  "server4": {
                      "ansible_host": "server_hostname"
                  }
              }
          },
          "all": {
              "children": [
                  "ungrouped"
              ]
          },
          "ungrouped": {
              "hosts": [
                  "server1",
                  "server2",
                  "server3",
                  "server4"
              ]
          }
      }

      Notice how the servers are now referenced by their aliases instead of their IP addresses or hostnames. This makes it easier to target individual servers when running commands and playbooks.
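      Aliases work inside groups as well. The following sketch combines the alias arrangement with the groups from the previous step:

```ini
[webservers]
server1 ansible_host=203.0.113.111
server2 ansible_host=203.0.113.112

[dbservers]
server3 ansible_host=203.0.113.113
server4 ansible_host=server_hostname
```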

      Step 4 — Setting Up Host Variables

      It is possible to use the inventory file to set up variables that will change Ansible’s default behavior when connecting and executing commands on your nodes. This is in fact what we did in the previous step, when setting up host aliases. The ansible_host variable tells Ansible where to find the remote nodes, in case an alias is used to refer to that server.

      Inventory variables can be set per host or per group. In addition to customizing Ansible’s default settings, these variables are also accessible from your playbooks, which enables further customization for individual hosts and groups.

      The following example shows how to define the default remote user when connecting to each of the nodes listed in this inventory file:

      ~/ansible/inventory

      server1 ansible_host=203.0.113.111 ansible_user=sammy
      server2 ansible_host=203.0.113.112 ansible_user=sammy
      server3 ansible_host=203.0.113.113 ansible_user=myuser
      server4 ansible_host=server_hostname ansible_user=myuser
      

      You could also create a group to aggregate the hosts with similar settings, and then set up their variables at the group level:

      ~/ansible/inventory

      [group_a]
      server1 ansible_host=203.0.113.111 
      server2 ansible_host=203.0.113.112
      
      [group_b]
      server3 ansible_host=203.0.113.113 
      server4 ansible_host=server_hostname
      
      [group_a:vars]
      ansible_user=sammy
      
      [group_b:vars]
      ansible_user=myuser
      

      This inventory arrangement would generate the following output with ansible-inventory:

      Output

      {
          "_meta": {
              "hostvars": {
                  "server1": {
                      "ansible_host": "203.0.113.111",
                      "ansible_user": "sammy"
                  },
                  "server2": {
                      "ansible_host": "203.0.113.112",
                      "ansible_user": "sammy"
                  },
                  "server3": {
                      "ansible_host": "203.0.113.113",
                      "ansible_user": "myuser"
                  },
                  "server4": {
                      "ansible_host": "server_hostname",
                      "ansible_user": "myuser"
                  }
              }
          },
          "all": {
              "children": [
                  "group_a",
                  "group_b",
                  "ungrouped"
              ]
          },
          "group_a": {
              "hosts": [
                  "server1",
                  "server2"
              ]
          },
          "group_b": {
              "hosts": [
                  "server3",
                  "server4"
              ]
          }
      }

      Notice that all inventory variables are listed within the _meta node in the JSON output produced by ansible-inventory.
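      Beyond ansible_host and ansible_user, other common connection variables can be set the same way, per host or per group. The values below are placeholders for illustration only:

```ini
server1 ansible_host=203.0.113.111 ansible_user=sammy ansible_port=2222
server2 ansible_host=203.0.113.112 ansible_user=sammy ansible_ssh_private_key_file=~/.ssh/id_rsa
server3 ansible_host=203.0.113.113 ansible_python_interpreter=/usr/bin/python3
```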

      Step 5 — Using Patterns to Target Execution of Commands and Playbooks

      When executing commands and playbooks with Ansible, you must provide a target. Patterns allow you to target specific hosts, groups, or subgroups in your inventory file. They’re very flexible, supporting regular expressions and wildcards.

      Consider the following inventory file:

      ~/ansible/inventory

      [webservers]
      203.0.113.111
      203.0.113.112
      
      [dbservers]
      203.0.113.113
      server_hostname
      
      [development]
      203.0.113.111
      203.0.113.113
      
      [production]
      203.0.113.112
      server_hostname
      

      Now imagine you need to execute a command targeting only the database servers that are running in production. In this example, only server_hostname matches those criteria; however, you might have a large number of database servers in that group. Instead of individually targeting each server, you could use the following pattern:

      • ansible dbservers:&production -m ping

      This pattern would target only servers that are present in both the dbservers and production groups. If you wanted to do the opposite, targeting only servers that are present in the dbservers group but not in the production group, you would use the following pattern instead:

      • ansible dbservers:!production -m ping

      The following table contains a few different examples of common patterns you can use when running commands and playbooks with Ansible:

      Pattern          Target
      all              All hosts from your inventory file
      host1            A single host (host1)
      host1:host2      Both host1 and host2
      group1           A single group (group1)
      group1:group2    All servers in group1 and group2
      group1:&group2   Only servers that are in both group1 and group2
      group1:!group2   Servers in group1, except those also in group2

      For more advanced pattern options, such as using positional patterns and regex to define targets, please refer to the official Ansible documentation on patterns.

      Conclusion

      In this guide, we took a detailed look at Ansible inventories. We’ve seen how to organize nodes into groups and subgroups, how to set up inventory variables, and how to use patterns to target different groups of servers when running commands and playbooks.

      In the next part of this series, we’ll see how to manage multiple servers with Ansible ad-hoc commands.




      Ansible Adhoc Commands – A Tutorial


      Updated by Linode. Contributed by Avi.


      In this tutorial, you’ll learn about several Ansible adhoc commands which are used by system and devops engineers.

      Adhoc commands are commands that you run from the command line, outside of a playbook. These commands run on one or more managed nodes and perform a simple, quick task; most often, these are tasks that you don’t need to repeat. For example, if you want to reload Apache across a cluster of web servers, you can run a single adhoc command to achieve that task.

      Note

      In Ansible, all modules can be executed in either a playbook or through an adhoc command.

      The basic syntax for invoking an adhoc command is:

      ansible host_pattern -m module_name -a "module_options"
      

      Before You Begin

      To run the commands in this tutorial, you’ll need:

      • A workstation or server with the Ansible command line tool installed on it that will act as the control node. The Set Up the Control Node section of the Getting Started With Ansible guide has instructions for setting up a Linode as a control node. Installation instructions for non-Linux distributions can be found on the Ansible documentation site.

      • At least one other server that will be managed by Ansible. Some commands in this guide will target a non-root user on this server. This user should have sudo privileges.

      Note

      The commands in this guide will be run from the control node and will target a host named Client. Your control node’s Ansible inventory should be configured so that at least one of your managed nodes has this name. The Create an Ansible Inventory section of the Getting Started With Ansible guide outlines how to set up an inventory file.

      Note

      Alternatively, you can modify the commands in this guide to use a different host name.
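      For reference, a minimal inventory entry defining such a host could look like the following; the address and user name are placeholders:

```ini
Client ansible_host=192.0.2.4 ansible_user=non_root_user
```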

      Basic Commands

      Ping

      To check that you can reach your managed node, use the ping module:

      ansible -m ping Client
      
        
      node1 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false,
          "ping": "pong"
      }
      
      

      Run with Privilege Escalation

      This adhoc command demonstrates how a non-root user on the managed node can gain the privileges of a root user when executing a module. Specifically, this example shows how to use privilege escalation to run the fdisk command through the shell module:

      ansible Client -m shell -a 'fdisk -l' -u non_root_user --become -K
      
        
      BECOME password:
      node1 | CHANGED | rc=0 >>
      Disk /dev/sda: 79.51 GiB, 85362475008 bytes, 166723584 sectors
      Disk model: QEMU HARDDISK
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk /dev/sdb: 512 MiB, 536870912 bytes, 1048576 sectors
      Disk model: QEMU HARDDISK
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      
      
      • The -u option is used to specify the user on the managed node.

        Note

        By default, Ansible will try to establish a connection to the managed node under the same user that you execute the Ansible CLI with on the control node.

      • The --become option is used to execute the command with the privileges of the root user.

      • The -K option is used to prompt for the privilege escalation password of the user.

      Reboot a Managed Node

      Below is a command that reboots the managed node:

      ansible Client -a "/sbin/reboot" -f 1
      

      This command omits the -m option that specifies the module. When the module is not specified, the command module is the default that’s used.

      The command module is similar to the shell module in that both will execute a command that you pass to it. The shell module will run the command through a shell on the managed node, while the command module will not run it through a shell.
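      In practice, this means shell features such as pipes and redirection only work through the shell module; the command module passes its arguments literally, so a pipe character would be treated as just another argument. The distinction can be sketched locally, without Ansible:

```shell
# Write a small test file, then use a pipe, which only works when a
# shell interprets the command line (as the shell module does)
printf 'banana\napple\n' > /tmp/fruits.txt
sort /tmp/fruits.txt | head -n 1
```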

      Note

      The -f option is used to define the number of forks that Ansible will use on the control node when running your command.

      Note

      If your managed node is a Linode, then Linode’s shutdown watchdog, Lassie, needs to be enabled for the reboot to succeed. This is because a Linode is not able to turn itself on; instead, Linode’s host environment must boot the Linode.

      Collecting System Diagnostics

      Check Free Disk Space

      This command is used to check the free disk space on all of a managed node’s mounted disks. It lists all the filesystems present on the managed node along with the filesystem size, space used, and space available in a human-readable format:

      ansible Client -a "df -h"
      
        
      node1 | CHANGED | rc=0 >>
      Filesystem      Size  Used Avail Use% Mounted on
      udev            1.9G     0  1.9G   0% /dev
      tmpfs           394M  596K  394M   1% /run
      /dev/sda         79G  2.6G   72G   4% /
      tmpfs           2.0G  124K  2.0G   1% /dev/shm
      tmpfs           5.0M     0  5.0M   0% /run/lock
      tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
      tmpfs           394M     0  394M   0% /run/user/0
      
      

      This command checks the available and used space on a specific filesystem:

      ansible Client -m shell -a 'df -h /dev/sda'
      
        
      node1 | CHANGED | rc=0 >>
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda         79G  2.6G   72G   4% /
      
      

      Check Memory and CPU Usage

      Use the free command with the shell module to see the free and used memory of your managed node in megabytes:

      ansible Client -m shell -a 'free -m'
      
        
      node1 | CHANGED | rc=0 >>
                    total        used        free      shared  buff/cache   available
      Mem:           3936         190        3553           0         192        3523
      Swap:           511           0         511
      
      

      Use the mpstat command with the shell module to check CPU usage:

      ansible Client -m shell -a 'mpstat -P ALL'
      
        
      node1 | CHANGED | rc=0 >>
      Linux 5.3.0-40-generic (localhost)      03/21/2020      _x86_64_        (2 CPU)
      
      07:41:27 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
      07:41:27 PM  all    0.96    0.00    0.72    0.08    0.00    0.02    0.01    0.00    0.00   98.21
      07:41:27 PM    0    0.93    0.00    0.73    0.06    0.00    0.03    0.01    0.00    0.00   98.24
      07:41:27 PM    1    1.00    0.00    0.71    0.09    0.00    0.01    0.01    0.00    0.00   98.17
      
      

      Check System Uptime

      This Ansible command will show how long your managed nodes have been up and running:

      ansible Client -a "uptime"
      
        
      node1 | CHANGED | rc=0 >>
       19:40:11 up 8 min,  2 users,  load average: 0.00, 0.02, 0.00
      
      

      File Transfer

      Copy Files

      The copy module is used to transfer a file or directory from the control node to your managed nodes by defining the source and destination paths. You can define the file owner and file permissions in the command:

      cd ~
      echo "Hello World" > test.txt
      ansible Client -m copy -a 'src=test.txt dest=/etc/ owner=root mode=0644' -u non_root_user --become -K
      
        
      BECOME password:
      node1 | CHANGED => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": true,
          "checksum": "13577023221e91069c21d8f10a4b90f8192d6a26",
          "dest": "/etc/test.txt",
          "gid": 0,
          "group": "root",
          "md5sum": "eb662c21e683b643f0fcb5997d7bbccf",
          "mode": "0644",
          "owner": "root",
          "size": 18,
          "src": "/root/.ansible/tmp/ansible-tmp-1584820375.14-54524496813834/source",
          "state": "file",
          "uid": 0
      }
      
      

      You can also use Ansible to check whether your file got copied to your destination location:

      sudo ansible Client -m shell -a 'ls -l /etc/test*'
      
        
      node1 | CHANGED | rc=0 >>
      -rw-r--r-- 1 root root 12 Jun  1 22:35 /etc/test.txt
      
      

      Fetch Files

      The fetch module is used to transfer a file from a managed node to the control node. After the command runs successfully, the changed variable in Ansible’s output will be set to true.

      ansible Client -m fetch -a 'src=/etc/test.txt dest=/etc/'
      
        
      node1 | CHANGED => {
          "changed": true,
          "checksum": "648a6a6ffffdaa0badb23b8baf90b6168dd16b3a",
          "dest": "/etc/192.0.2.4/etc/test.txt",
          "md5sum": "e59ff97941044f85df5297e1c302d260",
          "remote_checksum": "648a6a6ffffdaa0badb23b8baf90b6168dd16b3a",
          "remote_md5sum": null
      }
      
      

      Note that the fetched file was placed into /etc/192.0.2.4/etc/test.txt. By default, the fetch module will put fetched files into separate directories for each hostname that you’re fetching from. This prevents a file from one managed node from overwriting the file from another managed node.

      To avoid creating these directories, include the flat=yes option:

      ansible Client -m fetch -a 'src=/etc/test.txt dest=/etc/ flat=yes'
      
        
      node1 | SUCCESS => {
          "changed": false,
          "checksum": "648a6a6ffffdaa0badb23b8baf90b6168dd16b3a",
          "dest": "/etc/test.txt",
          "file": "/etc/test.txt",
          "md5sum": "e59ff97941044f85df5297e1c302d260"
      }
      
      

      Create Directories

      The file module is used to create, remove, and set permissions on files and directories, and to create symlinks. This command will create a directory at /root/linode/new/ on the managed node with the owner and permissions defined in the options:

      ansible Client -m file -a "dest=/root/linode/new/ mode=755 owner=root group=root state=directory" -u non_root_user --become -K
      
        
      node1 | CHANGED => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": true,
          "gid": 0,
          "group": "root",
          "mode": "0755",
          "owner": "root",
          "path": "/root/linode/new",
          "size": 4096,
          "state": "directory",
          "uid": 0
      }
      
      

      Note that all intermediate directories that did not exist will also be created. In this example, if the linode/ subdirectory did not already exist, then it was created.

      Managing Packages

      Install a Package

      The package module can be used to install a new package on the managed node. This command installs the latest version of NGINX:

      ansible Client -m package -a 'name=nginx state=present' -u non_root_user --become -K
      
        
      node1 | CHANGED => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "cache_update_time": 1584821061,
          "cache_updated": false,
          "changed": true,
          "stderr": "",
          "stderr_lines": [],
          "stdout_lines": [
              "Reading package lists...",
              "Building dependency tree...",
              "Unpacking nginx (1.16.1-0ubuntu2.1) ...",
              "Setting up libxpm4:amd64 (1:3.5.12-1) ...",
              "Setting up nginx-common (1.16.1-0ubuntu2.1) ...",
              "Setting up nginx-core (1.16.1-0ubuntu2.1) ...",
              "Setting up nginx (1.16.1-0ubuntu2.1) ..."
          ]
      }
      
      

      Note

      The package module works across distributions. There are also modules for specific package managers (e.g. the apt module and the yum module). These modules offer more options that are specific to those package managers.

      Uninstall a Package

      To uninstall a package, set state=absent in the command’s options:

      ansible Client -m package -a 'name=nginx state=absent' -u non_root_user --become -K
      
        
      node1 | CHANGED => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": true,
          "stderr": "",
          "stderr_lines": [],
          "stdout_lines": [
              "Reading package lists...",
              "Building dependency tree ...",
              "  nginx-core",
              "Use 'sudo apt autoremove' to remove them.",
              "The following packages will be REMOVED:",
              "  nginx*",
              "Removing nginx (1.16.1-0ubuntu2.1) ..."
          ]
      }
      
      

      Managing Services

      Start a Service

      Use the service module to start a service on the managed node. This command will start and enable the NGINX service:

      ansible Client -m service -a 'name=nginx state=started enabled=yes' -u non_root_user --become -K
      
        
      node1 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false,
          "enabled": true,
          "name": "nginx",
          "state": "started",
          "status": {
              "ActiveEnterTimestamp": "Sat 2020-03-21 20:04:35 UTC",
              "ActiveEnterTimestampMonotonic": "1999615481",
              "ActiveExitTimestampMonotonic": "0",
              "ActiveState": "active",
              "After": "system.slice systemd-journald.socket network.target sysinit.target basic.target",
              "AllowIsolate": "no",
              "AmbientCapabilities": "",
              "AssertResult": "yes",
              "AssertTimestamp": "Sat 2020-03-21 20:04:35 UTC",
              "AssertTimestampMonotonic": "1999560256",
              "Before": "multi-user.target shutdown.target",
          }
      }
      
      

      Stop a Service

      When you change the state to stopped, the service will stop running.

      ansible Client -m service -a 'name=nginx state=stopped' -u non_root_user --become -K
      
        
      node1 | CHANGED => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": true,
          "name": "nginx",
          "state": "stopped",
          "status": {
              "ActiveEnterTimestamp": "Sat 2020-03-21 20:04:35 UTC",
              "ActiveEnterTimestampMonotonic": "1999615481",
              "ActiveExitTimestampMonotonic": "0",
              "ActiveState": "active",
              "After": "system.slice systemd-journald.socket network.target sysinit.target basic.target",
              "AllowIsolate": "no",
              "AmbientCapabilities": "",
              "AssertResult": "yes",
              "AssertTimestamp": "Sat 2020-03-21 20:04:35 UTC",
      }
      }
      
      

      Gathering Facts

      The setup module can be used to gather information about your managed nodes:

      ansible Client -m setup
      
        
      node1 | SUCCESS => {
          "ansible_facts": {
              "ansible_all_ipv4_addresses": [
                  "192.0.2.4"
              ],
              "ansible_all_ipv6_addresses": [
                  "2400:8904::f03c:92ff:fee9:dcb3",
                  "fe80::f03c:92ff:fee9:dcb3"
              ],
              "ansible_apparmor": {
                  "status": "enabled"
              },
              "ansible_architecture": "x86_64",
              "ansible_bios_date": "04/01/2014",
              "ansible_bios_version": "rel-1.12.0-0-ga698c8995f-prebuilt.qemu.org",
              "ansible_cmdline": {
                  "BOOT_IMAGE": "/boot/vmlinuz-5.3.0-40-generic",
                  "console": "ttyS0,19200n8",
                  "net.ifnames": "0",
                  "ro": true,
                  "root": "/dev/sda"
              },
              "ansible_date_time": {
                  "date": "2020-03-21",
                  "day": "21",
                  "epoch": "1584821656",
                  "hour": "20",
                  "iso8601": "2020-03-21T20:14:16Z",
                  "iso8601_basic": "20200321T201416267047",
                  "iso8601_basic_short": "20200321T201416",
                  "iso8601_micro": "2020-03-21T20:14:16.267127Z",
                  "minute": "14",
                  "month": "03",
                  "second": "16",
                  "time": "20:14:16",
                  "tz": "UTC",
                  "tz_offset": "+0000",
                  "weekday": "Saturday",
                  "weekday_number": "6",
                  "weeknumber": "11",
                  "year": "2020"
              },
              "ansible_default_ipv4": {
                  "address": "192.0.2.4",
                  "alias": "eth0",
                  "broadcast": "192.0.2.255",
                  "gateway": "192.0.2.1",
                  "interface": "eth0",
                  "macaddress": "f2:3c:92:e9:dc:b3",
                  "mtu": 1500,
                  "netmask": "255.255.255.0",
                  "network": "192.0.2.0",
                  "type": "ether"
              },
              "gather_subset": [
                  "all"
              ],
              "module_setup": true
          },
          "changed": false
      }
      
      

      Filtering Facts

      Using the filter option with the setup module will limit what is returned by the module. This command lists the details of your managed nodes’ installed distributions:

      ansible Client -m setup -a "filter=ansible_distribution*"
      
        
      node1 | SUCCESS => {
          "ansible_facts": {
              "ansible_distribution": "Ubuntu",
              "ansible_distribution_file_parsed": true,
              "ansible_distribution_file_path": "/etc/os-release",
              "ansible_distribution_file_variety": "Debian",
              "ansible_distribution_major_version": "19",
              "ansible_distribution_release": "eoan",
              "ansible_distribution_version": "19.10",
              "discovered_interpreter_python": "/usr/bin/python"
          },
          "changed": false
      }
      
      

      This guide is published under a CC BY-ND 4.0 license.



      Source link

      Automating the Complexity Out of Server Setup with Ansible



      Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible are typically used to streamline the process of automating server setup by establishing standard procedures for new servers while also reducing human error associated with manual setups.

      In this tech talk, you’ll learn how to create and execute Ansible playbooks to automate your server infrastructure setup. We’ll see some of the most important Ansible features and how to leverage them to create clean and flexible automation for your DigitalOcean Droplets.

      About the Presenter

      Erika Heidi is a software engineer and DevOps practitioner turned writer, passionate about producing and presenting technical content for a variety of audiences. As a long-term Linux adopter and open source enthusiast, Erika is focused on lowering the barrier of entry to the technologies empowering modern web application ecosystems today. Erika is a senior technical writer at the DigitalOcean Community.


