      How To Deploy Multiple Environments in Your Terraform Project Without Duplicating Code


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Terraform offers advanced features that become increasingly useful as your project grows in size and complexity. It’s possible to alleviate the cost of maintaining complex infrastructure definitions for multiple environments by structuring your code to minimize repetitions and introducing tool-assisted workflows for easier testing and deployment.

Terraform associates each state with a backend, which determines where and how state is stored and retrieved. Every state has only one backend and is tied to an infrastructure configuration. Certain backends, such as local or s3, may contain multiple states. In that case, each pairing of a state with the infrastructure configuration in the backend describes a workspace. Workspaces allow you to deploy multiple distinct instances of the same infrastructure configuration without storing them in separate backends.
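
As an illustration of what a multi-state backend might look like, here is a sketch of an s3 backend block. It is illustrative only (the bucket name and region are placeholders, and this tutorial itself uses the default local backend), but it shows where each workspace's state would live:

terraform {
  backend "s3" {
    bucket = "example-terraform-states" # hypothetical bucket name
    key    = "project/terraform.tfstate"
    region = "eu-central-1"

    # Non-default workspaces store their states under this key prefix
    workspace_key_prefix = "workspaces"
  }
}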

      In this tutorial, you’ll first deploy multiple infrastructure instances using different workspaces. You’ll then deploy a stateful resource, which, in this tutorial, will be a DigitalOcean Volume. Finally, you’ll reference pre-made modules from the Terraform Registry, which you can use to supplement your own.

      Prerequisites

      • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions for this in the How to Generate a Personal Access Token tutorial.
      • Terraform installed on your local machine and a project set up with the DO provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-advanced, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

      Note: We have specifically tested this tutorial using Terraform 0.13.

      Deploying Multiple Infrastructure Instances Using Workspaces

Multiple workspaces are useful when you want to deploy or test a modified version of your main infrastructure without creating a separate project and setting up authentication keys again. Once you have developed and tested a feature using the separate state, you can incorporate the new code into the main workspace and possibly delete the additional state. When you initialize a Terraform project, regardless of backend, Terraform creates a workspace called default. It is always present and you can never delete it.

However, multiple workspaces alone are not a suitable solution for maintaining separate environments, such as staging and production. This is because workspaces only track the state; they do not store the code or its modifications.

Since workspaces do not track the actual code, you should manage the code separation between multiple workspaces at the version control system (VCS) level by matching them to their infrastructure variants. How you achieve this depends on the VCS tool itself; for example, in Git, branches would be a fitting abstraction. To make it easier to manage the code for multiple environments, you can also break it up into reusable modules, so that you avoid repeating similar code for each environment.
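
As a minimal sketch of that approach, assuming a hypothetical shared module at ./modules/droplet that accepts name and size variables, each environment's directory (tracked on its own branch or path in your VCS) could instantiate it with different parameters:

environments/staging/main.tf

module "web" {
  # Hypothetical shared module reused by every environment
  source = "../../modules/droplet"
  name   = "web-staging"
  size   = "s-1vcpu-1gb"
}

environments/production/main.tf

module "web" {
  source = "../../modules/droplet"
  name   = "web-production"
  size   = "s-2vcpu-4gb"
}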

      Deploying Resources in Workspaces

      You’ll now create a project that deploys a Droplet, which you’ll apply from multiple workspaces.

      You’ll store the Droplet definition in a file called droplets.tf.

Assuming you’re in the terraform-advanced directory, create and open it for editing by running:

• nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      

      This definition will create a Droplet running Ubuntu 18.04 with one CPU core and 1 GB RAM in the fra1 region. Its name will contain the name of the current workspace it is deployed from. When you’re done, save and close the file.

      Apply the project for Terraform to run its actions with:

      • terraform apply -var "do_token=${DO_PAT}"

      Your output will be similar to the following:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-default"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

      Enter yes when prompted to deploy the Droplet in the default workspace.

The name of the Droplet will be web-default, because the workspace you start with is called default. You can list the workspaces to confirm that it’s the only one available:

• terraform workspace list

      You’ll receive the following output:

      Output

      * default

      The asterisk (*) means that you currently have that workspace selected.

      Create and switch to a new workspace called testing, which you’ll use to deploy a different Droplet, by running workspace new:

      • terraform workspace new testing

      You’ll have output similar to:

      Output

Created and switched to workspace "testing"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Plan the deployment of the Droplet again by running:

      • terraform plan -var "do_token=${DO_PAT}"

      The output will be similar to the previous run:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-testing"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

      Notice that Terraform plans to deploy a Droplet called web-testing, which it has named differently from web-default. This is because the default and testing workspaces have separate states and have no knowledge of each other’s resources—even though they stem from the same code.
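
Since terraform.workspace is an ordinary expression, you can also let a single configuration vary by workspace. The following variation is a sketch, not part of this tutorial's files; it would give every non-default workspace a smaller Droplet:

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-18-04-x64"
  name   = "web-${terraform.workspace}"
  region = "fra1"

  # Hypothetical variation: full size in default, small size elsewhere
  size   = terraform.workspace == "default" ? "s-2vcpu-4gb" : "s-1vcpu-1gb"
}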

To confirm that you’re in the testing workspace, output the current one you’re in by running workspace show:

• terraform workspace show

      The output will be the name of the current workspace:

      Output

      testing

      To delete a workspace, you first need to destroy all its deployed resources. Then, if it’s active, you need to switch to another one using workspace select. Since the testing workspace here is empty, you can switch to default right away:

      • terraform workspace select default

You’ll receive output from Terraform confirming the switch:

      Output

      Switched to workspace "default".

      You can then delete it by running workspace delete:

      • terraform workspace delete testing

      Terraform will then perform the deletion:

      Output

      Deleted workspace "testing"!

      You can destroy the Droplet you’ve deployed in the default workspace by running:

      • terraform destroy -var "do_token=${DO_PAT}"

      Enter yes when prompted to finish the process.

      In this section, you’ve worked in multiple Terraform workspaces. In the next section, you’ll deploy a stateful resource.

      Deploying Stateful Resources

      Stateless resources do not store data, so you can create and replace them quickly, because they are not unique. Stateful resources, on the other hand, contain data that is unique or not simply re-creatable; therefore, they require persistent data storage.

Since you may end up destroying such resources, or multiple resources may require their data, it’s best to store the data in a separate entity, such as a DigitalOcean Volume.

      Volumes are objects that you can attach to Droplets (servers), but are separate from them, and provide additional storage space. In this step, you’ll define the Volume and connect it to a Droplet in droplets.tf.

Open it for editing:

• nano droplets.tf

      Add the following lines:

      droplets.tf

      resource "digitalocean_droplet" "web" {
        image  = "ubuntu-18-04-x64"
        name   = "web-${terraform.workspace}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      }
      
      resource "digitalocean_volume" "volume" {
        region                  = "fra1"
        name                    = "new-volume"
        size                    = 10
        initial_filesystem_type = "ext4"
        description             = "New Volume for Droplet"
      }
      
      resource "digitalocean_volume_attachment" "volume_attachment" {
        droplet_id = digitalocean_droplet.web.id
        volume_id  = digitalocean_volume.volume.id
      }
      

      Here you define two new resources, the Volume itself and a Volume attachment. The Volume will be 10GB, formatted as ext4, called new-volume, and located in the same region as the Droplet. To connect the Volume to the Droplet, since they are separate entities, you define a Volume attachment object. volume_attachment takes the Droplet and Volume IDs and instructs the DigitalOcean cloud to make the Volume available to the Droplet as a disk device.
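
If you’d like to verify the pairing after applying, you could optionally add output values to the file. These output names are illustrative, not part of the tutorial’s required code:

output "volume_id" {
  # ID of the Volume created above
  value = digitalocean_volume.volume.id
}

output "attached_droplet_id" {
  # ID of the Droplet the Volume is attached to
  value = digitalocean_volume_attachment.volume_attachment.droplet_id
}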

      When you’re done, save and close the file.

      Plan this configuration by running:

      • terraform plan -var "do_token=${DO_PAT}"

Terraform will plan the following actions:

      Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-default"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_volume.volume will be created
  + resource "digitalocean_volume" "volume" {
      + description             = "New Volume for Droplet"
      + droplet_ids             = (known after apply)
      + filesystem_label        = (known after apply)
      + filesystem_type         = (known after apply)
      + id                      = (known after apply)
      + initial_filesystem_type = "ext4"
      + name                    = "new-volume"
      + region                  = "fra1"
      + size                    = 10
      + urn                     = (known after apply)
    }

  # digitalocean_volume_attachment.volume_attachment will be created
  + resource "digitalocean_volume_attachment" "volume_attachment" {
      + droplet_id = (known after apply)
      + id         = (known after apply)
      + volume_id  = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
...

      The output details that Terraform would create a Droplet, a Volume, and a Volume attachment, which connects the Volume to the Droplet.

      You’ve now defined and connected a Volume (a stateful resource) to a Droplet. In the next section, you’ll review public, pre-made Terraform modules that you can incorporate in your project.

      Referencing Pre-made Modules

Aside from creating your own custom modules for your projects, you can also use pre-made modules and providers from other developers, which are publicly available on the Terraform Registry.

      In the modules section you can search the database of available modules and sort by provider in order to find the module with the functionality you need. Once you’ve found one, you can read its description, which lists the inputs and outputs the module provides, as well as its external module and provider dependencies.

      Terraform Registry - SSH key Module

You’ll now add the DigitalOcean SSH key module to your project. You’ll store the code separately from existing definitions, in a file called ssh-key.tf. Create and open it for editing by running:

• nano ssh-key.tf

      Add the following lines:

      ssh-key.tf

      module "ssh-key" {
        source         = "clouddrove/ssh-key/digitalocean"
        key_path       = "~/.ssh/id_rsa.pub"
        key_name       = "new-ssh-key"
        enable_ssh_key = true
      }
      

This code defines an instance of the clouddrove/ssh-key/digitalocean module from the registry and sets some of the parameters it offers. It adds a public SSH key to your account by reading it from ~/.ssh/id_rsa.pub.

      When you’re done, save and close the file.

Before you plan this code, you must download the referenced module by running:

• terraform init

      You’ll receive output similar to the following:

      Output

Initializing modules...
Downloading clouddrove/ssh-key/digitalocean 0.13.0 for ssh-key...
- ssh-key in .terraform/modules/ssh-key

Initializing the backend...

Initializing provider plugins...
- Using previously-installed digitalocean/digitalocean v1.22.2

Terraform has been successfully initialized!
...

      You can now plan the code for the changes:

      • terraform plan -var "do_token=${DO_PAT}"

      You’ll receive output similar to this:

      Output

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...
  # module.ssh-key.digitalocean_ssh_key.default[0] will be created
  + resource "digitalocean_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "devops"
      + public_key  = "ssh-rsa ... demo@clouddrove"
    }

Plan: 4 to add, 0 to change, 0 to destroy.
...

      The output shows that you would create the SSH key resource, which means that you downloaded and invoked the module from your code.
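
To actually use the uploaded key when creating a Droplet, you would reference the module’s outputs in the Droplet’s ssh_keys argument. The exact output names depend on the module, so check its page on the Terraform Registry; assuming it exports the key’s ID as id, a sketch might look like:

resource "digitalocean_droplet" "web" {
  image  = "ubuntu-18-04-x64"
  name   = "web-${terraform.workspace}"
  region = "fra1"
  size   = "s-1vcpu-1gb"

  # Hypothetical output name; consult the module's documentation
  ssh_keys = [module.ssh-key.id]
}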

      Conclusion

      Bigger projects can make use of some advanced features Terraform offers to help reduce complexity and make maintenance easier. Workspaces allow you to test new additions to your code without touching the stable main deployments. You can also couple workspaces with a version control system to track code changes. Using pre-made modules can also shorten development time, but may incur additional expenses or time in the future if the module becomes obsolete.

      For further resources on using Terraform, check out our How To Manage Infrastructure With Terraform series.




      How To Declare Values For Multiple Properties In a CSS Rule



      Part of the Series:
      How To Build a Website With CSS

      This tutorial is part of a series on creating and customizing this website with CSS, a stylesheet language used to control the presentation of websites. You may follow the entire series to recreate the demonstration website and gain familiarity with CSS or use the methods described here for other CSS website projects.

      Before proceeding, we recommend that you have some knowledge of HTML, the standard markup language used to display documents in a web browser. If you don’t have familiarity with HTML, you can follow the first ten tutorials of our series How To Build a Website With HTML before starting this series.

      Introduction

      In this tutorial, you will learn how to declare values for multiple properties in a CSS rule. Declaring multiple properties in a single rule allows you to apply many style instructions—such as size, color, and alignment—to an element all at once. We’ll also explore creating a variety of CSS rules that allow us to apply different styles to different pieces of content in a single HTML document.

      Prerequisites

To follow this tutorial, make sure you have set up the necessary files and folders as instructed in a previous tutorial in this series, How To Set Up Your CSS and HTML Practice Project.

      Creating a CSS Rule With Multiple Declarations

To add more than one declaration to a CSS rule, try modifying your <h1> rule in your styles.css file (or adding the entire code snippet if you haven’t been following the tutorial series) so that it includes the additional declarations shown here:

styles.css

h1 {
        color: blue;
        font-size: 100px;
        font-family: Courier;
        text-align: center;
      }
      

      Save the file and reload your HTML document in your browser. (For instructions on loading an HTML file, please visit our tutorial step How To View An Offline HTML File In Your Browser). Your text should now be located in the center of the page, have a size of 100 pixels, and render with the Courier font:

      Styled header text

      In the next section, we’ll add more CSS rules to extend the styling possibilities for the webpage’s content.

      Creating Multiple CSS Rules To Style HTML Content

      In this section we’ll add some more text to the index.html file using an HTML <p> element. We’ll experiment with modifying its properties using a new CSS ruleset that only applies to <p> tags.

In the index.html file, add a line containing <p>Some paragraph text</p> below the existing <h1>A sample title</h1> line that you added in the How To Understand and Create CSS Rules tutorial:

      index.html

      <h1>A sample title</h1>
      <p>Some paragraph text</p>
      

      Save the index.html file and reload it in the browser window to check how the file is displayed. Your browser should render a blue heading that says “A sample title” and an unstyled paragraph below it that says “Some paragraph text” like the following example:

      Webpage output with a blue `<h1>` heading and an unstyled `<p>` element

      Next, let’s add a CSS rule to style the <p> element. Return to your styles.css file and add the following ruleset at the bottom of the file:

      styles.css

      . . .
      p {
        color: green;
        font-size: 20px;
        font-family: Arial, Helvetica, sans-serif;
        text-align: center;
      }
      

Save the file and reload index.html in the browser window to check how it is displayed. Your <p> text should now have the style you declared in the CSS rule you just created:

      Webpage output with styled `<p>` text

      Now that you have CSS rules for the <h1> and <p> elements, any text you mark up with these tags in your HTML document will take on the style rules that you declared for these elements in your styles.css file.
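
As an aside, both rules you’ve written declare text-align: center. When several selectors share a declaration, CSS lets you group them with a comma so the shared property lives in one rule. This optional refactor of the examples above behaves identically:

styles.css

h1, p {
  text-align: center;
}

h1 {
  color: blue;
  font-size: 100px;
  font-family: Courier;
}

p {
  color: green;
  font-size: 20px;
  font-family: Arial, Helvetica, sans-serif;
}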

      Further Practice

      If you want to continue experimenting with CSS rules, try creating CSS rulesets for different HTML text elements—such as <h2>, <h3>, and <h4>—and using them to modify text in your index.html file. If you’re stuck, you can copy the CSS rules in the following example and add them to your styles.css file:

      styles.css

      . . .
      h2 {
        color: red;
        font-size: 40px;
      }
      
      h3 {
        color: purple;
        font-size: 50px;
      }
      
      h4 {
        color: green;
        font-size: 60px;
      }
      

      Save your file and then add the following HTML content to your index.html file:

      index.html

      <h2> This is red text with a size of 40 pixels. </h2>
      <h3> This is purple text with a size of 50 pixels. </h3>
<h4> This is green text with a size of 60 pixels. </h4>
      

      Save the file and load index.html in your browser. You should receive the following results:

      Webpage content styled with multiple CSS rules

      Conclusion

      In this tutorial you experimented with specifying values for multiple properties using CSS. You also created multiple CSS rules for styling text content in an HTML document. You will expand upon both these skills when you begin building the demonstration website later on in the tutorial series. In the next tutorial, you will begin exploring how to style images with CSS.




      How To Manage Multiple Servers with Ansible Ad Hoc Commands


      Introduction

      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers. With a minimalist design intended to get users up and running quickly, it allows you to control one to hundreds of systems from a central location with either playbooks or ad hoc commands.

      Unlike playbooks — which consist of collections of tasks that can be reused — ad hoc commands are tasks that you don’t perform frequently, such as restarting a service or retrieving information about the remote systems that Ansible manages.
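
To make the contrast concrete, the connectivity check used throughout this guide could also be written as a small, reusable playbook. This is a minimal sketch (a hypothetical playbook.yml, not required for anything that follows), run with ansible-playbook -i inventory playbook.yml:

- hosts: all
  tasks:
    - name: Check connectivity to all hosts
      ping: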

      In this cheat sheet guide, you’ll learn how to use Ansible ad hoc commands to perform common tasks such as installing packages, copying files, and restarting services on one or more remote servers, from an Ansible control node.

      Prerequisites

      In order to follow this guide, you’ll need:

      • One Ansible control node. This guide assumes your control node is an Ubuntu 20.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 20.04.
      • Two or more Ansible hosts. An Ansible host is any machine that your Ansible control node is configured to automate. This guide assumes your Ansible hosts are remote Ubuntu 20.04 servers. Make sure each Ansible host has:
        • The Ansible control node’s SSH public key added to the authorized_keys of a system user. This user can be either root or a regular user with sudo privileges. To set this up, you can follow Step 2 of How to Set Up SSH Keys on Ubuntu 20.04.
      • An inventory file set up on the Ansible control node. Make sure you have a working inventory file containing all your Ansible hosts. To set this up, please refer to the guide on How To Set Up Ansible Inventories. Then, make sure you’re able to connect to your nodes by running the connection test outlined in the section Testing Connection to Ansible Hosts.

      Testing Connection to Ansible Hosts

The following command will test connectivity between your Ansible control node and all your Ansible hosts. This command uses the current system user and its corresponding SSH key as the remote login, and includes the -m option, which tells Ansible to run the ping module. It also features the -i flag, which tells Ansible to ping the hosts listed in the specified inventory file:

      • ansible all -i inventory -m ping

      If this is the first time you’re connecting to these servers via SSH, you’ll be asked to confirm the authenticity of the hosts you’re connecting to via Ansible. When prompted, type yes and then hit ENTER to confirm.

      You should get output similar to this:

      Output

server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

      Once you get a "pong" reply back from a host, it means the connection is live and you’re ready to run Ansible commands on that server.

      Adjusting Connection Options

      By default, Ansible tries to connect to the nodes as a remote user with the same name as your current system user, using its corresponding SSH keypair.

      To connect as a different remote user, append the command with the -u flag and the name of the intended user:

      • ansible all -i inventory -m ping -u sammy

      If you’re using a custom SSH key to connect to the remote servers, you can provide it at execution time with the --private-key option:

      • ansible all -i inventory -m ping --private-key=~/.ssh/custom_id

      Note: For more information on how to connect to nodes, please refer to our How to Use Ansible guide, which demonstrates more connection options.

      Once you’re able to connect using the appropriate options, you can adjust your inventory file to automatically set your remote user and private key, in case they are different from the default values assigned by Ansible. Then, you won’t need to provide those parameters in the command line.

      The following example inventory file sets up the ansible_user variable only for the server1 server:

      ~/ansible/inventory

      server1 ansible_host=203.0.113.111 ansible_user=sammy
      server2 ansible_host=203.0.113.112
      

      Ansible will now use sammy as the default remote user when connecting to the server1 server.

      To set up a custom SSH key, include the ansible_ssh_private_key_file variable as follows:

      ~/ansible/inventory

      server1 ansible_host=203.0.113.111 ansible_ssh_private_key_file=/home/sammy/.ssh/custom_id
      server2 ansible_host=203.0.113.112
      

In both cases, we have set up custom values only for server1. If you want to use the same settings for multiple servers, you can define group variables for them in the inventory:

      ~/ansible/inventory

      [group_a]
      203.0.113.111
      203.0.113.112
      
      [group_b]
      203.0.113.113
      
      
      [group_a:vars]
      ansible_user=sammy
      ansible_ssh_private_key_file=/home/sammy/.ssh/custom_id
      

      This example configuration will assign a custom user and SSH key only for connecting to the servers listed in group_a.
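
With those group variables in place, targeting the group picks up the custom user and key automatically, with no extra command-line flags:

• ansible group_a -i inventory -m ping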

      Defining Targets for Command Execution

      When running ad hoc commands with Ansible, you can target individual hosts, as well as any combination of groups, hosts and subgroups. For instance, this is how you would check connectivity for every host in a group named servers:

      • ansible servers -i inventory -m ping

      You can also specify multiple hosts and groups by separating them with colons:

      • ansible server1:server2:dbservers -i inventory -m ping

To include an exception in a pattern, use an exclamation mark, prefixed by the escape character \, as follows. This command will run on all servers from group1, except server2:

• ansible group1:\!server2 -i inventory -m ping

In case you’d like to run a command only on servers that are part of both group1 and group2, for instance, you should use & instead, again prefixed with a \ escape character:

• ansible group1:\&group2 -i inventory -m ping

      For more information on how to use patterns when defining targets for command execution, please refer to Step 5 of our guide on How to Set Up Ansible Inventories.

      Running Ansible Modules

      Ansible modules are pieces of code that can be invoked from playbooks and also from the command-line to facilitate executing procedures on remote nodes. Examples include the apt module, used to manage system packages on Ubuntu, and the user module, used to manage system users. The ping command used throughout this guide is also a module, typically used to test connection from the control node to the hosts.

      Ansible ships with an extensive collection of built-in modules, some of which require the installation of additional software in order to provide full functionality. You can also create your own custom modules using your language of choice.

      To execute a module with arguments, include the -a flag followed by the appropriate options in double quotes, like this:

      ansible target -i inventory -m module -a "module options"
      

      As an example, this will use the apt module to install the package tree on server1:

      • ansible server1 -i inventory -m apt -a "name=tree"

      Running Bash Commands

      When a module is not provided via the -m option, the command module is used by default to execute the specified command on the remote server(s).

      This allows you to execute virtually any command that you could normally execute via an SSH terminal, as long as the connecting user has sufficient permissions and there aren’t any interactive prompts.

      This example executes the uptime command on all servers from the specified inventory:

      • ansible all -i inventory -a "uptime"

      Output

server1 | CHANGED | rc=0 >>
 14:12:18 up 55 days,  2:15,  1 user,  load average: 0.03, 0.01, 0.00
server2 | CHANGED | rc=0 >>
 14:12:19 up 10 days,  6:38,  1 user,  load average: 0.01, 0.02, 0.00

      Using Privilege Escalation to Run Commands with sudo

      If the command or module you want to execute on remote hosts requires extended system privileges or a different system user, you’ll need to use Ansible’s privilege escalation module, become. This module is an abstraction for sudo as well as other privilege escalation software supported by Ansible on different operating systems.

      For instance, if you wanted to run a tail command to output the latest log messages from Nginx’s error log on a server named server1 from inventory, you would need to include the --become option as follows:

      • ansible server1 -i inventory -a "tail /var/log/nginx/error.log" --become

      This would be the equivalent of running a sudo tail /var/log/nginx/error.log command on the remote host, using the current local system user or the remote user set up within your inventory file.

      Privilege escalation systems such as sudo often require that you confirm your credentials by prompting you to provide your user’s password. That would cause Ansible to fail a command or playbook execution. You can then use the --ask-become-pass or -K option to make Ansible prompt you for that sudo password:

      • ansible server1 -i inventory -a "tail /var/log/nginx/error.log" --become -K

      Installing and Removing Packages

      The following example uses the apt module to install the nginx package on all nodes from the provided inventory file:

      • ansible all -i inventory -m apt -a "name=nginx" --become -K

To remove a package, include the state argument and set it to absent:

      • ansible all -i inventory -m apt -a "name=nginx state=absent" --become -K
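
The apt module accepts further arguments you may recognize from apt-get. For instance, update_cache refreshes the package index before installing, and state=latest upgrades the package if a newer version is available:

• ansible all -i inventory -m apt -a "name=nginx state=latest update_cache=yes" --become -K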

      Copying Files

With the copy module, you can copy files from the control node to the managed nodes. The following command copies a local text file to all remote hosts in the specified inventory file:

      • ansible all -i inventory -m copy -a "src=./file.txt dest=~/myfile.txt"

The copy module can also copy files between two locations on the remote node itself. Setting remote_src=yes indicates that the source path is on the remote server rather than on the control node, so the following duplicates ~/myfile.txt to ./file.txt on each remote host:

      • ansible all -i inventory -m copy -a "src=~/myfile.txt remote_src=yes dest=./file.txt"
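
If you instead want to download a file from a remote host to your control node, use the fetch module. In this example, flat=yes stores the file at dest directly, rather than under a host-named subdirectory:

• ansible server1 -i inventory -m fetch -a "src=~/myfile.txt dest=./file.txt flat=yes"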

      Changing File Permissions

      To modify permissions on files and directories on your remote nodes, you can use the file module.

The following command will adjust permissions on a file named file.txt located at /var/www on the remote host. It will set the file’s mode to 600, which enables read and write permissions only for the file’s owner. Additionally, it will set the ownership of that file to a user and a group called sammy:

      • ansible all -i inventory -m file -a "dest=/var/www/file.txt mode=600 owner=sammy group=sammy" --become -K

      Because the file is located in a directory typically owned by root, we might need sudo permissions to modify its properties. That’s why we include the --become and -K options. These will use Ansible’s privilege escalation system to run the command with extended privileges, and it will prompt you to provide the sudo password for the remote user.

      Restarting Services

      You can use the service module to manage services running on the remote nodes managed by Ansible. This will require extended system privileges, so make sure your remote user has sudo permissions and you include the --become option to use Ansible’s privilege escalation system. Using -K will prompt you to provide the sudo password for the connecting user.

To restart the nginx service on all hosts in a group called webservers, for instance, you would run:

      • ansible webservers -i inventory -m service -a "name=nginx state=restarted" --become -K
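
The state argument also accepts values such as started and stopped, and the enabled argument controls whether the service starts at boot. For example, to ensure nginx is both running and enabled:

• ansible webservers -i inventory -m service -a "name=nginx state=started enabled=yes" --become -K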

      Restarting Servers

      Although Ansible doesn’t have a dedicated module to restart servers, you can issue a bash command that calls the /sbin/reboot command on the remote host.

      Restarting the server will require extended system privileges, so make sure your remote user has sudo permissions and you include the --become option to use Ansible’s privilege escalation system. Using -K will prompt you to provide the sudo password for the connecting user.

      Warning: The following command will fully restart the server(s) targeted by Ansible. That might cause temporary disruption to any applications that rely on those servers.

      To restart all servers in a webservers group, for instance, you would run:

      • ansible webservers -i inventory -a "/sbin/reboot" --become -K

      Gathering Information About Remote Nodes

      The setup module returns detailed information about the remote systems managed by Ansible, also known as system facts.

      To obtain the system facts for server1, run:

      • ansible server1 -i inventory -m setup

      This will print a large amount of JSON data containing details about the remote server environment. To print only the most relevant information, include the "gather_subset=min" argument as follows:

      • ansible server1 -i inventory -m setup -a "gather_subset=min"

      To print only specific items of the JSON, you can use the filter argument. This will accept a wildcard pattern used to match strings, similar to fnmatch. For example, to obtain information about both the ipv4 and ipv6 network interfaces, you can use *ipv* as filter:

      • ansible server1 -i inventory -m setup -a "filter=*ipv*"

      Output

server1 | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "203.0.113.111",
            "10.0.0.1"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a4f5:16ff:fe75:e758"
        ],
        "ansible_default_ipv4": {
            "address": "203.0.113.111",
            "alias": "eth0",
            "broadcast": "203.0.113.111",
            "gateway": "203.0.113.1",
            "interface": "eth0",
            "macaddress": "a6:f5:16:75:e7:58",
            "mtu": 1500,
            "netmask": "255.255.240.0",
            "network": "203.0.113.0",
            "type": "ether"
        },
        "ansible_default_ipv6": {}
    },
    "changed": false
}

      If you’d like to check disk usage, you can run a Bash command calling the df utility, as follows:

      • ansible all -i inventory -a "df -h"

      Output

server1 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  624K  798M   1% /run
/dev/vda1       155G  2.3G  153G   2% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda15      105M  3.6M  101M   4% /boot/efi
tmpfs           798M     0  798M   0% /run/user/0
server2 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           395M  608K  394M   1% /run
/dev/vda1        78G  2.2G   76G   3% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda15      105M  3.6M  101M   4% /boot/efi
tmpfs           395M     0  395M   0% /run/user/0

      Conclusion

      In this guide, we demonstrated how to use Ansible ad hoc commands to manage remote servers, including how to execute common tasks such as restarting a service or copying a file from the control node to the remote servers managed by Ansible. We’ve also seen how to gather information from the remote nodes using limiting and filtering parameters.

      As an additional resource, you can check Ansible’s official documentation on ad hoc commands.


