
      How to Use Ansible: A Reference Guide


      Ansible Cheat Sheet

      Introduction

      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers.

      This cheat sheet-style guide provides a quick reference to commands and practices commonly used when working with Ansible. For an overview of Ansible and how to install and configure it, please check our guide on how to install and configure Ansible on Ubuntu 18.04.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • Jump to any section that is relevant to the task you are trying to complete.
      • When you see highlighted text in this guide’s commands, substitute it with the hosts, usernames, and IP addresses from your own inventory.

      Ansible Glossary

      The following Ansible-specific terms are used throughout this guide:

      • Control Machine / Node: a system where Ansible is installed and configured to connect and execute commands on nodes.
      • Node: a server controlled by Ansible.
      • Inventory File: a file that contains information about the servers Ansible controls, typically located at /etc/ansible/hosts.
      • Playbook: a file containing a series of tasks to be executed on a remote server.
      • Role: a collection of playbooks and other files that are relevant to a goal such as installing a web server.
      • Play: a full Ansible run from start to finish. A play can involve several playbooks and roles, included from a single playbook that acts as an entry point.

      If you’d like to practice the commands used in this guide with a working Ansible playbook, you can use this playbook from our guide on Automating Initial Server Setup with Ansible on Ubuntu 18.04. You’ll need at least one remote server to use as node.
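As a point of reference, a minimal inventory file might look like the following. The group names and IP addresses here are hypothetical placeholders; replace them with your own servers:

```ini
[servers]
203.0.113.111
203.0.113.112

[webservers]
203.0.113.111

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```

      Hosts can belong to more than one group, and variables set under a group (such as ansible_python_interpreter above) apply to every host in that group.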

      Testing Connectivity to Nodes

      To test that Ansible is able to connect and run commands and playbooks on your nodes, you can use the following command:

      • ansible all -m ping

      The ping module will test whether you have valid credentials for connecting to the nodes defined in your inventory file, and whether Ansible is able to run Python scripts on the remote server. A pong reply means Ansible is ready to run commands and playbooks on that node.

      Connecting as a Different User

      By default, Ansible tries to connect to the nodes as your current system user, using its corresponding SSH keypair. To connect as a different user, append the command with the -u flag and the name of the intended user:

      • ansible all -m ping -u sammy

      The same is valid for ansible-playbook:

      • ansible-playbook myplaybook.yml -u sammy

      Using a Custom SSH Key

      If you're using a custom SSH key to connect to the remote servers, you can provide it at execution time with the --private-key option:

      • ansible all -m ping --private-key=~/.ssh/custom_id

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --private-key=~/.ssh/custom_id

      Using Password-Based Authentication

      If you need to use password-based authentication in order to connect to the nodes, you need to append the option --ask-pass to your Ansible command.

      This will make Ansible prompt you for the password of the user on the remote server that you're attempting to connect as:

      • ansible all -m ping --ask-pass

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --ask-pass

      Providing the sudo Password

      If the remote user needs to provide a password in order to run sudo commands, you can include the option --ask-become-pass in your Ansible command. This will prompt you for the remote user’s sudo password:

      • ansible all -m ping --ask-become-pass

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --ask-become-pass

      Using a Custom Inventory File

      The default inventory file is typically located at /etc/ansible/hosts, but you can also use the -i option to point to custom inventory files when running Ansible commands and playbooks. This is useful for setting up per-project inventories that can be included in version control systems such as Git:

      • ansible all -m ping -i my_custom_inventory

      The same option is valid for ansible-playbook:

      • ansible-playbook myplaybook.yml -i my_custom_inventory

      Using a Dynamic Inventory File

      Ansible supports inventory scripts for building dynamic inventory files. This is useful if your inventory fluctuates, with servers being created and destroyed often.

      You can find a number of open source inventory scripts on the official Ansible GitHub repository. After downloading the desired script to your Ansible control machine and setting up any required information — such as API credentials — you can use the executable as a custom inventory with any Ansible command that supports this option.

      The following command uses Ansible's DigitalOcean inventory script with a ping command to check connectivity to all current active servers:

      • ansible all -m ping -i digital_ocean.py

      For more details on how to use dynamic inventory files, please refer to the official Ansible documentation.
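To illustrate the contract a dynamic inventory script has to fulfill, here is a minimal sketch in Python: when Ansible invokes the script with --list, it must print a JSON document describing groups and hosts. The group names, addresses, and variables below are hypothetical placeholders:

```python
#!/usr/bin/env python3
# Minimal dynamic inventory sketch. Ansible calls this script with
# --list and parses the JSON printed on stdout.
import json
import sys


def build_inventory() -> dict:
    # Hypothetical inventory: two web servers and one database server.
    return {
        "webservers": {"hosts": ["203.0.113.10", "203.0.113.11"]},
        "databases": {"hosts": ["203.0.113.20"]},
        # The _meta key lets Ansible skip one --host call per host.
        "_meta": {"hostvars": {"203.0.113.20": {"ansible_user": "sammy"}}},
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # --host <name> queries are answered via _meta above.
        print(json.dumps({}))
```

      Once the script is marked executable, it can be passed to -i just like the digital_ocean.py example above.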

      Running ad-hoc Commands

      To execute any command on a node, use the -a option followed by the command you want to run, in quotes.

      This will execute uname -a on all the nodes in your inventory:

      • ansible all -a "uname -a"

      It is also possible to run Ansible modules with the option -m. The following command would install the package vim on server1 from your inventory:

      • ansible server1 -m apt -a "name=vim"

      Before making changes to your nodes, you can conduct a dry run to predict how the servers would be affected by your command. This can be done by including the --check option:

      • ansible server1 -m apt -a "name=vim" --check

      Running Playbooks

      To run a playbook and execute all the tasks defined within it, use the ansible-playbook command:

      • ansible-playbook myplaybook.yml
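For reference, a minimal myplaybook.yml might look like the following sketch. The target group, privilege escalation, and package are hypothetical choices for illustration:

```yaml
---
- hosts: all
  become: true
  tasks:
    - name: Install vim
      apt:
        name: vim
        state: present
```

      A playbook is a YAML list of plays; here a single play targets all hosts in the inventory and runs one task with sudo privileges.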

      To override the default hosts option in the playbook and limit execution to a certain group or host, include the option -l in your command:

      • ansible-playbook -l server1 myplaybook.yml

      Getting Information about a Play

      The option --list-tasks is used to list all tasks that would be executed by a play without making any changes to the remote servers:

      • ansible-playbook myplaybook.yml --list-tasks

      Similarly, it is possible to list all hosts that would be affected by a play, without running any tasks on the remote servers:

      • ansible-playbook myplaybook.yml --list-hosts

      You can use tags to limit the execution of a play. To list all tags available in a play, use the option --list-tags:

      • ansible-playbook myplaybook.yml --list-tags

      Controlling Playbook Execution

      You can use the option --start-at-task to define a new entry point for your playbook. Ansible will then skip anything that comes before the specified task, executing the remainder of the play from that point on. This option requires a valid task name as its argument:

      • ansible-playbook myplaybook.yml --start-at-task="Set Up Nginx"

      To only execute tasks associated with specific tags, you can use the option --tags. For instance, if you'd like to only execute tasks tagged as nginx or mysql, you can use:

      • ansible-playbook myplaybook.yml --tags=mysql,nginx

      If you want to skip all tasks that are under specific tags, use --skip-tags. The following command would execute myplaybook.yml, skipping all tasks tagged as mysql:

      • ansible-playbook myplaybook.yml --skip-tags=mysql
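Tags are assigned per task inside the playbook. As an illustration using the mysql and nginx tags from the commands above (task names and packages are hypothetical):

```yaml
tasks:
  - name: Install Nginx
    apt:
      name: nginx
      state: present
    tags: [ nginx ]

  - name: Install MySQL server
    apt:
      name: mysql-server
      state: present
    tags: [ mysql ]
```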

      Using Ansible Vault to Store Sensitive Data

      If your Ansible playbooks deal with sensitive data like passwords, API keys, and credentials, it is important to keep that data safe by using an encryption mechanism. Ansible provides ansible-vault to encrypt files and variables.

      Even though it is possible to encrypt any Ansible data file as well as binary files, it is more common to use ansible-vault to encrypt variable files containing sensitive data. After encrypting a file with this tool, you'll only be able to execute, edit or view its contents by providing the relevant password defined when you first encrypted the file.

      Creating a New Encrypted File

      You can create a new encrypted Ansible file with:

      • ansible-vault create credentials.yml

      This command will perform the following actions:

      • First, it will prompt you to enter a new password. You'll need to provide this password whenever you access the file contents, whether it's for editing, viewing, or just running playbooks or commands using those values.
      • Next, it will open your default command-line editor so you can populate the file with the desired contents.
      • Finally, when you're done editing, ansible-vault will save the file as encrypted data.

      Encrypting an Existing Ansible File

      To encrypt an existing Ansible file, you can use the following syntax:

      • ansible-vault encrypt credentials.yml

      This will prompt you for a password that you'll need to enter whenever you access the file credentials.yml.

      Viewing the Contents of an Encrypted File

      If you want to view the contents of a file that was previously encrypted with ansible-vault and you don't need to change its contents, you can use:

      • ansible-vault view credentials.yml

      This will prompt you to provide the password you selected when you first encrypted the file with ansible-vault.

      Editing an Encrypted File

      To edit the contents of a file that was previously encrypted with Ansible Vault, run:

      • ansible-vault edit credentials.yml

      This will prompt you to provide the password you chose when first encrypting the file credentials.yml with ansible-vault. After password validation, your default command-line editor will open with the unencrypted contents of the file, allowing you to make your changes. When finished, you can save and close the file as you would normally, and the updated contents will be saved as encrypted data.

      Decrypting Encrypted Files

      If you wish to permanently revert a file that was previously encrypted with ansible-vault to its unencrypted version, you can do so with this syntax:

      • ansible-vault decrypt credentials.yml

      This will prompt you to provide the same password used when first encrypting the file credentials.yml with ansible-vault. After password validation, the file contents will be saved to the disk as unencrypted data.

      Using Multiple Vault Passwords

      Ansible supports multiple vault passwords grouped by different vault IDs. This is useful if you want to have dedicated vault passwords for different environments, such as development, testing, and production environments.

      To create a new encrypted file using a custom vault ID, include the --vault-id option along with a label and the location where ansible-vault can find the password for that vault. The label can be any identifier, and the location can either be prompt, meaning that the command should prompt you to enter a password, or a valid path to a password file.

      • ansible-vault create --vault-id dev@prompt credentials_dev.yml

      This will create a new vault ID named dev that uses prompt as its password source. By combining this method with group variable files, you’ll be able to have separate Ansible vaults for each application environment:

      • ansible-vault create --vault-id prod@prompt credentials_prod.yml

      We used dev and prod as vault IDs to demonstrate how you can create separate vaults per environment, but you can create as many vaults as you want, and you can use any identifier of your choice as vault ID.
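One possible layout for combining per-environment vault IDs with group variable files might look like this sketch (the directory names are hypothetical and assume dev and prod inventory groups):

```
inventory
group_vars/
    dev/
        vault.yml      # encrypted with --vault-id dev@prompt
    prod/
        vault.yml      # encrypted with --vault-id prod@prompt
```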

      Now to view, edit, or decrypt these files, you'll need to provide the same vault ID and password source along with the ansible-vault command:

      • ansible-vault edit credentials_dev.yml --vault-id dev@prompt

      Using a Password File

      If you need to automate the process of provisioning servers with Ansible using a third-party tool, you'll need a way to provide the vault password without being prompted for it. You can do that by using a password file with ansible-vault.

      A password file can be a plain text file or an executable script. If the file is an executable script, the output produced by this script will be used as the vault password. Otherwise, the raw contents of the file will be used as vault password.

      To use a password file with ansible-vault, you need to provide the path to a password file when running any of the vault commands:

      • ansible-vault create --vault-id dev@path/to/passfile credentials_dev.yml

      Ansible doesn't make a distinction between content that was encrypted using prompt or a password file as password source, as long as the input password is the same. In practical terms, this means it is OK to encrypt a file using prompt and then later use a password file to store the same password used with the prompt method. The opposite is also true: you can encrypt content using a password file and later use the prompt method, providing the same password when prompted by Ansible.

      For extended flexibility and security, instead of having your vault password stored in a plain text file, you can use a Python script to obtain the password from other sources. The official Ansible repository contains a few examples of vault scripts that you can use for reference when creating a custom script that suits the particular needs of your project.
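As a minimal sketch of an executable password script, the following Python script reads the vault password from an environment variable and prints it to standard output, which is what ansible-vault consumes. ANSIBLE_VAULT_PASS is an assumed variable name for this example, not one defined by Ansible itself:

```python
#!/usr/bin/env python3
# Hypothetical vault password script: Ansible runs the executable and
# uses its stdout as the vault password. ANSIBLE_VAULT_PASS is an
# assumed name chosen for this sketch.
import os


def get_password() -> str:
    # Fall back to an empty string when unset; ansible-vault will then
    # simply reject the empty password.
    return os.environ.get("ANSIBLE_VAULT_PASS", "")


if __name__ == "__main__":
    # Print without a trailing newline so the password is passed verbatim.
    print(get_password(), end="")
```

      Mark the script executable and point --vault-id or --vault-password-file at its path; because it is executable, its output — rather than its raw contents — is used as the password.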

      Running a Playbook with Data Encrypted via Ansible Vault

      Whenever you run a playbook that uses data previously encrypted via ansible-vault, you'll need to provide the vault password to your playbook command.

      If you used default options and the prompt password source when encrypting the data used in this playbook, you can use the option --ask-vault-pass to make Ansible prompt you for the password:

      • ansible-playbook myplaybook.yml --ask-vault-pass

      If you used a password file instead of prompting for the password, you should use the option --vault-password-file instead:

      • ansible-playbook myplaybook.yml --vault-password-file my_vault_password.py

      If you're using data encrypted under a vault ID, you'll need to provide the same vault ID and password source you used when first encrypting the data:

      • ansible-playbook myplaybook.yml --vault-id dev@prompt

      If using a password file with your vault ID, you should provide the label followed by the full path to the password file as password source:

      • ansible-playbook myplaybook.yml --vault-id dev@vault_password.py

      If your play uses multiple vaults, you should provide a --vault-id parameter for each of them, in no particular order:

      • ansible-playbook myplaybook.yml --vault-id dev@vault_password.py --vault-id test@prompt --vault-id ci@prompt

      Debugging

      If you run into errors while executing Ansible commands and playbooks, it's a good idea to increase output verbosity in order to get more information about the problem. You can do that by including the -v option in the command:

      • ansible-playbook myplaybook.yml -v

      If you need more detail, you can use -vvv to increase the verbosity of the output further. If you're unable to connect to the remote nodes via Ansible, use -vvvv to get connection debugging information:

      • ansible-playbook myplaybook.yml -vvvv

      Conclusion

      This guide covers some of the most common Ansible commands you may use when provisioning servers, such as how to execute remote commands on your nodes and how to run playbooks using a variety of custom settings.

      There are other command variations and flags that you may find useful for your Ansible workflow. To get an overview of all available options, you can use the help command:

      • ansible --help

      If you want a more comprehensive view of Ansible and all its available commands and features, please refer to the official Ansible documentation.




      Infrastructure for Online Gaming: Bare Metal and Colocation Reference Architecture


      Bare Metal is powerful, fast and, most importantly, easily scalable—all qualities that make it perfect for resource-intensive, dynamic applications like massive online games. It’s a single-tenant environment, meaning you can harness all the computing power of the hardware for yourself (and without the need for virtualization).

      And beyond that, it offers all that performance and functionality at a competitive price, even when fully customized to your performance needs and unique requirements.

      Given all this, it’s easy to see why Bare Metal has quickly become the infrastructure solution of choice for gaming applications. So what does a comprehensive gaming deployment look like?

      Bare Metal for Gaming: Reference Architecture

      Here’s an example of what a Bare Metal deployment for gaming might look like.

      [Figure: Bare Metal gaming reference architecture, available as a downloadable PDF.]

      1. Purpose-Built Configurations: Standard configurations are available, but one strength of Bare Metal is its customizability for specific performance needs or unique requirements.

      2. Access the Edge: Solution flexibility and wide reach across a global network put gaming platforms closer to end users for better performance.

      3. Critical Services: Infrastructure designed for the needs of your application, combined with environment monitoring and support, enables the consistent performance your players expect from any high-quality gaming experience.

      4. Content Delivery Networks: CDNs are perfect for executing software downloads and patch updates or for delivering cut scenes and other static embedded content quickly, while reducing loads on main servers. Read our recent blog about CDN to learn more.

      5. Automated Route Optimization: Your infrastructure is nothing without a solid network to connect it to your players. Ours is powered by our proprietary Performance IP service, which ensures outbound traffic takes the lowest-latency path, reducing lag and packet loss. For more on this technology, read below.

      6. Cloud Connect: On-ramp to hyperscale cloud providers—ideal for test deployments and traffic bursting. If you’re not sure what kind of cloud is right for you, our cloud experts can help you craft a flexible multicloud deployment that meets the needs of your applications and integrates seamlessly into your other infrastructure solutions.

      7. Enterprise SAN Storage: Connect to a high-speed storage area network (SAN) for reliable, secure storage.


      The Need for Ultra-Low Latency

      In online games, latency plays a huge role in the overall gaming experience. Just a few milliseconds of lag can mean the difference between winning and losing—between an immersive experience and something that people stop playing after a few frustrated minutes.

      Minimizing latency is always an ongoing battle, which is why INAP is proud of our automated route optimization engine Performance IP and its proven ability to put outbound traffic on the lowest-latency route possible.

      • Enhances default Border Gateway Protocol (BGP) by automatically routing outbound traffic along the lowest-latency path
      • Millions of optimizations made per location every hour
      • Carrier-diverse IP blend creates network redundancy (up to 7 carriers per location)
      • Supported by complex network security to protect client data and purchases

      Learn more about how Performance IP works, or jump into a demo to see the difference it makes for yourself.

      Colocation

      If a hosted model isn’t right for you—maybe you want or need to bring your own hardware—Colocation might be a good way to bring the power, resiliency and availability of modern data centers to your gaming application.

      [Figure: Colocation gaming reference architecture, available as a downloadable PDF.]

      1. Purpose-Built Configurations: Secure cabinets, cages and private suites can be configured to your needs.

      High-Density Colocation: High power density means more bang for your footprint. INAP environments support 20+ kW per rack for efficiency and ease of scalability.

      Designed for Concurrent Maintainability: Tier 3-design data centers provide component redundancy and superior availability.

      2. Automated Route Optimization: Your infrastructure is nothing without a solid network to connect it to your players. Ours is powered by our proprietary Performance IP service, which ensures outbound traffic takes the lowest-latency path, reducing lag and packet loss.

      3. Cloud Connect: On-ramp to hyperscale cloud providers—ideal for test deployments and traffic bursting. If you’re not sure what kind of cloud is right for you, our cloud experts can help you craft a flexible multicloud deployment that meets the needs of your applications and integrates seamlessly into your other infrastructure solutions.

      4. Integrated With Private Cloud & Bare Metal: Run auxiliary or back-office applications in right-sized Private Cloud and/or Bare Metal environments engineered to meet your needs. Get onboarding and support from experts.

      5. Enterprise SAN Storage: Connect to a high-speed storage area network (SAN) for reliable, secure storage.


      Josh Williams


      Josh Williams is Vice President of Solutions Engineering. His team helps enterprises and service providers design, deploy and manage a wide range of data center and cloud IT solutions.




      SaltStack Command Line Reference


      Updated by Linode. Contributed by Andy Stevens.


      SaltStack is a powerful configuration management tool. The following is a quick-reference guide for Salt’s command line interface (CLI).

      salt

      Used to issue commands to minions in parallel. salt allows you to both control and query minions.

      • --version — Get the current version of Salt. Example: salt --version
      • -h, --help — Display Salt commands and help text. Example: salt -h
      • -c, --config-dir — Change the Salt configuration directory. The default is /etc/salt. Example: salt -c /home/salt/conf test.ping
      • -s, --static — Only return data after all minions have returned. Example: salt '*' test.ping --static
      • --async — Instead of waiting for the job to complete on the minions, print the job ID so the result can be looked up later. Example: salt '*' pkg.install apache2 --async
      • --subset — Execute commands on a random subset of minions. Example: salt '*' telegram.post_message message="Hello random 3!" --subset 3
      • -v, --verbose — Print extra data, such as the job ID. Example: salt 'minion1' user.add steve --verbose
      • --hide-timeout — Only print minions that can be reached. Example: salt '*' test.ping --hide-timeout
      • -b, --batch-size — Execute on a batch or percentage of minions. Example: salt '*' test.ping --batch-size 25%
      • -a, --auth — Use an external authentication medium. You will be prompted for credentials. Options are auto, keystone, ldap, and pam. Can be used with -T. Example: salt -a pam '*' status.meminfo
      • -T, --make-token — Used with -a. Creates an authentication token in the active user’s home directory with a default 12-hour expiration time. Token expiration time is set in the Salt master config file. Example: salt -T -a pam '*' status.cpuinfo
      • --return — Select an alternative returner. Options are carbon, cassandra, couchbase, couchdb, elasticsearch, etcd, hipchat, local, local_cache, memcache, mongo, mysql, odbc, postgres, redis, sentry, slack, sms, smtp, sqlite3, syslog, and xmpp. Example: salt '*' status.all_status --return mongo
      • -d, --doc, --documentation — Return all available documentation for a module function, or for all functions if one is not provided. Example: salt 'minion3' service.available -d
      • -l, --log-level — Change the console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. Example: salt 'minion2' state.apply -l info
      • --log-file — Change the log file path. Defaults to /var/log/salt/master. Example: salt '*' test.ping --log-file /home/salt/log
      • --log-file-level — Change the logging level of the log file. Same options as --log-level. Example: salt '*' test.ping --log-file-level all
      • -E, --pcre — The target expression will be interpreted as a Perl Compatible Regular Expression (PCRE) rather than a shell glob. Example: salt -E 'minion[0-9]' service.reload apache2
      • -L, --list — The target expression will be interpreted as a comma-delimited list. Example: salt -L 'minion1,minion2' service.show sshd
      • -G, --grain — The target expression matches a Salt grain, in the form <grain value>:<glob expression>. Example: salt -G 'os:Ubuntu' service.available mysql
      • --grain-pcre — The target expression, in the form <grain value>:<regular expression>, matches values returned by Salt grains on the minion. Example: salt --grain-pcre 'os:Arch' service.restart apache2
      • -I, --pillar — Use pillar values instead of shell globs to identify targets. Example: salt -I 'role:production' test.echo 'playback'
      • --out — Choose an alternative outputter to display returned data. Available outputters are grains, highstate, json, key, overstatestage, pprint, raw, txt, and yaml. Note: when using --out json you will probably also want to use --static. Example: salt '*' test.version --out json --static

      salt-call

      Runs module functions locally on a minion instead of issuing them from the master. It can also be used to run a standalone minion.

      • --version — Get the current version of Salt. Example: salt-call --version
      • -h, --help — Display Salt commands and help text. Example: salt-call -h
      • -c, --config-dir — Change the Salt configuration directory. The default is /etc/salt. Example: salt-call -c /home/salt/conf test.ping
      • -g, --grains — Get the information generated by the Salt grains. Example: salt-call --grains
      • -m, --module-dirs — Select an additional modules directory. You can provide this option multiple times for multiple directories. Example: salt-call -m /home/salt/modules1 -m /home/salt/modules2
      • -d, --doc, --documentation — Return all available documentation for a module function, or for all functions if one is not provided. Example: salt-call system.get_system_time -d
      • --master — Choose which master to use. The minion must be authenticated with the master. If the master is omitted, the first master in the minion config will be used. Example: salt-call --master master1
      • --return — Select an alternative returner. Options are carbon, cassandra, couchbase, couchdb, elasticsearch, etcd, hipchat, local, local_cache, memcache, mongo, mysql, odbc, postgres, redis, sentry, slack, sms, smtp, sqlite3, syslog, and xmpp. Example: salt-call --return mongo status.all_status
      • --local — Run Salt as if there were no master running. Example: salt-call --local system.get_system_time
      • --file-root — Set a directory as the base file directory. Example: salt-call --file-root /home/salt
      • --pillar-root — Set a directory as the base pillar directory. Example: salt-call --pillar-root /home/salt/pillar
      • -l, --log-level — Change the console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. Example: salt-call -l all test.exception 'oh no!'
      • --log-file — Change the log file path. Defaults to /var/log/salt/minion. Example: salt-call --log-file /home/salt/log/minion test.exception 'oh no!'
      • --log-file-level — Change the log file's log level. Defaults to warning. Same options as --log-level. Example: salt-call --log-file-level all test.exception 'oh no!'
      • --out — Choose an alternative outputter to display returned data. Available outputters are grains, highstate, json, key, overstatestage, pprint, raw, txt, and yaml. Example: salt-call test.version --out json

      salt-cloud

      Used to provision virtual machines on public clouds with Salt.

      • --version — Get the current version of Salt. Example: salt-cloud --version
      • -h, --help — Display Salt commands and help text. Example: salt-cloud -h
      • -c, --config-dir — Change the Salt configuration directory. The default is /etc/salt. Example: salt-cloud -c /home/salt/conf
      • -a, --action — Perform a cloud-provider-specific action. Requires an instance. Example: salt-cloud -a reboot testlinode
      • -f, --function — Perform a cloud-provider-specific function that does not apply to an instance. Requires a provider. Example: salt-cloud -f clone my-linode-config linode_id=1234567 datacenter_id=2 plan_id=5
      • -p, --profile — Choose a profile from which to build cloud VMs. Example: salt-cloud -p linode-1024 mynewlinode
      • -m, --map — Choose a map file from which to create your VMs. If a VM exists, it will be skipped. Example: salt-cloud -m /path/to/map
      • -H, --hard — Used when creating VMs with a map file. If set, will destroy all VMs not listed in the map file. Example: salt-cloud -m /path/to/map -H
      • -d, --destroy — Destroy the named VMs. Can be used with -m to provide a map of VMs to destroy. Example: salt-cloud -m /path/to/map -d
      • -P, --parallel — Build VMs in parallel. Example: salt-cloud -P -p linode-profile newlinode1 newlinode2
      • -u, --update-bootstrap — Update salt-bootstrap. Example: salt-cloud -u
      • -y, --assume-yes — Answer yes to all questions. Example: salt-cloud -y -d linode1 linode2
      • -k, --keep-tmp — Do not remove /tmp files. Example: salt-cloud -k -m /path/to/map
      • --show-deploy-args — Include deployment arguments in the return data. Example: salt-cloud --show-deploy-args -m /path/to/map
      • --script-args — Arguments to be passed to the bootstrap script when deploying. Example: salt-cloud -m /path/to/map --script-args '-h'
      • -Q, --query — Query nodes running on configured cloud providers. Example: salt-cloud -Q
      • -F, --full-query — Query VMs and print all available information. Can be used with -m to provide a map. Example: salt-cloud -F
      • -S, --select-query — Query VMs and print selected information. Can be used with -m to provide a map. Example: salt-cloud -S
      • --list-providers — Display a list of configured providers. Example: salt-cloud --list-providers
      • --list-profiles — Display a list of configured profiles. Supply a cloud provider, such as linode, or pass all to view all configured profiles. Example: salt-cloud --list-profiles linode
      • --list-locations — Display a list of available locations. Supply a cloud provider, such as linode, or pass all to view all locations for configured profiles. Example: salt-cloud --list-locations linode
      • --list-images — Display a list of available images. Supply a cloud provider, such as linode, or pass all to view all images for configured profiles. Example: salt-cloud --list-images linode
      • --list-sizes — Display a list of available sizes. Supply a cloud provider, such as linode, or pass all to view all sizes for configured profiles. Example: salt-cloud --list-sizes linode
      • --out — Choose an alternative outputter to display returned data. Available outputters are grains, highstate, json, key, overstatestage, pprint, raw, txt, and yaml. Example: salt-cloud -Q --out json

      salt-cp

      Used to copy files from the master to all Salt minions that match a specific target expression.

      Option Description Example
      --version Get the current version of Salt. salt-cp --version
      -h, --help Display Salt commands and help text. salt-cp -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-cp '*' -c /home/salt/conf /file/to/copy /destination
      -t, --timeout The amount of seconds to wait for replies from minions. The default is 5 seconds. salt-cp '*' -t 25 /file/to/copy /destination
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-cp '*' -l all /file/to/copy /destination
      --log-file Change log file path. Defaults to /var/log/salt/master. salt-cp '*' --log-file /home/salt/log/minion /file/to/copy /destination
      --log-file-level Change logfile log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-cp '*' --log-file-level all /file/to/copy /destination
      -E, --pcre Target expression will be interpreted as a Perl Compatible Regular Expression (PCRE) rather than a shell glob. salt-cp -E 'minion[0-9]' /file/to/copy /destination
      -L, --list Target expression will be interpreted as a comma-delimited list. salt-cp -L 'minion1,minion2' /file/to/copy /destination
      -G, --grain Target expression matches a Salt grain, in the form <grain name>:<glob expression>. salt-cp -G 'os:Ubuntu' /file/to/copy /destination
      --grain-pcre Target expression in the form <grain name>:<regular expression>, matched as a Perl Compatible Regular Expression against grain values on the minion. salt-cp --grain-pcre 'os:Arch' /file/to/copy /destination
      -C, --chunked Use chunked mode to copy files. Supports large files, recursive directories copying and compression. salt-cp -C /some/large/file /destination
      -n, --no-compression Disable gzip in chunked mode. salt-cp -C -n /some/large/file /destination

      salt-key

      Used to manage the public keys held by the Salt master.

      Option Description Example
      --version Get the current version of Salt. salt-key --version
      -h, --help Display Salt commands and help text. salt-key -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-key -c /home/salt/conf
      -u, --user Supply a user to run salt-key. salt-key --user steven
      -q, --quiet Suppress output. salt-key -q
      -y, --yes Answer yes to all questions. Default is False. salt-key -A -y
      --rotate-aes-key Setting to False prevents the key session from being refreshed when keys are deleted or rejected. Default is True. salt-key --rotate-aes-key False
      --log-file Change log file path. Defaults to /var/log/salt/minion. salt-key --log-file /home/salt/log/minion -D
      --log-file-level Change logfile log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-key --log-file-level all --accept '*'
      -l, --list List public keys. pre, un, and unaccepted will list unaccepted/unsigned keys. acc or accepted will list accepted/signed keys. rej or rejected will list rejected keys. all will list all keys. salt-key -l all
      -a, --accept Accept a public key. Globs are supported. salt-key --accept 'minion*'
      -A, --accept-all Accept all pending keys. salt-key -A
      -r, --reject Reject a specific key. Globs are supported. salt-key -r 'minion*'
      -R, --reject-all Reject all pending keys. salt-key -R
      --include-all Include non-pending keys when accepting and rejecting. salt-key -r 'minion*' --include-all
      -p, --print Print a public key. salt-key --print 'minion1'
      -d, --delete Delete a public key. Globs are supported. salt-key -d 'minion*'
      -D, --delete-all Delete all public keys. salt-key --delete-all -y
      -f, --finger Print a key’s fingerprint. salt-key --finger 'minion1'
      -F, --finger-all Print all keys’ fingerprints. salt-key -F
      --gen-keys Set a name to generate a key-pair. salt-key --gen-keys newminion
      --gen-keys-dir Choose where to save newly generated key-pairs. Only works with --gen-keys. salt-key --gen-keys newminion --gen-keys-dir /home/salt/keypairs
      --keysize Set the keysize for a generated key. Must be a value of 2048 or higher. Only works with --gen-keys. salt-key --gen-keys newminion --keysize 4096
      --gen-signature Create a signature for the master’s public key named master_pubkey_signature. This requires a new-signing-keypair which can be created with the --auto-create option. salt-key --gen-signature --auto-create
      --priv The private-key file with which to create a signature. salt-key --priv key.pem
      --signature-path The file path for the new signature. salt-key --gen-signature --auto-create --signature-path /path/to/signature
      --pub The public-key file with which to create a signature. salt-key --gen-signature key.pub
      --auto-create Auto-create a signing key-pair. salt-key --gen-signature --auto-create

      salt-master

      A daemon used to control Salt minions.

      Option Description Example
      --version Get the current version of Salt. salt-master --version
      -h, --help Display Salt commands and help text. salt-master -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-master -c /home/salt/conf
      -u, --user Supply a user to run salt-master. salt-master --user steven
      -d, --daemon Run salt-master as daemon. salt-master -d
      --pid-file Specify the file path of the pidfile. Default is /var/run/salt-master.pid salt-master --pid-file /path/to/new/pid
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-master -l info
      --log-file Change the log file path. Defaults to /var/log/salt/master salt-master --log-file /home/salt/log
      --log-file-level Change the logging level of the log file. Same options as --log-level. salt-master --log-file-level all
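Options such as --user, --pid-file, and the logging flags can also be set persistently in the master configuration file, so they apply every time the daemon starts. A minimal sketch of /etc/salt/master (values are illustrative):

```yaml
# /etc/salt/master — persistent equivalents of the CLI flags above
user: steven                       # same as --user
pidfile: /var/run/salt-master.pid  # same as --pid-file
log_file: /var/log/salt/master     # same as --log-file
log_level: warning                 # console log level (-l/--log-level)
log_level_logfile: warning         # same as --log-file-level
```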

      salt-minion

      A daemon that is controlled by a Salt master.

      Option Description Example
      --version Get the current version of Salt. salt-minion --version
      -h, --help Display Salt commands and help text. salt-minion -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-minion -c /home/salt/conf
      -u, --user Supply a user to run salt-minion. salt-minion --user steven
      -d, --daemon Run salt-minion as daemon. salt-minion -d
      --pid-file Specify the file path of the pidfile. Default is /var/run/salt-minion.pid salt-minion --pid-file /path/to/new/pid
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-minion -l info
      --log-file Change the log file path. Defaults to /var/log/salt/minion salt-minion --log-file /home/salt/log
      --log-file-level Change the logging level of the log file. Same options as --log-level. salt-minion --log-file-level all

      salt-run

      Runs a Salt runner on a Salt master.

      Option Description Example
      --version Get the current version of Salt. salt-run --version
      -h, --help Display Salt commands and help text. salt-run -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-run -c /home/salt/conf foo.bar
      -t, --timeout The amount of seconds to wait for replies from minions. The default is 5 seconds. salt-run -t 25 foo.bar
      -d, --doc, --documentation Return all available documentation for a module or runner. salt-run foo.bar -d
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-run -l info foo.bar
      --log-file Change the log file path. Defaults to /var/log/salt/master. salt-run --log-file /home/salt/log foo.bar
      --log-file-level Change the logging level of the log file. Same options as --log-level. salt-run --log-file-level all foo.bar

      salt-ssh

      Uses the SSH transport to execute Salt routines.

      Option Description Example
      --version Get the current version of Salt. salt-ssh --version
      -h, --help Display Salt commands and help text. salt-ssh -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-ssh '*' -c /home/salt/conf test.ping
      -r, --raw, --raw-shell Run a raw shell command. salt-ssh '*' -r echo 'test'
      --roster Choose which roster system to use. The default is the flat file roster. salt-ssh '192.168.0.0/16' --roster scan pkg.install apache2
      --roster-file Change the roster file directory. The default is the same directory as the master config file. salt-ssh 'minion1' --roster-file /path/to/roster test.ping
      --refresh, --refresh-cache Force a refresh of the target’s data in the master-side cache before the auto-refresh timeframe has been reached. salt-ssh 'minion1' --refresh-cache status.diskstats
      --max-procs The number of minions to communicate with concurrently. In general, more connections mean faster communication. Default is 25. salt-ssh '*' --max-procs 50 test.ping
      -v, --verbose Display job ID. salt-ssh '*' -v test.ping
      -s, --static Return minion data as a grouping. salt-ssh '*' -s status.meminfo
      -w, --wipe Remove Salt files when the job is done. salt-ssh '*' -w state.apply
      -W, --rand-thin-dir Deploys to a random temp directory and cleans the directory when done. salt-ssh '*' -W state.apply
      --python2-bin File path to a python2 binary which has Salt installed. salt-ssh '*' --python2-bin /file/to/bin test.ping
      --python3-bin File path to a python3 binary which has Salt installed. salt-ssh '*' --python3-bin /file/to/bin test.ping
      --jid Supply a job ID instead of generating one. salt-ssh '*' -v --jid 00000000000000000000 test.ping
      --priv Supply which SSH private key to use for authentication. salt-ssh '*' --priv /path/to/privkey status.netstats
      -i, --ignore-host-keys Disable StrictHostKeyChecking, which suppresses asking for connection approval. salt-ssh '*' -i pkg.install mysql-client
      --no-host-keys Ignores SSH host keys. Useful if an error persists with --ignore-host-keys. salt-ssh '*' -i --no-host-keys pkg.install cowsay
      --user Supply the user to authenticate with. salt-ssh '*' --user steven -r cowsay 'hello!'
      --passwd Supply the password to authenticate with. salt-ssh 'minion2' --passwd p455w0rd system.reboot
      --askpass Request a password prompt. salt-ssh 'minion1' --askpass sys.doc
      --key-deploy Deploy the authorized SSH key to all minions. salt-ssh '*' --key-deploy --passwd test.ping
      --sudo Run command with elevated privileges. salt-ssh '*' -r --sudo somecommand
      --scan-ports A comma-separated list of ports to scan in the scan roster. salt-ssh '192.168.0.0/16' --roster scan --scan-ports 22,23 test.ping
      --scan-timeout Timeout for scan roster. salt-ssh '192.168.0.0/16' --roster scan --scan-timeout 100 test.ping
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-ssh -l info test.ping
      --log-file Change the log file path. Defaults to /var/log/salt/ssh salt-ssh --log-file /home/salt/log test.ping
      --log-file-level Change the logging level of the log file. Same options as --log-level. salt-ssh --log-file-level all test.ping
      -E, --pcre Target expression will be interpreted as a Perl Compatible Regular Expression (PCRE) rather than a shell glob. salt-ssh -E 'minion[0-9]' service.reload apache2
      --out Choose an alternative outputter to display returned data. Available outputters are: grains, highstate, json, key, overstatestage, pprint, raw, txt, yaml. salt-ssh '*' test.version --out json
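The default flat roster referenced by --roster-file is a YAML file mapping target IDs to SSH connection details. A minimal sketch (host names, addresses, and paths are illustrative):

```yaml
# /etc/salt/roster — flat roster for salt-ssh
minion1:
  host: 192.168.0.10
  user: steven
  sudo: True                       # run commands with elevated privileges
minion2:
  host: 192.168.0.11
  port: 2222                       # non-default SSH port
  priv: /home/steven/.ssh/id_rsa   # private key, as with --priv
```

With this roster in place, salt-ssh 'minion*' test.ping targets both entries by ID.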

      salt-syndic

      A daemon set up on a master that passes commands in from a higher-level master.

      Option Description Example
      --version Get the current version of Salt. salt-syndic --version
      -h, --help Display Salt commands and help text. salt-syndic -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-syndic -c /home/salt/conf
      -u, --user Supply a user to run salt-syndic. salt-syndic --user steven
      -d, --daemon Run salt-syndic as daemon. salt-syndic -d
      --pid-file Specify the file path of the pidfile. Default is /var/run/salt-syndic.pid salt-syndic --pid-file /path/to/new/pid
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-syndic -l info
      --log-file Change the log file path. Defaults to /var/log/salt/master salt-syndic --log-file /home/salt/log
      --log-file-level Change the logging level of the log file. Same options as --log-level. salt-syndic --log-file-level all

      spm

      Salt Package Manager

      Option Description Example
      -y, --yes Answer yes to all questions. spm remove -y apache
      -f, --force Force spm to perform an action it would normally refuse to perform. spm install -f apache
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. spm -l info install apache
      --log-file Change the log file path. Defaults to /var/log/salt/spm spm --log-file /home/salt/log install mysql
      --log-file-level Change the logging level of the log file. Same options as --log-level. spm --log-file-level all remove nginx
      Command Description Example
      update_repo Update locally configured repository metadata. spm update_repo
      install Install a package by name from a configured SPM repository. spm install nginx
      remove Remove a package. spm remove apache
      info Get an installed package’s information. spm info mysql
      files List an installed package’s files. spm files mongodb
      local Perform a command on a local package, not a package in a repository or an installed package. Does not work with remove. spm local install /path/to/package
      build Build a package. spm build /path/to/package
      create_repo Scan a directory for a valid SPM package and build an SPM-METADATA file in that directory. spm create_repo /path/to/package
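spm build expects the package directory to contain a FORMULA file describing the package. A minimal sketch (the name, version, and supported platforms are illustrative):

```yaml
# FORMULA — package metadata read by spm build
name: apache
os: RedHat, Debian, Ubuntu
os_family: RedHat, Debian
version: 201910
release: 1
summary: Formula for installing Apache
description: Installs and configures the Apache web server
```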

      salt-api

      Used to start the Salt API.

      Option Description Example
      --version Get the current version of Salt. salt-api --version
      -h, --help Display Salt commands and help text. salt-api -h
      -c, --config-dir Change the Salt configuration directory. The default is /etc/salt. salt-api -c /home/salt/conf
      -u, --user Supply a user to run salt-api. salt-api --user steven
      -d, --daemon Run salt-api as daemon. salt-api -d
      --pid-file Specify the file path of the pidfile. Default is /var/run/salt-api.pid salt-api --pid-file /path/to/new/pid
      -l, --log-level Change console log level. Defaults to warning. Available options are all, garbage, trace, debug, info, warning, error, and quiet. salt-api -l info
      --log-file Change the log file path. Defaults to /var/log/salt/api salt-api --log-file /home/salt/log
      --log-file-level Change the logging level of the log file. Same options as --log-level. salt-api --log-file-level all
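salt-api itself is configured through the master configuration file: it starts whichever netapi modules are declared there. A minimal sketch enabling the rest_cherrypy module (the port and certificate paths are illustrative):

```yaml
# /etc/salt/master — netapi configuration read by salt-api
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/certs/localhost.key
```

Once this is in place, running salt-api -d starts the REST interface on the configured port.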

      This guide is published under a CC BY-ND 4.0 license.


