      Introduction to Jinja Templates for Salt


Updated by Linode. Contributed by Linode.


      Introduction to Templating Languages

Jinja is a flexible templating language for Python that can be used to generate any text-based format, such as HTML, XML, and YAML. Templating languages like Jinja allow you to insert data into a structured format. You can also embed logic or control-flow statements into templates for greater reusability and modularity. Jinja’s template engine is responsible for processing the code within the templates and generating the final text-based document.

Templating languages are well known within the context of creating web pages in a Model View Controller architecture. In this scenario, the template engine processes source data, like the data found in a database, and a web template that includes a mixture of HTML and the templating language. These two pieces are then used to generate the final web page for users to consume. Templating languages, however, are not limited to web pages. Salt, a popular Python-based configuration management software, supports Jinja to allow for abstraction and reuse within Salt state files and regular files.

      This guide will provide an overview of the Jinja templating language used primarily within Salt. If you are not yet familiar with Salt concepts, review the Beginner’s Guide to Salt before continuing. While you will not be creating Salt states of your own in this guide, it is also helpful to review the Getting Started with Salt – Basic Installation and Setup guide.

      Jinja Basics

      This section provides an introductory description of Jinja syntax and concepts along with examples of Jinja and Salt states. For an exhaustive dive into Jinja, consult the official Jinja Template Designer Documentation.

      Applications like Salt can define default behaviors for the Jinja templating engine. All examples in this guide use Salt’s default Jinja environment options. These settings can be changed in the Salt master configuration file:

      /etc/salt/master
      # Default Jinja environment options for all templates except sls templates
      #jinja_env:
      #  block_start_string: '{%'
      #  block_end_string: '%}'
      #  variable_start_string: '{{'
      #  variable_end_string: '}}'
      #  comment_start_string: '{#'
      #  comment_end_string: '#}'
      #  line_statement_prefix:
      #  line_comment_prefix:
      #  trim_blocks: False
      #  lstrip_blocks: False
      #  newline_sequence: '\n'
      #  keep_trailing_newline: False
      
      # Jinja environment options for sls templates
      #jinja_sls_env:
      #  block_start_string: '{%'
      #  block_end_string: '%}'
      #  variable_start_string: '{{'
      #  variable_end_string: '}}'
      #  comment_start_string: '{#'
      #  comment_end_string: '#}'
      #  line_statement_prefix:
      #  line_comment_prefix:
      #  trim_blocks: False
      #  lstrip_blocks: False

      Note

      Before including Jinja in your Salt states, be sure to review the Salt and Jinja Best Practices section of this guide to ensure that you are creating maintainable and readable Salt states. More advanced Salt tools and concepts can be used to improve the modularity and reusability of some of the Jinja and Salt state examples used throughout this guide.

      Delimiters

      Templating language delimiters are used to denote the boundary between the templating language and another type of data format like HTML or YAML. Jinja uses the following delimiters:

      Delimiter Syntax    Usage
      {% ... %}           Control structures
      {{ ... }}           Evaluated expressions that will print to the template output
      {# ... #}           Comments that will be ignored by the template engine
      #  ... ##           Line statements

      In this example Salt state file, you can differentiate the Jinja syntax from the YAML because of the {% ... %} delimiters surrounding the if/else conditionals:

      /srv/salt/webserver/init.sls
      {% if grains['group'] == 'admin' %}
      America/Denver:
        timezone.system: []
      {% else %}
      Europe/Minsk:
        timezone.system: []
      {% endif %}

      See the control structures section for more information on conditionals.

      Template Variables

      Template variables are available via a template’s context dictionary. A template’s context dictionary is created automatically during the different stages of a template’s evaluation. These variables can be accessed using dot notation:

      {{ foo.bar }}
      

      Or they can be accessed by subscript syntax:

      {{ foo['bar'] }}
      

      Salt provides several context variables that are available by default to any Salt state file or file template:

      • Salt: The salt variable provides a powerful set of Salt library functions.

        {{ salt['pw_user.list_groups']('jdoe') }}
        

        You can run salt '*' sys.doc from the Salt master to view a list of all available functions.

      • Opts: The opts variable is a dictionary that provides access to the content of a Salt minion’s configuration file:

        {{ opts['log_file'] }}
        

        The location for a minion’s configuration file is /etc/salt/minion.

      • Pillar: The pillar variable is a dictionary used to access Salt’s pillar data:

        {{ pillar['my_key'] }}
        

        Although you can access pillar keys and values directly, it is recommended that you use Salt’s pillar.get variable library function, because it allows you to define a default value. This is useful when a value does not exist in the pillar:

        {{ salt['pillar.get']('my_key', 'default_value') }}
        
      • Grains: The grains variable is a dictionary and provides access to minions’ grains data:

        {{ grains['shell'] }}
        

        You can also use Salt’s grains.get variable library function to access grain data:

        {{ salt['grains.get']('shell') }}
        
      • Saltenv: You can define multiple salt environments for minions in a Salt master’s top file, such as base, prod, dev and test. The saltenv variable provides a way to access the current Salt environment within a Salt state file. This variable is only available within Salt state files.

        {{ saltenv }}
        
      • SLS: With the sls variable you can obtain the reference value for the current state file (e.g. apache, webserver, etc). This is the same value used in a top file to map minions to state files or via the include option in state files:

        {{ sls }}
        
      • Slspath: This variable provides the path to the current state file:

        {{ slspath }}
        

      Variable Assignments

      You can assign a value to a variable by using the set tag along with the following delimiter and syntax:

      {% set var_name = myvalue %}
      

      Follow Python naming conventions when creating variable names. If the variable is assigned at the top level of a template, the assignment is exported and available to be imported by other templates.

      Any value generated by a Salt template variable library function can be assigned to a new variable.

      {% set username = salt['user.info']('username') %}
      

      Filters

      Filters can be applied to any template variable via a | character. Filters are chainable and accept optional arguments within parentheses. When chaining filters, the output of one filter becomes the input of the following filter.

      {{ '/etc/salt/' | list_files | join('\n') }}
      

      These chained filters will return a recursive list of all the files in the /etc/salt/ directory. Each list item will be joined with a new line.

        
        /etc/salt/master
        /etc/salt/proxy
        /etc/salt/minion
        /etc/salt/pillar/top.sls
        /etc/salt/pillar/device1.sls
        
      

      For a complete list of all built in Jinja filters, refer to the Jinja Template Design documentation. Salt’s official documentation includes a list of custom Jinja filters.
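
      As a smaller illustration, built-in filters can normalize values and supply fallbacks. The pillar key admin_email below is hypothetical:

      ```jinja
      {# 'admin_email' is a hypothetical pillar key used only for illustration #}
      {{ salt['pillar.get']('admin_email', '') | default('root@localhost', true) | lower }}
      ```

      Passing true as the second argument to the default filter treats empty strings as undefined, so the fallback value is used when the pillar key is missing or empty.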

      Macros

      Macros are small, reusable templates that help you to minimize repetition when creating states. Define macros within Jinja templates to represent frequently used constructs and then reuse the macros in state files.

      /srv/salt/mysql/db_macro.sls
      {% macro mysql_privs(user, database, grant='select', host='localhost') %}
      {{ user }}_exampledb:
        mysql_grants.present:
          - grant: {{ grant }}
          - database: {{ database }}
          - user: {{ user }}
          - host: {{ host }}
      {% endmacro %}
      db_privs.sls
      {% import "/srv/salt/mysql/db_macro.sls" as db -%}
      
      {{ db.mysql_privs('jane', 'exampledb.*', 'select,insert,update') }}

      The mysql_privs() macro is defined in the db_macro.sls file. The template is then imported to the db variable in the db_privs.sls state file and is used to create a MySQL grants state for a specific user.

      Refer to the Imports and Includes section for more information on importing templates and variables.

      Imports and Includes

      Imports

      Importing in Jinja is similar to importing in Python. You can import an entire template, a specific state, or a macro defined within a file.

      {% import '/srv/salt/users.sls' as users %}
      

      This example will import the state file users.sls into the variable users. All states and macros defined within the template will be available using dot notation.

      You can also import a specific state or macro from a file.

      {% from '/srv/salt/user.sls' import mysql_privs as grants %}
      

      This import targets the macro mysql_privs defined within the user.sls state file and is made available to the current template with the grants variable.

      Includes

      The {% include %} tag renders the output of another template into the position where the include tag is declared. When using the {% include %} tag the context of the included template is passed to the invoking template.

      /srv/salt/webserver/webserver_users.sls
      1
      2
      3
      4
      
      include:
        - groups
      
      {% include 'users.sls' %}

      Note

      Import Context Behavior

      By default, an import will not include the context of the imported template, because imports are cached. This can be overridden by adding with context to your import statements.

      {% from '/srv/salt/user.sls' import mysql_privs with context %}
      

      Similarly, if you would like to remove the context from an {% include %}, add without context:

      {% include 'users.sls' without context %}
      

      Whitespace Control

      Jinja provides several mechanisms for whitespace control of its rendered output. By default, Jinja strips single trailing new lines and leaves anything else unchanged, e.g. tabs, spaces, and multiple new lines. You can customize how Salt’s Jinja template engine handles whitespace in the Salt master configuration file. Some of the available environment options for whitespace control are:

      • trim_blocks: When set to True, the first newline after a template tag is removed automatically. This is set to False by default in Salt.
      • lstrip_blocks: When set to True, Jinja strips tabs and spaces from the beginning of a line to the start of a block. If other characters are present before the start of the block, nothing will be stripped. This is set to False by default in Salt.
      • keep_trailing_newline: When set to True, Jinja will keep single trailing newlines. This is set to False by default in Salt.

      To avoid running into YAML syntax errors, ensure that you take Jinja’s whitespace rendering behavior into consideration when inserting templating markup into Salt states. Remember, Jinja must produce valid YAML. When using control structures or macros, it may be necessary to strip whitespace from the template block to appropriately render valid YAML.

      To control how whitespace within template blocks is rendered, you can set both the trim_blocks and lstrip_blocks options to True in the master configuration file. You can also manually control whitespace around each template tag: a - character strips the whitespace on that side of the tag, while a + character preserves it.

      For example, to strip the whitespace after the opening tag of a control structure, include a - character before its closing %}:

      {% for item in [1,2,3,4,5] -%}
          {{ item }}
      {% endfor %}
      

      This will output each number on its own line without any leading whitespace. Without the - character, the output would preserve the indentation defined within the block.
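
      To collapse the rendered numbers onto a single line, a - character can be added to both sides of each tag:

      ```jinja
      {%- for item in [1,2,3,4,5] -%}
          {{ item }}
      {%- endfor -%}
      ```

      This renders as 12345, with all surrounding whitespace stripped.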

      Control Structures

      Jinja provides control structures common to many programming languages such as loops, conditionals, macros, and blocks. The use of control structures within Salt states allows for fine-grained control of state execution flow.

      For Loops

      For loops allow you to iterate through a list of items and execute the same code or configuration for each item in the list. Loops provide a way to reduce repetition within Salt states.

      /srv/salt/users.sls
      {% set groups = ['sudo','wheel', 'admins'] %}
      include:
        - groups
      
      jane:
        user.present:
          - fullname: Jane Doe
          - shell: /bin/zsh
          - createhome: True
          - home: /home/jane
          - uid: 4001
          - groups:
          {%- for group in groups %}
            - {{ group }}
          {%- endfor -%}

      The previous for loop will assign the user jane to all the groups in the groups list set at the top of the users.sls file.
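
      With the whitespace control characters shown in the loop, the groups portion of the state renders to approximately the following YAML:

      ```yaml
          - groups:
            - sudo
            - wheel
            - admins
      ```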

      Conditionals

      A conditional expression evaluates to either True or False and controls the flow of a program based on the result of the evaluated boolean expression. Jinja’s conditional expressions are prefixed with if/elif/else and placed within the {% ... %} delimiter.

      /srv/salt/users.sls
      {% set users = ['anna','juan','genaro','mirza'] %}
      {% set admin_users = ['genaro','mirza'] %}
      {% set admin_groups = ['sudo','wheel', 'admins'] %}
      {% set org_groups = ['games', 'webserver'] %}
      
      
      include:
        - groups
      
      {% for user in users %}
      {{ user }}:
        user.present:
          - shell: /bin/zsh
          - createhome: True
          - home: /home/{{ user }}
          - groups:
      {% if user in admin_users %}
          {%- for admin_group in admin_groups %}
            - {{ admin_group }}
          {%- endfor -%}
      {% else %}
          {%- for org_group in org_groups %}
            - {{ org_group }}
          {% endfor %}
      {%- endif -%}
      {% endfor %}

      In this example the presence of a user within the admin_users list determines which groups are set for that user in the state. Refer to the Salt Best Practices section for more information on using conditionals and control flow statements within state files.
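
      As an illustration, because genaro appears in the admin_users list, the loop and conditional render that user’s state as approximately:

      ```yaml
      genaro:
        user.present:
          - shell: /bin/zsh
          - createhome: True
          - home: /home/genaro
          - groups:
            - sudo
            - wheel
            - admins
      ```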

      Template Inheritance

      With template inheritance you can define a base template that can be reused by child templates. The child template can override blocks designated by the base template.

      Use the {% block block_name %} tag with a block name to define an area of a base template that can be overridden.

      /srv/salt/users.jinja
      {% block user %}jane{% endblock %}:
        user.present:
          - fullname: {% block fullname %}{% endblock %}
          - shell: /bin/zsh
          - createhome: True
          - home: /home/{% block home_dir %}{% endblock %}
          - uid: 4000
          - groups:
            - sudo

      This example creates a base user state template. Any value containing a {% block %} tag can be overridden by a child template with its own value.

      To use a base template within a child template, use the {% extends "base.sls" %} tag with the location of the base template file.

      /srv/salt/webserver_users.sls
      {% extends "/srv/salt/users.jinja" %}
      
      {% block fullname %}{{ salt['pillar.get']('jane:fullname', '') }}{% endblock %}
      {% block home_dir %}{{ salt['pillar.get']('jane:home_dir', 'jane') }}{% endblock %}

      The webserver_users.sls state file extends the users.jinja template and defines values for the fullname and home_dir blocks. The values are generated using the salt context variable and pillar data. The rest of the state will be rendered as the parent users.jinja template has defined it.

      Salt and Jinja Best Practices

      If Jinja is overused, its power and versatility can create unmaintainable Salt state files that are difficult to read. Here are some best practices to ensure that you are using Jinja effectively:

      • Limit how much Jinja you use within state files. It is best to separate the data from the state that will use the data. This allows you to update your data without having to alter your states.
      • Do not overuse conditionals and looping within state files. Overuse will make it difficult to read, understand and maintain your states.
      • Use dictionaries of variables and directly serialize them into YAML, instead of trying to create valid YAML within a template. You can include your logic within the dictionary and retrieve the necessary variable within your states.

        The {% load_yaml %} tag will deserialize strings and variables passed to it.

         {% load_yaml as example_yaml %}
             user: jane
             firstname: Jane
             lastname: Doe
         {% endload %}
        
         {{ example_yaml.user }}:
            user.present:
              - fullname: {{ example_yaml.firstname }} {{ example_yaml.lastname }}
              - shell: /bin/zsh
              - createhome: True
              - home: /home/{{ example_yaml.user }}
              - uid: 4001
              - groups:
                - games
        

        Use {% import_yaml %} to import external files of data and make the data available as a Jinja variable.

         {% import_yaml "users.yml" as users %}
        
      • Use Salt Pillars to store general or sensitive data as variables. Access these variables inside state files and template files.
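
      Combining these recommendations, a data file can drive a loop in a state file. The file name users.yml and its keys below are hypothetical:

      ```jinja
      {# /srv/salt/users.yml (hypothetical) contains:
         users:
           - jane
           - juan
      #}
      {% import_yaml "users.yml" as data %}

      {% for user in data.users %}
      {{ user }}:
        user.present:
          - home: /home/{{ user }}
      {% endfor %}
      ```

      Because the user data lives in its own YAML file, it can be updated without touching the state logic.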

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      Introduction to systemctl


      Updated by Linode. Contributed by Linode.



      What is systemctl?

      systemctl is a controlling interface and inspection tool for the widely-adopted init system and service manager systemd. This guide will cover how to use systemctl to manage systemd services, work with systemd targets, and extract meaningful information about your system’s overall state.

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the Users and Groups guide.

      Managing Services

      systemd initializes user space components that run after the Linux kernel has booted, as well as continuously maintaining those components throughout a system’s lifecycle. These tasks are known as units, and each unit has a corresponding unit file. Units might concern mounting storage devices (.mount), configuring hardware (.device), sockets (.socket), or, as will be covered in this guide, managing services (.service).

      Starting and Stopping a Service

      To start a systemd service in the current session, issue the start command:

      sudo systemctl start apache2.service
      

      Conversely, to stop a systemd service, issue the stop command:

      sudo systemctl stop apache2.service
      

      In the above example we started and then stopped the Apache service. It is important to note that systemctl does not require the .service extension when working with service units. The following is just as acceptable:

      sudo systemctl start apache2
      

      If the service needs to be restarted, such as to reload a configuration file, you can issue the restart command:

      sudo systemctl restart apache2
      

      Similarly, if a service can reload its configuration without restarting, you can issue the reload command:

      sudo systemctl reload apache2
      

      Finally, you can use the reload-or-restart command if you are unsure about whether your application needs to be restarted or just reloaded.

      sudo systemctl reload-or-restart apache2
      

      Enabling a Service at Boot

      The above commands are good for managing a service in a single session, but many services are also required to start at boot. To enable a service at boot:

      sudo systemctl enable nginx
      

      To disable the service from starting at boot, issue the disable command:

      sudo systemctl disable nginx
      

      Note

      The enable command does not start the service in the current session, nor does disable stop the service in the current session. To enable/disable and start/stop a service simultaneously, combine the command with the --now switch:

      sudo systemctl enable nginx --now
      

      If the service unit file is not located within one of the known systemd file paths, you can provide a file path to the service unit file you wish to enable:

      sudo systemctl enable /path/to/myservice.service
      

      However, this file needs to be accessible by systemd at startup. For example, this means files underneath /home or /var are not allowed, unless those directories are located on the root file system.
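
      For reference, a minimal service unit file might look like the following; the service name and binary path are hypothetical:

      ```ini
      # /etc/systemd/system/myservice.service (hypothetical example)
      [Unit]
      Description=My example service
      After=network.target

      [Service]
      ExecStart=/usr/local/bin/myservice
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target
      ```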

      Checking a Service’s Status

      systemctl allows us to check on the status of individual services:

      systemctl status mysql
      

      This will result in a message similar to the output below:

        
          ● mysql.service - MySQL Community Server
            Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
            Active: active (running) since Thu 2018-08-30 09:15:35 EDT; 1 day 5h ago
          Main PID: 711 (mysqld)
              Tasks: 31 (limit: 2319)
            CGroup: /system.slice/mysql.service
                    └─711 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
      
      

      You can also use is-active, is-enabled, and is-failed to monitor a service’s status:

      systemctl is-enabled mysql
      

      To view which systemd service units are currently active on your system, issue the following list-units command and filter by the service type:

      systemctl list-units --type=service
      

      Note

      list-units is the default action for the systemctl command, so you can simply enter systemctl to retrieve a list of units.

      The generated list includes all currently active service units, service units that have jobs pending, and service units that were active and have failed:

      UNIT                            LOAD   ACTIVE SUB       DESCRIPTION
      accounts-daemon.service         loaded active running   Accounts Service
      apparmor.service                loaded active exited    AppArmor initialization
      apport.service                  loaded active exited    LSB: automatic crash report generation
      atd.service                     loaded active running   Deferred execution scheduler
      blk-availability.service        loaded active exited    Availability of block devices
      console-setup.service           loaded active exited    Set console font and keymap
      cron.service                    loaded active running   Regular background program processing daemon
      dbus.service                    loaded active running   D-Bus System Message Bus
      ebtables.service                loaded active exited    ebtables ruleset management
      ...
      

      The output provides five pieces of data:

      • UNIT: The name of the unit.
      • LOAD: Was the unit properly loaded?
      • ACTIVE: The general activation state, i.e. a generalization of SUB.
      • SUB: The low-level unit activation state, with values dependent on unit type.
      • DESCRIPTION: The unit’s description.

      To list all units, including inactive units, append the --all flag:

      systemctl list-units --type=service --all
      

      You can filter the list of units by state. Supply a comma-separated list of unit states to output as the value for the --state flag:

      systemctl list-units --type=service --all --state=exited,inactive
      

      To retrieve a list of failed units, enter the list-units command with the --failed flag:

      systemctl list-units --failed
      

      Working with Unit Files

      Each unit has a corresponding unit file. These unit files are usually located in the following directories:

      • The /lib/systemd/system directory holds unit files that are provided by the system or are supplied by installed packages.
      • The /etc/systemd/system directory stores unit files that are user-provided.

      Listing Installed Unit Files

      Not all unit files are active on a system at any given time. To view all systemd service unit files installed on a system, use the list-unit-files command with the optional --type flag:

      systemctl list-unit-files --type=service
      

      The generated list has two columns, UNIT FILE and STATE:

      UNIT FILE                              STATE
      accounts-daemon.service                enabled
      acpid.service                          disabled
      apparmor.service                       enabled
      apport-forward@.service                static
      apt-daily-upgrade.service              static
      apt-daily.service                      static
      ...
      

      A unit’s STATE can be either enabled, disabled, static, masked, or generated. Unit files with a static state do not contain an Install section and are either meant to be run once or they are a dependency of another unit file and should not be run alone. For more on masking, see Masking a Unit File.

      Viewing a Unit File

      To view the contents of a unit file, run the cat command:

      systemctl cat cron
      
        
      # /lib/systemd/system/cron.service
      [Unit]
      Description=Regular background program processing daemon
      Documentation=man:cron(8)
      
      [Service]
      EnvironmentFile=-/etc/default/cron
      ExecStart=/usr/sbin/cron -f $EXTRA_OPTS
      IgnoreSIGPIPE=false
      KillMode=process
      
      [Install]
      WantedBy=multi-user.target
      
      

      If there are recent changes to the unit file that have not yet been loaded into systemd, the output of the systemctl cat command may be an older version of the service.

      For a low-level view of a unit file, issue the show command:

      systemctl show cron
      

      This will generate a list of property key=value pairs for all non-empty properties defined in the unit file:

      Type=simple
      Restart=no
      NotifyAccess=none
      RestartUSec=100ms
      TimeoutStartUSec=1min 30s
      TimeoutStopUSec=1min 30s
      RuntimeMaxUSec=infinity
      ...
      

      To show empty property values, supply the --all flag.

      To filter the key=value pairs by property, use the -p flag:

      systemctl show cron -p Names
      

      Note that the property name must be capitalized.

      Viewing a Unit File’s Dependencies

      To display a list of a unit file’s dependencies, use the list-dependencies command:

      systemctl list-dependencies cron
      

      The generated output will show a tree of unit dependencies that must run before the service in question runs.

      cron.service
      ● ├─system.slice
      ● └─sysinit.target
      ●   ├─apparmor.service
      ●   ├─blk-availability.service
      ●   ├─dev-hugepages.mount
      ●   ├─dev-mqueue.mount
      ●   ├─friendly-recovery.service
      ...
      

      Recursive dependencies are only listed for .target files. To list all recursive dependencies, pass in the --all flag.

      To check which unit files depend on a service unit file, you can run the list-dependencies command with the --reverse flag:

      systemctl list-dependencies cron --reverse
      

      Editing a Unit File

      Note

      While the particulars of unit file contents are beyond the scope of this article, there are a number of good resources online that describe them, such as the RedHat Customer Portal page on Creating and Modifying systemd Unit Files.

      There are two ways to edit a unit file using systemctl.

      1. The edit command opens up a blank drop-in snippet file in the system’s default text editor:

        sudo systemctl edit ssh
        

        When the file is saved, systemctl will create a file called override.conf under a directory at /etc/systemd/system/yourservice.service.d, where yourservice is the name of the service you chose to edit. This command is useful for changing a few properties of the unit file.

      2. The second way is to use the edit command with the --full flag:

        sudo systemctl edit ssh --full
        

        This command opens a full copy of whatever unit file you chose to edit in a text editor. When the file is saved, systemctl will create a file at /etc/systemd/system/yourservice.service. This is useful if you need to make many changes to an existing unit file.

      In general, any unit file in /etc/systemd/system will override the corresponding file in /lib/systemd/system.

      Creating a Unit File

      While systemctl will throw an error if you try to open a unit file that does not exist, you can force systemctl to create a new unit file using the --force flag:

      sudo systemctl edit yourservice.service --force
      

      When the file is saved, systemctl will create an override.conf file in the /etc/systemd/system/yourservice.service.d directory, where ‘yourservice’ is the name of the service you chose to create. To create a full unit file instead of just a snippet, use --force in tandem with --full:

      sudo systemctl edit yourservice.service --force --full
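
      As an illustration of what a drop-in snippet might contain, here is a hypothetical override.conf that changes a service’s restart policy (the service name and property values are examples, not defaults):

      ```ini
      # /etc/systemd/system/myservice.service.d/override.conf (hypothetical)
      [Service]
      Restart=on-failure
      RestartSec=5
      ```

      Only the properties listed in the snippet are overridden; everything else is inherited from the original unit file.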
      

      Masking a Unit File

      To prevent a service from ever starting, either manually or automatically, use the mask command to symlink a service to /dev/null:

      sudo systemctl mask mysql
      

      Similar to disabling a service, the mask command will not prevent a service from continuing to run. To mask a service and stop the service at the same time, use the --now switch:

      sudo systemctl mask mysql --now
      

      To unmask a service, use the unmask command:

      sudo systemctl unmask mysql
      

      Removing a Unit File

      To remove a unit file snippet that was created with the edit command, remove the directory yourservice.service.d (where ‘yourservice’ is the service you would like to delete), and the override.conf file inside of the directory:

      sudo rm -r /etc/systemd/system/yourservice.service.d
      

      To remove a full unit file, run the following command:

      sudo rm /etc/systemd/system/yourservice.service
      

      After you issue these commands, reload the systemd daemon so that it no longer tries to reference the deleted service:

      sudo systemctl daemon-reload
      

      Working with systemd Targets

      Like the runlevels of other init systems, systemd’s targets help it determine which unit files are necessary to produce a certain system state. systemd targets are represented by target units. Target units end with the .target file extension, and their only purpose is to group together other systemd units through a chain of dependencies.

      For instance, there is a graphical.target that denotes when the system’s graphical session is ready. Units that are required to start in order to achieve the necessary state have WantedBy= or RequiredBy= graphical.target in their configuration. Units that depend on graphical.target can include Wants=, Requires=, or After= in their configuration to make themselves available at the correct time.

      A target can have a corresponding directory whose name has the syntax target_name.target.wants (e.g. graphical.target.wants), located in /etc/systemd/system. When a symlink to a service file is added to this directory, that service becomes a dependency of the target.

      When you enable a service (using systemctl enable), symlinks to the service are created inside the .target.wants directory for each target listed in that service’s WantedBy= configuration. This is actually how the WantedBy= option is implemented.
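      That mechanism can be sketched as follows, using a temporary path in place of /etc/systemd/system (the ssh.service path is illustrative):

```shell
# Mimics what "systemctl enable ssh" does for a unit whose [Install]
# section contains WantedBy=multi-user.target: a symlink to the real
# unit file is added to the target's .wants directory.
mkdir -p /tmp/systemd-demo/multi-user.target.wants
ln -sf /lib/systemd/system/ssh.service \
  /tmp/systemd-demo/multi-user.target.wants/ssh.service
readlink /tmp/systemd-demo/multi-user.target.wants/ssh.service
```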

      Getting and Setting the Default Target

      To get the default target for your system (the end goal of the chain of dependencies), issue the get-default command:

      systemctl get-default
      

      If you would like to change the default target for your system, issue the set-default command:

      sudo systemctl set-default multi-user.target
      

      Listing Targets

      To retrieve a list of available targets, use the list-unit-files command and filter by target:

      systemctl list-unit-files --type=target
      

      To list all currently active targets, use the list-units command and filter by target:

      systemctl list-units --type=target
      

      Changing the Active Target

      To change the current active target, issue the isolate command. This command starts the isolated target with all of its dependent units and shuts down all others. For instance, if you want to move to a multi-user command line interface and stop the graphical shell, use the following command:

      sudo systemctl isolate multi-user.target
      

      However, it is a good idea to first check on the dependencies of the target you wish to isolate so you do not stop anything important. To do this, issue the list-dependencies command:

      systemctl list-dependencies multi-user.target
      

      Rescue Mode

      When a situation arises where you are unable to proceed with a normal boot, you can place your system in rescue mode. Rescue mode provides a single-user interface used to repair your system. To place your system in rescue mode, enter the following command:

      sudo systemctl rescue
      

      This command is similar to systemctl isolate rescue, but will also issue a notice to all other users that the system is entering rescue mode. To prevent this message from being sent, apply the --no-wall flag:

      sudo systemctl rescue --no-wall
      

      Emergency Mode

      Emergency mode offers the user the most minimal environment possible to salvage a system in need of repair, and is useful if the system cannot enter rescue mode. For a full explanation of emergency mode, refer to the RedHat Customer Portal page. To enter emergency mode, enter the following command:

      sudo systemctl emergency
      

      This command is similar to systemctl isolate emergency, but will also issue a notice to all other users that the system is entering emergency mode. To prevent this message, apply the --no-wall flag:

      sudo systemctl emergency --no-wall
      

      More Shortcuts

      systemctl also gives users the ability to halt, power off, and reboot a system.

      To halt a system, issue the following command:

      sudo systemctl halt
      

      To shut down (power off) a system, use:

      sudo systemctl poweroff
      

      Finally, to reboot a system, enter the following command:

      sudo systemctl reboot
      

      Similar to the emergency and rescue commands, these commands will issue a notice to all users that the system state is changing.



      This guide is published under a CC BY-ND 4.0 license.




      An Introduction to the Kubernetes DNS Service


      Introduction

      The Domain Name System (DNS) is a system for associating various types of information – such as IP addresses – with easy-to-remember names. By default most Kubernetes clusters automatically configure an internal DNS service to provide a lightweight mechanism for service discovery. Built-in service discovery makes it easier for applications to find and communicate with each other on Kubernetes clusters, even when pods and services are being created, deleted, and shifted between nodes.

      The implementation details of the Kubernetes DNS service have changed in recent versions of Kubernetes. In this article we will take a look at both the kube-dns and CoreDNS versions of the Kubernetes DNS service. We will review how they operate and the DNS records that Kubernetes generates.

      To gain a more thorough understanding of DNS before you begin, please read An Introduction to DNS Terminology, Components, and Concepts. For any Kubernetes topics you may be unfamiliar with, you could read An Introduction to Kubernetes.

      What Does the Kubernetes DNS Service Provide?

      Before Kubernetes version 1.11, the Kubernetes DNS service was based on kube-dns. Version 1.11 introduced CoreDNS to address some security and stability concerns with kube-dns.

      Regardless of the software handling the actual DNS records, both implementations work in a similar manner:

      • A service named kube-dns and one or more pods are created.
      • The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed. These events are triggered when you create, update or delete Kubernetes services and their associated pods.
      • kubelet sets each new pod’s /etc/resolv.conf nameserver option to the cluster IP of the kube-dns service, with appropriate search options to allow for shorter hostnames to be used:

        resolv.conf

        nameserver 10.32.0.10
        search namespace.svc.cluster.local svc.cluster.local cluster.local
        options ndots:5
        
      • Applications running in containers can then resolve hostnames such as example-service.namespace into the correct cluster IP addresses.
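      The options ndots:5 line means that any name containing fewer than five dots is first tried with each search suffix appended before being tried as an absolute name. A quick way to see why a short service name qualifies (pure shell, no DNS queries performed):

```shell
# Count the dots in a short service name; with fewer than 5 dots,
# the resolver appends the search domains from resolv.conf first.
name=example-service.namespace
dots=$(printf '%s' "$name" | tr -cd '.' | wc -c)
echo "$dots"
```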

      Example Kubernetes DNS Records

      The full DNS A record of a Kubernetes service will look like the following example:

      service.namespace.svc.cluster.local
      

      A pod would have a record in the following format, reflecting the actual IP address of the pod with its dots replaced by dashes:

      10-32-0-125.namespace.pod.cluster.local
      

      Additionally, SRV records are created for a Kubernetes service’s named ports:

      _port-name._protocol.service.namespace.svc.cluster.local
      

      The result of all this is a built-in, DNS-based service discovery mechanism, where your application or microservice can target a simple and consistent hostname to access other services or pods on the cluster.

      Search Domains and Resolving Shorter Hostnames

      Because of the search domain suffixes listed in the resolv.conf file, you often won’t need to use the full hostname to contact another service. If you’re addressing a service in the same namespace, you can use just the service name to contact it:

      other-service
      

      If the service is in a different namespace, add it to the query:

      other-service.other-namespace
      

      If you’re targeting a pod, you’ll need to use at least the following:

      pod-ip.other-namespace.pod
      

      As we saw in the default resolv.conf file, only .svc suffixes are automatically completed, so make sure you specify everything up to .pod.
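      The expansion the resolver performs for a short name can be sketched as follows, with the suffix list taken from the example resolv.conf above:

```shell
# Generate the candidate FQDNs the resolver will try for a short name,
# in the order given by the "search" line in resolv.conf.
short=other-service
for suffix in namespace.svc.cluster.local svc.cluster.local cluster.local; do
  printf '%s.%s\n' "$short" "$suffix"
done
```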

      Now that we know the practical uses of the Kubernetes DNS service, let’s run through some details on the two different implementations.

      Kubernetes DNS Implementation Details

      As noted in the previous section, Kubernetes version 1.11 introduced new software to handle the kube-dns service. The motivation for the change was to increase the performance and security of the service. Let’s take a look at the original kube-dns implementation first.

      kube-dns

      The kube-dns service prior to Kubernetes 1.11 is made up of three containers running in a kube-dns pod in the kube-system namespace. The three containers are:

      • kube-dns: a container that runs SkyDNS, which performs DNS query resolution
      • dnsmasq: a popular lightweight DNS resolver and cache that caches the responses from SkyDNS
      • sidecar: a sidecar container that handles metrics reporting and responds to health checks for the service

      Security vulnerabilities in dnsmasq and scaling performance issues with SkyDNS led to the creation of a replacement system, CoreDNS.

      CoreDNS

      As of Kubernetes 1.11, a new Kubernetes DNS service, CoreDNS, has been promoted to General Availability. This means that it’s ready for production use and will be the default cluster DNS service for many installation tools and managed Kubernetes providers.

      CoreDNS is a single process, written in Go, that covers all of the functionality of the previous system. A single container resolves and caches DNS queries, responds to health checks, and provides metrics.

      In addition to addressing performance- and security-related issues, CoreDNS fixes some other minor bugs and adds some new features:

      • Some issues with incompatibilities between using stubDomains and external services have been fixed
      • CoreDNS can enhance DNS-based round-robin load balancing by randomizing the order in which it returns certain records
      • A feature called autopath can improve DNS response times when resolving external hostnames, by being smarter about iterating through each of the search domain suffixes listed in resolv.conf
      • With kube-dns, 10-32-0-125.namespace.pod.cluster.local would always resolve to 10.32.0.125, even if the pod doesn’t actually exist. CoreDNS has a “pods verified” mode that will only resolve successfully if a pod exists with the right IP and in the right namespace.

      For more information on CoreDNS and how it differs from kube-dns, you can read the Kubernetes CoreDNS GA announcement.

      Additional Configuration Options

      Kubernetes operators often want to customize how their pods and containers resolve certain custom domains, or need to adjust the upstream nameservers or search domain suffixes configured in resolv.conf. You can do this with the dnsConfig option of your pod’s spec:

      example_pod.yaml

      apiVersion: v1
      kind: Pod
      metadata:
        namespace: example
        name: custom-dns
      spec:
        containers:
          - name: example
            image: nginx
        dnsPolicy: "None"
        dnsConfig:
          nameservers:
            - 203.0.113.44
          searches:
            - custom.dns.local
      

      Updating this config will rewrite a pod’s resolv.conf to enable the changes. The configuration maps directly to the standard resolv.conf options, so the above config would create a file with nameserver 203.0.113.44 and search custom.dns.local lines.
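      To make the mapping concrete, the file that the custom-dns pod would end up with can be sketched like this (written to a temporary path here purely for illustration):

```shell
# The resolv.conf that the dnsConfig above would produce inside the
# pod's containers; /tmp stands in for the container's /etc.
cat > /tmp/pod-resolv.conf <<'EOF'
nameserver 203.0.113.44
search custom.dns.local
EOF
cat /tmp/pod-resolv.conf
```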

      Conclusion

      In this article we covered the basics of what the Kubernetes DNS service provides to developers, showed some example DNS records for services and pods, discussed how the system is implemented on different Kubernetes versions, and highlighted some additional configuration options available to customize how your pods resolve DNS queries.

      For more information on the Kubernetes DNS service, please refer to the official Kubernetes DNS for Services and Pods documentation.


