      An Introduction to Service Meshes


      Introduction

      A service mesh is an infrastructure layer that allows you to manage communication between your application’s microservices. As more developers work with microservices, service meshes have evolved to make that work easier and more effective by consolidating common management and administrative tasks in a distributed setup.

      Taking a microservice approach to application architecture involves breaking your application into a collection of loosely-coupled services. This approach offers certain benefits: teams can iterate designs and scale quickly, using a wider range of tools and languages. On the other hand, microservices pose new challenges for operational complexity, data consistency, and security.

      Service meshes are designed to address some of these challenges by offering a granular level of control over how services communicate with one another. Specifically, they offer developers a way to manage:

      • Service discovery
      • Routing and traffic configuration
      • Encryption and authentication/authorization
      • Metrics and monitoring

      Though it is possible to do these tasks natively with container orchestrators like Kubernetes, this approach involves a greater amount of up-front decision-making and administration when compared to what service mesh solutions like Istio and Linkerd offer out of the box. In this sense, service meshes can streamline and simplify the process of working with common components in a microservice architecture. In some cases they can even extend the functionality of these components.

      Why Service Meshes?

      Service meshes are designed to address some of the challenges inherent to distributed application architectures.

      These architectures grew out of the three-tier application model, which broke applications into a web tier, application tier, and database tier. At scale, this model has proved challenging to organizations experiencing rapid growth. Monolithic application code bases can grow to be unwieldy “big balls of mud”, posing challenges for development and deployment.

      In response to this problem, organizations like Google, Netflix, and Twitter developed internal “fat client” libraries to standardize runtime operations across services. These libraries provided load balancing, circuit breaking, routing, and telemetry — precursors to service mesh capabilities. However, they also imposed limitations on the languages developers could use and required changes across services when they themselves were updated or changed.

      A microservice design avoids some of these issues. Instead of having a large, centralized application codebase, you have a collection of discretely managed services that represent a feature of your application. Benefits of a microservice approach include:

      • Greater agility in development and deployment, since teams can work on and deploy different application features independently.
      • Better options for CI/CD, since individual microservices can be tested and redeployed independently.
      • More options for languages and tools. Developers can use the best tools for the tasks at hand, rather than being restricted to a given language or toolset.
      • Ease in scaling.
      • Improvements in uptime, user experience, and stability.

      At the same time, microservices have also created challenges:

      • Distributed systems require different ways of thinking about latency, routing, asynchronous workflows, and failures.
      • Microservice setups cannot necessarily meet the same requirements for data consistency as monolithic setups.
      • Greater levels of distribution necessitate more complex operational designs, particularly when it comes to service-to-service communication.
      • Distribution of services increases the surface area for security vulnerabilities.

      Service meshes are designed to address these issues by offering coordinated and granular control over how services communicate. In the sections that follow, we’ll look at how service meshes facilitate service-to-service communication through service discovery, routing and internal load balancing, traffic configuration, encryption, authentication and authorization, and metrics and monitoring. We will use Istio’s Bookinfo sample application — four microservices that together display information about particular books — as a concrete example to illustrate how service meshes work.

      Service Discovery

      In a distributed framework, it’s necessary to know how to connect to services and whether or not they are available. Service instance locations are assigned dynamically on the network and information about them is constantly changing as containers are created and destroyed through autoscaling, upgrades, and failures.

      Historically, there have been a few tools for doing service discovery in a microservice framework. Key-value stores like etcd were paired with other tools like Registrator to offer service discovery solutions. Tools like Consul iterated on this by combining a key-value store with a DNS interface that allows users to work directly with their DNS server or node.

      Taking a similar approach, Kubernetes offers DNS-based service discovery by default. With it, you can look up services and service ports, and do reverse IP lookups using common DNS naming conventions. In general, an A record for a Kubernetes service matches this pattern: service.namespace.svc.cluster.local. Let’s look at how this works in the context of the Bookinfo application. If, for example, you wanted information on the details service from the Bookinfo app, you could look at the relevant entry in the Kubernetes dashboard:

      Details Service in Kubernetes Dash

      This will give you relevant information about the Service name, namespace, and ClusterIP, which you can use to connect with your Service even as individual containers are destroyed and recreated.

      A service mesh like Istio also offers service discovery capabilities. To do service discovery, Istio relies on communication between the Kubernetes API, Istio’s own control plane, managed by the traffic management component Pilot, and its data plane, managed by Envoy sidecar proxies. Pilot interprets data from the Kubernetes API server to register changes in Pod locations. It then translates that data into a canonical Istio representation and forwards it on to the sidecar proxies.

      This means that service discovery in Istio is platform agnostic, which we can see by using Istio’s Grafana add-on to look at the details service again in Istio’s service dashboard:

      Details Service Istio Dash

      Our application is running on a Kubernetes cluster, so once again we can see the relevant DNS information about the details Service, along with other performance data.

      In a distributed architecture, it’s important to have up-to-date, accurate, and easy-to-locate information about services. Both Kubernetes and service meshes like Istio offer ways to obtain this information using DNS conventions.

      Routing and Traffic Configuration

      Managing traffic in a distributed framework means controlling how traffic gets to your cluster and how it’s directed to your services. The more control and specificity you have in configuring external and internal traffic, the more you will be able to do with your setup. For example, in cases where you are working with canary deployments, migrating applications to new versions, or stress testing particular services through fault injection, having the ability to decide how much traffic your services are getting and where it is coming from will be key to the success of your objectives.

      Kubernetes offers different tools, objects, and services that allow developers to control external traffic to a cluster: kubectl proxy, NodePort, Load Balancers, and Ingress Controllers and Resources. Both kubectl proxy and NodePort allow you to quickly expose your services to external traffic: kubectl proxy creates a proxy server that allows access to static content with an HTTP path, while NodePort exposes a randomly assigned port on each node. Though this offers quick access, drawbacks include having to run kubectl as an authenticated user, in the case of kubectl proxy, and a lack of flexibility in ports and node IPs, in the case of NodePort. And though a Load Balancer optimizes for flexibility by attaching to a particular Service, each Service requires its own Load Balancer, which can be costly.

      An Ingress Resource and Ingress Controller together offer a greater degree of flexibility and configurability over these other options. Using an Ingress Controller with an Ingress Resource allows you to route external traffic to Services and configure internal routing and load balancing. To use an Ingress Resource, you need to configure your Services, the Ingress Controller and LoadBalancer, and the Ingress Resource itself, which will specify the desired routes to your Services. Currently, Kubernetes supports its own Nginx Controller, but there are other options you can choose from as well, managed by Nginx, Kong, and others.
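      As a minimal sketch, an Ingress Resource that routes one external path to the Bookinfo details Service might look like the following; the host name and resource name are assumptions, and the API version reflects Kubernetes releases current at the time of writing:

```yaml
# Hypothetical Ingress Resource for the Bookinfo details Service; the
# host name and metadata.name are illustrative assumptions.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookinfo-ingress
spec:
  rules:
  - host: bookinfo.example.com
    http:
      paths:
      - path: /details
        backend:
          serviceName: details
          servicePort: 9080
```

      An Ingress Controller watching for this Resource would then forward requests for bookinfo.example.com/details to the details Service on port 9080.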

      Istio iterates on the Kubernetes Controller/Resource pattern with Istio Gateways and VirtualServices. Like an Ingress Controller, a Gateway defines how incoming traffic should be handled, specifying exposed ports and protocols to use. It works in conjunction with a VirtualService, which defines routes to Services within the mesh. Both of these resources communicate information to Pilot, which then forwards that information to the Envoy proxies. Though they are similar to Ingress Controllers and Resources, Gateways and VirtualServices offer a different level of control over traffic: instead of combining Open Systems Interconnection (OSI) layers and protocols, Gateways and VirtualServices allow you to differentiate between OSI layers in your settings. For example, by using VirtualServices, teams working with application layer specifications could have a separation of concerns from security operations teams working with different layer specifications. VirtualServices make it possible to separate work on discrete application features or within different trust domains, and can be used for things like canary testing, gradual rollouts, A/B testing, etc.
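      The Gateway and VirtualService pairing can be sketched along the lines of Istio’s Bookinfo sample, which exposes the productpage Service through the mesh’s ingress gateway:

```yaml
# A sketch of a Gateway and VirtualService pair, following the pattern of
# Istio's Bookinfo sample; resource names and routes mirror that sample.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```

      Here the Gateway handles the layer 4 to 6 concerns (port, protocol, hosts), while the VirtualService handles the layer 7 routing decision.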

      To visualize the relationship between Services, you can use Istio’s Servicegraph add-on, which produces a dynamic representation of the relationship between Services using real-time traffic data. The Bookinfo application might look like this without any custom routing applied:

      Bookinfo service graph

      Similarly, you can use a visualization tool like Weave Scope to see the relationship between your Services at a given time. The Bookinfo application without advanced routing might look like this:

      Weave Scope Service Map

      When configuring application traffic in a distributed framework, there are a number of different solutions — from Kubernetes-native options to service meshes like Istio — that offer various options for determining how external traffic will reach your application resources and how these resources will communicate with one another.

      Encryption and Authentication/Authorization

      A distributed framework presents opportunities for security vulnerabilities. Instead of communicating through local internal calls, as they would in a monolithic setup, services in a microservice architecture communicate information, including privileged information, over the network. Overall, this creates a greater surface area for attacks.

      Securing Kubernetes clusters involves a range of procedures; we will focus on authentication, authorization, and encryption. Kubernetes offers native approaches to each of these:

      • Authentication: API requests in Kubernetes are tied to user or service accounts, which need to be authenticated. There are several different ways to manage the necessary credentials: Static Tokens, Bootstrap Tokens, X.509 client certificates, and external tools like OpenID Connect.
      • Authorization: Kubernetes has different authorization modules that allow you to determine access based on things like roles, attributes, and other specialized functions. Since all requests to the API server are denied by default, each part of an API request must be defined by an authorization policy.
      • Encryption: This can refer to any of the following: connections between end users and services, secret data, endpoints in the Kubernetes control plane, and communication between worker cluster components and master components. Kubernetes offers different solutions for each of these.
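      As a sketch of the authorization piece, a Role and RoleBinding that grant read-only access to Pods in a namespace might look like the following; the service account name app-reader is hypothetical:

```yaml
# Hypothetical RBAC sketch: read-only access to Pods in the default
# namespace, bound to an assumed service account named "app-reader".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-reader
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

      With this policy applied, API requests from that service account can read Pods in the default namespace, and all other requests remain denied by default.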

      Configuring individual security policies and protocols in Kubernetes requires administrative investment. A service mesh like Istio can consolidate some of these activities.

      Istio is designed to automate some of the work of securing services. Its control plane includes several components that handle security:

      • Citadel: manages keys and certificates.
      • Pilot: oversees authentication and naming policies and shares this information with Envoy proxies.
      • Mixer: manages authorization and auditing.

      For example, when you create a Service, Citadel receives that information from the kube-apiserver and creates SPIFFE certificates and keys for this Service. It then transfers this information to Pods and Envoy sidecars to facilitate communication between Services.

      You can also implement some security features by enabling mutual TLS during the Istio installation. These include strong service identities for cross- and inter-cluster communication, secure service-to-service and user-to-service communication, and a key management system that can automate key and certificate creation, distribution, and rotation.
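      For example, in the Istio releases contemporary with this article, mesh-wide mutual TLS could be declared with a single policy resource (note that Istio’s security API has changed in later versions):

```yaml
# Mesh-wide mutual TLS policy, following the older Istio authentication
# API; later Istio releases replace this resource with newer security APIs.
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
```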

      By iterating on how Kubernetes handles authentication, authorization, and encryption, service meshes like Istio are able to consolidate and extend some of the recommended best practices for running a secure Kubernetes cluster.

      Metrics and Monitoring

      Distributed environments have changed the requirements for metrics and monitoring. Monitoring tools need to be adaptive, accounting for frequent changes to services and network addresses, and comprehensive, accommodating the amount and type of information passing between services.

      Kubernetes includes some internal monitoring tools by default. These resources belong to its resource metrics pipeline, which ensures that the cluster runs as expected. The cAdvisor component collects network usage, memory, and CPU statistics from individual containers and nodes and passes that information to kubelet; kubelet in turn exposes that information via a REST API. The Metrics Server gets this information from the API and then passes it to the kube-aggregator for formatting.

      You can extend these internal tools and monitoring capabilities with a full metrics solution. Using a service like Prometheus as a metrics aggregator allows you to build directly on top of the Kubernetes resource metrics pipeline. Prometheus integrates directly with cAdvisor through its own agents, located on the nodes. Its main aggregation service collects and stores data from the nodes and exposes it through dashboards and APIs. Additional storage and visualization options are also available if you choose to integrate your main aggregation service with backend storage, logging, and visualization tools like InfluxDB, Grafana, ElasticSearch, Logstash, Kibana, and others.
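      A scrape job for the kubelet’s cAdvisor endpoint might be sketched in the Prometheus configuration as follows; exact paths, TLS settings, and relabeling rules vary by cluster setup:

```yaml
# A hedged sketch of a Prometheus scrape job that discovers cluster nodes
# and reads the kubelet's cAdvisor metrics endpoint.
scrape_configs:
  - job_name: 'kubernetes-cadvisor'
    scheme: https
    metrics_path: /metrics/cadvisor
    kubernetes_sd_configs:
      - role: node
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```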

      In a service mesh like Istio, the structure of the full metrics pipeline is part of the mesh’s design. Envoy sidecars operating at the Pod level communicate metrics to Mixer, which manages policies and telemetry. Additionally, Prometheus and Grafana services are enabled by default (though if you are installing Istio with Helm you will need to specify grafana.enabled=true during installation). As is the case with the full metrics pipeline, you can also configure other services and deployments for logging and viewing options.

      With these metric and visualization tools in place, you can access current information about services and workloads in a central place. For example, a global view of the Bookinfo application might look like this in the Istio Grafana dashboard:

      Bookinfo services from Grafana dash

      By replicating the structure of a Kubernetes full metrics pipeline and simplifying access to some of its common components, service meshes like Istio streamline the process of data collection and visualization when working with a cluster.

      Conclusion

      Microservice architectures are designed to make application development and deployment fast and reliable. Yet an increase in inter-service communication has changed best practices for certain administrative tasks. This article discusses some of those tasks, how they are handled in a Kubernetes-native context, and how they can be managed using a service mesh — in this case, Istio.

      For more information on some of the Kubernetes topics covered here, please see the following resources:

      Additionally, the Kubernetes and Istio documentation hubs are great places to find detailed information about the topics discussed here.




      Introduction to HashiCorp Configuration Language (HCL)


      Updated by Linode. Written by Linode.

      HCL is a configuration language authored by HashiCorp. HCL is used with HashiCorp’s cloud infrastructure automation tools, like Terraform. The language was created with the goal of being both human and machine friendly. It is JSON compatible, which means it is interoperable with other systems outside of the Terraform product line.

      This guide provides an introduction to HCL syntax and some commonly used HCL terminology.

      HCL Syntax Overview

      HashiCorp’s configuration syntax is easy to read and write. It was created to have a more clearly visible and defined structure than other well-known configuration languages, like YAML.

      ~/terraform/main.tf
      # Linode provider block. Installs Linode plugin.
      provider "linode" {
          token = "${var.token}"
      }
      
      variable "region" {
        description = "This is the location where the Linode instance is deployed."
      }
      
      /* A multi
         line comment. */
      resource "linode_instance" "example_linode" {
          image = "linode/ubuntu18.04"
          label = "example-linode"
          region = "${var.region}"
          type = "g6-standard-1"
          authorized_keys = [ "my-key" ]
          root_pass = "example-password"
      }
          

      Key Elements of HCL

      • HCL syntax is composed of stanzas or blocks that define a variety of configurations available to Terraform. Provider plugins expand on the available base Terraform configurations.

      • Stanzas or blocks consist of key = value pairs. Terraform accepts values of type string, number, boolean, map, and list.

      • Single line comments start with #, while multi-line comments use an opening /* and a closing */.

      • Interpolation syntax can be used to reference values stored outside of a configuration block, like in an input variable, or from a Terraform module’s output.

        An interpolated variable reference is constructed with the "${var.region}" syntax. This example references a variable named region, which is prefixed by var.. The opening ${ and closing } indicate the start of interpolation syntax.

      • You can include multi-line strings by using an opening <<EOF, followed by a closing EOF on its own line.

      • Strings are wrapped in double quotes.

      • Lists of primitive types (string, number, and boolean) are wrapped in square brackets: ["Andy", "Leslie", "Nate", "Angel", "Chris"].

      • Maps use curly braces {} and colons :, as follows: { "password" : "my_password", "db_name" : "wordpress" }.
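      Several of the constructs above (multi-line strings, lists, and maps) can be sketched in a single illustrative variables file; these names are assumptions and do not correspond to any real provider schema:

```hcl
# Illustrative variable definitions showing list and map values and a
# multi-line string; the names here are assumptions, not a real schema.
variable "authorized_users" {
  type    = "list"
  default = ["Andy", "Leslie", "Nate"]
}

variable "db_settings" {
  type = "map"
  default = {
    "db_name"  = "wordpress"
    "password" = "my_password"
  }
}

variable "motd" {
  default = <<EOF
Welcome to the server.
This is a multi-line string.
EOF
}
```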

      See Terraform’s Configuration Syntax documentation for more details.

      Providers

      In Terraform, a provider is used to interact with an Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) API, like the Linode APIv4. The provider determines which resources are exposed and available to create, read, update, and delete. A credentials set or token is usually required to interface with your service account. For example, the Linode Terraform provider requires your Linode API access token. A list of all official Terraform providers is available from HashiCorp.

      Configuring a Linode as your provider requires that you include a block which specifies Linode as the provider and sets your Linode API token in one of your .tf files:

      ~/terraform/terraform.tf
      provider "linode" {
          token = "my-token"
      }

      Once your provider is declared, you can begin configuring resources available from the provider.

      Note

      Providers are packaged as plugins for Terraform. Whenever declaring a new provider in your Terraform configuration files, the terraform init command should be run. This command will complete several initialization steps that are necessary before you can apply your Terraform configuration, including downloading the plugins for any providers you’ve specified.

      Resources

      A Terraform resource is any component of your infrastructure that can be managed by your provider. Resources available with the Linode provider range from a Linode instance, to a block storage volume, to a DNS record. Terraform’s Linode Provider documentation contains a full listing of all supported resources.

      Resources are declared with a resource block in a .tf configuration file. This example block deploys a 2GB Linode instance located in the US East data center from an Ubuntu 18.04 image. Values are also provided for the Linode’s label, public SSH key, and root password:

      ~/terraform/main.tf
      resource "linode_instance" "WordPress" {
          image = "linode/ubuntu18.04"
          label = "WPServer"
          region = "us-east"
          type = "g6-standard-1"
          authorized_keys = [ "example-key" ]
          root_pass = "example-root-pass"
      }

      HCL-specific meta-parameters are available to all resources and are independent of the provider you use. Meta-parameters allow you to do things like customize the lifecycle behavior of the resource, define the number of resources to create, or protect certain resources from being destroyed. See Terraform’s Resource Configuration documentation for more information on meta-parameters.
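      As a hedged sketch, the count and lifecycle meta-parameters might be combined like this (all values are illustrative):

```hcl
# A hedged meta-parameter sketch: "count" creates two Linode instances,
# and "lifecycle" protects them from accidental destruction.
resource "linode_instance" "web" {
  count  = 2
  image  = "linode/ubuntu18.04"
  label  = "web-${count.index}"
  region = "us-east"
  type   = "g6-standard-1"

  lifecycle {
    prevent_destroy = true
  }
}
```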

      Modules

      A module is an encapsulated set of Terraform configurations used to organize the creation of resources in reusable configurations.

      The Terraform Module Registry is a repository of community modules that can help you get started creating resources for various providers. You can also create your own modules to better organize your Terraform configurations and make them available for reuse. Once you have created your modules, you can distribute them via a remote version control repository, like GitHub.

      Using Modules

      A module block instructs Terraform to create an instance of a module. This block instantiates any resources defined within that module.

      The only universally required configuration for all module blocks is the source parameter which indicates the location of the module’s source code. All other required configurations will vary from module to module. If you are using a local module you can use a relative path as the source value. The source path for a Terraform Module Registry module will be available on the module’s registry page.

      This example creates an instance of a module named linode-module-example and provides a relative path as the location of the module’s source code:

      ~/terraform/main.tf
      module "linode-module-example" {
          source = "/modules/linode-module-example"
      }

      Authoring modules involves defining resource requirements and parameterizing configurations using input variables, variable files, and outputs. To learn how to write your own Terraform modules, see Create a Terraform Module.

      Input Variables

      You can define input variables to serve as Terraform configuration parameters. By convention, input variables are normally defined within a file named variables.tf. Terraform will load all files ending in .tf, so you can also define variables in files with other names.

      • Terraform accepts variables of type string, number, boolean, map, and list. If a variable type is not explicitly defined, Terraform will default to type = "string".

      • It is good practice to provide a meaningful description for all your input variables.

      • If a variable does not contain a default value, or if you would like to override a variable’s default value, you must provide a value as an environment variable or within a variable values file.

      Variable Declaration Example

      ~/terraform/variables.tf
      variable "token" {
        description = "This is your Linode APIv4 Token."
      }
      
      variable "region" {
          description = "This is the location where the Linode instance is deployed."
          default = "us-east"
      }

      Two input variables named token and region are defined, respectively. The region variable defines a default value. Both variables will default to type = "string", since a type is not explicitly declared.

      Supplying Variable Values

      Variable values can be specified in .tfvars files. These files use the same syntax as Terraform configuration files:

      ~/terraform/terraform.tfvars
      token = "my-token"
      region = "us-west"

      Terraform will automatically load values from filenames which match terraform.tfvars or *.auto.tfvars. If you store values in a file with another name, you need to specify that file with the -var-file option when running terraform apply. The -var-file option can be invoked multiple times:

      terraform apply \
          -var-file="variable-values-1.tfvars" \
          -var-file="variable-values-2.tfvars"
      

      Values can also be specified in environment variables when running terraform apply. The name of the variable should be prefixed with TF_VAR_:

      TF_VAR_token=my-token-value TF_VAR_region=us-west terraform apply
      

      Note

      Environment variables can only assign values to variables of type = "string".

      Referencing Variables

      You can call existing input variables within your configuration file using Terraform’s interpolation syntax. Observe the value of the region parameter:

      ~/terraform/main.tf
      resource "linode_instance" "WordPress" {
          image = "linode/ubuntu18.04"
          label = "WPServer"
          region = "${var.region}"
          type = "g6-standard-1"
          authorized_keys = [ "example-key" ]
          root_pass = "example-root-pass"
      }

      Note

      If a variable value is not provided in any of the ways discussed above, and the variable is called in a resource configuration, Terraform will prompt you for the value when you run terraform apply.

      For more information on variables, see Terraform’s Input Variables documentation.

      Interpolation

      HCL supports the interpolation of values. Interpolations are wrapped in an opening ${ and a closing }. Input variable names are prefixed with var.:

      ~/terraform/terraform.tf
      provider "linode" {
          token = "${var.token}"
      }

      Interpolation syntax is powerful and includes the ability to reference attributes of other resources, call built-in functions, and use conditionals and templates.

      This resource’s configuration uses a conditional to provide a value for the tags parameter:

      ~/terraform/terraform.tf
      resource "linode_instance" "web" {
          tags = ["${var.env == "production" ? var.prod_subnet : var.dev_subnet}"]
      }

      If the env variable has the value production, then the prod_subnet variable is used. If not, then the dev_subnet variable is used.

      Functions

      Terraform has built-in computational functions that perform a variety of operations, including reading files, concatenating lists, encrypting or creating a checksum of an object, and searching and replacing.

      ~/terraform/terraform.tf
      resource "linode_sshkey" "main_key" {
          label = "foo"
          ssh_key = "${chomp(file("~/.ssh/id_rsa.pub"))}"
      }

      In this example, ssh_key = "${chomp(file("~/.ssh/id_rsa.pub"))}" uses Terraform’s built-in function file() to provide a local file path to the public SSH key’s location. The chomp() function removes trailing new lines from the SSH key. Observe that the nested functions are wrapped in opening ${ and closing } to indicate that the value should be interpolated.

      Note

      Running terraform console creates an environment where you can test interpolation functions. For example:

      terraform console
      
        
      > list("newark", "atlanta", "dallas")
      [
        "newark",
        "atlanta",
        "dallas",
      ]
      >
      
      

      Terraform’s official documentation includes a complete list of supported built-in functions.

      Templates

      Templates can be used to store large strings of data. The template provider exposes data sources that other Terraform resources or outputs can consume. The data source can be a file or an inline template.

      The data source can use Terraform’s standard interpolation syntax for variables. The template is then rendered with variable values that you supply in the data block.

      This example template resource substitutes in the value from ${linode_instance.web.ip_address} anywhere ${web_ip} appears inside the template file ips.json:

      data "template_file" "web" {
          template = "${file("${path.module}/ips.json")}"
      
          vars {
              web_ip = "${linode_instance.web.ip_address}"
          }
      }

      You could then define an output variable to view the rendered template when you later run terraform apply:

      output "ip" {
        value = "${data.template_file.web.rendered}"
      }

      Terraform’s official documentation has a list of all available components of interpolation syntax.

      Next Steps

      Now that you are familiar with HCL, you can begin creating your own Linode instance with Terraform by following the Use Terraform to Provision Linode Environments guide.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      Introduction to Jinja Templates for Salt


      Updated by Linode. Contributed by Linode.


      Introduction to Templating Languages

Jinja is a flexible templating language for Python that can be used to generate any text-based format, such as HTML, XML, and YAML. Templating languages like Jinja allow you to insert data into a structured format. You can also embed logic or control-flow statements into templates for greater reusability and modularity. Jinja’s template engine is responsible for processing the code within the templates and generating the final text-based document.

Templating languages are well known within the context of creating web pages in a Model View Controller architecture. In this scenario the template engine processes source data, like the data found in a database, and a web template that includes a mixture of HTML and the templating language. These two pieces are then used to generate the final web page for users to consume. Templating languages, however, are not limited to web pages. Salt, a popular Python-based configuration management software, supports Jinja to allow for abstraction and reuse within Salt state files and regular files.

      This guide will provide an overview of the Jinja templating language used primarily within Salt. If you are not yet familiar with Salt concepts, review the Beginner’s Guide to Salt before continuing. While you will not be creating Salt states of your own in this guide, it is also helpful to review the Getting Started with Salt – Basic Installation and Setup guide.

      Jinja Basics

      This section provides an introductory description of Jinja syntax and concepts along with examples of Jinja and Salt states. For an exhaustive dive into Jinja, consult the official Jinja Template Designer Documentation.

      Applications like Salt can define default behaviors for the Jinja templating engine. All examples in this guide use Salt’s default Jinja environment options. These settings can be changed in the Salt master configuration file:

      /etc/salt/master
      # Default Jinja environment options for all templates except sls templates
      #jinja_env:
      #  block_start_string: '{%'
      #  block_end_string: '%}'
      #  variable_start_string: '{{'
      #  variable_end_string: '}}'
      #  comment_start_string: '{#'
      #  comment_end_string: '#}'
      #  line_statement_prefix:
      #  line_comment_prefix:
      #  trim_blocks: False
      #  lstrip_blocks: False
#  newline_sequence: '\n'
      #  keep_trailing_newline: False
      
      # Jinja environment options for sls templates
      #jinja_sls_env:
      #  block_start_string: '{%'
      #  block_end_string: '%}'
      #  variable_start_string: '{{'
      #  variable_end_string: '}}'
      #  comment_start_string: '{#'
      #  comment_end_string: '#}'
      #  line_statement_prefix:
      #  line_comment_prefix:
      #  trim_blocks: False
      #  lstrip_blocks: False

      Note

      Before including Jinja in your Salt states, be sure to review the Salt and Jinja Best Practices section of this guide to ensure that you are creating maintainable and readable Salt states. More advanced Salt tools and concepts can be used to improve the modularity and reusability of some of the Jinja and Salt state examples used throughout this guide.

      Delimiters

      Templating language delimiters are used to denote the boundary between the templating language and another type of data format like HTML or YAML. Jinja uses the following delimiters:

Jinja delimiter syntax and usage:

• {% ... %}: control structures
• {{ ... }}: evaluated expressions that will print to the template output
• {# ... #}: comments that will be ignored by the template engine
• # ... ##: line statements

      In this example Salt state file, you can differentiate the Jinja syntax from the YAML because of the {% ... %} delimiters surrounding the if/else conditionals:

      /srv/salt/webserver/init.sls
      {% if grains['group'] == 'admin' %}
          America/Denver:
              timezone.system:
      {% else %}
          Europe/Minsk:
              timezone.system:
      {% endif %}

      See the control structures section for more information on conditionals.
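Outside of Salt, the same delimiters can be exercised directly with the jinja2 Python package. This is a minimal standalone sketch, assuming the jinja2 package is installed; it is not part of a Salt workflow:

```python
from jinja2 import Environment

# A template mixing the common delimiters: {# #} for a discarded comment,
# {% %} for control flow, and plain text for the output.
template = Environment().from_string(
    "{# this comment is discarded #}"
    "{% if group == 'admin' %}America/Denver{% else %}Europe/Minsk{% endif %}"
)

result = template.render(group="admin")
print(result)  # → America/Denver
```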

      Template Variables

      Template variables are available via a template’s context dictionary. A template’s context dictionary is created automatically during the different stages of a template’s evaluation. These variables can be accessed using dot notation:

      {{ foo.bar }}
      

      Or they can be accessed by subscript syntax:

      {{ foo['bar'] }}
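Both forms resolve to the same value. A minimal sketch using the jinja2 Python package (assumed installed) outside of Salt, with a hypothetical foo dictionary:

```python
from jinja2 import Environment

env = Environment()
context = {"foo": {"bar": "baz"}}

# Dot notation and subscript syntax return the same value for this context.
dot = env.from_string("{{ foo.bar }}").render(**context)
sub = env.from_string("{{ foo['bar'] }}").render(**context)
print(dot, sub)  # → baz baz
```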
      

      Salt provides several context variables that are available by default to any Salt state file or file template:

      • Salt: The salt variable provides a powerful set of Salt library functions.

        {{ salt['pw_user.list_groups']('jdoe') }}
        

        You can run salt '*' sys.doc from the Salt master to view a list of all available functions.

      • Opts: The opts variable is a dictionary that provides access to the content of a Salt minion’s configuration file:

        {{ opts['log_file'] }}
        

        The location for a minion’s configuration file is /etc/salt/minion.

      • Pillar: The pillar variable is a dictionary used to access Salt’s pillar data:

        {{ pillar['my_key'] }}
        

        Although you can access pillar keys and values directly, it is recommended that you use Salt’s pillar.get variable library function, because it allows you to define a default value. This is useful when a value does not exist in the pillar:

        {{ salt['pillar.get']('my_key', 'default_value') }}
        
      • Grains: The grains variable is a dictionary and provides access to minions’ grains data:

        {{ grains['shell'] }}
        

        You can also use Salt’s grains.get variable library function to access grain data:

        {{ salt['grains.get']('shell') }}
        
      • Saltenv: You can define multiple salt environments for minions in a Salt master’s top file, such as base, prod, dev and test. The saltenv variable provides a way to access the current Salt environment within a Salt state file. This variable is only available within Salt state files.

        {{ saltenv }}
        
      • SLS: With the sls variable you can obtain the reference value for the current state file (e.g. apache, webserver, etc). This is the same value used in a top file to map minions to state files or via the include option in state files:

        {{ sls }}
        
      • Slspath: This variable provides the path to the current state file:

        {{ slspath }}
        

      Variable Assignments

      You can assign a value to a variable by using the set tag along with the following delimiter and syntax:

      {% set var_name = myvalue %}
      

      Follow Python naming conventions when creating variable names. If the variable is assigned at the top level of a template, the assignment is exported and available to be imported by other templates.

      Any value generated by a Salt template variable library function can be assigned to a new variable.

      {% set username = salt['user.info']('username') %}
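The mechanics of a set assignment can be seen outside of Salt as well. A sketch with the jinja2 package (assumed installed), using a hypothetical greeting variable:

```python
from jinja2 import Environment

# A top-level {% set %} assignment referenced later in the same template.
template = Environment().from_string(
    "{% set greeting = 'hello' %}{{ greeting }}, {{ name }}"
)
result = template.render(name="jane")
print(result)  # → hello, jane
```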
      

      Filters

      Filters can be applied to any template variable via a | character. Filters are chainable and accept optional arguments within parentheses. When chaining filters, the output of one filter becomes the input of the following filter.

{{ '/etc/salt/' | list_files | join('\n') }}
      

      These chained filters will return a recursive list of all the files in the /etc/salt/ directory. Each list item will be joined with a new line.

        
        /etc/salt/master
        /etc/salt/proxy
        /etc/salt/minion
        /etc/salt/pillar/top.sls
        /etc/salt/pillar/device1.sls
        
      

      For a complete list of all built in Jinja filters, refer to the Jinja Template Design documentation. Salt’s official documentation includes a list of custom Jinja filters.
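The same chaining rules apply to Jinja’s built-in filters. A sketch with the jinja2 package (assumed installed), using the built-in join and upper filters in place of Salt’s custom list_files filter:

```python
from jinja2 import Environment

# join() concatenates the list items; upper then receives join's output.
template = Environment().from_string("{{ files | join(', ') | upper }}")
result = template.render(files=["master", "minion", "proxy"])
print(result)  # → MASTER, MINION, PROXY
```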

      Macros

      Macros are small, reusable templates that help you to minimize repetition when creating states. Define macros within Jinja templates to represent frequently used constructs and then reuse the macros in state files.

      /srv/salt/mysql/db_macro.sls
{% macro mysql_privs(user, database, grant='select', host='localhost') %}
      {{ user }}_exampledb:
         mysql_grants.present:
          - grant: {{ grant }}
          - database: {{ database }}
          - user: {{user}}
          - host: {{ host }}
      {% endmacro %}
      db_privs.sls
      {% import "/srv/salt/mysql/db_macro.sls" as db -%}
      
{{ db.mysql_privs('jane', 'exampledb.*', 'select,insert,update') }}

      The mysql_privs() macro is defined in the db_macro.sls file. The template is then imported to the db variable in the db_privs.sls state file and is used to create a MySQL grants state for a specific user.

      Refer to the Imports and Includes section for more information on importing templates and variables.
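Macro definition and invocation can also be tried outside of Salt. A sketch with the jinja2 package (assumed installed), using a hypothetical grant macro with a default argument:

```python
from jinja2 import Environment

# Define a macro with a default argument, then call it with and without
# overriding the default.
template = Environment().from_string(
    "{% macro grant(user, privilege='select') -%}"
    "{{ user }}: {{ privilege }}"
    "{%- endmacro %}"
    "{{ grant('jane') }} | {{ grant('juan', 'insert') }}"
)
result = template.render()
print(result)  # → jane: select | juan: insert
```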

      Imports and Includes

      Imports

      Importing in Jinja is similar to importing in Python. You can import an entire template, a specific state, or a macro defined within a file.

      {% import '/srv/salt/users.sls' as users %}
      

      This example will import the state file users.sls into the variable users. All states and macros defined within the template will be available using dot notation.

      You can also import a specific state or macro from a file.

      {% from '/srv/salt/user.sls' import mysql_privs as grants %}
      

      This import targets the macro mysql_privs defined within the user.sls state file and is made available to the current template with the grants variable.

      Includes

      The {% include %} tag renders the output of another template into the position where the include tag is declared. When using the {% include %} tag the context of the included template is passed to the invoking template.

      /srv/salt/webserver/webserver_users.sls
      include:
        - groups
      
      {% include 'users.sls' %}

      Note

      Import Context Behavior

      By default, an import will not include the context of the imported template, because imports are cached. This can be overridden by adding with context to your import statements.

      {% from '/srv/salt/user.sls' import mysql_privs with context %}
      

      Similarly, if you would like to remove the context from an {% include %}, add without context:

      {% include 'users.sls' without context %}
      

      Whitespace Control

      Jinja provides several mechanisms for whitespace control of its rendered output. By default, Jinja strips single trailing new lines and leaves anything else unchanged, e.g. tabs, spaces, and multiple new lines. You can customize how Salt’s Jinja template engine handles whitespace in the Salt master configuration file. Some of the available environment options for whitespace control are:

      • trim_blocks: When set to True, the first newline after a template tag is removed automatically. This is set to False by default in Salt.
      • lstrip_blocks: When set to True, Jinja strips tabs and spaces from the beginning of a line to the start of a block. If other characters are present before the start of the block, nothing will be stripped. This is set to False by default in Salt.
      • keep_trailing_newline: When set to True, Jinja will keep single trailing newlines. This is set to False by default in Salt.

      To avoid running into YAML syntax errors, ensure that you take Jinja’s whitespace rendering behavior into consideration when inserting templating markup into Salt states. Remember, Jinja must produce valid YAML. When using control structures or macros, it may be necessary to strip whitespace from the template block to appropriately render valid YAML.

To strip unwanted whitespace from rendered output, you can set both the trim_blocks and lstrip_blocks options to True in the master configuration file. You can also control whitespace manually within each template tag: adding a - character to a tag strips the whitespace adjacent to it, while a + character preserves that whitespace.

      For example, to strip the whitespace after the beginning of the control structure include a - character before the closing %}:

      {% for item in [1,2,3,4,5] -%}
          {{ item }}
      {% endfor %}
      

This will output each number without the leading whitespace defined within the block; adding {%- to the endfor tag would also strip the newlines between items. Without the - character, the output would preserve the spacing defined within the block.
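The effect of the - markers is easy to inspect by rendering the template outside of Salt with the jinja2 package (assumed installed):

```python
from jinja2 import Environment

with_markers = "{% for item in [1, 2, 3, 4, 5] -%}\n    {{ item }}\n{%- endfor %}"
without_markers = "{% for item in [1, 2, 3, 4, 5] %}\n    {{ item }}\n{% endfor %}"

# The - markers strip the whitespace adjacent to each tag.
stripped = Environment().from_string(with_markers).render()
preserved = Environment().from_string(without_markers).render()

print(repr(stripped))   # whitespace around each item removed
print(repr(preserved))  # indentation and newlines retained
```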

      Control Structures

Jinja provides control structures common to many programming languages, such as loops, conditionals, macros, and blocks. The use of control structures within Salt states allows for fine-grained control of state execution flow.

      For Loops

      For loops allow you to iterate through a list of items and execute the same code or configuration for each item in the list. Loops provide a way to reduce repetition within Salt states.

      /srv/salt/users.sls
      {% set groups = ['sudo','wheel', 'admins'] %}
      include:
        - groups
      
      jane:
        user.present:
          - fullname: Jane Doe
          - shell: /bin/zsh
          - createhome: True
          - home: /home/jane
          - uid: 4001
          - groups:
          {%- for group in groups %}
            - {{ group }}
          {%- endfor -%}

      The previous for loop will assign the user jane to all the groups in the groups list set at the top of the users.sls file.

      Conditionals

      A conditional expression evaluates to either True or False and controls the flow of a program based on the result of the evaluated boolean expression. Jinja’s conditional expressions are prefixed with if/elif/else and placed within the {% ... %} delimiter.

      /srv/salt/users.sls
      {% set users = ['anna','juan','genaro','mirza'] %}
      {% set admin_users = ['genaro','mirza'] %}
      {% set admin_groups = ['sudo','wheel', 'admins'] %}
      {% set org_groups = ['games', 'webserver'] %}
      
      
      include:
        - groups
      
      {% for user in users %}
      {{ user }}:
        user.present:
          - shell: /bin/zsh
          - createhome: True
          - home: /home/{{ user }}
          - groups:
      {% if user in admin_users %}
          {%- for admin_group in admin_groups %}
            - {{ admin_group }}
          {%- endfor -%}
      {% else %}
          {%- for org_group in org_groups %}
            - {{ org_group }}
          {% endfor %}
      {%- endif -%}
      {% endfor %}

      In this example the presence of a user within the admin_users list determines which groups are set for that user in the state. Refer to the Salt Best Practices section for more information on using conditionals and control flow statements within state files.
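The membership test driving this state can be reproduced on its own. A sketch with the jinja2 package (assumed installed), using the same admin_users list as the example above:

```python
from jinja2 import Environment

template = Environment().from_string(
    "{% if user in admin_users %}admin{% else %}regular{% endif %}"
)

# genaro appears in admin_users; anna does not.
admin_users = ["genaro", "mirza"]
first = template.render(user="genaro", admin_users=admin_users)
second = template.render(user="anna", admin_users=admin_users)
print(first, second)  # → admin regular
```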

      Template Inheritance

      With template inheritance you can define a base template that can be reused by child templates. The child template can override blocks designated by the base template.

      Use the {% block block_name %} tag with a block name to define an area of a base template that can be overridden.

      /srv/salt/users.jinja
      {% block user %}jane{% endblock %}:
        user.present:
          - fullname: {% block fullname %}{% endblock %}
          - shell: /bin/zsh
          - createhome: True
    - home: /home/{% block home_dir %}{% endblock %}
          - uid: 4000
          - groups:
            - sudo

      This example creates a base user state template. Any value containing a {% block %} tag can be overridden by a child template with its own value.

To use a base template within a child template, use the {% extends "base.sls" %} tag with the location of the base template file.

      /srv/salt/webserver_users.sls
      {% extends "/srv/salt/users.jinja" %}
      
      {% block fullname %}{{ salt['pillar.get']('jane:fullname', '') }}{% endblock %}
      {% block home_dir %}{{ salt['pillar.get']('jane:home_dir', 'jane') }}{% endblock %}

The webserver_users.sls state file extends the users.jinja template and defines values for the fullname and home_dir blocks. The values are generated using the salt context variable and pillar data. The rest of the state will be rendered as the parent users.jinja template has defined it.
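The extends/block mechanics can be tried outside of Salt with the jinja2 package (assumed installed). This sketch mirrors the example above in simplified form, loading hypothetical templates from an in-memory dictionary:

```python
from jinja2 import Environment, DictLoader

# A base template with an overridable block, and a child that extends it.
templates = {
    "users.jinja": "user: {% block user %}jane{% endblock %}",
    "webserver_users.sls": (
        '{% extends "users.jinja" %}'
        "{% block user %}juan{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
result = env.get_template("webserver_users.sls").render()
print(result)  # → user: juan
```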

      Salt and Jinja Best Practices

      If Jinja is overused, its power and versatility can create unmaintainable Salt state files that are difficult to read. Here are some best practices to ensure that you are using Jinja effectively:

      • Limit how much Jinja you use within state files. It is best to separate the data from the state that will use the data. This allows you to update your data without having to alter your states.
      • Do not overuse conditionals and looping within state files. Overuse will make it difficult to read, understand and maintain your states.
      • Use dictionaries of variables and directly serialize them into YAML, instead of trying to create valid YAML within a template. You can include your logic within the dictionary and retrieve the necessary variable within your states.

        The {% load_yaml %} tag will deserialize strings and variables passed to it.

         {% load_yaml as example_yaml %}
             user: jane
             firstname: Jane
             lastname: Doe
         {% endload %}
        
         {{ example_yaml.user }}:
            user.present:
              - fullname: {{ example_yaml.firstname }} {{ example_yaml.lastname }}
              - shell: /bin/zsh
              - createhome: True
              - home: /home/{{ example_yaml.user }}
              - uid: 4001
              - groups:
                - games
        

        Use {% import_yaml %} to import external files of data and make the data available as a Jinja variable.

         {% import_yaml "users.yml" as users %}
        
      • Use Salt Pillars to store general or sensitive data as variables. Access these variables inside state files and template files.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
