      January 2019

      How To Use Traefik as a Reverse Proxy for Docker Containers on CentOS 7


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy since you only want to expose ports 80 and 443 to the rest of the world.

      Traefik is a Docker-aware reverse proxy that includes its own monitoring dashboard. In this tutorial, you’ll use Traefik to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

      Prerequisites

      To follow along with this tutorial, you will need the following:

      • One CentOS 7 server with a sudo non-root user.
      • Docker installed on your server.
      • Docker Compose installed on your server.
      • A domain and three A records, blog.your_domain, monitor.your_domain, and db-admin.your_domain, that each point to the IP address of your server. Let's Encrypt requires publicly resolvable hostnames to issue certificates.

      Step 1 — Configuring and Running Traefik

      The Traefik project has an official Docker image, so we will use that to run Traefik in a Docker container.

      But before we get our Traefik container up and running, we need to create a configuration file and set up an encrypted password so we can access the monitoring dashboard.

      We’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the httpd-tools package:

      • sudo yum install -y httpd-tools

      Then generate the password with htpasswd. Substitute secure_password with the password you’d like to use for the Traefik admin user:

      • htpasswd -nb admin secure_password

      The output from the program will look like this:

      Output

      admin:$apr1$kEG/8JKj$yEXj8vKO7HDvkUMI/SbOO.

      You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.

      To configure the Traefik server, we’ll create a new configuration file called traefik.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. This file lets us configure the Traefik server and various integrations, or providers, we want to use. In this tutorial, we will use three of Traefik’s available providers: api, docker, and acme, which is used to support TLS using Let’s Encrypt.

      Open up your new file in Vi or your favorite text editor:

      • vi traefik.toml

      Enter insert mode by pressing i, then add two named entry points, http and https, that all backends will have access to by default:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      

      We'll configure the http and https entry points later in this file.

      Next, configure the api provider, which gives you access to a dashboard interface. This is where you'll paste the output from the htpasswd command:

      traefik.toml

      ...
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:your_encrypted_password"]
      
      [api]
      entrypoint="dashboard"
      

      The dashboard is a separate web application that will run within the Traefik container. We set the dashboard to run on port 8080.

      The entrypoints.dashboard section configures how we'll be connecting with the api provider, and the entrypoints.dashboard.auth.basic section configures HTTP Basic Authentication for the dashboard. Use the output from the htpasswd command you just ran for the value of the users entry. You could specify additional logins by separating them with commas.

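      You can also keep additional dashboard accounts in the same array. A minimal sketch, assuming a hypothetical second user named deploy whose hash you generated with another htpasswd run:

      traefik.toml

      users = ["admin:your_encrypted_password", "deploy:deploy_encrypted_password"]
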
      We've defined our first entryPoint, but we'll need to define others for standard HTTP and HTTPS communication that isn't directed towards the api provider. The entryPoints section configures the addresses that Traefik and the proxied containers can listen on. Add these lines to the file underneath the entryPoints heading:

      traefik.toml

      ...
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      ...
      

      The http entry point handles port 80, while the https entry point uses port 443 for TLS/SSL. We automatically redirect all of the traffic on port 80 to the https entry point to force secure connections for all requests.

      Next, add this section to configure Let's Encrypt certificate support for Traefik:

      traefik.toml

      ...
      [acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      

      This section is called acme because ACME is the name of the protocol used to communicate with Let's Encrypt to manage certificates. The Let's Encrypt service requires registration with a valid email address, so in order to have Traefik generate certificates for our hosts, set the email key to your email address. We then specify that we will store the information that we will receive from Let's Encrypt in a JSON file called acme.json. The entryPoint key needs to point to the entry point handling port 443, which in our case is the https entry point.

      The onHostRule key dictates how Traefik should go about generating certificates: with it set to true, Traefik fetches a certificate as soon as a container with a specified hostname is created.

      The acme.httpChallenge section allows us to specify how Let's Encrypt can verify that the certificate should be generated. We're configuring it to serve a file as part of the challenge through the http entrypoint.

      Finally, configure the docker provider by adding these lines to the file:

      traefik.toml

      ...
      [docker]
      domain = "your_domain"
      watch = true
      network = "web"
      

      The docker provider enables Traefik to act as a proxy in front of Docker containers. We've configured the provider to watch for new containers on the web network (that we'll create soon) and expose them as subdomains of your_domain.

      At this point, traefik.toml should have the following contents:

      traefik.toml

      defaultEntryPoints = ["http", "https"]
      
      [entryPoints]
        [entryPoints.dashboard]
          address = ":8080"
          [entryPoints.dashboard.auth]
            [entryPoints.dashboard.auth.basic]
              users = ["admin:your_encrypted_password"]
        [entryPoints.http]
          address = ":80"
            [entryPoints.http.redirect]
              entryPoint = "https"
        [entryPoints.https]
          address = ":443"
            [entryPoints.https.tls]
      
      [api]
      entrypoint="dashboard"
      
      [acme]
      email = "your_email@your_domain"
      storage = "acme.json"
      entryPoint = "https"
      onHostRule = true
        [acme.httpChallenge]
        entryPoint = "http"
      
      [docker]
      domain = "your_domain"
      watch = true
      network = "web"
      

      Once you have added the contents, hit ESC to leave insert mode. Type :x then ENTER to save and exit the file. With all of this configuration in place, we can fire up Traefik.

      Step 2 – Running the Traefik Container

      Next, create a Docker network for the proxy to share with containers. The Docker network is necessary so that we can use it with applications that are run using Docker Compose. Let's call this network web.

      • docker network create web

      When the Traefik container starts, we will add it to this network. Then we can add additional containers to this network later for Traefik to proxy to.

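      If you'd like to confirm the network was created before moving on, you can list it; the --filter flag narrows the output to matching names:

      • docker network ls --filter name=web
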
      Next, create an empty file which will hold our Let's Encrypt information. We'll share this into the container so Traefik can use it:

      • touch acme.json

      Traefik will only be able to use this file if the root user inside of the container has exclusive read and write access to it. To do this, lock down the permissions on acme.json so that only the owner of the file has read and write permission:

      • chmod 600 acme.json

      Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.

      Finally, create the Traefik container with this command:

      • docker run -d \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -v $PWD/traefik.toml:/traefik.toml \
          -v $PWD/acme.json:/acme.json \
          -p 80:80 \
          -p 443:443 \
          -l traefik.frontend.rule=Host:monitor.your_domain \
          -l traefik.port=8080 \
          --network web \
          --name traefik \
          traefik:1.7.6-alpine

      The command is a little long so let's break it down.

      We use the -d flag to run the container in the background as a daemon. We then share our docker.sock file into the container so that the Traefik process can listen for changes to containers. We also share the traefik.toml configuration file and the acme.json file we created into the container.

      Next, we map ports 80 and 443 of our Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

      Then we set up two Docker labels that tell Traefik to direct traffic to the hostname monitor.your_domain to port 8080 within the Traefik container, exposing the monitoring dashboard.

      We set the network of the container to web, and we name the container traefik.

      Finally, we use the traefik:1.7.6-alpine image for this container, because it's small.

      A Docker image's ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but we've configured all of our settings in the traefik.toml file.

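      As a quick illustration of passing arguments, you could repeat the full docker run command above and append Traefik's --logLevel option after the image name to get more verbose logs while troubleshooting (the flags are elided here for brevity):

      • docker run -d ... --name traefik traefik:1.7.6-alpine --logLevel=DEBUG
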
      With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the frontends and backends that Traefik has registered. Access the monitoring dashboard by pointing your browser to https://monitor.your_domain. You will be prompted for your username and password, which are admin and the password you configured in Step 1.

      Once logged in, you'll see an interface similar to this:

      Empty Traefik dashboard

      There isn't much to see just yet, but leave this window open, and you will see the contents change as you add containers for Traefik to work with.

      We now have our Traefik proxy running, configured to work with Docker, and ready to monitor other Docker containers. Let's start some containers for Traefik to act as a proxy for.

      Step 3 — Registering Containers with Traefik

      With the Traefik container running, you're ready to run applications behind it. Let's launch the following containers behind Traefik:

      1. A blog using the official WordPress image.
      2. A database management server using the official Adminer image.

      We'll manage both of these applications with Docker Compose using a docker-compose.yml file. Open the docker-compose.yml file in your editor:

      • vi docker-compose.yml

      Add the following lines to the file to specify the version and the networks we'll use:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      

      We use Docker Compose version 3 because it's the newest major version of the Compose file format.

      For Traefik to recognize our applications, they must be part of the same network, and since we created the network manually, we pull it in by specifying the network name of web and setting external to true. Then we define another network so that we can connect our exposed containers to a database container that we won't expose through Traefik. We'll call this network internal.

      Next, we'll define each of our services, one at a time. Let's start with the blog container, which we'll base on the official WordPress image. Add this configuration to the file:

      docker-compose.yml

      version: "3"
      ...
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.your_domain
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, we're telling Docker Compose to get the value from our shell and pass it through when we create the container. We will define this environment variable in our shell before starting the containers. This way we don't hard-code passwords into the configuration file.

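      As an aside, once the file is complete and the variable is exported (both happen later in this step), you can check that the substitution works with docker-compose config, which prints the file with every variable resolved:

      • docker-compose config
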
      The labels section is where you specify configuration values for Traefik. Docker labels don't do anything by themselves, but Traefik reads these so it knows how to treat containers. Here's what each of these labels does:

      • traefik.backend specifies the name of the backend service in Traefik (which points to the actual blog container).
      • traefik.frontend.rule=Host:blog.your_domain tells Traefik to examine the host requested and if it matches the pattern of blog.your_domain it should route the traffic to the blog container.
      • traefik.docker.network=web specifies which network to look under for Traefik to find the internal IP for this container. Since our Traefik container has access to all of the Docker info, it would potentially take the IP for the internal network if we didn't specify this.
      • traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

      With this configuration, Traefik will route all traffic that arrives for the hostname blog.your_domain to the blog container.

      We assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

      Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, we must run our mysql container before starting our blog container.

      Next, configure the MySQL service by adding this configuration to your file:

      docker-compose.yml

      services:
      ...
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      

      We're using the official MySQL 5.7 image for this container. You'll notice that we're once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables will need to be set to the same value to make sure that our WordPress container can communicate with MySQL. We don't want to expose the mysql container to Traefik or the outside world, so we're only assigning this container to the internal network. Since Traefik has access to the Docker socket, the process will still expose a frontend for the mysql container by default, so we'll add the label traefik.enable=false to specify that Traefik should not expose this container.

      Finally, add this configuration to define the Adminer container:

      docker-compose.yml

      services:
      ...
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.your_domain
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      This container is based on the official Adminer image. The network and depends_on configurations for this container exactly match what we're using for the blog container.

      However, since we're directing all of the traffic to port 80 on our Docker host directly to the blog container, we need to configure this container differently in order for traffic to make it to our adminer container. The line traefik.frontend.rule=Host:db-admin.your_domain tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container.

      At this point, docker-compose.yml should have the following contents:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.backend=blog
            - traefik.frontend.rule=Host:blog.your_domain
            - traefik.docker.network=web
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.backend=adminer
            - traefik.frontend.rule=Host:db-admin.your_domain
            - traefik.docker.network=web
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      Save the file and exit the text editor.

      Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables before you start your containers:

      • export WORDPRESS_DB_PASSWORD=secure_database_password
      • export MYSQL_ROOT_PASSWORD=secure_database_password

      Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.

      With these variables set, run the containers using docker-compose:

      • docker-compose up -d

      Now take another look at the Traefik admin dashboard. You'll see that there is now a backend and a frontend for the two exposed servers:

      Populated Traefik dashboard

      Navigate to blog.your_domain, substituting your_domain with your domain. You'll be redirected to a TLS connection and can now complete the WordPress setup:

      WordPress setup screen

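      If you prefer the command line, you can also verify the HTTP-to-HTTPS redirect with curl; the -I flag fetches only the response headers, which should show a redirect status pointing at the https:// URL:

      • curl -I http://blog.your_domain
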
      Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn't exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a host name.

      On the Adminer login screen, use the username root, use mysql for the server, and use the value you set for MYSQL_ROOT_PASSWORD for the password. Once logged in, you'll see the Adminer user interface:

      Adminer connected to the MySQL database

      Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

      Conclusion

      In this tutorial, you configured Traefik to proxy requests to other applications in Docker containers.

      Traefik's declarative configuration at the application container level makes it easy to configure more services, and there's no need to restart the traefik container when you add new applications to proxy traffic to since Traefik notices the changes immediately through the Docker socket file it's monitoring.

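      You can watch Traefik pick up these changes in real time by tailing the container's logs while you start or stop an application container; the -f flag follows the output:

      • docker logs -f traefik
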
      To learn more about what you can do with Traefik, head over to the official Traefik documentation. If you'd like to explore Docker containers further, check out How To Set Up a Private Docker Registry on Ubuntu 18.04 or How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose. Although these tutorials are written for Ubuntu 18.04, many of the Docker-specific commands can be used for CentOS 7.




      Install and Manage MySQL Databases with Puppet Hiera on Ubuntu 18.04


      Updated by Linode. Contributed by Linode.

      Puppet is a configuration management system that helps simplify the use and deployment of different types of software, making system administration more reliable and replicable. In this guide, we use Puppet to manage an installation of MySQL, a popular relational database used for applications such as WordPress, Ruby on Rails, and others. Hiera is a method of defining configuration values that Puppet will use to simplify MySQL configuration.

      In this guide, you’ll use Puppet to deploy modules on your server. At the end, you will have MySQL installed, configured, and ready to use for a variety of applications that require a database backend.

      Note

      This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, see the Users and Groups guide.

      Before You Begin

      1. A Linode 1GB plan should be sufficient to run MySQL. Consider using a larger plan if you plan to use MySQL heavily, or for more than just a simple personal website.

      2. Familiarize yourself with our Getting Started guide and complete the steps for setting your Linode’s hostname and timezone.

      3. This guide will use sudo wherever possible. Complete the sections of our Securing Your Server to create a standard user account, harden SSH access and remove unnecessary network services.

      4. Update your system:

        sudo apt-get update && sudo apt-get upgrade
        

      Install and Configure Puppet

      Follow these steps to set up Puppet for single-host, local-only deployment. If you need to configure more than one server or to deploy a Puppet master, follow our multi-server Puppet guide.

      Install the Puppet Package

      1. Install the puppet-release-bionic repository package to add the Puppet packages:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        sudo dpkg -i puppet-release-bionic.deb
        
      2. Update the apt package index to make the Puppet Labs repository packages available, then install Puppet. This will install the puppet-agent package, which provides the puppet executable within a compatible Ruby environment:

        sudo apt update && sudo apt install puppet-agent
        
      3. Confirm the version of Puppet installed:

        puppet --version
        

        At the time of writing, the Puppet version is 6.1.0.

      Install the Puppet MySQL Module

      Puppet Forge is a collection of modules that aid in the installation of different types of software. The MySQL module handles the installation and configuration of MySQL without you needing to manage various configuration files and services by hand.

      1. Install the MySQL module:

        sudo puppet module install puppetlabs-mysql --version 7.0.0
        

        This will install the mysql module into the default path: /etc/puppetlabs/code/environments/production/modules/.

      Puppet MySQL Manifest

      This guide uses a Puppet manifest to provide Puppet with installation and configuration instructions. Alternatively, you can configure a Puppet master.

      While the entirety of a Puppet manifest can contain the desired configuration for a host, values for Puppet classes or types can also be defined in a Hiera configuration file to simplify writing Puppet manifests in most cases. In this example, the mysql::server class parameters will be defined in Hiera, but the class must first be applied to the host.

      To apply the mysql::server class to all hosts by default, create the following Puppet manifest:

      /etc/puppetlabs/code/environments/production/manifests/site.pp
      
      include ::mysql::server

      Note that site.pp is the default manifest file. Without a qualifying node { .. } line, this applies the class to any host applying the manifest. Puppet now knows to apply the mysql::server class, but still needs values for resources like databases, users, and other settings. Configure Hiera to provide these values in the next section.

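      For contrast, a node block would scope the class to a single host instead of applying it to all of them; the hostname below is a hypothetical example:

      /etc/puppetlabs/code/environments/production/manifests/site.pp

      node 'examplehostname' {
        include ::mysql::server
      }
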
      Install and Configure Puppet Hiera

      To understand how Hiera works, consider this excerpt from the default hiera.yaml file:

      /etc/puppetlabs/code/environments/production/hiera.yaml
      
      ---
      version: 5
      hierarchy:
        - name: "Per-node data"
          path: "nodes/%{::trusted.certname}.yaml"
        - name: "Common data"
          path: "common.yaml"

      This Hiera configuration instructs Puppet to accept variable values from nodes/%{::trusted.certname}.yaml. If your Linode’s hostname is examplehostname, define a file called nodes/examplehostname.yaml. Any variables found in YAML files higher in the hierarchy are preferred, while any variable names that do not exist in those files will fall through to files lower in the hierarchy (in this example, common.yaml).

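      For example, a hypothetical nodes/examplehostname.yaml could override a single value from common.yaml for that host only, while every other key falls through to common.yaml:

      /etc/puppetlabs/code/environments/production/data/nodes/examplehostname.yaml

      mysql::server::root_password: hostspecificpassword
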
      The following configuration will define Puppet variables in common.yaml to inject variables into the mysql::server class.

      Initial Hiera Configuration

      Hiera configuration files are formatted as YAML, with keys defining the Puppet parameters to inject and their associated values. To get started, set the MySQL root password. The following example of a Puppet manifest is one way to control this password:

      example.pp
      
      class { '::mysql::server':
        root_password => 'examplepassword',
      }

      We can also define the root password with the following Hiera configuration file. Create the following YAML file and note how the root_password class parameter is expressed as a Hiera key:

      /etc/puppetlabs/code/environments/production/data/common.yaml
      
      mysql::server::root_password: examplepassword

      Replace examplepassword with the secure password of your choice. Run Puppet to set up MySQL with default settings and the chosen root password:

      sudo -i puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp
      

      Puppet will output its progress before completing. To confirm MySQL has been configured properly, run a command:

      mysql -u root -p -e 'select version();'
      

      Enter the password and MySQL returns its version:

      +-------------------------+
      | version()               |
      +-------------------------+
      | 5.7.24-0ubuntu0.18.04.1 |
      +-------------------------+
      

      Define MySQL Resources

      Using Hiera, we can define the rest of the MySQL configuration entirely in yaml. The following steps will create a database and user for use in a WordPress installation.

      1. Create a pre-hashed MySQL password. Replace the password wordpresspassword in this example, and when prompted for the root MySQL password, use the first root password chosen in the previous section to authenticate. Note the string starting with a *, which the command returns; you will use it in Step 2:

        mysql -u root -p -NBe 'select password("wordpresspassword")'
        *E62D3F829F44A91CC231C76347712772B3B9DABC
        
      2. With the MySQL password hash ready, we can define Hiera values. The following YAML defines parameters to create a database called wordpress and a user named wpuser that has permission to connect from localhost. The YAML also defines a GRANT allowing wpuser to operate on the wordpress database with ALL permissions:

        /etc/puppetlabs/code/environments/production/data/common.yaml
        
        mysql::server::root_password: examplepassword
        mysql::server::databases:
          wordpress:
            ensure: present
        mysql::server::users:
          wpuser@localhost:
            ensure: present
            password_hash: '*E62D3F829F44A91CC231C76347712772B3B9DABC'
        mysql::server::grants:
          wpuser@localhost/wordpress.*:
            ensure: present
            privileges: ALL
            table: wordpress.*
            user: wpuser@localhost
      3. Re-run Puppet:

        sudo -i puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp
        
      4. The wpuser user should now be able to connect to the wordpress database. To verify, connect to the MySQL daemon as wpuser, specifying the wordpress database:

        mysql -u wpuser -p wordpress
        

        After you enter the password for wpuser, exit the MySQL prompt:

        exit
        

      Add Hierarchies for Specific Environments

      Additional configurations can be added that will only be applied to specific environments. For example, backup jobs may only be applied for hosts in a certain region, or specific databases can be created in a particular deployment.

      In the following example, Puppet will configure the MySQL server with one additional database, but only if that server’s distribution is Debian-based.

      1. Modify hiera.yaml to contain the following:

        /etc/puppetlabs/code/environments/production/hiera.yaml
        
        ---
        version: 5
        hierarchy:
          - name: "Per OS Family"
            path: "os/%{facts.os.family}.yaml"
          - name: "Other YAML hierarchy levels"
            paths:
              - "common.yaml"

        This change instructs Hiera to look for Puppet parameters first in "os/%{facts.os.family}.yaml" and then in common.yaml. The first, fact-based element of the hierarchy is dynamic, and dependent upon the host that Puppet and Hiera control. In this Ubuntu-based example, Hiera will look for Debian.yaml in the os folder, while on a distribution such as CentOS, the file RedHat.yaml will automatically be referenced instead.

      2. Create the following YAML file:

        /etc/puppetlabs/code/environments/production/data/os/Debian.yaml
        
        lookup_options:
          mysql::server::databases:
            merge: deep
        
        mysql::server::databases:
          ubuntu-backup:
            ensure: present

        Though similar to the common.yaml file defined in previous steps, this file will add the ubuntu-backup database only on Debian-based hosts (like Ubuntu). In addition, the lookup_options setting ensures that the mysql::server::databases parameter is merged between Debian.yaml and common.yaml so that all databases are managed. Without lookup_options set to deeply merge these hashes, only the most specific hierarchy file will be applied to the host, in this case, Debian.yaml.

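        To see the merged value Hiera will hand to Puppet, you can query it directly with the puppet lookup command that ships with puppet-agent; the --merge deep option mirrors the lookup_options setting above:

          sudo -i puppet lookup mysql::server::databases --merge deep
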
        • Alternatively, because our Puppet manifest is short, we can test the apply command from the next step using the -e flag to apply an inline manifest:

          sudo -i puppet apply -e 'include ::mysql::server'
          
      3. Run Puppet and observe the changes:

        sudo -i puppet apply /etc/puppetlabs/code/environments/production/manifests/site.pp
        
      4. Verify that the new database exists:

        mysql -u root -p -e 'show databases;'
        

        This includes the new ubuntu-backup database:

        +---------------------+
        | Database            |
        +---------------------+
        | information_schema  |
        | mysql               |
        | performance_schema  |
        | sys                 |
        | ubuntu-backup       |
        | wordpress           |
        +---------------------+
        

      Congratulations! You can now control your Puppet configuration via highly configurable Hiera definitions.


      Getting Started with Puppet – Basic Installation and Setup


      Updated by Linode. Written by Linode.

      Puppet is a configuration management tool that simplifies system administration. Puppet uses a client/server model in which your managed nodes, running a process called the Puppet agent, talk to and pull down configuration profiles from a Puppet master.

      Puppet deployments can range from small groups of servers up to enterprise-level operations. This guide will demonstrate how to install Puppet 6.1 on three servers:

      • A Puppet master running Ubuntu 18.04
      • A managed Puppet node running Ubuntu 18.04
      • A managed Puppet node running CentOS 7

      After installation, the subsequent sections will show you how to secure these servers via Puppet, demonstrating core features of the Puppet language along the way.

      Note

      Most guides will instruct you to follow the How to Secure your Server guide before proceeding. Because Puppet will be used to perform this task, you should begin this guide as the root user. A limited user with administrative privileges will be configured via Puppet in later steps.

      Before You Begin

      The following table displays example system information for the servers that will be deployed in this guide:

      Description       OS            Hostname             FQDN                              IP
      Puppet master     Ubuntu 18.04  puppet               puppet.example.com                192.0.2.2
      Node 1 (Ubuntu)   Ubuntu 18.04  puppet-agent-ubuntu  puppet-agent-ubuntu.example.com   192.0.2.3
      Node 2 (CentOS)   CentOS 7      puppet-agent-centos  puppet-agent-centos.example.com   192.0.2.4

      You can choose different hostnames and fully qualified domain names (FQDN) for each of your servers, and the IP addresses for your servers will be different from the example addresses listed. You will need to have a registered domain name in order to specify FQDNs for your servers.

      Throughout this guide, commands and code snippets will reference the values displayed in this table. Wherever such a value appears, replace it with your own value.

      Create your Linodes

      1. Create three Linodes corresponding to the servers listed in the table above. Your Puppet master Linode should have at least four CPU cores; the Linode 8GB plan is recommended. The two other nodes can be of any plan size, depending on how you intend to use them after Puppet is installed and configured.

      2. Configure your timezone on your master and agent nodes so that they all have the same time data.

      3. Set the hostname for each server.

      4. Set the FQDN for each Linode by editing the servers’ /etc/hosts files.

        Example content for the hosts file

        You can model the contents of your /etc/hosts files on these snippets:

        Master
        
        127.0.0.1   localhost
        192.0.2.2   puppet.example.com puppet
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        Node 1 (Ubuntu)
        
        127.0.0.1   localhost
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        Node 2 (CentOS)
        
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
      5. Set up DNS records for your Linodes’ FQDNs. For each Linode, create a new A record with the name specified by its FQDN and assign it to that Linode’s IP address.

        If you don’t use Linode’s name servers for your domain, consult your name server authority’s website for instructions on how to edit your DNS records.


      Puppet Master

      Install the Puppet Server Software

      The Puppet master runs the puppetserver service, which is responsible for compiling and supplying configuration profiles to your managed nodes.

      The puppetserver service has the Puppet agent service as a dependency (which is just called puppet when running on your system). This means that the agent software will also be installed and can be run on your master. Because your master can run the agent service, you can configure your master via Puppet just as you can configure your other managed nodes.

      1. Log in to your Puppet master via SSH (as root):

        ssh root@puppet.example.com
        
      2. Download the Puppet repository, update your system packages, and install puppetserver:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        dpkg -i puppet-release-bionic.deb
        apt update
        apt install puppetserver
        

      Configure the Server Software

      1. Use the puppet config command to set values for the dns_alt_names setting:

        /opt/puppetlabs/bin/puppet config set dns_alt_names 'puppet,puppet.example.com' --section main
        

        If you inspect the configuration file, you’ll see that the setting has been added:

        cat /etc/puppetlabs/puppet/puppet.conf
        
          
        [main]
        dns_alt_names = puppet,puppet.example.com
        # ...
        
        

        Note

        The puppet command by default is not added to your PATH. Using Puppet’s interactive commands requires a full file path. To avoid this, update your PATH for your existing shell session:

        export PATH=/opt/puppetlabs/bin:$PATH
        

        A more permanent solution would be to add this to your .profile or .bashrc files.

      2. Update your Puppet master’s /etc/hosts to resolve your managed nodes’ IP addresses. For example, your /etc/hosts file might look like the following:

        /etc/hosts
        
        127.0.0.1   localhost
        192.0.2.2   puppet.example.com puppet
        
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

      3. Start and enable the puppetserver service:

        systemctl start puppetserver
        systemctl enable puppetserver
        

        By default, the Puppet master listens for client connections on port 8140. If the puppetserver service fails to start, check that the port is not already in use:

        netstat -anpl | grep 8140
        

      Puppet Agents

      Install Puppet Agent

      1. On your managed node running Ubuntu 18.04, install the puppet-agent package:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        dpkg -i puppet-release-bionic.deb
        apt update
        apt install puppet-agent
        
      2. On your managed node running CentOS 7, enter:

        rpm -Uvh https://yum.puppet.com/puppet/puppet-release-el-7.noarch.rpm
        yum install puppet-agent
        

      Configure Puppet Agent

      1. Modify your managed nodes’ hosts files to resolve the Puppet master’s IP. To do so, add a line like:

        /etc/hosts
        
        192.0.2.2    puppet.example.com puppet

        Example content for the hosts file

        You can model the contents of your managed nodes’ /etc/hosts files on the following snippets. These incorporate the FQDN declarations described in the Create your Linodes section:

        Node 1 (Ubuntu)
        
        127.0.0.1   localhost
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        
        192.0.2.2   puppet.example.com puppet
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        Node 2 (CentOS)
        
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
        
        192.0.2.2   puppet.example.com puppet
      2. On each managed node, use the puppet config command to set the value for your server setting to the FQDN of the master:

        /opt/puppetlabs/bin/puppet config set server 'puppet.example.com' --section main
        

        If you inspect the configuration file on the nodes, you’ll see that the setting has been added:

        cat /etc/puppetlabs/puppet/puppet.conf
        
          
        [main]
        server = puppet.example.com
        # ...
        
        
      3. Use the puppet resource command to start and enable the Puppet agent service:

        /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
        

        Note

        On systemd systems, the above command is equivalent to using these two systemctl commands:

        systemctl start puppet
        systemctl enable puppet
        

      Generate and Sign Certificates

      Before your managed nodes can receive configurations from the master, they first need to be authenticated:

      1. On your Puppet agents, generate a certificate for the Puppet master to sign:

        /opt/puppetlabs/bin/puppet agent -t
        

        This command will output an error, stating that no certificate has been found. This error is because the generated certificate needs to be approved by the Puppet master.

      2. Log in to your Puppet master and list the certificates that need approval:

        /opt/puppetlabs/bin/puppetserver ca list
        

        It should output a list with your agent nodes’ hostnames.

      3. Approve the certificates:

        /opt/puppetlabs/bin/puppetserver ca sign --certname puppet-agent-ubuntu.example.com,puppet-agent-centos.example.com
        
      4. Return to the Puppet agent nodes and run the Puppet agent again:

        /opt/puppetlabs/bin/puppet agent -t
        

        You should see something like the following:

          
        Info: Downloaded certificate for hostname.example.com from puppet
        Info: Using configured environment 'production'
        Info: Retrieving pluginfacts
        Info: Retrieving plugin
        Info: Retrieving locales
        Info: Caching catalog for hostname.example.com
        Info: Applying configuration version '1547066428'
        Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
        Notice: Applied catalog in 0.02 seconds
        
        

      Add Modules to Configure Agent Nodes

      The Puppet master and agent nodes are now functional, but they are not secure. Based on concepts from the How to Secure your Server guide, a limited user and a firewall should be configured. This can be done on all nodes through the creation of basic Puppet modules, shown below.

      Note

      This is not meant to provide a basis for a fully-hardened server, and is intended only as a starting point. Alter and add firewall rules and other configuration options, depending on your specific needs.

      Puppet modules are Puppet’s prescribed way of organizing configuration code to serve specific purposes, like installing and configuring an application. You can create custom modules, or you can download and use modules published on Puppet Forge.

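      To see which modules are already available on a server at any point, you can list them with the module subcommand that ships with puppet-agent:

        /opt/puppetlabs/bin/puppet module list
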
      Add a Limited User

      To create a new limited user on your nodes, you will create and apply a new module called accounts. This module will employ the user resource.

      1. From the Puppet master, navigate to the /etc/puppetlabs/code/environments/production/modules directory. When a managed node requests its configuration from the master, the Puppet server process will look in this location for your modules:

        cd /etc/puppetlabs/code/environments/production/modules/
        
      2. Create the directory for a new accounts module:

        mkdir accounts
        cd accounts
        
      3. Create the following directories inside the accounts module:

        mkdir {examples,files,manifests,templates}
        
        Directory   Description
        manifests   The Puppet code which powers the module
        files       Static files to be copied to managed nodes
        templates   Template files to be copied to managed nodes that can be customized with variables
        examples    Example code which shows how to use the module

        Note

        Review Puppet’s Module fundamentals article for more information on how a module is structured.
      4. Navigate to the manifests directory:

        cd manifests
        
      5. Any file which contains Puppet code is called a manifest, and each manifest file ends in .pp. When located inside a module, a manifest should only define one class. If a module’s manifests directory has an init.pp file, the class definition it contains is considered the main class for the module. The class definition inside init.pp should have the same name as the module.

        Create an init.pp file with the contents of the following snippet. Replace all instances of username with a username of your choosing:

        accounts/manifests/init.pp
        
        class accounts {
        
          user { 'username':
            ensure      => present,
            home        => '/home/username',
            shell       => '/bin/bash',
            managehome  => true,
            gid         => 'username',
          }
        
        }
        Option      Description
        ensure      Ensures that the user exists if set to present, or does not exist if set to absent
        home        The path for the user’s home directory
        managehome  Controls whether a home directory should be created when creating the user
        shell       The path to the shell for the user
        gid         The user’s primary group
      6. Although the class declares what the user’s primary group should be, it will not create the group itself. Create a new file called groups.pp inside the manifests directory with the following contents. Replace username with your chosen username:

        accounts/manifests/groups.pp
        
        class accounts::groups {
        
          group { 'username':
            ensure  => present,
          }
        
        }
      7. Your accounts class can declare your new accounts::groups class for use within the accounts class scope. Open your init.pp in your editor and enter a new include declaration at the beginning of the class:

        accounts/manifests/init.pp
        
        class accounts {
        
          include accounts::groups
        
          # ...
        
        }
      8. The new user should have administrative privileges. Because we have agent nodes on both Debian- and Red Hat-based systems, the new user needs to be in the sudo group on Debian systems, and the wheel group on Red Hat systems.

        This value can be set dynamically through the use of Puppet facts. The facts system collects system information about your nodes and makes it available in your manifests.

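        You can inspect these facts from the shell with the facter tool bundled with puppet-agent; for example, the following prints the same value that the selector below will test:

          /opt/puppetlabs/bin/facter os.family
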
        Add a selector statement to the top of your accounts class:

        accounts/manifests/init.pp
        
        class accounts {
        
          $rootgroup = $osfamily ? {
            'Debian'  => 'sudo',
            'RedHat'  => 'wheel',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          include accounts::groups
        
          # ...
        
        }

        This code defines the value for the $rootgroup variable by checking the value of $osfamily, which is one of Puppet’s core facts. If the value for $osfamily does not match Debian or Red Hat, the default value will output a warning that the distribution selected is not supported by this module.

        Note

        The Puppet Configuration Language executes code from top to bottom. Because the user resource declaration will reference the $rootgroup variable, you must define $rootgroup before the user declaration.

      9. Update the user resource to include the groups option as follows:

        accounts/manifests/init.pp
        
        # ...
        
        user { 'username':
          ensure      => present,
          home        => '/home/username',
          shell       => '/bin/bash',
          managehome  => true,
          gid         => 'username',
          groups      => "$rootgroup",
        }
        
        # ...

        The value "$rootgroup" is enclosed in double quotes " " instead of single quotes ' ' because it is a variable which needs to be interpolated in your code.

      10. The final value that needs to be added is the user’s password. Since we do not want to use plain text, the password should be supplied to Puppet as a password hash, which the user resource accepts by default. Generate an MD5-based crypt hash (the -1 option) with the openssl command:

        openssl passwd -1
        

        You will be prompted to enter your password. A hashed password will be output. Copy this value to your clipboard.

      11. Update the user resource to include the password option as follows; insert your copied password hash as the value for the option:

        accounts/manifests/init.pp
        
        # ...
        
        user { 'username':
          ensure      => present,
          home        => '/home/username',
          shell       => '/bin/bash',
          managehome  => true,
          gid         => 'username',
          groups      => "$rootgroup",
          password    => 'your_password_hash',
        }
        
        # ...

        Caution

        The hashed password must be included in single quotes ' '.

      12. After saving your changes, use the Puppet parser to ensure that the code is correct:

        /opt/puppetlabs/bin/puppet parser validate init.pp
        

        Any errors that need to be addressed will be logged to standard output. If nothing is returned, your code is valid.

      13. Navigate to the examples directory and create another init.pp file:

        cd ../examples
        
        accounts/examples/init.pp

        include accounts
      14. While still in the examples directory, test the module:

        /opt/puppetlabs/bin/puppet apply --noop init.pp
        

        Note

        The --noop parameter prevents Puppet from actually applying the module to your system and making any changes.

        It should return:

          
        Notice: Compiled catalog for puppet.example.com in environment production in 0.26 seconds
        Notice: /Stage[main]/Accounts::Groups/Group[username]/ensure: current_value absent, should be present (noop)
        Notice: Class[Accounts::Groups]: Would have triggered 'refresh' from 1 events
        Notice: /Stage[main]/Accounts/User[username]/ensure: current_value absent, should be present (noop)
        Notice: Class[Accounts]: Would have triggered 'refresh' from 1 events
        Notice: Stage[main]: Would have triggered 'refresh' from 2 events
        Notice: Finished catalog run in 0.02 seconds
        
        
      15. Again from the examples directory, run puppet apply to make these changes to the Puppet master server:

        /opt/puppetlabs/bin/puppet apply init.pp
        

        Puppet will create your limited Linux user on your master.

      16. Log out as root and log in to the Puppet master as your new user.

      Edit SSH Settings

      Although a new limited user has successfully been added to the Puppet master, it is still possible to log in to the system as root. To properly secure your system, root access should be disabled.

      Note

      Because you are now logged in to the Puppet master as a limited user, you will need to execute commands and edit files with the user’s sudo privileges.

      1. Navigate to the files directory within the accounts module:

        cd /etc/puppetlabs/code/environments/production/modules/accounts/files
        
      2. Copy your system’s existing sshd_config file to this directory:

        sudo cp /etc/ssh/sshd_config .
        
      3. Open the file in your editor (making sure that you open it with sudo privileges) and set the PermitRootLogin value to no:

        accounts/files/sshd_config

        # ...
        PermitRootLogin no
        # ...
      4. Navigate back to the manifests directory:

        cd ../manifests
        
      5. Create a new manifest called ssh.pp. Use the file resource to replace the default SSH configuration file with one managed by Puppet:

        accounts/manifests/ssh.pp
        
        class accounts::ssh {
        
          file { '/etc/ssh/sshd_config':
            ensure  => present,
            source  => 'puppet:///modules/accounts/sshd_config',
          }
        
        }

        Note

        The files directory is omitted from the source line because the files folder is the default location of files within a module. For more information on the format used to access resources in a module, refer to the official Puppet module documentation.
      6. Create a second resource to restart the SSH service and set it to run whenever sshd_config is changed. This will also require a selector statement because the SSH service is named ssh on Debian systems and sshd on Red Hat systems:

        accounts/manifests/ssh.pp
        
        class accounts::ssh {
        
          $sshname = $osfamily ? {
            'Debian'  => 'ssh',
            'RedHat'  => 'sshd',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          file { '/etc/ssh/sshd_config':
            ensure  => present,
            source  => 'puppet:///modules/accounts/sshd_config',
            notify  => Service["$sshname"],
          }
        
          service { "$sshname":
            hasrestart  => true,
          }
        
        }

      7. Include the accounts::ssh class within the accounts class in init.pp:

        accounts/manifests/init.pp
        
        class accounts {
        
          # ...
        
          include accounts::groups
          include accounts::ssh
        
          # ...
        
        }

        The complete init.pp

        The contents of your init.pp should now look like the following snippet:

        accounts/manifests/init.pp

        class accounts {

          $rootgroup = $osfamily ? {
            'Debian' => 'sudo',
            'RedHat' => 'wheel',
            default  => warning('This distro not supported by Accounts module'),
          }

          include accounts::groups
          include accounts::ssh

          user { 'example':
            ensure     => present,
            home       => '/home/username',
            shell      => '/bin/bash',
            managehome => true,
            gid        => 'username',
            groups     => "$rootgroup",
            password   => 'your_password_hash',
          }

        }
      8. Run the Puppet parser to test the syntax of the new class, then navigate to the examples directory to test and run the update to your accounts class:

        sudo /opt/puppetlabs/bin/puppet parser validate ssh.pp
        cd ../examples
        sudo /opt/puppetlabs/bin/puppet apply --noop init.pp
        sudo /opt/puppetlabs/bin/puppet apply init.pp
        

        Note

        You may see the following line in your output when validating:

        Error: Removing mount "files": /etc/puppet/files does not exist or is not a directory

        This refers to a Puppet configuration file, not the module resource you’re trying to copy. If this is the only error in your output, the operation should still succeed.

      9. To ensure that the ssh class is working properly, log out of the Puppet master and then try to log in as root. You should not be able to do so.
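
        For example, an SSH login attempt as root (using this guide's example hostname) should now be refused with a Permission denied error:

        ssh root@puppet.example.com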

      Add and Configure IPtables

      To complete this guide’s security settings, the firewall needs to be configured on your Puppet master and nodes. The iptables firewall software will be used.

      1. By default, changes to your iptables rules will not persist across reboots. To avoid this, install the appropriate package on your Puppet master and nodes:

        Ubuntu/Debian:

        sudo apt install iptables-persistent
        

        CentOS 7:

        CentOS 7 uses firewalld by default as a controller for iptables. Be sure firewalld is stopped and disabled before starting to work directly with iptables:

        sudo systemctl stop firewalld && sudo systemctl disable firewalld
        sudo yum install iptables-services
        
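        The iptables-services package also provides an iptables service unit; if you want your saved rules restored at boot, enable it as well:

        sudo systemctl enable iptables
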
      2. On your Puppet master, install Puppet Labs’ firewall module from the Puppet Forge:

        sudo /opt/puppetlabs/bin/puppet module install puppetlabs-firewall
        

        The module will be installed in your /etc/puppetlabs/code/environments/production/modules directory.
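
        Optionally, you can confirm the module is available by listing the modules Puppet knows about:

        sudo /opt/puppetlabs/bin/puppet module list
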

      3. Navigate to the manifests directory inside the new firewall module:

        cd /etc/puppetlabs/code/environments/production/modules/firewall/manifests/
        
      4. Create a file titled pre.pp, which will contain all basic networking rules that should be run first. The -> chaining arrow after each resource tells Puppet to apply the rules in the order written, and the Firewall { require => undef } resource default clears, for this class, the global require that site.pp will declare later, preventing a dependency cycle:

        firewall/manifests/pre.pp

        class firewall::pre {

          Firewall {
            require => undef,
          }

          # Accept all loopback traffic
          firewall { '000 lo traffic':
            proto   => 'all',
            iniface => 'lo',
            action  => 'accept',
          }->

          # Reject non-loopback traffic addressed to the loopback network
          firewall { '001 reject non-lo':
            proto       => 'all',
            iniface     => '! lo',
            destination => '127.0.0.0/8',
            action      => 'reject',
          }->

          # Accept established inbound connections
          firewall { '002 accept established':
            proto  => 'all',
            state  => ['RELATED', 'ESTABLISHED'],
            action => 'accept',
          }->

          # Allow all outbound traffic
          firewall { '003 allow outbound':
            chain  => 'OUTPUT',
            action => 'accept',
          }->

          # Allow ICMP/ping
          firewall { '004 allow icmp':
            proto  => 'icmp',
            action => 'accept',
          }->

          # Allow SSH connections
          firewall { '005 Allow SSH':
            dport  => '22',
            proto  => 'tcp',
            action => 'accept',
          }->

          # Allow HTTP/HTTPS connections
          firewall { '006 HTTP/HTTPS connections':
            dport  => ['80', '443'],
            proto  => 'tcp',
            action => 'accept',
          }

        }
      5. In the same directory, create post.pp, which will contain any firewall rules that need to be applied last:

        firewall/manifests/post.pp

        class firewall::post {

          firewall { '999 drop all':
            proto  => 'all',
            action => 'drop',
            before => undef,
          }

        }

        These rules will direct the system to drop all inbound traffic that is not explicitly permitted by an earlier rule. Setting before => undef clears the global ordering default that site.pp will declare for Firewall resources; without it, this rule would depend on the firewall::post class that contains it, creating a cycle.

      6. Run the Puppet parser on both files to check their syntax for errors:

        sudo /opt/puppetlabs/bin/puppet parser validate pre.pp
        sudo /opt/puppetlabs/bin/puppet parser validate post.pp
        
      7. Navigate to the main manifests directory:

        cd /etc/puppetlabs/code/environments/production/manifests
        
      8. Create a file named site.pp inside /etc/puppetlabs/code/environments/production/manifests. This file is the main manifest for the Puppet server service; it maps modules, classes, and resources to the nodes they should be applied to. The resources { 'firewall': purge => true } declaration removes any firewall rules that were not created by Puppet, and the Firewall { } resource default orders every firewall rule between the pre and post classes.

        site.pp

        node default {

        }

        node 'puppet.example.com' {

          include accounts

          resources { 'firewall':
            purge => true,
          }

          Firewall {
            before  => Class['firewall::post'],
            require => Class['firewall::pre'],
          }

          class { ['firewall::pre', 'firewall::post']: }

          firewall { '200 Allow Puppet Master':
            dport  => '8140',
            proto  => 'tcp',
            action => 'accept',
          }

        }
      9. Run the site.pp file through the Puppet parser to check its syntax for errors. Then, test the file with the --noop option to see if it will run:

        sudo /opt/puppetlabs/bin/puppet parser validate site.pp
        sudo /opt/puppetlabs/bin/puppet apply --noop site.pp
        

        If successful, run puppet apply without the --noop option:

        sudo /opt/puppetlabs/bin/puppet apply site.pp
        
      10. Once Puppet has finished applying the changes, check the Puppet master’s iptables rules:

        sudo iptables -L
        

        It should return:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             anywhere             /* 000 lo traffic */
        REJECT     all  --  anywhere             127.0.0.0/8          /* 001 reject non-lo */ reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere             anywhere             /* 002 accept established */ state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere             anywhere             /* 004 allow icmp */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports ssh /* 005 Allow SSH */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports http,https /* 006 HTTP/HTTPS connections */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports 8140 /* 200 Allow Puppet Master */
        DROP       all  --  anywhere             anywhere             /* 999 drop all */
        
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere             /* 003 allow outbound */
        
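        To print the rules as the rule specifications they were created from, you can also run:

        sudo iptables -S
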

      Apply Modules to the Agent Nodes

      Now that the accounts and firewall modules have been created, tested, and run on the Puppet master, it is time to apply them to your managed nodes.

      1. On the Puppet master, navigate to /etc/puppetlabs/code/environments/production/manifests:

        cd /etc/puppetlabs/code/environments/production/manifests
        
      2. Update site.pp to declare the modules, classes, and resources that should be applied to each managed node. Because the two agent definitions are identical, an optional consolidated form is shown after the snippet:

        site.pp

        node default {

        }

        node 'puppet.example.com' {
          # ...
        }

        node 'puppet-agent-ubuntu.example.com' {

          include accounts

          resources { 'firewall':
            purge => true,
          }

          Firewall {
            before  => Class['firewall::post'],
            require => Class['firewall::pre'],
          }

          class { ['firewall::pre', 'firewall::post']: }

        }

        node 'puppet-agent-centos.example.com' {

          include accounts

          resources { 'firewall':
            purge => true,
          }

          Firewall {
            before  => Class['firewall::post'],
            require => Class['firewall::pre'],
          }

          class { ['firewall::pre', 'firewall::post']: }

        }
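
        Because the two agent node definitions are identical, you could optionally collapse them into a single definition; Puppet accepts a comma-separated list of node names (a minimal sketch using this guide's example hostnames):

        node 'puppet-agent-ubuntu.example.com', 'puppet-agent-centos.example.com' {

          include accounts

          resources { 'firewall':
            purge => true,
          }

          Firewall {
            before  => Class['firewall::post'],
            require => Class['firewall::pre'],
          }

          class { ['firewall::pre', 'firewall::post']: }

        }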
      3. By default, the Puppet agent service on your managed nodes automatically checks in with the master once every 30 minutes and applies any new configurations from the master. You can also invoke the Puppet agent process manually between automatic runs.

        Log in to each managed node (as root) and run the Puppet agent:

        /opt/puppetlabs/bin/puppet agent -t
        
      4. To ensure the Puppet agent worked, log in to each managed node as your new limited user and verify the changes: sudo iptables -L should list the firewall rules created above, and attempting to log in as root over SSH should be refused.

      Congratulations! You’ve successfully installed Puppet on a master and two managed nodes. Now that you’ve confirmed everything is working, you can create additional modules to automate configuration management on your nodes. For more information, review Puppet’s open source documentation. You can also install and use modules others have created on the Puppet Forge.


      This guide is published under a CC BY-ND 4.0 license.


