
      Troubleshooting Basic Connection Issues



      This guide presents troubleshooting strategies for Linodes that are unresponsive to any network access. One reason that a Linode may be unresponsive is if you recently performed a distribution upgrade or other broad software updates to your Linode, as those changes can lead to unexpected problems for your core system components.

      Similarly, your server may be unresponsive after maintenance was applied by Linode to your server’s host (frequently, this is correlated with software/distribution upgrades performed on your deployment prior to the host’s maintenance). This guide is designed as a useful resource for either of these scenarios.

      If you can ping your Linode, but you cannot access SSH or other services, this guide will not assist with troubleshooting those services. Instead, refer to the Troubleshooting SSH or Troubleshooting Web Servers, Databases, and Other Services guides.

      Where to go for help outside this guide

      This guide explains how to use different troubleshooting commands on your Linode. These commands can produce diagnostic information and logs that may expose the root of your connection issues. For some specific examples of diagnostic information, this guide also explains the corresponding cause of the issue and presents solutions for it.

      If the information and logs you gather do not match a solution outlined here, consider searching the Linode Community Site for posts that match your system’s symptoms. Or, post a new question in the Community Site and include your commands’ output.

      Linode is not responsible for the configuration or installation of software on your Linode. Refer to Linode’s Scope of Support for a description of which issues Linode Support can help with.

      Before You Begin

      There are a few core troubleshooting tools you should familiarize yourself with that are used when diagnosing connection problems.

      The Linode Shell (Lish)

      Lish is a shell that provides access to your Linode’s serial console. Lish does not establish a network connection to your Linode, so you can use it when your networking is down or SSH is inaccessible. Much of your troubleshooting for basic connection issues will be performed from the Lish console.

      To learn about Lish in more detail, and for instructions on how to connect to your Linode via Lish, review the Using the Linode Shell (Lish) guide. In particular, using your web browser is a fast and simple way to access Lish.
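
      If you prefer a terminal, you can also reach Lish through Linode’s Lish SSH gateways. A minimal sketch, assuming a Linode Manager username of user, a Linode labeled mylinode, and the Newark gateway (substitute your own username, Linode label, and data center):

      ssh -t user@lish-newark.linode.com mylinode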

      MTR

      When your network traffic leaves your computer to your Linode, it travels through a series of routers that are administered by your internet service provider, by Linode’s transit providers, and by the various organizations that form the Internet’s backbone. It is possible to analyze the route that your traffic takes for possible service interruptions using a tool called MTR.

      MTR is similar to the traceroute tool, in that it will trace and display your traffic’s route. MTR also runs several iterations of its tracing algorithm, which means that it can report statistics like average packet loss and latency over the period that the MTR test runs.

      Review the installation instructions in Linode’s Diagnosing Network Issues with MTR guide and install MTR on your computer.

      Is your Linode Running?

      Log in to the Linode Manager and inspect the Linode’s dashboard. If the Linode is powered off, turn it on.

      Inspect the Lish Console

      If the Linode is listed as running in the Manager, or after you boot it from the Manager, open the Lish console and look for a login prompt. If a login prompt exists, try logging in with your root user credentials (or any other Linux user credentials that you previously created on the server).

      Note

      The root user is available in Lish even if root user login is disabled in your SSH configuration.

      1. If you can log in at the Lish console, move on to the diagnose network connection issues section of this guide.

        If you see a login prompt, but you have forgotten the credentials for your Linode, follow the instructions for resetting your root password and then attempt to log in at the Lish console again.

      2. If you do not see a login prompt, your Linode may have issues with booting.

      Troubleshoot Booting Issues

      If your Linode isn’t booting normally, you will not be able to rely on the Lish console to troubleshoot your deployment directly. To continue, you will first need to reboot your Linode into Rescue Mode, which is a special recovery environment that Linode provides.

      When you boot into Rescue Mode, you are booting your Linode into the Finnix recovery Linux distribution. This Finnix image includes a working network configuration, and you will be able to mount your Linode’s disks from this environment, which means that you will be able to access your files.

      1. Review the Rescue and Rebuild guide for instructions and boot into Rescue Mode. If your Linode does not reboot into Rescue Mode successfully, please contact Linode Support.

      2. Connect to Rescue Mode via the Lish console as you would normally. You will not be required to enter a username or password to start using the Lish console while in Rescue Mode.

      Perform a File System Check

      If your Linode can’t boot, then it may have experienced filesystem corruption.

      1. Review the Rescue and Rebuild guide for instructions on running a filesystem check.

        Caution

        Never run a filesystem check on a disk that is mounted.

      2. If your filesystem check reports errors that cannot be fixed, you may need to rebuild your Linode.

      3. If the filesystem check reports errors that it has fixed, try rebooting your Linode under your normal configuration profile. After you reboot, you may find that your connection issues are resolved. If you still cannot connect as normal, restart the troubleshooting process from the beginning of this guide.

      4. If the filesystem check does not report any errors, there may be another reason for your booting issues. Continue to the next section to inspect your system and kernel logs.

      Inspect System and Kernel Logs

      In addition to being able to mount your Linode’s disks, you can also change root (sometimes abbreviated as chroot) within Rescue Mode. Chrooting will make Rescue Mode’s working environment emulate your normal Linux distribution. This means your files and logs will appear where you normally expect them, and you will be able to work with tools like your standard package manager and other system utilities.

      To proceed, review the Rescue and Rebuild guide’s instructions on changing root. Once you have chrooted, you can then investigate your Linode’s logs for messages that may describe the cause of your booting issues.
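
      As a rough sketch, assuming your Linode’s root disk is /dev/sda (the default for most deployments) and that you are at the Finnix rescue shell, the chroot procedure looks like the following; the Rescue and Rebuild guide remains the authoritative reference:

      mkdir -p /media/sda
      mount /dev/sda /media/sda
      mount --bind /dev /media/sda/dev
      mount --bind /proc /media/sda/proc
      mount --bind /sys /media/sda/sys
      chroot /media/sda /bin/bash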

      In systemd Linux distributions (like Debian 8+, Ubuntu 16.04+, CentOS 7+, and recent releases of Arch), you can run the journalctl command to view system and kernel logs. In these and other distributions, you may also find system log messages in the following files:

      • /var/log/messages

      • /var/log/syslog

      • /var/log/kern.log

      • /var/log/dmesg

      You can use the less command to review the contents of these files (e.g. less /var/log/syslog). Try pasting your log messages into a search engine or searching in the Linode Community Site to see if anyone else has run into similar issues. If you don’t find any results, you can try asking about your issues in a new post on the Linode Community Site. If it becomes difficult to find a solution, you may need to rebuild your Linode.
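
      For example, these journalctl invocations surface the messages most relevant to boot problems (the -b -1 form requires a persistent journal, which not every distribution enables by default):

      journalctl -k --no-pager | tail -50    # kernel messages from the current boot
      journalctl -b -1 -p err --no-pager     # error-priority messages from the previous boot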

      Quick Tip for Ubuntu and Debian Systems

      After you have chrooted inside Rescue Mode, the following command may help with issues related to your package manager’s configuration:

      dpkg --configure -a
      

      After running this command, try rebooting your Linode into your normal configuration profile. If your issues persist, you may need to investigate and research your system logs further, or consider rebuilding your Linode.

      Diagnose Network Connection Issues

      If you can boot your Linode normally and access the Lish console, you can continue investigating network issues. Networking issues may have two causes:

      • There may be a network routing problem between you and your Linode, or:

      • If the traffic is properly routed, your Linode’s network configuration may be malfunctioning.

      Check for Network Route Problems

      To diagnose routing problems, run and analyze an MTR report from your computer to your Linode. For instructions on how to use MTR, review Linode’s MTR guide. It is useful to run your MTR report for 100 cycles in order to get a good sample size (note that running a report with this many cycles will take more time to complete). This recommended command includes other helpful options:

      mtr -rwbzc 100 -i 0.2 <Linode's IP address>
      

      Once you have generated this report, compare it with the following example scenarios.

      Note

      If you are located in China, and the output of your MTR report shows high packet loss or an improperly configured router, then your IP address may have been blacklisted by the GFW (Great Firewall of China). Linode is not able to change your IP address if it has been blacklisted by the GFW. If you have this issue, review this community post for troubleshooting help.
      • High Packet Loss

        root@localhost:~# mtr --report www.google.com
        HOST: localhost                   Loss%   Snt   Last   Avg  Best  Wrst StDev
        1. 63.247.74.43                  0.0%    10    0.3   0.6   0.3   1.2   0.3
        2. 63.247.64.157                 0.0%    10    0.4   1.0   0.4   6.1   1.8
        3. 209.51.130.213               60.0%    10    0.8   2.7   0.8  19.0   5.7
        4. aix.pr1.atl.google.com       60.0%    10    6.7   6.8   6.7   6.9   0.1
        5. 72.14.233.56                 50.0%    10    7.2   8.3   7.1  16.4   2.9
        6. 209.85.254.247               40.0%    10   39.1  39.4  39.1  39.7   0.2
        7. 64.233.174.46                40.0%    10   39.6  40.4  39.4  46.9   2.3
        8. gw-in-f147.1e100.net         40.0%    10   39.6  40.5  39.5  46.7   2.2
        

        This example report shows high persistent packet loss starting mid-way through the route at hop 3, which indicates an issue with the router at hop 3. If your report looks like this, open a support ticket with your MTR results for further troubleshooting assistance.

        Note

        If your route only shows packet loss at certain routers, and not through to the end of the route, then it is likely that those routers are purposefully limiting ICMP responses. This is generally not a problem for your connection. Linode’s MTR guide provides more context for packet loss issues.


      • Improperly Configured Router

        root@localhost:~# mtr --report www.google.com
        HOST: localhost                   Loss%   Snt   Last   Avg  Best  Wrst StDev
        1. 63.247.74.43                  0.0%    10    0.3   0.6   0.3   1.2   0.3
        2. 63.247.64.157                 0.0%    10    0.4   1.0   0.4   6.1   1.8
        3. 209.51.130.213                0.0%    10    0.8   2.7   0.8  19.0   5.7
        4. aix.pr1.atl.google.com        0.0%    10    6.7   6.8   6.7   6.9   0.1
        5. ???                           0.0%    10    0.0   0.0   0.0   0.0   0.0
        6. ???                           0.0%    10    0.0   0.0   0.0   0.0   0.0
        7. ???                           0.0%    10    0.0   0.0   0.0   0.0   0.0
        8. ???                           0.0%    10    0.0   0.0   0.0   0.0   0.0
        9. ???                           0.0%    10    0.0   0.0   0.0   0.0   0.0
        10. ???                           0.0%    10    0.0   0.0   0.0   0.0   0.0
        

        If your report shows question marks instead of the hostnames (or IP addresses) of the routers, and if these question marks persist to the end of the route, then the report indicates an improperly configured router. If your report looks like this, open a support ticket with your MTR results for further troubleshooting assistance.

        Note

        If your route only shows question marks for certain routers, and not through to the end of the route, then it is likely that those routers are purposefully blocking ICMP responses. This is generally not a problem for your connection. Linode’s MTR guide provides more information about router configuration issues.
      • Destination Host Networking Improperly Configured

        root@localhost:~# mtr --report www.google.com
        HOST: localhost                   Loss%   Snt   Last   Avg  Best  Wrst StDev
        1. 63.247.74.43                  0.0%    10    0.3   0.6   0.3   1.2   0.3
        2. 63.247.64.157                 0.0%    10    0.4   1.0   0.4   6.1   1.8
        3. 209.51.130.213                0.0%    10    0.8   2.7   0.8  19.0   5.7
        4. aix.pr1.atl.google.com        0.0%    10    6.7   6.8   6.7   6.9   0.1
        5. 72.14.233.56                  0.0%    10    7.2   8.3   7.1  16.4   2.9
        6. 209.85.254.247                0.0%    10   39.1  39.4  39.1  39.7   0.2
        7. 64.233.174.46                 0.0%    10   39.6  40.4  39.4  46.9   2.3
        8. gw-in-f147.1e100.net        100.0%    10    0.0   0.0   0.0   0.0   0.0
        

        If your report shows no packet loss or low packet loss (or non-persistent packet loss isolated to certain routers) until the end of the route, and 100% loss at your Linode, then the report indicates that your Linode’s network interface is not configured correctly. If your report looks like this, move down to confirming network configuration issues from Rescue Mode.

      Note

      If your report does not look like any of the previous examples, read through the MTR guide for other potential scenarios.

      Confirm Network Configuration Issues from Rescue Mode

      If your MTR indicates a configuration issue within your Linode, you can confirm the problem by using Rescue Mode:

      1. Reboot your Linode into Rescue Mode.

      2. Run another MTR report from your computer to your Linode’s IP address.

      3. As noted earlier, Rescue Mode boots with a working network configuration. If your new MTR report does not show the same packet loss that it did before, this result confirms that your deployment’s network configuration needs to be fixed. Continue to troubleshooting network configuration issues.

      4. If your new MTR report still shows the same packet loss at your Linode, this result indicates issues outside of your configuration. Open a support ticket with your MTR results for further troubleshooting assistance.

      Open a Support Ticket with your MTR Results

      Before opening a support ticket, you should also generate a reverse MTR report. A reverse MTR report is run from your Linode and targets your local machine’s public IP address, whether you’re on your home LAN, for example, or public WiFi. To run an MTR from your Linode, log in to your Lish console. To find your local machine’s public IP, visit a website like https://www.whatismyip.com/.
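
      For example, from the Lish console (substituting the public IP address you looked up):

      mtr -rwbzc 100 -i 0.2 <your local machine's IP address>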

      Once you have generated your original MTR and your reverse MTR, open a Linode support ticket, and include your reports and a description of the troubleshooting you’ve performed so far. Linode Support will try to help further diagnose the routing issue.

      Troubleshoot Network Configuration Issues

      If you have determined that your network configuration is the cause of the problem, review the following troubleshooting suggestions. If you make any changes in an attempt to fix the issue, you can test those changes with these steps:

      1. Run another MTR report (or ping the Linode) from your computer to your Linode’s IP.

      2. If the report shows no packet loss but you still can’t access SSH or other services, this result indicates that your network connection is up again, but the other services are still down. Move on to troubleshooting SSH or troubleshooting other services.

      3. If the report still shows the same packet loss, review the remaining troubleshooting suggestions in this section.

      If the recommendations in this section do not resolve your issue, try pasting your diagnostic commands’ output into a search engine or searching for your output in the Linode Community Site to see if anyone else has run into similar issues. If you don’t find any results, you can try asking about your issues in a new post on the Linode Community Site. If it becomes difficult to find a solution, you may need to rebuild your Linode.

      Try Enabling Network Helper

      A quick fix may be to enable Linode’s Network Helper tool. Network Helper will attempt to generate the appropriate static networking configuration for your Linux distribution. After you enable Network Helper, reboot your Linode for the changes to take effect. If Network Helper was already enabled, continue to the remaining troubleshooting suggestions in this section.

      Did You Upgrade to Ubuntu 18.04+ From an Earlier Version?

      If you performed an in-place upgrade from an earlier version of Ubuntu to Ubuntu 18.04+, you may need to enable the systemd-networkd service:

      sudo systemctl enable systemd-networkd
      

      Afterwards, reboot your Linode.

      Run Diagnostic Commands

      To collect more information about your network configuration, collect output from the diagnostic commands appropriate for your distribution:

      Network diagnostic commands

      • Debian 7, Ubuntu 14.04

        sudo service networking status
        cat /etc/network/interfaces
        ip a
        ip r
        sudo ifdown eth0 && sudo ifup eth0
        
      • Debian 8 and 9, Ubuntu 16.04

        sudo systemctl status networking.service -l
        sudo journalctl -u networking --no-pager | tail -20
        cat /etc/network/interfaces
        ip a
        ip r
        sudo ifdown eth0 && sudo ifup eth0
        
      • Ubuntu 18.04

        sudo networkctl status
        sudo systemctl status systemd-networkd -l
        sudo journalctl -u systemd-networkd --no-pager | tail -20
        cat /etc/systemd/network/05-eth0.network
        ip a
        ip r
        sudo netplan apply
        
      • Arch, CoreOS

        sudo systemctl status systemd-networkd -l
        sudo journalctl -u systemd-networkd --no-pager | tail -20
        cat /etc/systemd/network/05-eth0.network
        ip a
        ip r
        
      • CentOS 6

        sudo service network status
        cat /etc/sysconfig/network-scripts/ifcfg-eth0
        ip a
        ip r
        sudo ifdown eth0 && sudo ifup eth0
        
      • CentOS 7, Fedora

        sudo systemctl status NetworkManager -l
        sudo journalctl -u NetworkManager --no-pager | tail -20
        sudo nmcli
        cat /etc/sysconfig/network-scripts/ifcfg-eth0
        ip a
        ip r
        sudo ifdown eth0 && sudo ifup eth0
        

      Inspect Error Messages

      Your commands’ output may show error messages, including generic errors like Failed to start Raise network interfaces. There may also be more specific errors that appear. Two common errors that can appear are related to Sendmail and iptables:

      Sendmail

      If you find a message similar to the following, it is likely that a broken Sendmail update is at fault:

        
      /etc/network/if-up.d/sendmail: 44: .: Can't open /usr/share/sendmail/dynamic
      run-parts: /etc/network/if-up.d/sendmail exited with return code 2
      
      

      The Sendmail issue can usually be resolved by running the following command and restarting your Linode:

      sudo mv /etc/network/if-up.d/sendmail ~
      sudo ifdown -a && sudo ifup -a
      

      Note

      Read more about the Sendmail bug here.

      iptables

      Malformed rules in your iptables ruleset can sometimes cause issues for your network scripts. An error similar to the following can appear in your logs if this is the case:

        
      Apr 06 01:03:17 xlauncher ifup[6359]: run-parts: failed to exec /etc/network/if-
      Apr 06 01:03:17 xlauncher ifup[6359]: run-parts: /etc/network/if-up.d/iptables e
      
      

      Run the following command and restart your Linode to resolve this issue:

      sudo mv /etc/network/if-up.d/iptables ~
      

      Please note that your firewall will be down at this point, so you will need to re-enable it manually. Review the Control Network Traffic with iptables guide for help with managing iptables.

      Was your Interface Renamed?

      In your commands’ output, you might notice that your eth0 interface is missing and has been replaced with another name (for example, enp0s3 or ens3). This behavior can be caused by systemd’s Predictable Network Interface Names feature.

      1. Disable the use of Predictable Network Interface Names with these commands:

        ln -s /dev/null /etc/systemd/network/99-default.link
        ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules
        
      2. Reboot your Linode for the changes to take effect.
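
        After the reboot, you can confirm that the interface has returned to its traditional name:

        ip a | grep eth0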

      Review Firewall Rules

      If your interface is up but your networking is still down, your firewall (which is likely implemented by the iptables software) may be blocking all connections, including basic ping requests. To review your current firewall ruleset, run:

      sudo iptables -L # displays IPv4 rules
      sudo ip6tables -L # displays IPv6 rules
      

      Note

      Your deployment may be running FirewallD or UFW, which are frontend software packages used to more easily manage your iptables rules. Run these commands to find out if you are running either package:

      sudo ufw status
      sudo firewall-cmd --state
      

      Review How to Configure a Firewall with UFW and Introduction to FirewallD on CentOS to learn how to manage and inspect your firewall rules with those packages.

      Firewall rulesets can vary widely. Review our Control Network Traffic with iptables guide to analyze your rules and determine if they are blocking connections.

      Disable Firewall Rules

      In addition to analyzing your firewall ruleset, you can also temporarily disable your firewall to test if it is interfering with your connections. Leaving your firewall disabled increases your security risk, so we recommend re-enabling it afterwards with a modified ruleset that will accept your connections. Review Control Network Traffic with iptables for help with this subject.

      1. Create a temporary backup of your current iptables:

        sudo iptables-save > ~/iptables.txt
        
      2. Set the INPUT, FORWARD and OUTPUT packet policies as ACCEPT:

        sudo iptables -P INPUT ACCEPT
        sudo iptables -P FORWARD ACCEPT
        sudo iptables -P OUTPUT ACCEPT
        
      3. Flush the nat table that is consulted when a packet that creates a new connection is encountered:

        sudo iptables -t nat -F
        
      4. Flush the mangle table that is used for specialized packet alteration:

        sudo iptables -t mangle -F
        
      5. Flush all the chains in the table:

        sudo iptables -F
        
      6. Delete every non-built-in chain in the table:

        sudo iptables -X
        
      7. Repeat these steps with the ip6tables command to flush your IPv6 rules. Be sure to assign a different name to the IPv6 rules file (e.g., ~/ip6tables.txt).
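
      For reference, the IPv6 sequence mirrors the IPv4 commands shown above (on older kernels the IPv6 nat table may not exist; skip that step if it returns an error):

        sudo ip6tables-save > ~/ip6tables.txt
        sudo ip6tables -P INPUT ACCEPT
        sudo ip6tables -P FORWARD ACCEPT
        sudo ip6tables -P OUTPUT ACCEPT
        sudo ip6tables -t nat -F
        sudo ip6tables -t mangle -F
        sudo ip6tables -F
        sudo ip6tables -X

      If you later determine that your firewall was not the cause, you can restore the rulesets you backed up:

        sudo iptables-restore < ~/iptables.txt
        sudo ip6tables-restore < ~/ip6tables.txt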

      Next Steps

      If you are able to restore basic networking, but you still can’t access SSH or other services, refer to the Troubleshooting SSH or Troubleshooting Web Servers, Databases, and Other Services guides.

      If your connection issues were the result of maintenance performed by Linode, review the Reboot Survival Guide for methods to prepare a Linode for any future maintenance.


      Getting Started with Puppet – Basic Installation and Setup



      Puppet is a configuration management tool that simplifies system administration. Puppet uses a client/server model in which your managed nodes, running a process called the Puppet agent, talk to and pull down configuration profiles from a Puppet master.

      Puppet deployments can range from small groups of servers up to enterprise-level operations. This guide will demonstrate how to install Puppet 6.1 on three servers:

      • A Puppet master running Ubuntu 18.04
      • A managed Puppet node running Ubuntu 18.04
      • A managed Puppet node running CentOS 7

      After installation, the next section will show you how to secure these servers via Puppet. This section will demonstrate core features of the Puppet language.

      Note

      Most guides will instruct you to follow the How to Secure your Server guide before proceeding. Because Puppet will be used to perform this task, you should begin this guide as the root user. A limited user with administrative privileges will be configured via Puppet in later steps.

      Before You Begin

      The following table displays example system information for the servers that will be deployed in this guide:

      Description      OS            Hostname             FQDN                              IP
      Puppet master    Ubuntu 18.04  puppet               puppet.example.com                192.0.2.2
      Node 1 (Ubuntu)  Ubuntu 18.04  puppet-agent-ubuntu  puppet-agent-ubuntu.example.com   192.0.2.3
      Node 2 (CentOS)  CentOS 7      puppet-agent-centos  puppet-agent-centos.example.com   192.0.2.4

      You can choose different hostnames and fully qualified domain names (FQDN) for each of your servers, and the IP addresses for your servers will be different from the example addresses listed. You will need to have a registered domain name in order to specify FQDNs for your servers.

      Throughout this guide, commands and code snippets will reference the values displayed in this table. Wherever such a value appears, replace it with your own value.

      Create your Linodes

      1. Create three Linodes corresponding to the servers listed in the table above. Your Puppet master Linode should have at least four CPU cores; the Linode 8GB plan is recommended. The two other nodes can be of any plan size, depending on how you intend to use them after Puppet is installed and configured.

      2. Configure your timezone on your master and agent nodes so that they all have the same time data (see the example commands after this list).

      3. Set the hostname for each server.

      4. Set the FQDN for each Linode by editing the servers’ /etc/hosts files.

        Example content for the hosts file

        You can model the contents of your /etc/hosts files on these snippets:

        Master

        127.0.0.1   localhost
        192.0.2.2   puppet.example.com puppet

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        Node 1 (Ubuntu)

        127.0.0.1   localhost
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        Node 2 (CentOS)

        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
      5. Set up DNS records for your Linodes’ FQDNs. For each Linode, create a new A record with the name specified by its FQDN and assign it to that Linode’s IP address.

        If you don’t use Linode’s name servers for your domain, consult your name server authority’s website for instructions on how to edit your DNS records.

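
      For example, on any of the three servers (all of which run systemd), you might set the timezone and the hostname like this, substituting your own values:

      timedatectl set-timezone UTC
      hostnamectl set-hostname puppet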

      Puppet Master

      Install the Puppet Server Software

      The Puppet master runs the puppetserver service, which is responsible for compiling and supplying configuration profiles to your managed nodes.

      The puppetserver service has the Puppet agent service as a dependency (which is just called puppet when running on your system). This means that the agent software will also be installed and can be run on your master. Because your master can run the agent service, you can configure your master via Puppet just as you can configure your other managed nodes.

      1. Log in to your Puppet master via SSH (as root):

        ssh root@puppet.example.com
        
      2. Download the Puppet repository, update your system packages, and install puppetserver:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        dpkg -i puppet-release-bionic.deb
        apt update
        apt install puppetserver
        
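        To confirm what was installed, you can check the package version and the bundled Puppet version:

        dpkg -s puppetserver | grep -i version
        /opt/puppetlabs/bin/puppet --version
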

      Configure the Server Software

      1. Use the puppet config command to set values for the dns_alt_names setting:

        /opt/puppetlabs/bin/puppet config set dns_alt_names 'puppet,puppet.example.com' --section main
        

        If you inspect the configuration file, you’ll see that the setting has been added:

        cat /etc/puppetlabs/puppet/puppet.conf
        
          
        [main]
        dns_alt_names = puppet,puppet.example.com
        # ...
        
        

        Note

        The puppet command by default is not added to your PATH. Using Puppet’s interactive commands requires a full file path. To avoid this, update your PATH for your existing shell session:

        export PATH=/opt/puppetlabs/bin:$PATH
        

        A more permanent solution would be to add this to your .profile or .bashrc files.

      2. Update your Puppet master’s /etc/hosts to resolve your managed nodes’ IP addresses. For example, your /etc/hosts file might look like the following:

        /etc/hosts
        
        127.0.0.1   localhost
        192.0.2.2   puppet.example.com puppet
        
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters


      3. Start and enable the puppetserver service:

        systemctl start puppetserver
        systemctl enable puppetserver
        

        By default, the Puppet master listens for client connections on port 8140. If the puppetserver service fails to start, check that the port is not already in use:

        netstat -anpl | grep 8140
        
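        On systems where netstat is unavailable, the ss utility reports the same information:

        ss -tlnp | grep 8140
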

      Puppet Agents

      Install Puppet Agent

      1. On your managed node running Ubuntu 18.04, install the puppet-agent package:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        dpkg -i puppet-release-bionic.deb
        apt update
        apt install puppet-agent
        
      2. On your managed node running CentOS 7, enter:

        rpm -Uvh https://yum.puppet.com/puppet/puppet-release-el-7.noarch.rpm
        yum install puppet-agent
        

      Configure Puppet Agent

      1. Modify your managed nodes’ hosts files to resolve the Puppet master’s IP. To do so, add a line like:

        /etc/hosts
        
        192.0.2.2    puppet.example.com puppet

        Example content for the hosts file

        You can model the contents of your managed nodes’ /etc/hosts files on the following snippets. These incorporate the FQDN declarations described in the Create your Linodes section:

        Node 1 (Ubuntu)

        127.0.0.1   localhost
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu

        192.0.2.2   puppet.example.com puppet

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        Node 2 (CentOS)

        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos

        192.0.2.2   puppet.example.com puppet
      2. On each managed node, use the puppet config command to set the value for your server setting to the FQDN of the master:

        /opt/puppetlabs/bin/puppet config set server 'puppet.example.com' --section main
        

        If you inspect the configuration file on the nodes, you’ll see that the setting has been added:

        cat /etc/puppetlabs/puppet/puppet.conf
        
          
        [main]
        server = puppet.example.com
        # ...
        
        
      3. Use the puppet resource command to start and enable the Puppet agent service:

        /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
        

        Note

        On systemd systems, the above command is equivalent to using these two systemctl commands:

        systemctl start puppet
        systemctl enable puppet
        

      Generate and Sign Certificates

      Before your managed nodes can receive configurations from the master, they first need to be authenticated:

      1. On your Puppet agents, generate a certificate for the Puppet master to sign:

        /opt/puppetlabs/bin/puppet agent -t
        

        This command will output an error stating that no certificate has been found. This is expected, because the generated certificate needs to be approved by the Puppet master.

      2. Log in to your Puppet master and list the certificates that need approval:

        /opt/puppetlabs/bin/puppetserver ca list
        

        It should output a list with your agent nodes’ hostnames.

      3. Approve the certificates:

        /opt/puppetlabs/bin/puppetserver ca sign --certname puppet-agent-ubuntu.example.com,puppet-agent-centos.example.com
        
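        Alternatively, if the pending list contains only certificates you expect, you can sign them all at once with the ca tool’s --all flag:

        /opt/puppetlabs/bin/puppetserver ca sign --all
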
      4. Return to the Puppet agent nodes and run the Puppet agent again:

        /opt/puppetlabs/bin/puppet agent -t
        

        You should see something like the following:

          
        Info: Downloaded certificate for hostname.example.com from puppet
        Info: Using configured environment 'production'
        Info: Retrieving pluginfacts
        Info: Retrieving plugin
        Info: Retrieving locales
        Info: Caching catalog for hostname.example.com
        Info: Applying configuration version '1547066428'
        Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
        Notice: Applied catalog in 0.02 seconds
        
        

      Add Modules to Configure Agent Nodes

      The Puppet master and agent nodes are now functional, but they are not secure. Based on concepts from the How to Secure your Server guide, a limited user and a firewall should be configured. This can be done on all nodes through the creation of basic Puppet modules, shown below.

      Note

      This is not meant to provide a basis for a fully-hardened server, and is intended only as a starting point. Alter and add firewall rules and other configuration options, depending on your specific needs.

      Puppet modules are Puppet’s prescribed way of organizing configuration code to serve specific purposes, like installing and configuring an application. You can create custom modules, or you can download and use modules published on Puppet Forge.

      Add a Limited User

      To create a new limited user on your nodes, you will create and apply a new module called accounts. This module will employ the user resource.

      1. From the Puppet master, navigate to the /etc/puppetlabs/code/environments/production/modules directory. When a managed node requests its configuration from the master, the Puppet server process will look in this location for your modules:

        cd /etc/puppetlabs/code/environments/production/modules/
        
      2. Create the directory for a new accounts module:

        mkdir accounts
        cd accounts
        
      3. Create the following directories inside the accounts module:

        mkdir {examples,files,manifests,templates}
        
        Directory   Description
        manifests   The Puppet code which powers the module
        files       Static files to be copied to managed nodes
        templates   Template files to be copied to managed nodes that can be customized with variables
        examples    Example code which shows how to use the module

        Note

        Review Puppet’s Module fundamentals article for more information on how a module is structured.
      4. Navigate to the manifests directory:

        cd manifests
        
      5. Any file which contains Puppet code is called a manifest, and each manifest file ends in .pp. When located inside a module, a manifest should only define one class. If a module’s manifests directory has an init.pp file, the class definition it contains is considered the main class for the module. The class definition inside init.pp should have the same name as the module.

        Create an init.pp file with the contents of the following snippet. Replace all instances of username with a username of your choosing:

        accounts/manifests/init.pp
        
        class accounts {
        
          user { 'username':
            ensure      => present,
            home        => '/home/username',
            shell       => '/bin/bash',
            managehome  => true,
            gid         => 'username',
          }
        
        }

        Option      Description
        ensure      Ensures that the user exists if set to present, or does not exist if set to absent
        home        The path for the user’s home directory
        managehome  Controls whether a home directory should be created when creating the user
        shell       The path to the shell for the user
        gid         The user’s primary group
      6. Although the class declares what the user’s primary group should be, it will not create the group itself. Create a new file called groups.pp inside the manifests directory with the following contents. Replace username with your chosen username:

        accounts/manifests/groups.pp
        
        class accounts::groups {
        
          group { 'username':
            ensure  => present,
          }
        
        }
      7. Your accounts class can declare your new accounts::groups class for use within the accounts class scope. Open your init.pp in your editor and enter a new include declaration at the beginning of the class:

        accounts/manifests/init.pp
        
        class accounts {
        
          include accounts::groups
        
          # ...
        
        }
      8. The new user should have administrative privileges. Because we have agent nodes on both Debian- and Red Hat-based systems, the new user needs to be in the sudo group on Debian systems, and the wheel group on Red Hat systems.

        This value can be set dynamically through the use of Puppet facts. The facts system collects system information about your nodes and makes it available in your manifests.

        Add a selector statement to the top of your accounts class:

        accounts/manifests/init.pp
        
        class accounts {
        
          $rootgroup = $osfamily ? {
            'Debian'  => 'sudo',
            'RedHat'  => 'wheel',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          include accounts::groups
        
          # ...
        
        }

        This code defines the value for the $rootgroup variable by checking the value of $osfamily, which is one of Puppet’s core facts. If the value for $osfamily does not match Debian or Red Hat, the default value will output a warning that the distribution selected is not supported by this module.

        Note

        The Puppet Configuration Language executes code from top to bottom. Because the user resource declaration will reference the $rootgroup variable, you must define $rootgroup before the user declaration.

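        You can inspect the value of this fact (and others) on any node with the facter tool, which is installed alongside the Puppet agent:

        /opt/puppetlabs/bin/facter os.family
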
      9. Update the user resource to include the groups option as follows:

        accounts/manifests/init.pp
        
        # ...
        
        user { 'username':
          ensure      => present,
          home        => '/home/username',
          shell       => '/bin/bash',
          managehome  => true,
          gid         => 'username',
          groups      => "$rootgroup",
        }
        
        # ...

        The value "$rootgroup" is enclosed in double quotes " " instead of single quotes ' ' because it is a variable which needs to be interpolated in your code.

      10. The final value that needs to be added is the user’s password. Since we do not want to use plain text, the password should be supplied to Puppet as a password hash in crypt format, which is supported by default on Linux. Generate an MD5-crypt hash with the openssl command:

        openssl passwd -1
        

        You will be prompted to enter your password. A hashed password will be output. Copy this value to your clipboard.

      11. Update the user resource to include the password option as follows; insert your copied password hash as the value for the option:

        accounts/manifests/init.pp
        
        # ...
        
        user { 'username':
          ensure      => present,
          home        => '/home/username',
          shell       => '/bin/bash',
          managehome  => true,
          gid         => 'username',
          groups      => "$rootgroup",
          password    => 'your_password_hash',
        }
        
        # ...

        Caution

        The hashed password must be included in single quotes ' '.

      12. After saving your changes, use the Puppet parser to ensure that the code is correct:

        /opt/puppetlabs/bin/puppet parser validate init.pp
        

        Any errors that need to be addressed will be logged to standard output. If nothing is returned, your code is valid.

      13. Navigate to the examples directory and create another init.pp file:

        cd ../examples
        
        This test manifest simply declares the accounts class so that the module can be applied locally:

        accounts/examples/init.pp

        include accounts
      14. While still in the examples directory, test the module:

        /opt/puppetlabs/bin/puppet apply --noop init.pp
        

        Note

        The --noop parameter prevents Puppet from actually applying the module to your system and making any changes.

        It should return:

          
        Notice: Compiled catalog for puppet.example.com in environment production in 0.26 seconds
        Notice: /Stage[main]/Accounts::Groups/Group[username]/ensure: current_value absent, should be present (noop)
        Notice: Class[Accounts::Groups]: Would have triggered 'refresh' from 1 events
        Notice: /Stage[main]/Accounts/User[username]/ensure: current_value absent, should be present (noop)
        Notice: Class[Accounts]: Would have triggered 'refresh' from 1 events
        Notice: Stage[main]: Would have triggered 'refresh' from 2 events
        Notice: Finished catalog run in 0.02 seconds
        
        
      15. Again from the examples directory, run puppet apply to make these changes to the Puppet master server:

        /opt/puppetlabs/bin/puppet apply init.pp
        

        Puppet will create your limited Linux user on your master.

      16. Log out as root and log in to the Puppet master as your new user.

      Edit SSH Settings

      Although a new limited user has successfully been added to the Puppet master, it is still possible to log in to the system as root. To properly secure your system, root access should be disabled.

      Note

      Because you are now logged in to the Puppet master as a limited user, you will need to execute commands and edit files with the user’s sudo privileges.

      1. Navigate to the files directory within the accounts module:

        cd /etc/puppetlabs/code/environments/production/modules/accounts/files
        
      2. Copy your system’s existing sshd_config file to this directory:

        sudo cp /etc/ssh/sshd_config .
        
      3. Open the file in your editor (making sure that you open it with sudo privileges) and set the PermitRootLogin value to no:

        accounts/files/sshd_config
      4. Navigate back to the manifests directory:

        cd ../manifests
        
      5. Create a new manifest called ssh.pp. Use the file resource to replace the default SSH configuration file with one managed by Puppet:

        accounts/manifests/ssh.pp
        
        class accounts::ssh {
        
          file { '/etc/ssh/sshd_config':
            ensure  => present,
            source  => 'puppet:///modules/accounts/sshd_config',
          }
        
        }

        Note

        The files directory is omitted from the source line because the files folder is the default location of files within a module. For more information on the format used to access resources in a module, refer to the official Puppet module documentation.
      6. Create a second resource to restart the SSH service and set it to run whenever sshd_config is changed. This will also require a selector statement because the SSH service is named ssh on Debian systems and sshd on Red Hat systems:

        accounts/manifests/ssh.pp
        
        class accounts::ssh {
        
          $sshname = $osfamily ? {
            'Debian'  => 'ssh',
            'RedHat'  => 'sshd',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          file { '/etc/ssh/sshd_config':
            ensure  => present,
            source  => 'puppet:///modules/accounts/sshd_config',
            notify  => Service["$sshname"],
          }
        
          service { "$sshname":
            hasrestart  => true,
          }
        
        }


      7. Include the accounts::ssh class within the accounts class in init.pp:

        accounts/manifests/init.pp
        
        class accounts {
        
          # ...
        
          include accounts::groups
          include accounts::ssh
        
          # ...
        
        }

        The complete init.pp

        The contents of your init.pp should now look like the following snippet:

        accounts/manifests/init.pp
        
        class accounts {
        
          $rootgroup = $osfamily ? {
            'Debian'  => 'sudo',
            'RedHat'  => 'wheel',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          include accounts::groups
          include accounts::ssh
        
          user { 'username':
            ensure      => present,
            home        => '/home/username',
            shell       => '/bin/bash',
            managehome  => true,
            gid         => 'username',
            groups      => "$rootgroup",
            password    => 'your_password_hash',
          }
        
        }
      8. Run the Puppet parser to test the syntax of the new class, then navigate to the examples directory to test and run the update to your accounts class:

        sudo /opt/puppetlabs/bin/puppet parser validate ssh.pp
        cd ../examples
        sudo /opt/puppetlabs/bin/puppet apply --noop init.pp
        sudo /opt/puppetlabs/bin/puppet apply init.pp
        

        Note

        You may see the following line in your output when validating:

          
        Error: Removing mount "files": /etc/puppet/files does not exist or is not a directory
        
        

        This refers to a Puppet configuration file, not the module resource you’re trying to copy. If this is the only error in your output, the operation should still succeed.

      9. To ensure that the ssh class is working properly, log out of the Puppet master and then try to log in as root. You should not be able to do so.
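
      For example, a login attempt like the following should now be rejected (the exact denial message varies with your SSH and PAM configuration):

        ssh root@puppet.example.com
        Permission denied, please try again.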

      Add and Configure IPtables

      To complete this guide’s security settings, the firewall needs to be configured on your Puppet master and nodes. The iptables firewall software will be used.

      1. By default, changes to your iptables rules will not persist across reboots. To avoid this, install the appropriate package on your Puppet master and nodes:

        Ubuntu/Debian:

        sudo apt install iptables-persistent
        

        CentOS 7:

        CentOS 7 uses firewalld by default as a controller for iptables. Be sure firewalld is stopped and disabled before starting to work directly with iptables:

        sudo systemctl stop firewalld && sudo systemctl disable firewalld
        sudo yum install iptables-services
        
      2. On your Puppet master, install Puppet Labs’ firewall module from the Puppet Forge:

        sudo /opt/puppetlabs/bin/puppet module install puppetlabs-firewall
        

        The module will be installed in your /etc/puppetlabs/code/environments/production/modules directory.
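
        You can verify that the module is present by listing the installed modules:

        sudo /opt/puppetlabs/bin/puppet module list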

      3. Navigate to the manifests directory inside the new firewall module:

        cd /etc/puppetlabs/code/environments/production/modules/firewall/manifests/
        
      4. Create a file titled pre.pp, which will contain all basic networking rules that should be run first:

        firewall/manifests/pre.pp
        
        class firewall::pre {
        
          Firewall {
            require => undef,
          }
        
           # Accept all loopback traffic
          firewall { '000 lo traffic':
            proto       => 'all',
            iniface     => 'lo',
            action      => 'accept',
          }->
        
           #Drop non-loopback traffic
          firewall { '001 reject non-lo':
            proto       => 'all',
            iniface     => '! lo',
            destination => '127.0.0.0/8',
            action      => 'reject',
          }->
        
           #Accept established inbound connections
          firewall { '002 accept established':
            proto       => 'all',
            state       => ['RELATED', 'ESTABLISHED'],
            action      => 'accept',
          }->
        
           #Allow all outbound traffic
          firewall { '003 allow outbound':
            chain       => 'OUTPUT',
            action      => 'accept',
          }->
        
           #Allow ICMP/ping
          firewall { '004 allow icmp':
            proto       => 'icmp',
            action      => 'accept',
          }->
        
           #Allow SSH connections
          firewall { '005 Allow SSH':
            dport    => '22',
            proto   => 'tcp',
            action  => 'accept',
          }->
        
           #Allow HTTP/HTTPS connections
          firewall { '006 HTTP/HTTPS connections':
            dport    => ['80', '443'],
            proto   => 'tcp',
            action  => 'accept',
          }
        
        }
      5. In the same directory, create post.pp, which will run any firewall rules that need to be input last:

        firewall/manifests/post.pp
        
        class firewall::post {
        
          firewall { '999 drop all':
            proto  => 'all',
            action => 'drop',
            before => undef,
          }
        
        }

        These rules will direct the system to drop all inbound traffic that is not already permitted in the firewall.

      6. Run the Puppet parser on both files to check their syntax for errors:

        sudo /opt/puppetlabs/bin/puppet parser validate pre.pp
        sudo /opt/puppetlabs/bin/puppet parser validate post.pp
        
      7. Navigate to the main manifests directory:

        cd /etc/puppetlabs/code/environments/production/manifests
        
      8. Create a file named site.pp inside /etc/puppetlabs/code/environments/production/manifests. This file is the main manifest for the Puppet server service. It is used to map modules, classes, and resources to the nodes that they should be applied to.

        site.pp
        
        node default {
        
        }
        
        node 'puppet.example.com' {
        
          include accounts
        
          resources { 'firewall':
            purge => true,
          }
        
          Firewall {
            before        => Class['firewall::post'],
            require       => Class['firewall::pre'],
          }
        
          class { ['firewall::pre', 'firewall::post']: }
        
          firewall { '200 Allow Puppet Master':
            dport         => '8140',
            proto         => 'tcp',
            action        => 'accept',
          }
        
        }
      9. Run the site.pp file through the Puppet parser to check its syntax for errors. Then, test the file with the --noop option to see if it will run:

        sudo /opt/puppetlabs/bin/puppet parser validate site.pp
        sudo /opt/puppetlabs/bin/puppet apply --noop site.pp
        

        If successful, run puppet apply without the --noop option:

        sudo /opt/puppetlabs/bin/puppet apply site.pp
        
      10. Once Puppet has finished applying the changes, check the Puppet master’s iptables rules:

        sudo iptables -L
        

        It should return:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             anywhere             /* 000 lo traffic */
        REJECT     all  --  anywhere             127.0.0.0/8          /* 001 reject non-lo */ reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere             anywhere             /* 002 accept established */ state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere             anywhere             /* 004 allow icmp */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports ssh /* 005 Allow SSH */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports http,https /* 006 HTTP/HTTPS connections */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports 8140 /* 200 Allow Puppet Master */
        DROP       all  --  anywhere             anywhere             /* 999 drop all */
        
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere             /* 003 allow outbound */
        

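        If you installed iptables-persistent (Ubuntu/Debian) or iptables-services (CentOS 7) earlier, you can save these rules so they survive reboots; note that the Puppet agent will also re-apply the catalog’s firewall rules on each run:

        sudo netfilter-persistent save     # Ubuntu/Debian
        sudo service iptables save         # CentOS 7
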
      Apply Modules to the Agent Nodes

      Now that the accounts and firewall modules have been created, tested, and run on the Puppet master, it is time to apply them to your managed nodes.

      1. On the Puppet master, navigate to /etc/puppetlabs/code/environments/production/manifests:

        cd /etc/puppetlabs/code/environments/production/manifests
        
      2. Update site.pp to declare the modules, classes, and resources that should be applied to each managed node:

        site.pp
        
        node default {
        
        }
        
        node 'puppet.example.com' {
          # ...
        }
        
        node 'puppet-agent-ubuntu.example.com' {
        
          include accounts
        
          resources { 'firewall':
            purge => true,
          }
        
          Firewall {
            before        => Class['firewall::post'],
            require       => Class['firewall::pre'],
          }
        
          class { ['firewall::pre', 'firewall::post']: }
        
        }
        
        node 'puppet-agent-centos.example.com' {
        
          include accounts
        
          resources { 'firewall':
            purge => true,
          }
        
          Firewall {
            before        => Class['firewall::post'],
            require       => Class['firewall::pre'],
          }
        
          class { ['firewall::pre', 'firewall::post']: }
        
        }
      3. By default, the Puppet agent service on your managed nodes will automatically check with the master once every 30 minutes and apply any new configurations from the master. You can also manually invoke the Puppet agent process between automatic agent runs.

        Log in to each managed node (as root) and run the Puppet agent:

        /opt/puppetlabs/bin/puppet agent -t
        
      4. To check that the Puppet agent worked, log in to each managed node as the limited user created by the accounts module and inspect the node’s firewall rules with sudo iptables -L. The rules should resemble the ruleset shown for the Puppet master above.

      Congratulations! You’ve successfully installed Puppet on a master and two managed nodes. Now that you’ve confirmed everything is working, you can create additional modules to automate configuration management on your nodes. For more information, review Puppet’s open source documentation. You can also install and use modules others have created on the Puppet Forge.
