
      How To Install and Configure an Apache ZooKeeper Cluster on Ubuntu 18.04


      The author selected Wikimedia Foundation Inc. to receive a donation as part of the Write for DOnations program.

      Introduction

      Apache ZooKeeper is open-source software that enables resilient and highly reliable distributed coordination. It is commonly used in distributed systems to manage configuration information, naming services, distributed synchronization, quorum, and state. In addition, distributed systems rely on ZooKeeper to implement consensus, leader election, and group management.

      In this guide, you will install and configure Apache ZooKeeper 3.4.13 on Ubuntu 18.04. To achieve resilience and high availability, ZooKeeper is intended to be replicated over a set of hosts, called an ensemble. First, you will create a standalone installation of a single-node ZooKeeper server and then add in details for setting up a multi-node cluster. The standalone installation is useful in development and testing environments, but a cluster is the most practical solution for production environments.

      Prerequisites

      Before you begin this installation and configuration guide, you’ll need the following:

      • The standalone installation needs one Ubuntu 18.04 server with a minimum of 4GB of RAM set up by following the Ubuntu 18.04 initial server setup guide, including a non-root user with sudo privileges and a firewall. You need two additional servers, set up by following the same steps, for the multi-node cluster.
      • OpenJDK 8 installed on your server, as ZooKeeper requires Java to run. To do this, follow the “Install Specific Versions of OpenJDK” step from the How To Install Java with `apt` on Ubuntu 18.04 guide.

      Because ZooKeeper keeps data in memory to achieve high throughput and low latency, production systems work best with 8GB of RAM. Lower amounts of RAM may lead to JVM swapping, which could cause ZooKeeper server latency. High ZooKeeper server latency could result in issues like client session timeouts that would have an adverse impact on system functionality.

      Step 1 — Creating a User for ZooKeeper

      A dedicated user should run services that handle requests over a network and consume resources. This practice creates segregation and control that will improve your environment’s security and manageability. In this step, you’ll create a non-root sudo user, named zk in this tutorial, to run the ZooKeeper service.

      First, log in as the non-root sudo user that you created in the prerequisites.

      ssh sammy@your_server_ip
      

      Create the user that will run the ZooKeeper service:
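      • sudo useradd zk -m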

      Passing the -m flag to the useradd command will create a home directory for this user. The home directory for zk will be /home/zk by default.

      Set bash as the default shell for the zk user:

      • sudo usermod --shell /bin/bash zk

      Set a password for this user:
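      • sudo passwd zk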

      Next, you will add the zk user to the sudo group so it can run commands in a privileged mode:
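      • sudo usermod -aG sudo zk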

      In terms of security, it is recommended that you allow SSH access to as few users as possible. Logging in remotely as sammy and then using su to switch to the desired user creates a level of separation between credentials for accessing the system and running processes. You will disable SSH access for both your zk and root user in this step.

      Open your sshd_config file:

      • sudo nano /etc/ssh/sshd_config

      Locate the PermitRootLogin line and set the value to no to disable SSH access for the root user:

      /etc/ssh/sshd_config

      PermitRootLogin no
      

      Under the PermitRootLogin value, add a DenyUsers line and set the value as any user who should have SSH access disabled:

      /etc/ssh/sshd_config

      DenyUsers zk
      

      Save and exit the file and then restart the SSH daemon to activate the changes.

      • sudo systemctl restart sshd

      Switch to the zk user:
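      • su -l zk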

      The -l flag invokes a login shell after switching users. A login shell resets environment variables and provides a clean start for the user.

      Enter the password at the prompt to authenticate the user.

      Now that you have created, configured, and logged in as the zk user, you will create a directory to store your ZooKeeper data.

      Step 2 — Creating a Data Directory for ZooKeeper

      ZooKeeper persists all configuration and state data to disk so it can survive a reboot. In this step, you will create a data directory that ZooKeeper will use to read and write data. You can create the data directory on the local filesystem or on a remote storage drive. This tutorial will focus on creating the data directory on your local filesystem.

      Create a directory for ZooKeeper to use:

      • sudo mkdir -p /data/zookeeper

      Grant your zk user ownership to the directory:

      • sudo chown zk:zk /data/zookeeper

      chown changes the ownership and group of the /data/zookeeper directory so that the user zk, who belongs to the group zk, owns the data directory.

      You have successfully created and configured the data directory. When you move on to configure ZooKeeper, you will specify this path as the data directory that ZooKeeper will use to store its files.

      Step 3 — Downloading and Extracting the ZooKeeper Binaries

      In this step, you will manually download and extract the ZooKeeper binaries to the /opt directory. You can use the Advanced Packaging Tool, apt, to download ZooKeeper, but it may install an older version with different features. Installing ZooKeeper manually will give you full control to choose which version you would like to use.

      Since you are downloading these files manually, start by changing to the /opt directory:
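      • cd /opt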

      From your local machine, navigate to the Apache download page. This page will automatically provide you with the mirror closest to you for the fastest download. Click the link to the suggested mirror site, then scroll down and click zookeeper/ to view the available releases. Select the version of ZooKeeper that you would like to install. This tutorial will focus on using 3.4.13. Once you select the version, right click the binary file ending with .tar.gz and copy the link address.

      From your server, use the wget command along with the copied link to download the ZooKeeper binaries:

      • sudo wget http://apache.osuosl.org/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz

      Extract the binaries from the compressed archive:

      • sudo tar -xvf zookeeper-3.4.13.tar.gz

      The .tar.gz extension represents a combination of TAR packaging followed by a GNU zip (gzip) compression. You will notice that you passed the flag -xvf to the command to extract the archive. The flag x stands for extract, v enables verbose mode to show the extraction progress, and f allows specifying the input, in our case zookeeper-3.4.13.tar.gz, as opposed to STDIN.

      Next, give the zk user ownership of the extracted binaries so that it can run the executables. You can change ownership like so:

      • sudo chown zk:zk -R zookeeper-3.4.13

      Next, you will configure a symbolic link to ensure that your ZooKeeper directory will remain relevant across updates. You can also use symbolic links to shorten directory names, which can lessen the time it takes to set up your configuration files.

      Create a symbolic link using the ln command.

      • sudo ln -s zookeeper-3.4.13 zookeeper

      Change the ownership of that link to zk:zk. Notice that you have passed a -h flag to change the ownership of the link itself. Not specifying -h changes the ownership of the target of the link, which you explicitly did in the previous step.

      • sudo chown -h zk:zk zookeeper

      With the symbolic links created, your directory paths in the configurations will remain relevant and unchanged through future upgrades. You can now configure ZooKeeper.

      Step 4 — Configuring ZooKeeper

      Now that you've set up your environment, you are ready to configure ZooKeeper.

      The configuration file will live in the /opt/zookeeper/conf directory. This directory contains a sample configuration file that comes with the ZooKeeper distribution. This sample file, named zoo_sample.cfg, contains the most common configuration parameter definitions and sample values for these parameters. Some of the common parameters are as follows:

      • tickTime: Sets the length of a tick in milliseconds. A tick is a time unit used by ZooKeeper to measure the length between heartbeats. Minimum session timeouts are twice the tickTime.
      • dataDir: Specifies the directory used to store snapshots of the in-memory database and the transaction log for updates. You could choose to specify a separate directory for transaction logs.
      • clientPort: The port used to listen for client connections.
      • maxClientCnxns: Limits the maximum number of client connections.

      Create a configuration file named zoo.cfg at /opt/zookeeper/conf. You can create and open a file using nano or your favorite editor:

      • nano /opt/zookeeper/conf/zoo.cfg

      Add the following set of properties and values to that file:

      /opt/zookeeper/conf/zoo.cfg

      tickTime=2000
      dataDir=/data/zookeeper
      clientPort=2181
      maxClientCnxns=60
      

      A tickTime of 2000 milliseconds is the suggested interval between heartbeats. A shorter interval could lead to system overhead with limited benefits. The dataDir parameter points to the path defined by the symbolic link you created in the previous section. Conventionally, ZooKeeper uses port 2181 to listen for client connections. In most situations, 60 allowed client connections are plenty for development and testing.

      Save the file and exit the editor.

      You have configured ZooKeeper and are ready to start the server.

      Step 5 — Starting ZooKeeper and Testing the Standalone Installation

      You've configured all the components needed to run ZooKeeper. In this step, you will start the ZooKeeper service and test your configuration by connecting to the service locally.

      Navigate back to the /opt/zookeeper directory.
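      • cd /opt/zookeeper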

      Start ZooKeeper with the zkServer.sh command.
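      • bin/zkServer.sh start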

      You will see the following on your standard output:

      Output

      ZooKeeper JMX enabled by default
      Using config: /opt/zookeeper/bin/../conf/zoo.cfg
      Starting zookeeper ... STARTED

      Connect to the local ZooKeeper server with the following command:

      • bin/zkCli.sh -server 127.0.0.1:2181

      You will get a prompt with the label CONNECTED. This confirms that you have a successful local, standalone ZooKeeper installation. If you encounter errors, you will want to verify that the configuration is correct.

      Output

      Connecting to 127.0.0.1:2181
      ...
      ...
      [zk: 127.0.0.1:2181(CONNECTED) 0]

      Type help on this prompt to get a list of commands that you can execute from the client. The output will be as follows:

      Output

      [zk: 127.0.0.1:2181(CONNECTED) 0] help
      ZooKeeper -server host:port cmd args
          stat path [watch]
          set path data [version]
          ls path [watch]
          delquota [-n|-b] path
          ls2 path [watch]
          setAcl path acl
          setquota -n|-b val path
          history
          redo cmdno
          printwatches on|off
          delete path [version]
          sync path
          listquota path
          rmr path
          get path [watch]
          create [-s] [-e] path data acl
          addauth scheme auth
          quit
          getAcl path
          close
          connect host:port

      After you've done some testing, close the client session by typing quit at the prompt. The ZooKeeper service will continue running after you close the client session. Shut down the ZooKeeper service, as you'll configure it as a systemd service in the next step:
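      • bin/zkServer.sh stop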

      You have now installed, configured, and tested a standalone ZooKeeper service. This setup is useful to familiarize yourself with ZooKeeper, but is also helpful for developmental and testing environments. Now that you know the configuration works, you will configure systemd to simplify the management of your ZooKeeper service.

      Step 6 — Creating and Using a Systemd Unit File

      systemd, the system and service manager, is an init system used to bootstrap the user space and to manage system processes after boot. Using systemd, you can create a service unit for starting ZooKeeper and checking its status.

      Systemd Essentials is a great introductory resource for learning more about systemd and its constituent components.

      Use your editor to create a .service file named zk.service at /etc/systemd/system/.

      • sudo nano /etc/systemd/system/zk.service

      Add the following lines to the file to define the ZooKeeper Service:

      /etc/systemd/system/zk.service

      [Unit]
      Description=Zookeeper Daemon
      Documentation=http://zookeeper.apache.org
      Requires=network.target
      After=network.target
      
      [Service]    
      Type=forking
      WorkingDirectory=/opt/zookeeper
      User=zk
      Group=zk
      ExecStart=/opt/zookeeper/bin/zkServer.sh start /opt/zookeeper/conf/zoo.cfg
      ExecStop=/opt/zookeeper/bin/zkServer.sh stop /opt/zookeeper/conf/zoo.cfg
      ExecReload=/opt/zookeeper/bin/zkServer.sh restart /opt/zookeeper/conf/zoo.cfg
      TimeoutSec=30
      Restart=on-failure
      
      [Install]
      WantedBy=default.target
      

      The Service section in the unit file configuration specifies the working directory, the user under which the service would run, and the executable commands to start, stop, and restart the ZooKeeper service. For additional information on all the unit file configuration options, you can read the Understanding Systemd Units and Unit Files article.

      Save the file and exit the editor.

      Now that your systemd configuration is in place, you can start the service:
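      • sudo systemctl start zk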

      Once you've confirmed that your systemd file can successfully start the service, you will enable the service to start on boot.
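      • sudo systemctl enable zk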

      This output confirms the creation of the symbolic link:

      Output

      Created symlink /etc/systemd/system/multi-user.target.wants/zk.service → /etc/systemd/system/zk.service.

      Check the status of the ZooKeeper service using:
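      • sudo systemctl status zk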

      Stop the ZooKeeper service using systemctl.
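      • sudo systemctl stop zk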

      Finally, to restart the daemon, use the following command:

      • sudo systemctl restart zk

      The systemd mechanism is becoming the init system of choice on many Linux distributions. Now that you've configured systemd to manage ZooKeeper, you can leverage this fast and flexible init model to start, stop, and restart the ZooKeeper service.

      Step 7 — Configuring a Multi-Node ZooKeeper Cluster

      While the standalone ZooKeeper server is useful for development and testing, every production environment should have a replicated multi-node cluster.

      Nodes in the ZooKeeper cluster that work together as an application form a quorum. Quorum refers to the minimum number of nodes that need to agree on a transaction before it's committed. A quorum needs an odd number of nodes so that it can establish a majority. An even number of nodes may result in a tie, which would mean the nodes would not reach a majority or consensus.
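      For example, a three-node ensemble can lose one node and still maintain a majority, while a five-node ensemble can tolerate the loss of two nodes.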

      In a production environment, you should run each ZooKeeper node on a separate host. This prevents service disruption due to host hardware failure or reboots. This is an important and necessary architectural consideration for building a resilient and highly available distributed system.

      In this tutorial, you will install and configure three nodes in the quorum to demonstrate a multi-node setup. Before you configure a three-node cluster, you will spin up two additional servers with the same configuration as your standalone ZooKeeper installation. Ensure that the two additional nodes meet the prerequisites, and then follow steps one through six to set up a running ZooKeeper instance.

      Once you've followed steps one through six for the new nodes, open zoo.cfg in the editor on each node.

      • sudo nano /opt/zookeeper/conf/zoo.cfg

      All nodes in a quorum will need the same configuration file. In your zoo.cfg file on each of the three nodes, add the additional configuration parameters and values for initLimit, syncLimit, and the servers in the quorum, at the end of the file.

      /opt/zookeeper/conf/zoo.cfg

      tickTime=2000
      dataDir=/data/zookeeper
      clientPort=2181
      maxClientCnxns=60
      initLimit=10
      syncLimit=5
      server.1=your_zookeeper_node_1:2888:3888
      server.2=your_zookeeper_node_2:2888:3888
      server.3=your_zookeeper_node_3:2888:3888
      

      initLimit specifies the time that the initial synchronization phase can take. This is the time within which each of the nodes in the quorum needs to connect to the leader. syncLimit specifies the time that can pass between sending a request and receiving an acknowledgment. This is the maximum time nodes can be out of sync from the leader. ZooKeeper nodes use a pair of ports, :2888 and :3888, for follower nodes to connect to the leader node and for leader election, respectively.

      Once you've updated the file on each node, you will save and exit the editor.

      To complete your multi-node configuration, you will specify a node ID on each of the servers. To do this, you will create a myid file on each node. Each file will contain a number that correlates to the server number assigned in the configuration file.

      On your_zookeeper_node_1, create the myid file that will specify the node ID:

      • sudo nano /data/zookeeper/myid

      Since your_zookeeper_node_1 is identified as server.1, you will enter 1 to define the node ID. After adding the value, your file will look like this:

      your_zookeeper_node_1 /data/zookeeper/myid

      1

      Follow the same steps for the remaining nodes. The myid file on each node should be as follows:

      your_zookeeper_node_1 /data/zookeeper/myid

      1

      your_zookeeper_node_2 /data/zookeeper/myid

      2

      your_zookeeper_node_3 /data/zookeeper/myid

      3

      You have now configured a three-node ZooKeeper cluster. Next, you will run the cluster and test your installation.

      Step 8 — Running and Testing the Multi-Node Installation

      With each node configured to work as a cluster, you are ready to start a quorum. In this step, you will start the quorum on each node and then test your cluster by creating sample data in ZooKeeper.

      To start a quorum node, first change to the /opt/zookeeper directory on each node:
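      • cd /opt/zookeeper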

      Start each node with the following command:

      • java -cp zookeeper-3.4.13.jar:lib/log4j-1.2.17.jar:lib/slf4j-log4j12-1.7.25.jar:lib/slf4j-api-1.7.25.jar:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo.cfg

      As nodes start up, you will intermittently see some connection errors followed by a stage where they join the quorum and elect a leader among themselves. After a few seconds of initialization, you can start testing your installation.

      Log in via SSH to your_zookeeper_node_3 as the non-root user you configured in the prerequisites:

      • ssh sammy@your_zookeeper_node_3

      Once logged in, switch to your zk user:

      • su -l zk

      Enter the password for the zk user. Once logged in, change the directory to /opt/zookeeper:

      • cd /opt/zookeeper

      You will now start a ZooKeeper command line client and connect to ZooKeeper on your_zookeeper_node_1:


      • bin/zkCli.sh -server your_zookeeper_node_1:2181

      In the standalone installation, both the client and server were running on the same host. This allowed you to establish a client connection with the ZooKeeper server using localhost. Since the client and server are running on different nodes in your multi-node cluster, in the previous step you needed to specify the IP address of your_zookeeper_node_1 to connect to it.

      You will see the familiar prompt with the CONNECTED label, similar to what you saw in Step 5.

      Next, you will create, list, and then delete a znode. The znodes are the fundamental abstractions in ZooKeeper that are analogous to files and directories on a file system. ZooKeeper maintains its data in a hierarchical namespace, and znodes are the data registers of this namespace.

      Testing that you can successfully create, list, and then delete a znode is essential to establishing that your ZooKeeper cluster is installed and configured correctly.

      Create a znode named zk_znode_1 and associate the string sample_data with it.

      • create /zk_znode_1 sample_data

      You will see the following output once created:

      Output

      Created /zk_znode_1

      List the newly created znode:
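      • ls /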

      Get the data associated with it:
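      • get /zk_znode_1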

      ZooKeeper will respond like so:

      Output

      [zk: your_zookeeper_node_1:2181(CONNECTED)] ls /
      [zk_znode_1, zookeeper]
      [zk: your_zookeeper_node_1:2181(CONNECTED)] get /zk_znode_1
      sample_data
      cZxid = 0x100000002
      ctime = Tue Nov 06 19:47:41 UTC 2018
      mZxid = 0x100000002
      mtime = Tue Nov 06 19:47:41 UTC 2018
      pZxid = 0x100000002
      cversion = 0
      dataVersion = 0
      aclVersion = 0
      ephemeralOwner = 0x0
      dataLength = 11
      numChildren = 0

      The output confirms the value, sample_data, that you associated with zk_znode_1. ZooKeeper also provides additional information about creation time, ctime, and modification time, mtime. ZooKeeper is a versioned data store, so it also presents you with metadata about the data version.

      Delete the zk_znode_1 znode:
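      • delete /zk_znode_1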

      In this step, you successfully tested connectivity between two of your ZooKeeper nodes. You also learned basic znode management by creating, listing, and deleting znodes. Your multi-node configuration is complete, and you are ready to start using ZooKeeper.

      Conclusion

      In this tutorial, you configured and tested both a standalone and multi-node ZooKeeper environment. Now that your multi-node ZooKeeper deployment is ready to use, you can review the official ZooKeeper documentation for additional information and projects.




      Configure SPF and DKIM With Postfix on Debian 9


      Updated and contributed by Linode

      This guide will instruct you on how to set up SPF and DKIM with Postfix.

      SPF (Sender Policy Framework) is a system that identifies to mail servers what hosts are allowed to send email for a given domain. Setting up SPF helps to prevent your email from being classified as spam.

      DKIM (DomainKeys Identified Mail) is a system that lets your official mail servers add a signature to headers of outgoing email and identifies your domain’s public key so other mail servers can verify the signature. As with SPF, DKIM helps keep your mail from being considered spam. It also lets mail servers detect when your mail has been tampered with in transit.

      DMARC (Domain Message Authentication, Reporting & Conformance) allows you to advertise to mail servers what your domain’s policies are regarding mail that fails SPF and/or DKIM validations. It additionally allows you to request reports on failed messages from receiving mail servers.

      The DNS instructions for setting up SPF, DKIM, and DMARC are generic. The instructions for configuring the SPF policy agent and OpenDKIM in Postfix should work on any distribution once you adjust the commands for your distribution's package tool and identify the exact path to the Unix socket file.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our Users and Groups guide.

      Caution

      You must already have Postfix installed, configured and working. Refer to the Linode Postfix Guides for assistance.

      Publishing an SPF DNS record without having the SPF policy agent configured within Postfix is safe; however, publishing DKIM DNS records without having OpenDKIM working correctly within Postfix can result in your email being discarded by the recipient’s email server.

      Install DKIM, SPF and Postfix

      1. Install the four required packages:

        apt-get install opendkim opendkim-tools postfix-policyd-spf-python postfix-pcre
        
      2. Add user postfix to the opendkim group so that Postfix can access OpenDKIM’s socket when it needs to:

        adduser postfix opendkim
        

      Set up SPF

      Add SPF records to DNS

      The value in an SPF DNS record will look something like the following examples. The full syntax is at the SPF record syntax page.

      Example 1 Allow mail from all hosts listed in the MX records for the domain:

      v=spf1 mx -all
      

      Example 2 Allow mail from a specific host:

      v=spf1 a:mail.example.com -all
      
      • The v=spf1 tag is required and has to be the first tag.

      • The last tag, -all, indicates that mail from your domain should come only from the servers identified in the SPF string; anything coming from any other source is forging your domain. An alternative is ~all, which indicates the same thing but tells mail servers to accept the message and flag it as forged instead of rejecting it outright. -all makes it harder for spammers to forge your domain successfully and is the recommended setting. ~all reduces the chances of email getting lost when an incorrect mail server is used to send it, so you can use it if you don’t want to take chances.

      The tags between v=spf1 and -all identify the eligible servers from which email from your domain can originate.

      • mx is a shorthand for all the hosts listed in MX records for your domain. If you’ve got a solitary mail server, mx is probably the best option. If you’ve got a backup mail server (a second MX record), using mx won’t cause any problems. Your backup mail server will be identified as an authorized source for email although it will probably never send any.

      • The a tag lets you identify a specific host by name or IP address, letting you specify which hosts are authorized. You’d use a if you wanted to prevent the backup mail server from sending outgoing mail or if you wanted to identify hosts other than your own mail server that could send mail from your domain (e.g., putting your ISP’s outgoing mail servers in the list so they’d be recognized when you had to send mail through them).

      For now, we’re going to stick with the mx version. It’s simpler and correct for most basic configurations, including those that handle multiple domains. To add the record, go to your DNS management interface and add a record of type TXT for your domain itself (i.e., a blank hostname) containing this string:

      v=spf1 mx -all
      

      If you’re using Linode’s DNS Manager, go to the domain zone page for the selected domain and add a new TXT record. The screen will look something like this once you’ve got it filled out:

      [Screenshot: Linode DNS Manager, adding the SPF TXT record]

      If your DNS provider allows it (DNS Manager doesn’t), you should also add a record of type SPF, filling it in the same way as you did the TXT record.

      Note

      The values for the DNS records above – and for the rest of this guide – are done in the style that Linode’s DNS Manager needs them to be in. If you’re using another provider, that respective system may require the values in a different style. For example freedns.afraid.org requires the values to be written in the style found in BIND zonefiles. Thus, the above SPF record’s value would need to be wrapped in double-quotes like this: "v=spf1 mx -all". You’ll need to consult your DNS provider’s documentation for the exact style required.

      Add the SPF policy agent to Postfix

      The Python SPF policy agent adds SPF policy-checking to Postfix. For incoming mail, the SPF record for the sender’s domain will be checked and, if it exists, mail will be handled accordingly. Perl has its own version, but it lacks the full capabilities of the Python policy agent.

      1. If you are using SpamAssassin to filter spam, you may want to edit /etc/postfix-policyd-spf-python/policyd-spf.conf to change the HELO_reject and Mail_From_reject settings to False. This edit will cause the SPF policy agent to run its tests and add a message header with the results in it while not rejecting any messages. You may also want to make this change if you want to see the results of the checks but not actually apply them to mail processing. Otherwise, just go with the standard settings.

      2. Edit /etc/postfix/master.cf and add the following entry at the end:

        /etc/postfix/master.cf
        policyd-spf  unix  -       n       n       -       0       spawn
            user=policyd-spf argv=/usr/bin/policyd-spf
      3. Open /etc/postfix/main.cf and add this entry to increase the Postfix policy agent timeout, which will prevent Postfix from aborting the agent if transactions run a bit slowly:

        /etc/postfix/main.cf
        policyd-spf_time_limit = 3600
      4. Edit the smtpd_recipient_restrictions entry to add a check_policy_service entry:

        /etc/postfix/main.cf
        smtpd_recipient_restrictions =
            ...
            reject_unauth_destination,
            check_policy_service unix:private/policyd-spf,
            ...

        Make sure to add the check_policy_service entry after the reject_unauth_destination entry to avoid having your system become an open relay. If reject_unauth_destination is the last item in your restrictions list, add the comma after it and omit the comma at the end of the check_policy_service item above.

      5. Restart Postfix:

        systemctl restart postfix
        

      You can check the operation of the policy agent by looking at raw headers on incoming email messages for the SPF results header. The header the policy agent adds to messages should look something like this:

      Received-SPF: Pass (sender SPF authorized) identity=mailfrom; client-ip=127.0.0.1; helo=mail.example.com; envelope-from=text@example.com; receiver=tknarr@silverglass.org
      

      This header indicates a successful check against the SPF policy of the sending domain. If you changed the policy agent settings in Step 1 to not reject mail that fails the SPF check, you may see Fail results in this header. You won’t see this header on outgoing or local mail.

      The SPF policy agent also logs to /var/log/mail.log. In the mail.log file you’ll see messages like this from the policy agent:

      Jan  7 06:24:44 arachnae policyd-spf[21065]: None; identity=helo; client-ip=127.0.0.1; helo=mail.example.com; envelope-from=test@example.com; receiver=tknarr@silverglass.org
      Jan  7 06:24:44 arachnae policyd-spf[21065]: Pass; identity=mailfrom; client-ip=127.0.0.1; helo=mail.example.com; envelope-from=test@example.com; receiver=tknarr@silverglass.org
      

      The first message is a check of the HELO command, in this case indicating that there wasn’t any SPF information matching the HELO (which is perfectly OK). The second message is the check against the envelope From address, and indicates the address passed the check and is coming from one of the outgoing mail servers the sender’s domain has said should be sending mail for that domain. There may be other statuses in the first field after the colon indicating failure, temporary or permanent errors and so on.

      Set up DKIM

      DKIM involves setting up the OpenDKIM package, hooking it into Postfix, and adding DNS records.

      Configure OpenDKIM

      1. The main OpenDKIM configuration file /etc/opendkim.conf needs to look like this:

        /etc/opendkim.conf
        # This is a basic configuration that can easily be adapted to suit a standard
        # installation. For more advanced options, see opendkim.conf(5) and/or
        # /usr/share/doc/opendkim/examples/opendkim.conf.sample.
        
        # Log to syslog
        Syslog          yes
        # Required to use local socket with MTAs that access the socket as a non-
        # privileged user (e.g. Postfix)
        UMask           002
        # OpenDKIM user
        # Remember to add user postfix to group opendkim
        UserID          opendkim
        
        # Map domains in From addresses to keys used to sign messages
        KeyTable        /etc/opendkim/key.table
        SigningTable        refile:/etc/opendkim/signing.table
        
        # Hosts to ignore when verifying signatures
        ExternalIgnoreList  /etc/opendkim/trusted.hosts
        InternalHosts       /etc/opendkim/trusted.hosts
        
        # Commonly-used options; the commented-out versions show the defaults.
        Canonicalization    relaxed/simple
        Mode            sv
        SubDomains      no
        #ADSPAction     continue
        AutoRestart     yes
        AutoRestartRate     10/1M
        Background      yes
        DNSTimeout      5
        SignatureAlgorithm  rsa-sha256
        
        # Always oversign From (sign using actual From and a null From to prevent
        # malicious signatures header fields (From and/or others) between the signer
        # and the verifier.  From is oversigned by default in the Debian package
        # because it is often the identity key used by reputation systems and thus
        # somewhat security sensitive.
        OversignHeaders     From
        
        # Define the location of the Socket and PID files
        Socket              local:/var/spool/postfix/opendkim/opendkim.sock
        PidFile             /var/run/opendkim/opendkim.pid

        Edit /etc/opendkim.conf and replace its contents with the above.

      2. Ensure that file permissions are set correctly:

        chmod u=rw,go=r /etc/opendkim.conf
        
      3. Create the directories to hold OpenDKIM’s data files, assign ownership to the opendkim user, and restrict the file permissions:

        mkdir /etc/opendkim
        mkdir /etc/opendkim/keys
        chown -R opendkim:opendkim /etc/opendkim
        chmod go-rw /etc/opendkim/keys
        
      4. Create the signing table /etc/opendkim/signing.table. It needs to have one line per domain that you handle email for. Each line should look like this:

        /etc/opendkim/signing.table
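        *@example.com     example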

        Replace example.com with your domain and example with a short name for the domain. The first field is a pattern that matches e-mail addresses. The second field is a name for the key table entry that should be used to sign mail from that address. For simplicity’s sake, we’re going to set up one key for all addresses in a domain.

      5. Create the key table /etc/opendkim/key.table. It needs to have one line per short domain name in the signing table. Each line should look like this:

        /etc/opendkim/key.table
        example     example.com:YYYYMM:/etc/opendkim/keys/example.private

        Replace example with the example value you used for the domain in the signing table (make sure to catch the second occurrence at the end, where it’s followed by .private). Replace example.com with your domain name and replace the YYYYMM with the current 4-digit year and 2-digit month (this is referred to as the selector). The first field connects the signing and key tables.

        The second field is broken down into 3 sections separated by colons.

        • The first section is the domain name for which the key is used.
        • The second section is a selector used when looking up key records in DNS.
        • The third section names the file containing the signing key for the domain.

        Note

        The flow for DKIM lookup starts with the sender’s address. The signing table is scanned until an entry whose pattern (first item) matches the address is found. Then, the second item’s value is used to locate the entry in the key table whose key information will be used. For incoming mail the domain and selector are then used to find the public key TXT record in DNS and that public key is used to validate the signature. For outgoing mail the private key is read from the named file and used to generate the signature on the message.

      6. Create the trusted hosts file /etc/opendkim/trusted.hosts. Its contents need to be:

        /etc/opendkim/trusted.hosts
        127.0.0.1
        ::1
        localhost
        myhostname
        myhostname.example.com
        example.com

        When creating the file, change myhostname to the name of your server and replace example.com with your own domain name. We’re identifying the hosts that users will be submitting mail through and should have outgoing mail signed, which for basic configurations will be your own mail server.

      7. Make sure the ownership and permissions on /etc/opendkim and its contents are correct (opendkim should own everything, and the keys directory should only be accessible by the owner) by running the following commands:

        chown -R opendkim:opendkim /etc/opendkim
        chmod -R go-rwx /etc/opendkim/keys
        
      8. Generate keys for each domain:

        opendkim-genkey -b 2048 -h rsa-sha256 -r -s YYYYMM -d example.com -v
        

        Replace YYYYMM with the current year and month as in the key table. This will give you two files, YYYYMM.private containing the key and YYYYMM.txt containing the TXT record you’ll need to set up DNS. Rename the files so they have names matching the third section of the second field of the key table for the domain:

        mv YYYYMM.private example.private
        mv YYYYMM.txt example.txt
        

        Repeat the commands in this step for every entry in the key table. The -b 2048 indicates the number of bits in the RSA key pair used for signing and verification. 1024 bits is the minimum, but with modern hardware 2048 bits is safer. (It’s possible 4096 bits will be required at some point.)

      9. Make sure the ownership, permissions and contents on /etc/opendkim are correct by running the following commands:

        cd /etc
        chown -R opendkim:opendkim /etc/opendkim
        chmod -R go-rw /etc/opendkim/keys
        
      10. Check that OpenDKIM starts correctly:

        systemctl restart opendkim
        

        You should not get error messages, but if you do, use:

        systemctl status -l opendkim
        

        to get the status and untruncated error messages.

      Set up DNS

      As with SPF, DKIM uses TXT records to hold information about the signing key for each domain. Using YYYYMM as above, you need to make a TXT record for the host YYYYMM._domainkey for each domain you handle mail for. Its value can be found in the example.txt file for the domain. Those files look like this:

      example.txt
      
      201510._domainkey  IN  TXT ( "v=DKIM1; h=rsa-sha256; k=rsa; s=email; "
          "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu5oIUrFDWZK7F4thFxpZa2or6jBEX3cSL6b2TJdPkO5iNn9vHNXhNX31nOefN8FksX94YbLJ8NHcFPbaZTW8R2HthYxRaCyqodxlLHibg8aHdfa+bxKeiI/xABRuAM0WG0JEDSyakMFqIO40ghj/h7DUc/4OXNdeQhrKDTlgf2bd+FjpJ3bNAFcMYa3Oeju33b2Tp+PdtqIwXR"
          "ZksfuXh7m30kuyavp3Uaso145DRBaJZA55lNxmHWMgMjO+YjNeuR6j4oQqyGwzPaVcSdOG8Js2mXt+J3Hr+nNmJGxZUUW4Uw5ws08wT9opRgSpn+ThX2d1AgQePpGrWOamC3PdcwIDAQAB" )  ; ----- DKIM key 201510 for example.com

      The value inside the parentheses is what you want. Select and copy the entire region from (but not including) the double-quote before v=DKIM1 on up to (but not including) the final double-quote before the closing parentheses. Then edit out the double-quotes within the copied text and the whitespace between them. Also change h=rsa-sha256 to h=sha256. From the above file the result would be:

      example-copied.txt
      v=DKIM1; h=sha256; k=rsa; s=email; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu5oIUrFDWZK7F4thFxpZa2or6jBEX3cSL6b2TJdPkO5iNn9vHNXhNX31nOefN8FksX94YbLJ8NHcFPbaZTW8R2HthYxRaCyqodxlLHibg8aHdfa+bxKeiI/xABRuAM0WG0JEDSyakMFqIO40ghj/h7DUc/4OXNdeQhrKDTlgf2bd+FjpJ3bNAFcMYa3Oeju33b2Tp+PdtqIwXRZksfuXh7m30kuyavp3Uaso145DRBaJZA55lNxmHWMgMjO+YjNeuR6j4oQqyGwzPaVcSdOG8Js2mXt+J3Hr+nNmJGxZUUW4Uw5ws08wT9opRgSpn+ThX2d1AgQePpGrWOamC3PdcwIDAQAB

      Paste that into the value for the TXT record.

      If you’re using Linode’s DNS manager, this is what the add TXT record screen will look like when you have it filled out:

      [Screenshot: Linode DNS Manager, adding the DKIM TXT record]

      Repeat this for every domain you handle mail for, using the .txt file for that domain.

      Test your configuration

      Test the keys for correct signing and verification using the opendkim-testkey command:

      opendkim-testkey -d example.com -s YYYYMM
      

      If everything is OK you shouldn’t get any output. If you want to see more information, add -vvv to the end of the command. That produces verbose debugging output. The last message should be “key OK”. Just before that you may see a “key not secure” message. That’s normal and doesn’t signal an error, it just means your domain isn’t set up for DNSSEC yet.

      Hook OpenDKIM into Postfix

      1. Create the OpenDKIM socket directory in Postfix’s work area and make sure it has the correct ownership:

        mkdir /var/spool/postfix/opendkim
        chown opendkim:postfix /var/spool/postfix/opendkim
        
      2. Set the correct socket for Postfix in the OpenDKIM defaults file /etc/default/opendkim:

        /etc/default/opendkim
        # Command-line options specified here will override the contents of
        # /etc/opendkim.conf. See opendkim(8) for a complete list of options.
        #DAEMON_OPTS=""
        #
        # Uncomment to specify an alternate socket
        # Note that setting this will override any Socket value in opendkim.conf
        SOCKET="local:/var/spool/postfix/opendkim/opendkim.sock"
        #SOCKET="inet:54321" # listen on all interfaces on port 54321
        #SOCKET="inet:12345@localhost" # listen on loopback on port 12345
        #SOCKET="inet:12345@192.0.2.1" # listen on 192.0.2.1 on port 12345

        Uncomment the first SOCKET line and edit it so it matches the uncommented line in the above file. The path to the socket is different from the default because on Debian 9 the Postfix process that handles mail runs in a chroot jail and can’t access the normal location.

      3. Edit /etc/postfix/main.cf and add a section to activate processing of e-mail through the OpenDKIM daemon:

        /etc/postfix/main.cf
        # Milter configuration
        # OpenDKIM
        milter_default_action = accept
        # Postfix ≥ 2.6 milter_protocol = 6, Postfix ≤ 2.5 milter_protocol = 2
        milter_protocol = 6
        smtpd_milters = local:opendkim/opendkim.sock
        non_smtpd_milters = local:opendkim/opendkim.sock

        You can put this anywhere in the file. The usual practice is to put it after the smtpd_recipient_restrictions entry. You’ll notice the path to the socket isn’t the same here as it was in the /etc/default/opendkim file. That’s because of Postfix’s chroot jail: the path here is the path within that restricted view of the filesystem rather than within the actual filesystem.

      4. Restart the OpenDKIM daemon so it sets up the correct socket for Postfix:

        systemctl restart opendkim
        
      5. Restart Postfix so it starts using OpenDKIM when processing mail:

        systemctl restart postfix
        

      Verify that everything’s fully operational

      The easiest way to verify that everything’s working is to send a test e-mail to check-auth@verifier.port25.com using an email client configured to submit mail to the submission port on your mail server. It will analyze your message and mail you a report indicating whether your email was signed correctly or not. It also reports on a number of other things such as SPF configuration and SpamAssassin flagging of your domain. If there’s a problem, it’ll report what the problem was.

      Optional: Set up Author Domain Signing Practices (ADSP)

      As a final item, you can add an ADSP policy to your domain saying that all emails from your domain should be DKIM-signed. As usual, it’s done with a TXT record for host _adsp._domainkey in your domain with a value of dkim=all. If you’re using Linode’s DNS Manager, the screen for the new text record will look like this:

      [Screenshot: Linode DNS Manager, adding the ADSP TXT record]

      You don’t need to set this up, but doing so makes it harder for anyone to forge email from your domains because recipient mail servers will see the lack of a DKIM signature and reject the message.

      Optional: Set up Domain Message Authentication, Reporting & Conformance (DMARC)

      The DMARC DNS record can be added to advise mail servers what you think they should do with emails claiming to be from your domain that fail validation with SPF and/or DKIM. DMARC also allows you to request reports about mail that fails to pass one or more validation check. DMARC should only be set up if you have SPF and DKIM set up and operating successfully. If you add the DMARC DNS record without having both SPF and DKIM in place, messages from your domain will fail validation which may cause them to be discarded or relegated to a spam folder.

      The DMARC record is a TXT record for host _dmarc in your domain containing the following recommended values:

      v=DMARC1;p=quarantine;sp=quarantine;adkim=r;aspf=r
      

      This requests mail servers to quarantine (do not discard, but separate from regular messages) any email that fails either SPF or DKIM checks. No reporting is requested. Very few mail servers implement the software to generate reports on failed messages, so it is often unnecessary to request them. If you do wish to request reports, the value would be similar to this example, added as a single string:

      v=DMARC1;p=quarantine;sp=quarantine;adkim=r;aspf=r;fo=1;rf=afrf;rua=mailto:user@example.com
      

      Replace user@example.com in the mailto: URL with your own email or an email address you own dedicated to receiving reports (an address such as dmarc@example.com). This requests aggregated reports in XML showing how many messages fell into each combination of pass and fail results and the mail server addresses sending them. If you’re using Linode’s DNS Manager, the screen for the new text record will look like this:

      [Screenshot: Linode DNS Manager, adding the DMARC TXT record]

      DMARC records have a number of available tags and options. These tags are used to control your authentication settings:

      • v specifies the protocol version, in this case DMARC1.
      • p determines the policy for the root domain, such as “example.com.” The available options:
        • quarantine instructs that if an email fails validation, the recipient should set it aside for processing.
        • reject requests that the receiving mail server reject the emails that fail validation.
        • none requests that the receiver take no action if an email does not pass validation.
      • sp determines the policy for subdomains, such as “subdomain.example.com.” It takes the same arguments as the p tag.
      • adkim specifies the alignment mode for DKIM, which determines how strictly DKIM records are validated. The available options are:
        • r relaxed alignment mode, DKIM authentication is less strictly enforced.
        • s strict alignment mode. Only an exact match with the DKIM entry for the root domain will be seen as validated.
      • aspf determines the alignment mode for SPF verification. It takes the same arguments as adkim.

      If you wish to receive authentication failure reports, DMARC provides a number of configuration options. You can use the following tags to customize the formatting of your reports, as well as the criteria for report creation.

      • rua specifies the email address that will receive aggregate reports. This uses the mailto:user@example.com syntax, and accepts multiple addresses separated by commas. Aggregate reports are usually generated once per day.
      • ruf specifies the email address that will receive detailed authentication failure reports. This takes the same arguments as rua. With this option, each authentication failure would result in a separate report.
      • fo allows you to specify which failed authentication methods will be reported. One or more of the following options can be used:
        • 0 will request a report if all authentication methods fail. For example, if an SPF check were to fail but DKIM authentication was successful, a report would not be sent.
        • 1 requests a report if any authentication check fails.
        • d requests a report if a DKIM check fails.
        • s requests a report if an SPF check fails.
      • rf determines the format used for authentication failure reports. Available options:
        • afrf uses the Abuse Report format as defined by RFC 5965.
        • iodef uses the Incident Object Description Exchange format as defined by RFC 5070.

      Key rotation

      The reason the YYYYMM format is used for the selector is that best practice calls for changing the DKIM signing keys every so often (monthly is recommended, and no longer than every 6 months). To do that without disrupting messages in transit, you generate the new keys using a new selector. The process is:

      1. Generate new keys as in step 8 of Configure OpenDKIM. Do this in a scratch directory, not directly in /etc/opendkim/keys. Use the current year and month for the YYYYMM selector value, so it’s different from the selector currently in use.

      2. Use the newly-generated .txt files to add the new keys to DNS as in the DKIM Set Up DNS section, using the new YYYYMM selector in the host names. Don’t remove or alter the existing DKIM TXT records. Once this is done, verify the new key data using the following command (replacing example.com, example and YYYYMM with the appropriate values):

        opendkim-testkey -d example.com -s YYYYMM -k example.private
        

        Add the -vvv switch to get debugging output if you need it to diagnose any problems. Correct any problems before proceeding: switching to the new private key file and selector before opendkim-testkey indicates a successful verification will cause problems with your email, including non-receipt of messages.

      3. Stop Postfix and OpenDKIM with systemctl stop postfix opendkim so that they won’t be processing mail while you’re changing out keys.

      4. Copy the newly-generated .private files into place and make sure their ownership and permissions are correct by running these commands from the directory in which you generated the key files:

        cp *.private /etc/opendkim/keys/
        chown opendkim:opendkim /etc/opendkim/keys/*
        chmod go-rw /etc/opendkim/keys/*
        

        Use the opendkim-testkey command as described above to ensure that your new record is propagated before you continue.

      5. Edit /etc/opendkim/key.table and change the old YYYYMM values to the new selector, reflecting the current year and month. Save the file.

      6. Restart OpenDKIM and Postfix by:

        systemctl start opendkim
        systemctl start postfix
        

        Make sure they both start without any errors.

      7. After a couple of weeks, all email in transit should either have been delivered or bounced and the old DKIM key information in DNS won’t be needed anymore. Delete the old YYYYMM._domainkey TXT records in each of your domains, leaving just the newest ones (most recent year and month). Don’t worry if you forget and leave the old keys around longer than planned. There’s no security issue. Removing the obsolete records is more a matter of keeping things neat and tidy than anything else.


      This guide is published under a CC BY-ND 4.0 license.




      Configure Apache with Salt Stack


      Updated and written by Linode

      Salt is a powerful configuration management tool. In this guide you will create Salt state files that are capable of installing and configuring Apache on Ubuntu 18.04, Debian 9, or CentOS 7.

      Before You Begin

      You will need at least two Linodes with Salt installed. If you have not already, read our Getting Started with Salt – Basic Installation and Setup Guide and follow the instructions for setting up a Salt master and minion.

      The following steps will be performed on your Salt master.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our Users and Groups guide.

      Setting Up Your Salt Master and Managed Files

      Salt Master SLS Files

      1. Create the /srv/salt directory if it does not already exist:

        mkdir /srv/salt
        
      2. Create a Salt top file in /srv/salt that will be Salt’s entry point to the Apache configuration:

        /srv/salt/top.sls
        base:
          'G@os_family:Debian':
            - match: compound
            - apache-debian
        
          'G@os:CentOS':
            - match: compound
            - apache-centos

        This top file uses compound matching to target your minions by operating system using Salt Grains. This allows Salt to choose the appropriate Apache configuration depending on the Linux distribution. These matchers could be extended to be even more specific. For instance, if you wanted to target only minions running Ubuntu whose IDs begin with web, you could use the compound matcher web* and G@os:Ubuntu, as shown in the sketch below.
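
        For illustration only, such a narrower top file entry might look like the following. This is a sketch, not part of this guide's configuration, and it assumes your web server minion IDs begin with web:

        base:
          'web* and G@os:Ubuntu':
            - match: compound
            - apache-debian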

      Pillar Files

      1. Create the /srv/pillar directory if it does not already exist:

        mkdir /srv/pillar
        
      2. Create a Pillar top file. This top file references the apache.sls Pillar file that you will create in the next step:

        /srv/pillar/top.sls
        base:
          '*':
            - apache
      3. Create the apache.sls file that was referenced in the previous step. This file defines Pillar data that will be used inside our Apache state file in the next section, in this case your domain name. Replace example.com with your domain:

        /srv/pillar/apache.sls

      Website Files

      1. Create a directory for your website files in the /srv/salt directory. Replace example.com with your website domain name:

        mkdir /srv/salt/example.com
        

        This directory will be accessible from your Salt state files at salt://example.com.

      2. Create an index.html file for your website in the /srv/salt/example.com directory, substituting example.com for the folder name you chose in the previous step. You will use this file as a test to make sure your website is functioning correctly.

        /srv/salt/example.com/index.html
        <html>
          <body>
            <h1>Server Up and Running!</h1>
          </body>
        </html>

      Configuration Files

      1. Create a folder for your additional configuration files at /srv/salt/files. These files will be accessible at salt://files.

        mkdir /srv/salt/files
        
      2. Create a file called tune_apache.conf in /srv/salt/files and paste in the following block:

        /srv/salt/files/tune_apache.conf
        <IfModule mpm_prefork_module>
        StartServers 4
        MinSpareServers 20
        MaxSpareServers 40
        MaxClients 200
        MaxRequestsPerChild 4500
        </IfModule>

        This file tunes the MPM prefork module, providing additional performance settings for your Apache installation. Salt will manage this file and install it into the appropriate configuration directory in a later step.

      3. If you will be installing Apache on a CentOS machine, create a file called include_sites_enabled.conf in /srv/salt/files and paste in the following:

        /srv/salt/files/include_sites_enabled.conf
        
        IncludeOptional sites-enabled/*.conf

        This file lets the CentOS installation load virtual host files from a sites-enabled directory, mirroring the layout found on Debian installations and helping to organize the Apache configuration.

      Creating the Apache State File for Debian and Ubuntu

      Individual Steps

      This guide walks through creating the Apache state file for Debian and Ubuntu step by step. If you would like to see the state file in its entirety, it appears at the end of this section.

      1. Create a state file named apache-debian.sls in /srv/salt and open it in a text editor of your choice.

      2. Instruct Salt to install the apache2 package and start the apache2 service:

        /srv/salt/apache-debian.sls
        
        apache2:
          pkg.installed
        
        apache2 Service:
          service.running:
            - name: apache2
            - enable: True
            - require:
              - pkg: apache2
        
        ...

        Here Salt makes sure the apache2 package is installed with pkg.installed. Likewise, it ensures the apache2 service is running and enabled under service.running. Also under service.running, apache-debian.sls uses require to ensure that this state does not run before the apache2 package is installed. This require step will be repeated throughout apache-debian.sls.

        Note that the virtual host configuration file you will create in a later step is named using the domain you supplied when creating your Salt Pillar file in the first section. This Pillar data will be used throughout apache-debian.sls.

      3. Turn off KeepAlive:

        /srv/salt/apache-debian.sls
        
        ...
        
        Turn Off KeepAlive:
          file.replace:
            - name: /etc/apache2/apache2.conf
            - pattern: 'KeepAlive On'
            - repl: 'KeepAlive Off'
            - show_changes: True
            - require:
              - pkg: apache2
        ...

        KeepAlive allows multiple requests to be sent over the same TCP connection. For the purpose of this guide KeepAlive will be disabled. To disable it, Salt is instructed to find the KeepAlive directive in /etc/apache2/apache2.conf by matching a pattern and replacing it with KeepAlive Off. show_changes instructs Salt to display any changes it has made during a highstate.

      4. Transfer tune_apache.conf to your minion and enable it:

        /srv/salt/apache-debian.sls
        
        ...
        
        /etc/apache2/conf-available/tune_apache.conf:
          file.managed:
            - source: salt://files/tune_apache.conf
            - require:
              - pkg: apache2
        
        Enable tune_apache:
          apache_conf.enabled:
            - name: tune_apache
            - require:
              - pkg: apache2
        
        ...

        This step takes the tune_apache.conf file you created in the Configuration Files step and transfers it to your Salt minion. Then, Salt enables that configuration file with the apache_conf module.
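
        On a Debian-based minion, enabling the configuration this way is roughly equivalent to running the following command by hand (shown only for comparison):

        a2enconf tune_apache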

      5. Create the necessary directories:

        /srv/salt/apache-debian.sls
        
        ...
        
        /var/www/html/{{ pillar['domain'] }}:
          file.directory
        
        /var/www/html/{{ pillar['domain'] }}/log:
          file.directory
        
        /var/www/html/{{ pillar['domain'] }}/backups:
          file.directory
        
        /var/www/html/{{ pillar['domain'] }}/public_html:
          file.directory
        
        ...
      6. Disable the default virtual host configuration file:

        /srv/salt/apache-debian.sls
        
        ...
        
        000-default:
          apache_site.disabled:
            - require:
              - pkg: apache2
        
        ...

        This step uses Salt’s apache_site module to disable the default Apache virtual host configuration file, and is the same as running a2dissite on a Debian-based machine.

      7. Create your site’s virtual host configuration file:

        /srv/salt/apache-debian.sls
        
        ...
        
        /etc/apache2/sites-available/{{ pillar['domain'] }}.conf:
          apache.configfile:
            - config:
              - VirtualHost:
                  this: '*:80'
                  ServerName:
                    - {{ pillar['domain'] }}
                  ServerAlias:
                    - www.{{ pillar['domain'] }}
                  DocumentRoot: /var/www/html/{{ pillar['domain'] }}/public_html
                  ErrorLog: /var/www/html/{{ pillar['domain'] }}/log/error.log
                  CustomLog: /var/www/html/{{ pillar['domain'] }}/log/access.log combined
        
        ...

        This step uses Salt’s apache module (not to be confused with the apache_site module used in the previous step) to create your site’s virtual host configuration file. The this key represents what would traditionally be included with VirtualHost inside angle brackets in an Apache configuration file: <VirtualHost *:80>.
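
        As a rough illustration, assuming the Pillar value domain is set to example.com, the file Salt generates on the minion would contain a virtual host block along these lines:

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot /var/www/html/example.com/public_html
          ErrorLog /var/www/html/example.com/log/error.log
          CustomLog /var/www/html/example.com/log/access.log combined
        </VirtualHost>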

      8. Enable your new virtual host configuration file:

        /srv/salt/apache-debian.sls
        
        ...
        
        {{ pillar['domain'] }}:
          apache_site.enabled:
            - require:
              - pkg: apache2
        
        ...

        This step uses the same apache_site module you used to disable the default virtual host file, this time to enable your newly created virtual host file. apache_site.enabled creates a symlink at /etc/apache2/sites-enabled/example.com.conf pointing to /etc/apache2/sites-available/example.com.conf and is the same as running a2ensite on a Debian-based machine.

      9. Transfer your index.html website file to your minion:

        /srv/salt/apache-debian.sls
        
        ...
        
        /var/www/html/{{ pillar['domain'] }}/public_html/index.html:
          file.managed:
            - source: salt://{{ pillar['domain'] }}/index.html

        Any changes made to your index.html file on your Salt master will be propagated to your minion.

        Note

        Since Salt is not watching configuration files for a change to trigger a restart for Apache, you may need to use the command below from your Salt master.

        salt '*' apache.signal restart
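
        Alternatively, if you would like Salt to restart Apache automatically, you could add a watch requisite to the apache2 Service state. The sketch below is an optional variation (not part of the state file above) that watches the tune_apache.conf file already managed by this SLS:

        apache2 Service:
          service.running:
            - name: apache2
            - enable: True
            - require:
              - pkg: apache2
            - watch:
              - file: /etc/apache2/conf-available/tune_apache.conf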
        

      Complete State File

      The complete apache-debian.sls file looks like this:

      /srv/salt/apache-debian.sls
      
      apache2:
        pkg.installed
      
      apache2 Service:
        service.running:
          - name: apache2
          - enable: True
          - require:
            - pkg: apache2
      
      Turn Off KeepAlive:
        file.replace:
          - name: /etc/apache2/apache2.conf
          - pattern: 'KeepAlive On'
          - repl: 'KeepAlive Off'
          - show_changes: True
          - require:
            - pkg: apache2
      
      /etc/apache2/conf-available/tune_apache.conf:
        file.managed:
          - source: salt://files/tune_apache.conf
          - require:
            - pkg: apache2
      
      Enable tune_apache:
        apache_conf.enabled:
          - name: tune_apache
          - require:
            - pkg: apache2
      
      /var/www/html/{{ pillar['domain'] }}:
        file.directory
      
      /var/www/html/{{ pillar['domain'] }}/log:
        file.directory
      
      /var/www/html/{{ pillar['domain'] }}/backups:
        file.directory
      
      /var/www/html/{{ pillar['domain'] }}/public_html:
        file.directory
      
      000-default:
        apache_site.disabled:
          - require:
            - pkg: apache2
      
      /etc/apache2/sites-available/{{ pillar['domain'] }}.conf:
        apache.configfile:
          - config:
            - VirtualHost:
                this: '*:80'
                ServerName:
                  - {{ pillar['domain'] }}
                ServerAlias:
                  - www.{{ pillar['domain'] }}
                DocumentRoot: /var/www/html/{{ pillar['domain'] }}/public_html
                ErrorLog: /var/www/html/{{ pillar['domain'] }}/log/error.log
                CustomLog: /var/www/html/{{ pillar['domain'] }}/log/access.log combined
      
      {{ pillar['domain'] }}:
        apache_site.enabled:
          - require:
            - pkg: apache2
      
      /var/www/html/{{ pillar['domain'] }}/public_html/index.html:
        file.managed:
          - source: salt://{{ pillar['domain'] }}/index.html

      Creating an Apache State File for CentOS

      Individual Steps

      1. Create a file called apache-centos.sls in /srv/salt and open it in a text editor of your choice.

      2. On CentOS, the Apache package and service are named httpd. Instruct Salt to install the httpd package and run the httpd service:

        /srv/salt/apache-centos.sls
        
        httpd:
          pkg.installed
        
        httpd Service:
          service.running:
            - name: httpd
            - enable: True
            - require:
              - pkg: httpd
            - watch:
              - file: /etc/httpd/sites-available/{{ pillar['domain'] }}.conf
        
        ...

        Here Salt makes sure the httpd package is installed with pkg.installed. Likewise, it ensures the httpd service is running and enabled under service.running. Also under service.running, apache-centos.sls uses require to ensure that this state does not run before the httpd package is installed. This require step will be repeated throughout apache-centos.sls.

        Lastly, a watch statement is employed to restart the httpd service if your site’s configuration file changes. You will define that configuration file in a later step. Note that this configuration file is named using the domain you supplied when creating your Salt Pillar file in the first section. This Pillar data will be used throughout apache-centos.sls.

      3. Turn off KeepAlive:

        /srv/salt/apache-centos.sls
        
        ...
        
        Turn Off KeepAlive:
          file.replace:
            - name: /etc/httpd/conf/httpd.conf
            - pattern: 'KeepAlive On'
            - repl: 'KeepAlive Off'
            - show_changes: True
            - require:
              - pkg: httpd
        ...

        KeepAlive allows multiple requests to be sent over the same TCP connection. For the purpose of this guide KeepAlive will be disabled. To disable it, Salt is instructed to find the KeepAlive directive in /etc/httpd/conf/httpd.conf by matching a pattern and replacing it with KeepAlive Off. show_changes instructs Salt to display any changes it has made during a highstate.

      4. Change the DocumentRoot:

        /srv/salt/apache-centos.sls
        
        ...
        
        Change DocumentRoot:
          file.replace:
            - name: /etc/httpd/conf/httpd.conf
            - pattern: 'DocumentRoot "/var/www/html"'
            - repl: 'DocumentRoot "/var/www/html/{{ pillar['domain'] }}/public_html"'
            - show_changes: True
            - require:
              - pkg: httpd
        
        ...

        Similar to the last step, apache-centos.sls instructs Salt to search for the DocumentRoot directive in Apache’s httpd.conf file and replace that line with the new document root. This allows for the use of a Debian-style site directory structure; an example of the rendered result is shown below.
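
        For example, assuming domain is set to example.com in your Pillar data, the directive in httpd.conf would read as follows after the replacement:

        DocumentRoot "/var/www/html/example.com/public_html"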

      5. Transfer tune_apache.conf and include_sites_enabled.conf to your minion:

        /srv/salt/apache-centos.sls
        
        ...
        
        /etc/httpd/conf.d/tune_apache.conf:
          file.managed:
            - source: salt://files/tune_apache.conf
            - require:
              - pkg: httpd
        
        /etc/httpd/conf.d/include_sites_enabled.conf:
          file.managed:
            - source: salt://files/include_sites_enabled.conf
            - require:
              - pkg: httpd
        
        ...
      6. Create the necessary directories:

        /srv/salt/apache-centos.sls
        
        ...
        
        /etc/httpd/sites-available:
          file.directory
        
        /etc/httpd/sites-enabled:
          file.directory
        
        /var/www/html/{{ pillar['domain'] }}:
          file.directory
        
        /var/www/html/{{ pillar['domain'] }}/backups:
          file.directory
        
        /var/www/html/{{ pillar['domain'] }}/public_html:
          file.directory
        
        ...
      7. Create your site’s virtual host configuration file:

        /srv/salt/apache-centos.sls
        
        ...
        
        /etc/httpd/sites-available/{{ pillar['domain'] }}.conf:
          apache.configfile:
            - config:
              - VirtualHost:
                  this: '*:80'
                  ServerName:
                    - {{ pillar['domain'] }}
                  ServerAlias:
                    - www.{{ pillar['domain'] }}
                  DocumentRoot: /var/www/html/{{ pillar['domain'] }}/public_html
          file.symlink:
            - name: /etc/httpd/sites-enabled/{{ pillar['domain'] }}.conf
            - target: /etc/httpd/sites-available/{{ pillar['domain'] }}.conf
            - force: True
        
        ...

        This step uses Salt’s apache module to create your site’s virtual host configuration file, just as in the Debian state file. The this key represents what would traditionally be included with VirtualHost inside angle brackets in an Apache configuration file: <VirtualHost *:80>. The file.symlink state then links the file into /etc/httpd/sites-enabled, which, combined with include_sites_enabled.conf, is the CentOS equivalent of enabling a site with a2ensite on Debian.

      8. Transfer your index.html website file to your minion:

        /srv/salt/apache-centos.sls
        
        ...
        
        /var/www/html/{{ pillar['domain'] }}/public_html/index.html:
          file.managed:
            - source: salt://{{ pillar['domain'] }}/index.html
        
        ...

        Any changes made to your index.html file on your Salt master will be propagated to your minion.

      9. Configure your firewall to allow SSH, HTTP, and HTTPS traffic:

        /srv/salt/apache-centos.sls
        
        ...
        
        Configure Firewall:
          firewalld.present:
            - name: public
            - ports:
              - 22/tcp
              - 80/tcp
              - 443/tcp

        Note

        It is imperative that you list every port that needs to remain open on your machine in this state. Any port not listed here will be closed by Salt.
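
        After a highstate you can confirm which ports are open on a CentOS minion from the Salt master; this uses the standard cmd.run execution module and the firewalld client:

        salt -G 'os:CentOS' cmd.run 'firewall-cmd --zone=public --list-ports'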

      Complete State File

      The complete apache-centos.sls file looks like this:

      /srv/salt/apache-centos.sls
      
      httpd:
        pkg.installed
      
      httpd Service:
        service.running:
          - name: httpd
          - enable: True
          - require:
            - pkg: httpd
          - watch:
            - file: /etc/httpd/sites-available/{{ pillar['domain'] }}.conf
      
      Turn Off KeepAlive:
        file.replace:
          - name: /etc/httpd/conf/httpd.conf
          - pattern: 'KeepAlive On'
          - repl: 'KeepAlive Off'
          - show_changes: True
          - require:
            - pkg: httpd
      
      Change DocumentRoot:
        file.replace:
          - name: /etc/httpd/conf/httpd.conf
          - pattern: 'DocumentRoot "/var/www/html"'
          - repl: 'DocumentRoot "/var/www/html/{{ pillar['domain'] }}/public_html"'
          - show_changes: True
          - require:
            - pkg: httpd
      
      /etc/httpd/conf.d/tune_apache.conf:
        file.managed:
          - source: salt://files/tune_apache.conf
          - require:
            - pkg: httpd
      
      /etc/httpd/conf.d/include_sites_enabled.conf:
        file.managed:
          - source: salt://files/include_sites_enabled.conf
          - require:
            - pkg: httpd
      
      /etc/httpd/sites-available:
        file.directory
      
      /etc/httpd/sites-enabled:
        file.directory
      
      /var/www/html/{{ pillar['domain'] }}:
        file.directory
      
      /var/www/html/{{ pillar['domain'] }}/backups:
        file.directory
      
      /var/www/html/{{ pillar['domain'] }}/public_html:
        file.directory
      
      /etc/httpd/sites-available/{{ pillar['domain'] }}.conf:
        apache.configfile:
          - config:
            - VirtualHost:
                this: '*:80'
                ServerName:
                  - {{ pillar['domain'] }}
                ServerAlias:
                  - www.{{ pillar['domain'] }}
                DocumentRoot: /var/www/html/{{ pillar['domain'] }}/public_html
        file.symlink:
          - name: /etc/httpd/sites-enabled/{{ pillar['domain'] }}.conf
          - target: /etc/httpd/sites-available/{{ pillar['domain'] }}.conf
          - force: True
      
      /var/www/html/{{ pillar['domain'] }}/public_html/index.html:
        file.managed:
          - source: salt://{{ pillar['domain'] }}/index.html
      
      Configure Firewall:
        firewalld.present:
          - name: public
          - ports:
            - 22/tcp
            - 80/tcp
            - 443/tcp

      Running the Apache State File

      On your Salt master, issue a highstate command:

      salt '*' state.apply
      

      After a few moments you should see a list of the Salt states that were applied and a summary of their results. Navigate to your website’s domain name if your DNS is already set up, or to your server’s public IP address. You should see your index.html file. You have now used Salt to configure Apache. Visit the links in the section below for more information.
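
      If you would like to preview the changes before applying them, or apply a single state file to only some of your minions, you can use Salt's standard test flag and grain targeting, for example:

      salt '*' state.apply test=True
      salt -G 'os_family:Debian' state.apply apache-debian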

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
