
A Beginner's Guide to LXD: Setting Up an Apache Web Server in a Container


Updated by Linode. Contributed by Simos Xenitellis.


      What is LXD?

LXD (pronounced “Lex-Dee”) is a system container manager built on top of LXC (Linux Containers) and currently supported by Canonical. The goal of LXD is to provide an experience similar to a virtual machine, but through containerization rather than hardware virtualization. Compared to Docker, which focuses on delivering applications, LXD offers nearly full operating-system functionality plus additional features such as snapshots, live migration, and storage management.

The main benefits of LXD are its support for high-density containers and the performance it delivers compared to virtual machines. A computer with 2GB of RAM can adequately support half a dozen containers. In addition, LXD officially supports the container images of major Linux distributions, so you can choose the distribution and version to run in each container.

This guide covers how to install and set up LXD 3 on a Linode and how to set up an Apache web server in a container.

      Note

For simplicity, the term container is used throughout this guide to describe LXD system containers.

      Before You Begin

1. Complete the Getting Started guide. Select a Linode with at least 2GB of RAM, such as the Linode 2GB. Specify the Ubuntu 19.04 distribution. You may specify a different Linux distribution, as long as it supports snap packages (snapd); see the More Information section for more details.

      2. This guide will use sudo wherever necessary. Follow the Securing Your Server guide to create a limited (non-root) user account, harden SSH access, and remove unnecessary network services.

      3. Update your system:

        sudo apt update && sudo apt upgrade
        

      Configure the Snap Package Support

LXD is available as a Debian package in the long-term support (LTS) versions of Ubuntu, such as Ubuntu 18.04 LTS. For other versions of Ubuntu and for other distributions, LXD is available as a snap package. Snap packages are universal packages: a single package file works on any supported Linux distribution. See the More Information section for more details on what a snap package is, which Linux distributions are supported, and how to set it up.

1. Verify that snap support is installed correctly. The following command lists any installed snap packages; on a new system it reports that none are installed yet.

        snap list
        
          
        No snaps are installed yet. Try 'snap install hello-world'.
        
        
      2. View the details of the LXD snap package lxd. The output below shows that, currently, the latest version of LXD is 3.12 in the default stable channel. This channel is updated often with new features. There are also other channels such as the 3.0/stable channel which has the LTS LXD version (supported along with Ubuntu 18.04, until 2023) and the 2.0/stable channel (supported along with Ubuntu 16.04, until 2021). We will be using the latest version of LXD from the default stable channel.

        snap info lxd
        
          
        name:      lxd
        summary:   System container manager and API
        publisher: Canonical✓
        contact:   https://github.com/lxc/lxd/issues
        license:   Apache-2.0
        description: |
          **LXD is a system container manager**
        
          With LXD you can run hundreds of containers of a variety of Linux
          distributions, apply resource limits, pass in directories, USB devices
          or GPUs and setup any network and storage you want.
        
          LXD containers are lightweight, secure by default and a great
          alternative to running Linux virtual machines.
        
        
          **Run any Linux distribution you want**
        
          Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
          CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
        
          A full list of available images can be [found
          here](https://images.linuxcontainers.org)
        
          Can't find the distribution you want? It's easy to make your own images
          too, either using our `distrobuilder` tool or by assembling your own image
          tarball by hand.
        
        
          **Containers at scale**
        
          LXD is network aware and all interactions go through a simple REST API,
          making it possible to remotely interact with containers on remote
          systems, copying and moving them as you wish.
        
          Want to go big? LXD also has built-in clustering support,
          letting you turn dozens of servers into one big LXD server.
        
        
          **Configuration options**
        
          Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
           - criu.enable: Enable experimental live-migration support [default=false]
           - daemon.debug: Increases logging to debug level [default=false]
           - daemon.group: Group of users that can interact with LXD [default=lxd]
           - ceph.builtin: Use snap-specific ceph configuration [default=false]
           - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
        
          [Documentation](https://lxd.readthedocs.io)
        snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
        channels:
          stable:        3.12        2019-04-16 (10601) 56MB -
          candidate:     3.12        2019-04-26 (10655) 56MB -
          beta:          ↑
          edge:          git-570aaa1 2019-04-27 (10674) 56MB -
          3.0/stable:    3.0.3       2018-11-26  (9663) 53MB -
          3.0/candidate: 3.0.3       2019-01-19  (9942) 53MB -
          3.0/beta:      ↑
          3.0/edge:      git-eaa62ce 2019-02-19 (10212) 53MB -
          2.0/stable:    2.0.11      2018-07-30  (8023) 28MB -
          2.0/candidate: 2.0.11      2018-07-27  (8023) 28MB -
          2.0/beta:      ↑
          2.0/edge:      git-c7c4cc8 2018-10-19  (9257) 26MB -
        
        
3. Install the lxd snap package:

        sudo snap install lxd
        
          
        lxd 3.12 from Canonical✓ installed
        
        

      You can verify that the snap package has been installed by running snap list again. The core snap package is a prerequisite for any system with snap package support. When you install your first snap package, core is installed and shared among all other snap packages that will get installed in the future.

          snap list
      
      
        
      Name  Version  Rev    Tracking  Publisher   Notes
      core  16-2.38  6673   stable    canonical✓  core
      lxd   3.12     10601  stable    canonical✓  -
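
        If you had wanted the LTS release instead of the latest stable version, snap can select a channel at install time. A sketch, for reference only; do not run this if you are following the guide with the default channel:

        sudo snap install lxd --channel=3.0/stable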
      
      

      Initialize LXD

      1. Add your non-root Unix user to the lxd group:

        sudo usermod -a -G lxd username
        

        Note

By adding the non-root Unix user account to the lxd group, you can run lxc commands without prepending sudo. Otherwise, you would need to prepend sudo to each lxc command.

      2. Start a new SSH session for the previous change to take effect. For example, log out and log in again.

      3. Verify the available free disk space:

        df -h /
        
          
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda         49G  2.0G   45G   5% /
        
        

        In this case there is 45GB of free disk space. LXD requires at least 15GB of space for the storage needs of containers. We will allocate 15GB of space for LXD, leaving 30GB of free space for the needs of the server.

      4. Run lxd init to initialize LXD:

        sudo lxd init
        

        You will be prompted several times during the initialization process. Choose the defaults for all options.

          
        Would you like to use LXD clustering? (yes/no) [default=no]:
        Do you want to configure a new storage pool? (yes/no) [default=yes]:
        Name of the new storage pool [default=default]:
        Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
        Create a new ZFS pool? (yes/no) [default=yes]:
        Would you like to use an existing block device? (yes/no) [default=no]:
        Size in GB of the new loop device (1GB minimum) [default=15GB]:
        Would you like to connect to a MAAS server? (yes/no) [default=no]:
        Would you like to create a new local network bridge? (yes/no) [default=yes]:
        What should the new bridge be called? [default=lxdbr0]:
        What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        Would you like LXD to be available over the network? (yes/no) [default=no]:
        Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
        Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
        
        

      Apache Web Server with LXD

This section will create a container, install the Apache web server in it, and add an LXD proxy device in order to expose port 80 of the container to the internet.

      1. Launch a new container:

        lxc launch ubuntu:18.04 web
        
      2. Update the package list in the container.

        lxc exec web -- apt update
        
3. Install Apache in the LXD container.

        lxc exec web -- apt install apache2
        
      4. Get a shell in the LXD container.

        lxc exec web -- sudo --user ubuntu --login
        
5. Edit Apache's default web page to mention that it runs inside an LXD container.

        sudo nano /var/www/html/index.html
        

Change the line It works! (line number 224) to It works inside an LXD container!. Then, save and exit.

      6. Exit back to the host. We have made all the necessary changes to the container.

        exit
        
7. Add an LXD proxy device to redirect connections from the internet to port 80 (HTTP) on the server to port 80 of this container.

        sudo lxc config device add web myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
        

        Note

In recent versions of LXD, you need to specify an IP address (such as 127.0.0.1) instead of a hostname (such as localhost). If your container already has a proxy device that uses hostnames, you can replace them with IP addresses by editing the container configuration with lxc config edit web.
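
        For reference, the proxy device you just added appears in the container's configuration. A sketch of what the relevant devices section in the output of lxc config edit web might look like, using the names and addresses from the command above:

        devices:
          myport80:
            connect: tcp:127.0.0.1:80
            listen: tcp:0.0.0.0:80
            type: proxy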

      8. From your local computer, navigate to your Linode’s public IP address in a web browser. You should see the default Apache page:

        Web page of Apache server running in a container

      Common LXD Commands

      • List all containers:

        lxc list
        
          
        To start your first container, try: lxc launch ubuntu:18.04
        
        +------+-------+------+------+------+-----------+
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        +------+-------+------+------+------+-----------+
        
        
      • List all available repositories of container images:

        lxc remote list
        
          
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        |      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | local (default) | unix://                                  | lxd           | file access | NO     | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        
        

The ubuntu repository has container images of Ubuntu versions. The images repository has container images of a large number of different Linux distributions. The ubuntu-daily repository has daily container images, intended for testing. The local repository is the LXD server that we have just installed; it is not public and can be used to store your own container images.
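
        For example, to launch a container from the images repository, you prefix the image alias with images:. The distribution and version below are illustrative; run lxc image list images: to see what is currently available:

        lxc launch images:alpine/3.9 myalpine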

      • List all available container images from a repository:

        lxc image list ubuntu:
        
          
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        |      ALIAS       | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH   |   SIZE   |          UPLOAD DATE          |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | b (11 more)      | 5b72cf46f628 | yes    | ubuntu 18.04 LTS amd64 (release) (20190424)   | x86_64  | 180.37MB | Apr 24, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | c (5 more)       | 4716703f04fc | yes    | ubuntu 18.10 amd64 (release) (20190402)       | x86_64  | 313.29MB | Apr 2, 2019 at 12:00am (UTC)  |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | d (5 more)       | faef94acf5f9 | yes    | ubuntu 19.04 amd64 (release) (20190417)       | x86_64  | 322.56MB | Apr 17, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        .....................................................................
        
        

        Note

The first two columns, alias and fingerprint, each provide an identifier that can be used to specify the container image when launching it.

        The output snippet shows the container images Ubuntu versions 18.04 LTS, 18.10, and 19.04. When creating a container we can just specify the short alias. For example, ubuntu:b means that the repository is ubuntu and the container image has the short alias b (for bionic, the codename of Ubuntu 18.04 LTS).

      • Get more information about a container image:

        lxc image info ubuntu:b
        
          
        Fingerprint: 5b72cf46f628b3d60f5d99af48633539b2916993c80fc5a2323d7d841f66afbe
        Size: 180.37MB
        Architecture: x86_64
        Public: yes
        Timestamps:
            Created: 2019/04/24 00:00 UTC
            Uploaded: 2019/04/24 00:00 UTC
            Expires: 2023/04/26 00:00 UTC
            Last used: never
        Properties:
            release: bionic
            version: 18.04
            architecture: amd64
            label: release
            serial: 20190424
            description: ubuntu 18.04 LTS amd64 (release) (20190424)
            os: ubuntu
        Aliases:
            - 18.04
            - 18.04/amd64
            - b
            - b/amd64
            - bionic
            - bionic/amd64
            - default
            - default/amd64
            - lts
            - lts/amd64
            - ubuntu
            - amd64
        Cached: no
        Auto update: disabled
        
        

        The output shows the details of the container image including all the available aliases. For Ubuntu 18.04 LTS, we can specify either b (for bionic, the codename of Ubuntu 18.04 LTS) or any other alias.

      • Launch a new container with the name mycontainer:

        lxc launch ubuntu:18.04 mycontainer
        
          
        Creating mycontainer
        Starting mycontainer
        
        
      • Check the list of containers to make sure the new container is running:

        lxc list
        
          
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        |    NAME     |  STATE  |         IPV4          |          IPV6             |    TYPE    | SNAPSHOTS |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        | mycontainer | RUNNING | 10.142.148.244 (eth0) | fde5:5d27:...:1371 (eth0) | PERSISTENT | 0         |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        
        
      • Execute basic commands in mycontainer:

        lxc exec mycontainer -- apt update
        lxc exec mycontainer -- apt upgrade
        

        Note

        The characters -- instruct the lxc command not to parse any more command-line parameters.
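
        For example, the -- below ensures that the -la flags are passed to the ls command inside the container rather than parsed by lxc itself:

        lxc exec mycontainer -- ls -la /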

      • Open a shell session within mycontainer:

        lxc exec mycontainer -- sudo --login --user ubuntu
        
          
        To run a command as administrator (user "root"), use "sudo <command>".
        See "man sudo_root" for details.
        
        ubuntu@mycontainer:~$
        
        

        Note

        The Ubuntu container images include, by default, a non-root account with username ubuntu. This account can use sudo without requiring a password for administrative tasks.

        The sudo command above provides a login to that existing ubuntu account.

      • View the container logs:

        lxc info mycontainer --show-log
        
      • Stop the container:

        lxc stop mycontainer
        
      • Remove the container:

        lxc delete mycontainer
        

        Note

        A container needs to be stopped before it can be deleted.
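
        If you prefer a single step, LXD also supports force-deleting a running container. This is a convenience flag, not used elsewhere in this guide:

        lxc delete --force mycontainer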

      Troubleshooting

      Error “unix.socket: connect: connection refused”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
      
      

This error appears when the LXD service is not currently running. By default, the LXD service starts as soon as it has been configured successfully. See Initialize LXD to configure LXD.
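
      Since this guide installs LXD as a snap package, you can check and start the service through snap itself; a minimal sketch, assuming the snap installation from this guide:

        snap services lxd
        sudo snap start lxd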

      Error “unix.socket: connect: permission denied”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
      
      

This error appears when your limited user account is not a member of the lxd group, or when you have not logged out and back in so that the new lxd group membership takes effect.

      If your user account is ubuntu, the following command shows whether you are a member of the lxd group:

          groups ubuntu
      
      
        
      ubuntu : ubuntu sudo lxd
      
      

      In this example, we are members of the lxd group and we just need to log out and log in again. If you are not a member of the lxd group, see Initialize LXD on how to make your limited account a member of the lxd group.

      Next Steps

If you plan to host a single website, a single proxy device pointing to the website container will suffice. To host multiple websites, you can instead set up virtual hosts inside that same container. If you would rather run each website in its own container, set up a reverse proxy in an additional container; the proxy device then directs incoming connections to the reverse proxy container, which forwards them to the individual website containers, as the sketch below shows.
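
      A minimal sketch, assuming a reverse proxy container named proxy (a hypothetical name) that listens on its own port 80:

        # Remove the proxy device that currently points at the "web" container.
        sudo lxc config device remove web myport80
        # Launch a container to host the reverse proxy.
        lxc launch ubuntu:18.04 proxy
        # Redirect the server's port 80 to port 80 of the reverse proxy container.
        sudo lxc config device add proxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80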

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      How To Back Up, Import, and Migrate Your Apache Kafka Data on Debian 9


      The author selected Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Backing up your Apache Kafka data is an important practice that will help you recover from unintended data loss or bad data added to the cluster due to user error. Data dumps of cluster and topic data are an efficient way to perform backups and restorations.

      Importing and migrating your backed up data to a separate server is helpful in situations where your Kafka instance becomes unusable due to server hardware or networking failures and you need to create a new Kafka instance with your old data. Importing and migrating backed up data is also useful when you are moving the Kafka instance to an upgraded or downgraded server due to a change in resource usage.

      In this tutorial, you will back up, import, and migrate your Kafka data on a single Debian 9 installation as well as on multiple Debian 9 installations on separate servers. ZooKeeper is a critical component of Kafka’s operation. It stores information about cluster state such as consumer data, partition data, and the state of other brokers in the cluster. As such, you will also back up ZooKeeper’s data in this tutorial.

      Prerequisites

      To follow along, you will need:

      • A Debian 9 server with at least 4GB of RAM and a non-root sudo user set up by following the tutorial.
      • A Debian 9 server with Apache Kafka installed, to act as the source of the backup. Follow the How To Install Apache Kafka on Debian 9 guide to set up your Kafka installation, if Kafka isn’t already installed on the source server.
      • OpenJDK 8 installed on the server. To install this version, follow these instructions on installing specific versions of OpenJDK.
      • Optional for Step 7 — Another Debian 9 server with Apache Kafka installed, to act as the destination of the backup. Follow the article link in the previous prerequisite to install Kafka on the destination server. This prerequisite is required only if you are moving your Kafka data from one server to another. If you want to back up and import your Kafka data to a single server, you can skip this prerequisite.

      Step 1 — Creating a Test Topic and Adding Messages

      A Kafka message is the most basic unit of data storage in Kafka and is the entity that you will publish to and subscribe from Kafka. A Kafka topic is like a container for a group of related messages. When you subscribe to a particular topic, you will receive only messages that were published to that particular topic. In this section you will log in to the server that you would like to back up (the source server) and add a Kafka topic and a message so that you have some data populated for the backup.

This tutorial assumes you have installed Kafka in the home directory of the kafka user (/home/kafka/kafka). If your installation is in a different directory, replace ~/kafka in the following commands (and throughout the rest of this tutorial) with your Kafka installation's path.

      SSH into the source server by executing:

      • ssh sammy@source_server_ip

Run the following command to log in as the kafka user:

      • sudo -iu kafka

      Create a topic named BackupTopic using the kafka-topics.sh shell utility file in your Kafka installation's bin directory, by typing:

      • ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic BackupTopic

      Publish the string "Test Message 1" to the BackupTopic topic by using the ~/kafka/bin/kafka-console-producer.sh shell utility script.

      If you would like to add additional messages here, you can do so now.

      • echo "Test Message 1" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null

      The ~/kafka/bin/kafka-console-producer.sh file allows you to publish messages directly from the command line. Typically, you would publish messages using a Kafka client library from within your program, but since that involves different setups for different programming languages, you can use the shell script as a language-independent way of publishing messages during testing or while performing administrative tasks. The --topic flag specifies the topic that you will publish the message to.
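
      For example, publishing a second test message follows the same pattern. Note that any extra messages you add here will also appear when you verify the restoration in Step 6:

      • echo "Test Message 2" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null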

      Next, verify that the kafka-console-producer.sh script has published the message(s) by running the following command:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning

The ~/kafka/bin/kafka-console-consumer.sh shell script starts the consumer. Once started, it subscribes to the topic that you published the "Test Message 1" message to in the previous command. The --from-beginning flag allows consuming messages that were published before the consumer was started; without it, only messages published after the consumer starts will appear. On running the command, you will see the following output in the terminal:

      Output

      Test Message 1

      Press CTRL+C to stop the consumer.

      You've created some test data and verified that it's persisted. Now you can back up the state data in the next section.

      Step 2 — Backing Up the ZooKeeper State Data

      Before backing up the actual Kafka data, you need to back up the cluster state stored in ZooKeeper.

      ZooKeeper stores its data in the directory specified by the dataDir field in the ~/kafka/config/zookeeper.properties configuration file. You need to read the value of this field to determine the directory to back up. By default, dataDir points to the /tmp/zookeeper directory. If the value is different in your installation, replace /tmp/zookeeper with that value in the following commands.
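
      If you want to check the value without opening an editor, a quick grep of the configuration file (a convenience, not part of the original steps) prints it directly:

      • grep '^dataDir' ~/kafka/config/zookeeper.properties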

      Here is an example output of the ~/kafka/config/zookeeper.properties file:

      ~/kafka/config/zookeeper.properties

      ...
      ...
      ...
      # the directory where the snapshot is stored.
      dataDir=/tmp/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      # disable the per-ip limit on the number of connections since this is a non-production config
      maxClientCnxns=0
      ...
      ...
      ...
      

      Now that you have the path to the directory, you can create a compressed archive file of its contents. Compressed archive files are a better option over regular archive files to save disk space. Run the following command:

      • tar -czf /home/kafka/zookeeper-backup.tar.gz /tmp/zookeeper/*

You can safely ignore the command's output, tar: Removing leading / from member names.

      The -c and -z flags tell tar to create an archive and apply gzip compression to the archive. The -f flag specifies the name of the output compressed archive file, which is zookeeper-backup.tar.gz in this case.

      You can run ls in your current directory to see zookeeper-backup.tar.gz as part of your output.
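
      You can also list the archive's contents to confirm what was backed up; for example:

      • tar -tzf /home/kafka/zookeeper-backup.tar.gz | head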

      You have now successfully backed up the ZooKeeper data. In the next section, you will back up the actual Kafka data.

      Step 3 — Backing Up the Kafka Topics and Messages

      In this section, you will back up Kafka's data directory into a compressed tar file like you did for ZooKeeper in the previous step.

      Kafka stores topics, messages, and internal files in the directory that the log.dirs field specifies in the ~/kafka/config/server.properties configuration file. You need to read the value of this field to determine the directory to back up. By default and in your current installation, log.dirs points to the /tmp/kafka-logs directory. If the value is different in your installation, replace /tmp/kafka-logs in the following commands with the correct value.

      Here is an example output of the ~/kafka/config/server.properties file:

      ~/kafka/config/server.properties

      ...
      ...
      ...
      ############################# Log Basics #############################
      
      # A comma separated list of directories under which to store log files
      log.dirs=/tmp/kafka-logs
      
      # The default number of log partitions per topic. More partitions allow greater
      # parallelism for consumption, but this will also result in more files across
      # the brokers.
      num.partitions=1
      
      # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
      # This value is recommended to be increased for installations with data dirs located in RAID array.
      num.recovery.threads.per.data.dir=1
      ...
      ...
      ...
      

      First, stop the Kafka service so that the data in the log.dirs directory is in a consistent state when creating the archive with tar. To do this, return to your server's non-root user by typing exit and then run the following command:

      • sudo systemctl stop kafka

After stopping the Kafka service, log back in as your kafka user with:

      • sudo -iu kafka

      It is necessary to stop and start the Kafka and ZooKeeper services as your non-root sudo user because the Apache Kafka installation prerequisite restricted the kafka user as a security precaution: that step disabled sudo access for the kafka user, so service commands would fail to execute under it.

      Now, create a compressed archive file of the directory's contents by running the following command:

      • tar -czf /home/kafka/kafka-backup.tar.gz /tmp/kafka-logs/*

      Once again, you can safely ignore the command's output (tar: Removing leading / from member names).

      You can run ls in the current directory to see kafka-backup.tar.gz as part of the output.

If you do not want to restore the data immediately, you can start the Kafka service again by typing exit to switch to your non-root sudo user, and then running:

      • sudo systemctl start kafka

Log back in as your kafka user:

      • sudo -iu kafka

      You have successfully backed up the Kafka data. You can now proceed to the next section, where you will be restoring the cluster state data stored in ZooKeeper.

      Step 4 — Restoring the ZooKeeper Data

      In this section you will restore the cluster state data that Kafka creates and manages internally when the user performs operations such as creating a topic, adding/removing additional nodes, and adding and consuming messages. You will restore the data to your existing source installation by deleting the ZooKeeper data directory and restoring the contents of the zookeeper-backup.tar.gz file. If you want to restore data to a different server, see Step 7.

      You need to stop the Kafka and ZooKeeper services as a precaution against the data directories receiving invalid data during the restoration process.

      First, stop the Kafka service by typing exit, to switch to your non-root sudo user, and then running:

      • sudo systemctl stop kafka

      Next, stop the ZooKeeper service:

      • sudo systemctl stop zookeeper

Log back in as your kafka user:

      • sudo -iu kafka

      You can then safely delete the existing cluster data directory with the following command:

      • rm -r /tmp/zookeeper/*

      Now restore the data you backed up in Step 2:

      • tar -C /tmp/zookeeper -xzf /home/kafka/zookeeper-backup.tar.gz --strip-components 2

The -C flag tells tar to change to the directory /tmp/zookeeper before extracting the data. You specify the --strip-components 2 flag so that tar extracts the archive's contents into /tmp/zookeeper/ itself and not into another directory (such as /tmp/zookeeper/tmp/zookeeper/) inside of it.

      You have restored the cluster state data successfully. Now, you can proceed to the Kafka data restoration process in the next section.

      Step 5 — Restoring the Kafka Data

      In this section you will restore the backed up Kafka data to your existing source installation (or the destination server if you have followed the optional Step 7) by deleting the Kafka data directory and restoring the compressed archive file. This will allow you to verify that restoration works successfully.

You can safely delete the existing Kafka data directory with the following command:

      • rm -r /tmp/kafka-logs/*

      Now that you have deleted the data, your Kafka installation resembles a fresh installation with no topics or messages present. To restore your backed up data, extract the files by running:

      • tar -C /tmp/kafka-logs -xzf /home/kafka/kafka-backup.tar.gz --strip-components 2

The -C flag tells tar to change to the directory /tmp/kafka-logs before extracting the data. You specify the --strip-components 2 flag to ensure that the archive's contents are extracted into /tmp/kafka-logs/ itself and not into another directory (such as /tmp/kafka-logs/tmp/kafka-logs/) inside of it.

Now that you have extracted the data successfully, you can start the Kafka and ZooKeeper services again by typing exit to switch to your non-root sudo user and then executing:

      • sudo systemctl start kafka

      Start the ZooKeeper service with:

      • sudo systemctl start zookeeper

Log back in as your kafka user:

      • sudo -iu kafka

      You have restored the Kafka data. You can now verify that the restoration was successful in the next section.

      Step 6 — Verifying the Restoration

      To test the restoration of the Kafka data, you will consume messages from the topic you created in Step 1.

      Wait a few minutes for Kafka to start up and then execute the following command to read messages from the BackupTopic:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning

      If you get a warning like the following, you need to wait for Kafka to start fully:

      Output

      [2018-09-13 15:52:45,234] WARN [Consumer clientId=consumer-1, groupId=console-consumer-87747] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

      Retry the previous command in another few minutes or run sudo systemctl restart kafka as your non-root sudo user. If there are no issues in the restoration, you will see the following output:

      Output

      Test Message 1

If you do not see this message, check whether you missed any commands in the previous sections and execute them.

You have now verified that the Kafka data was restored, which means you have successfully backed up and restored your data in a single Kafka installation. You can continue to Step 7 to see how to migrate the cluster and topic data to an installation on another server.

      Step 7 — Migrating and Restoring the Backup to Another Kafka Server (Optional)

      In this section, you will migrate the backed up data from the source Kafka server to the destination Kafka server. To do so, you will first use the scp command to download the compressed tar.gz files to your local system. You will then use scp again to push the files to the destination server. Once the files are present in the destination server, you can follow the steps used previously to restore the backup and verify that the migration is successful.

You download the backup files locally and then upload them to the destination server, instead of copying them directly from the source to the destination server, because the destination server does not have your source server's SSH key in its /home/sammy/.ssh/authorized_keys file and so cannot connect to the source server directly. Your local machine, however, can connect to both servers, saving you the additional step of setting up SSH access from the source to the destination server.

      Download the zookeeper-backup.tar.gz and kafka-backup.tar.gz files to your local machine by executing:

      • scp sammy@source_server_ip:/home/kafka/zookeeper-backup.tar.gz .

      You will see output similar to:

      Output

      zookeeper-backup.tar.gz 100% 68KB 128.0KB/s 00:00

      Now run the following command to download the kafka-backup.tar.gz file to your local machine:

      • scp sammy@source_server_ip:/home/kafka/kafka-backup.tar.gz .

      You will see the following output:

      Output

      kafka-backup.tar.gz 100% 1031KB 488.3KB/s 00:02

Run ls in the current directory of your local machine; you will see both files:

      Output

kafka-backup.tar.gz zookeeper-backup.tar.gz

Run the following command to transfer the zookeeper-backup.tar.gz file to /home/sammy/ on the destination server:

      • scp zookeeper-backup.tar.gz sammy@destination_server_ip:/home/sammy/zookeeper-backup.tar.gz

Now run the following command to transfer the kafka-backup.tar.gz file to /home/sammy/ on the destination server:

      • scp kafka-backup.tar.gz sammy@destination_server_ip:/home/sammy/kafka-backup.tar.gz

      You have uploaded the backup files to the destination server successfully. Since the files are in the /home/sammy/ directory and do not have the correct permissions for access by the kafka user, you can move the files to the /home/kafka/ directory and change their permissions.

      SSH into the destination server by executing:

      • ssh sammy@destination_server_ip

      Now move zookeeper-backup.tar.gz to /home/kafka/ by executing:

• sudo mv zookeeper-backup.tar.gz /home/kafka/zookeeper-backup.tar.gz

Similarly, run the following command to move kafka-backup.tar.gz to /home/kafka/:

      • sudo mv kafka-backup.tar.gz /home/kafka/kafka-backup.tar.gz

      Change the owner of the backup files by running the following command:

      • sudo chown kafka /home/kafka/zookeeper-backup.tar.gz /home/kafka/kafka-backup.tar.gz

      The previous mv and chown commands will not display any output.

      Now that the backup files are present in the destination server at the correct directory, follow the commands listed in Steps 4 to 6 of this tutorial to restore and verify the data for your destination server.

      Conclusion

In this tutorial, you backed up, imported, and migrated your Kafka topics and messages, both within a single installation and across installations on separate servers. If you would like to learn more about other useful administrative tasks in Kafka, you can consult the operations section of Kafka's official documentation.

      To store backed up files such as zookeeper-backup.tar.gz and kafka-backup.tar.gz remotely, you can explore DigitalOcean Spaces. If Kafka is the only service running on your server, you can also explore other backup methods such as full instance backups.




      How To Install Apache Kafka on Debian 9


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Apache Kafka is a popular distributed message broker designed to efficiently handle large volumes of real-time data. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a publish/subscribe messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages.

      A publish/subscribe messaging system allows one or more producers to publish messages without considering the number of consumers or how they will process the messages. Subscribed clients are notified automatically about updates and the creation of new messages. This system is more efficient and scalable than systems where clients poll periodically to determine if new messages are available.

      In this tutorial, you will install and use Apache Kafka 1.1.1 on Debian 9.

      Prerequisites

      To follow along, you will need:

      • One Debian 9 server and a non-root user with sudo privileges. Follow the steps specified in this guide if you do not have a non-root user set up.
      • At least 4GB of RAM on the server. Installations without this amount of RAM may cause the Kafka service to fail, with the Java virtual machine (JVM) throwing an “Out Of Memory” exception during startup.
      • OpenJDK 8 installed on your server. To install this version, follow these instructions on installing specific versions of OpenJDK. Kafka is written in Java, so it requires a JVM; however, its startup shell script has a version detection bug that causes it to fail to start with JVM versions above 8.

      Step 1 — Creating a User for Kafka

      Since Kafka can handle requests over a network, you should create a dedicated user for it. This minimizes damage to your Debian machine should the Kafka server be compromised. We will create a dedicated kafka user in this step, but you should create a different non-root user to perform other tasks on this server once you have finished setting up Kafka.

Logged in as your non-root sudo user, create a user called kafka with the useradd command:

      • sudo useradd kafka -m

      The -m flag ensures that a home directory will be created for the user. This home directory, /home/kafka, will act as our workspace directory for executing commands in the sections below.

Set the password using passwd:

      • sudo passwd kafka

Add the kafka user to the sudo group with the adduser command, so that it has the privileges required to install Kafka's dependencies:

      • sudo adduser kafka sudo

Your kafka user is now ready. Log into this account using su:

      • su -l kafka

      Now that we've created the Kafka-specific user, we can move on to downloading and extracting the Kafka binaries.

      Step 2 — Downloading and Extracting the Kafka Binaries

      Let's download and extract the Kafka binaries into dedicated folders in our kafka user's home directory.

To start, create a directory in /home/kafka called Downloads to store your downloads:

      • mkdir ~/Downloads

      Install curl using apt-get so that you'll be able to download remote files:

      • sudo apt-get update && sudo apt-get install -y curl

      Once curl is installed, use it to download the Kafka binaries:

      • curl "http://www-eu.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz" -o ~/Downloads/kafka.tgz

      Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation:

      • mkdir ~/kafka && cd ~/kafka

      Extract the archive you downloaded using the tar command:

      • tar -xvzf ~/Downloads/kafka.tgz --strip 1

      We specify the --strip 1 flag to ensure that the archive's contents are extracted in ~/kafka/ itself and not in another directory (such as ~/kafka/kafka_2.12-1.1.1/) inside of it.

      Now that we've downloaded and extracted the binaries successfully, we can move on to configuring Kafka to allow for topic deletion.

      Step 3 — Configuring the Kafka Server

Kafka's default behavior will not allow us to delete a topic (the category, group, or feed name to which messages can be published). To modify this, let's edit the configuration file.

      Kafka's configuration options are specified in server.properties. Open this file with nano or your favorite editor:

      • nano ~/kafka/config/server.properties

      Let's add a setting that will allow us to delete Kafka topics. Add the following to the bottom of the file:

      ~/kafka/config/server.properties

      delete.topic.enable = true
      

      Save the file, and exit nano. Now that we've configured Kafka, we can move on to creating systemd unit files for running and enabling it on startup.
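
      With this setting in place, and once the server is running (see Step 4), deleting a topic becomes possible with the kafka-topics.sh script. For example, deleting the TutorialTopic topic created later in Step 5 would look like this; it is shown for illustration only, so do not run it while you still need the topic:

      • ~/kafka/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic TutorialTopic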

      Step 4 — Creating Systemd Unit Files and Starting the Kafka Server

      In this section, we will create systemd unit files for the Kafka service. This will help us perform common service actions such as starting, stopping, and restarting Kafka in a manner consistent with other Linux services.

      ZooKeeper is a service that Kafka uses to manage its cluster state and configurations. It is commonly used in many distributed systems as an integral component. If you would like to know more about it, visit the official ZooKeeper docs.

      Create the unit file for zookeeper:

      • sudo nano /etc/systemd/system/zookeeper.service

      Enter the following unit definition into the file:

      /etc/systemd/system/zookeeper.service

      [Unit]
      Requires=network.target remote-fs.target
      After=network.target remote-fs.target
      
      [Service]
      Type=simple
      User=kafka
      ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
      ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
      Restart=on-abnormal
      
      [Install]
      WantedBy=multi-user.target
      

      The [Unit] section specifies that ZooKeeper requires networking and the filesystem to be ready before it can start.

      The [Service] section specifies that systemd should use the zookeeper-server-start.sh and zookeeper-server-stop.sh shell files for starting and stopping the service. It also specifies that ZooKeeper should be restarted automatically if it exits abnormally.

      Next, create the systemd service file for kafka:

      • sudo nano /etc/systemd/system/kafka.service

      Enter the following unit definition into the file:

      /etc/systemd/system/kafka.service

      [Unit]
      Requires=zookeeper.service
      After=zookeeper.service
      
      [Service]
      Type=simple
      User=kafka
      ExecStart=/bin/sh -c '/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1'
      ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
      Restart=on-abnormal
      
      [Install]
      WantedBy=multi-user.target
      

      The [Unit] section specifies that this unit file depends on zookeeper.service. This will ensure that zookeeper gets started automatically when the kafka service starts.

      The [Service] section specifies that systemd should use the kafka-server-start.sh and kafka-server-stop.sh shell files for starting and stopping the service. It also specifies that Kafka should be restarted automatically if it exits abnormally.

      Now that the units have been defined, start Kafka with the following command:

      • sudo systemctl start kafka

To ensure that the server has started successfully, check the journal logs for the kafka unit:

      • sudo journalctl -u kafka

      You should see output similar to the following:

      Output

      Mar 23 13:31:48 kafka systemd[1]: Started kafka.service.

      You now have a Kafka server listening on port 9092.

We have started the kafka service, but if we were to reboot our server, it would not start automatically. To enable kafka on server boot, run:

      • sudo systemctl enable kafka

      Now that we've started and enabled the services, let's check the installation.

      Step 5 — Testing the Installation

      Let's publish and consume a "Hello World" message to make sure the Kafka server is behaving correctly. Publishing messages in Kafka requires:

      • A producer, which enables the publication of records and data to topics.
      • A consumer, which reads messages and data from topics.

      First, create a topic named TutorialTopic by typing:

      • ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic

      You can create a producer from the command line using the kafka-console-producer.sh script. It expects the Kafka server's hostname, port, and a topic name as arguments.

      Publish the string "Hello, World" to the TutorialTopic topic by typing:

      • echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null

Next, you can create a Kafka consumer using the kafka-console-consumer.sh script. It expects the Kafka server's hostname and port, along with a topic name, as arguments.

      The following command consumes messages from TutorialTopic. Note the use of the --from-beginning flag, which allows the consumption of messages that were published before the consumer was started:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TutorialTopic --from-beginning

      If there are no configuration issues, you should see Hello, World in your terminal:

      Output

      Hello, World

      The script will continue to run, waiting for more messages to be published to the topic. Feel free to open a new terminal and start a producer to publish a few more messages. You should be able to see them all in the consumer's output.

      When you are done testing, press CTRL+C to stop the consumer script. Now that we have tested the installation, let's move on to installing KafkaT.

      Step 6 — Install KafkaT (Optional)

      KafkaT is a tool from Airbnb that makes it easier for you to view details about your Kafka cluster and perform certain administrative tasks from the command line. Because it is a Ruby gem, you will need Ruby to use it. You will also need the build-essential package to be able to build the other gems it depends on. Install them using apt:

      • sudo apt install ruby ruby-dev build-essential

You can now install KafkaT using the gem command:

      • sudo gem install kafkat

      KafkaT uses .kafkatcfg as the configuration file to determine the installation and log directories of your Kafka server. It should also have an entry pointing KafkaT to your ZooKeeper instance.

      Create a new file called .kafkatcfg:

      • nano ~/.kafkatcfg

      Add the following lines to specify the required information about your Kafka server and Zookeeper instance:

      ~/.kafkatcfg

      {
        "kafka_path": "~/kafka",
        "log_path": "/tmp/kafka-logs",
        "zk_path": "localhost:2181"
      }
      

You are now ready to use KafkaT. For a start, here's how you would use it to view details about all Kafka partitions:

      • kafkat partitions

      You will see the following output:

      Output

Topic                 Partition   Leader   Replicas   ISRs
      TutorialTopic         0           0        [0]        [0]
      __consumer_offsets    0           0        [0]        [0]
      ...
      ...

      You will see TutorialTopic, as well as __consumer_offsets, an internal topic used by Kafka for storing client-related information. You can safely ignore lines starting with __consumer_offsets.

      To learn more about KafkaT, refer to its GitHub repository.

      Step 7 — Setting Up a Multi-Node Cluster (Optional)

      If you want to create a multi-broker cluster using more Debian 9 machines, you should repeat Step 1, Step 4, and Step 5 on each of the new machines. Additionally, you should make the following changes in the server.properties file for each:

• The value of the broker.id property should be changed such that it is unique throughout the cluster. This property uniquely identifies each server in the cluster; its value is an integer, for example 1 for the first broker, 2 for the second, and so on.

      • The value of the zookeeper.connect property should be changed such that all nodes point to the same ZooKeeper instance. This property specifies the ZooKeeper instance's address and follows the <HOSTNAME/IP_ADDRESS>:<PORT> format. For example, "203.0.113.0:2181", "203.0.113.1:2181" etc.

      If you want to have multiple ZooKeeper instances for your cluster, the value of the zookeeper.connect property on each node should be an identical, comma-separated string listing the IP addresses and port numbers of all the ZooKeeper instances.
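
      As a sketch, assuming a three-node ZooKeeper ensemble at the example addresses used above, the relevant lines in one broker's server.properties might look like the following (the broker.id value must differ on each broker):

      ~/kafka/config/server.properties

      # Unique per broker in the cluster:
      broker.id=1
      # Identical on every broker; addresses are illustrative:
      zookeeper.connect=203.0.113.0:2181,203.0.113.1:2181,203.0.113.2:2181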

      Step 8 — Restricting the Kafka User

      Now that all of the installations are done, you can remove the kafka user's admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started this tutorial with, simply type exit.

Remove the kafka user from the sudo group:

      • sudo deluser kafka sudo

To further improve your Kafka server's security, lock the kafka user's password using the passwd command. This makes sure that nobody can directly log into the server using this account:

      • sudo passwd kafka -l

At this point, only root or a sudo user can log in as kafka by typing in the following command:

      • sudo su kafka

In the future, if you want to unlock it, use passwd with the -u option:

      • sudo passwd kafka -u

      You have now successfully restricted the kafka user's admin privileges.

      Conclusion

      You now have Apache Kafka running securely on your Debian server. You can make use of it in your projects by creating Kafka producers and consumers using Kafka clients, which are available for most programming languages. To learn more about Kafka, you can also consult its documentation.


