
      How To Display Data from the DigitalOcean API with React


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

Over the last few years, open-source web frameworks have greatly simplified the process of coding an application. React, for example, has only added to the popularity of JavaScript by making the language more accessible to new developers and increasing the productivity of seasoned developers. Created by Facebook, React allows developers to quickly create high-end user interfaces for highly scalable web applications by supporting features such as declarative views, state management, and client-side rendering, each of which can greatly reduce the complexity of building an app in JavaScript.

You can leverage frameworks like React to load and display data from the DigitalOcean API, through which you can manage your Droplets and other products within the DigitalOcean cloud using HTTP requests. Although you can fetch data from an API with many other JavaScript frameworks, React provides useful benefits like lifecycles and local state management that make it particularly well-suited for the job. With React, the data retrieved from the API is added to the local state when the application starts and can go through various lifecycles as components mount and unmount. At any point, you can retrieve the data from your local state and display it accordingly.

      In this tutorial, you will create a simple React application that interacts with the DigitalOcean API v2 to make calls and retrieve information about your Droplets. Your app will display a list containing your current Droplets and their details, like name, region, and technical specifications, and you will use the front-end framework Bootstrap to style your application.

      Once you have finished this tutorial, you will have a basic interface displaying a list of your DigitalOcean Droplets, styled to look like the following:

      The final version of your React Application

      Prerequisites

      Before you begin this guide, you’ll need a DigitalOcean account and at least one Droplet set up, in addition to the following:

      Step 1 — Creating a Basic React Application

      In this first step, you’ll create a basic React application using the Create React App package from npm. This package automatically installs and configures the essential dependencies needed to run React, like the module builder Webpack and the JavaScript compiler Babel. After installing, you’ll run the Create React App package using the package runner npx, which comes pre-installed with Node.js.

      To install Create React App and create the first version of your application, run the following command, replacing my-app with the name you want to give to your application:

      • npx create-react-app my-app

After the installation is complete, move into the new project directory and start the application using these commands:

• cd my-app
• npm start

The npm start command starts a local development server provided by Create React App, which occupies the command line prompt in your terminal. To proceed with the tutorial, open up a new terminal window and navigate back to the project directory before proceeding to the next step.

      You now have the first version of your React application running in development mode, which you can view by opening http://localhost:3000 in a web browser. At this point, your app will only display the welcome screen from Create React App:

      The first version of your React application

      Now that you have installed and created the first version of your React application, you can add a table component to your app that will eventually hold the data from the DigitalOcean API.

      Step 2 — Creating a Component to Show the Droplet Data

      In this step, you will create the first component that displays information about your Droplets. This component will be a table that lists all of your Droplets and their corresponding details.

      The DigitalOcean API documentation states that you can retrieve a list containing all of your Droplets by sending a request to the following endpoint using cURL: https://api.digitalocean.com/v2/droplets. Using the output from this request, you can create a table component containing id, name, region, memory, vcpus, and disk for each Droplet. Later on in this tutorial, you'll insert the data retrieved from the API into the table component.
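
For a quick look at this response outside of React, you can send the request with cURL; this sketch assumes your Personal Access Token is exported in the shell variable DO_TOKEN, a placeholder name for this example:

• curl -H "Content-Type: application/json" -H "Authorization: Bearer $DO_TOKEN" "https://api.digitalocean.com/v2/droplets"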

      To define a clear structure for your application, create a new directory called components inside the src directory where you'll store all the code you write. Create a new file called Table.js inside the src/components directory and open it with nano or a text editor of your choice:

      • mkdir src/components
      • nano src/components/Table.js

      Define the table component by adding the following code to the file:

      src/components/Table.js

      import React from 'react';
      
      const Table = () => {
        return (
          <table>
            <thead>
              <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Region</th>
                <th>Memory</th>
                <th>CPUs</th>
                <th>Disk Size</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
                <td></td>
              </tr>
            </tbody>
          </table>
        );
      }
      
      export default Table
      

      The code block above imports the React framework and defines a new component called Table, which consists of a table with a heading and a body.

      When you have added these lines of code, save and exit the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

Now that you have created the table component, it is time to include this component in your application. You'll do this by importing the component into the entry point of the application, which is in the file src/App.js. Open this file with the following command:

• nano src/App.js

Next, remove the boilerplate code in src/App.js that displays the Create React App welcome message: the logo and CSS imports and the entire <header> element shown in the following code block.

      src/App.js

      import React, { Component } from 'react';
      import logo from './logo.svg';
      import './App.css';
      
      class App extends Component {
        render() {
          return (
            <div className="App">
              <header className="App-header">
                <img src={logo} className="App-logo" alt="logo" />
                <p>
                  Edit <code>src/App.js</code> and save to reload.
                </p>
                <a
                  className="App-link"
                  href="https://reactjs.org"
                  target="_blank"
                  rel="noopener noreferrer"
                >
                  Learn React
                </a>
              </header>
            </div>
          );
        }
      }
      
      export default App;
      

After removing the lines that displayed the welcome message, include the table component inside this same file by importing it and rendering it as in the following code block:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      
      class App extends Component {
        render() {
          return (
            <div className="App">
              <Table />
            </div>
          );
        }
      }
      
      export default App;
      

If you access http://localhost:3000 in your web browser again, your application will now display a basic table with table headers:

      The React application with a basic table

      In this step, you have created a table component and included this component into the entry point of your application. Next, you will set up a connection to the DigitalOcean API, which you'll use to retrieve the data that this table will display.

      Step 3 — Securing Your API Credentials

      Setting up a connection to the DigitalOcean API consists of several actions, starting with safely storing your Personal Access Token as an environment variable. This can be done by using dotenv, a package that allows you to store sensitive information in a .env file that your application can later access from the environment.

Use npm to install the dotenv package:

• npm install dotenv

After installing dotenv, create an environment file called .env in the root directory of your application by executing this command:

• nano .env

Add the following into .env, which contains your Personal Access Token and the URL for the DigitalOcean API:

      .env

      DO_API_URL=https://api.digitalocean.com/v2
      DO_ACCESS_TOKEN=YOUR_API_KEY
      

      To ensure this sensitive data doesn't get committed to a repository, add it to your .gitignore file with the following command:

      • echo ".env" >> .gitignore

      You have now created a safe and simple configuration file for your environment variables, which will provide your application with the information it needs to send requests to the DigitalOcean API. To ensure your API credentials aren't visible on the client side, you will next set up a proxy server to forward requests and responses between your application server and the DigitalOcean API.

      Install the middleware http-proxy-middleware by executing the following command:

      • npm install http-proxy-middleware

After installing this, the next step is to set up your proxy. Create the setupProxy.js file in the src directory:

• nano src/setupProxy.js

      Inside this file, add the following code to set up the proxy server:

      src/setupProxy.js

      const proxy = require('http-proxy-middleware')
      
      module.exports = function(app) {
      
        require('dotenv').config()
      
        const apiUrl = process.env.DO_API_URL
        const apiToken = process.env.DO_ACCESS_TOKEN
        const headers  = {
          "Content-Type": "application/json",
          "Authorization": "Bearer " + apiToken
        }
      
        // define http-proxy-middleware
  let DOProxy = proxy({
    target: apiUrl,
    changeOrigin: true,
    pathRewrite: {
      '^/api/' : '/'
    },
    headers: headers,
  })
      
        // define the route and map the proxy
        app.use('/api', DOProxy)
      
      };
      

In the preceding code block, const apiUrl = sets the URL of the DigitalOcean API as the proxy's target, and const apiToken = loads your Personal Access Token into the proxy server. The pathRewrite option strips the /api prefix from incoming request paths, and app.use('/api', DOProxy) mounts the proxy at /api rather than /, so that it does not interfere with the application server but still matches the DigitalOcean API. For example, a request to http://localhost:3000/api/droplets is forwarded to https://api.digitalocean.com/v2/droplets with the Authorization header attached.
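
If your development server was already running, restart it so that Create React App picks up the new setupProxy.js file. You can then sanity-check the proxy from a second terminal; assuming the development server is listening on port 3000, the following command should print the same JSON that the DigitalOcean API returns:

• curl http://localhost:3000/api/droplets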

      You've now successfully created a proxy server that will send all API requests made from your React application to the DigitalOcean API. This proxy server will make sure your Personal Access Token, which is safely stored as an environment variable, isn't exposed on the client side. Next, you will create the actual requests to retrieve your Droplet data for your application.

      Step 4 — Making API Calls to DigitalOcean

Now that your display component is ready and the connection details to DigitalOcean are stored and secured through a proxy server, you can start retrieving data from the DigitalOcean API. First, add the following constructor to src/App.js, inside the class App declaration and before the render() method:

      src/App.js

      import React, { Component } from 'react';
      ...
      class App extends Component {
        constructor(props) {
          super(props);
          this.state = {
            droplets: []
          }
        }
      
  render() {
      ...
      

These lines of code add a constructor method to your class component, which in React initializes the local state by assigning an object to this.state. In this case, that object holds your Droplets. As you can see in the code block above, the array containing your Droplets starts out empty, making it possible to fill it with the results from the API call.

In order to display your current Droplets, you'll need to fetch this information from the DigitalOcean API. Using the JavaScript Fetch API, you'll send a request to the DigitalOcean API and update the state for droplets with the data you retrieve. You can do this using the componentDidMount method by adding the following lines of code after the constructor:

      src/App.js

      class App extends Component {
        constructor(props) {
          super(props);
          this.state = {
            droplets: []
          }
        }
      
        componentDidMount() {
          fetch('http://localhost:3000/api/droplets')
          .then(res => res.json())
          .then(json => json.droplets)
          .then(droplets => this.setState({ 'droplets': droplets }))
        }
      ...
      

With your Droplet data stored in the state, it's time to retrieve it within the render function of your application and to send this data as a prop to the table component. Add the droplets prop to the table component in App.js as shown in the following code block:

      src/App.js

      ...
      class App extends Component {
        render() {
          return (
            <div className="App">
              <Table droplets={ this.state.droplets } />
            </div>
          );
        }
      }
      ...
      

      You have now created the functionality to retrieve data from the API, but you still need to make this data accessible via a web browser. In the next step, you will accomplish this by displaying your Droplet data in your table component.

      Step 5 — Displaying Droplet Data in Your Table Component

Now that you have transferred the Droplet data to the table component, you can iterate this data over rows in the table. But since the application makes the request to the API after App.js is mounted, the property value for droplets will be empty at first. Therefore, you also need to add code to make sure droplets isn't empty before you try to display the data. To do this, add the following lines to the tbody section of Table.js:

      src/components/Table.js

      const Table = ({ droplets }) => {
        return (
          <table>
            <thead>
              <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Region</th>
                <th>Memory</th>
                <th>CPUs</th>
                <th>Disk Size</th>
              </tr>
            </thead>
            <tbody>
              { (droplets.length > 0) ? droplets.map( (droplet, index) => {
                 return (
                  <tr key={ index }>
                    <td>{ droplet.id }</td>
                    <td>{ droplet.name }</td>
                    <td>{ droplet.region.slug}</td>
                    <td>{ droplet.memory }</td>
                    <td>{ droplet.vcpus }</td>
                    <td>{ droplet.disk }</td>
                  </tr>
                )
         }) : <tr><td colSpan="6">Loading...</td></tr> }
            </tbody>
          </table>
        );
      }
      

      With the addition of the preceding code, your application will display a Loading... placeholder message when no Droplet data is present. When the DigitalOcean API does return Droplet data, your application will iterate it over table rows containing columns for each data type and will display the result to your web browser:

      The React Application with Droplet data

Note: If your web browser displays an error at http://localhost:3000, press CTRL+C in the terminal that is running your development server to stop your application. Run the following command to restart your application:

• npm start

      In this step, you have modified the table component of your application to display your Droplet data in a web browser and added a placeholder message for when there are no Droplets found. Next, you will use a front-end web framework to style your data to make it more visually appealing and easier to read.

      Step 6 — Styling Your Table Component Using Bootstrap

      Your table is now populated with data, but the information is not displayed in the most appealing manner. To fix this, you can style your application by adding Bootstrap to your project. Bootstrap is an open-source styling and component library that lets you add responsive styling to a project with CSS templates.

Install Bootstrap with npm using the following command:

• npm install bootstrap

After Bootstrap has finished installing, import its CSS file into your project by adding the following line to src/App.js:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      import 'bootstrap/dist/css/bootstrap.min.css';
      
      class App extends Component {
      ...
      

Now that you have imported the CSS, apply the Bootstrap styling to your table component by adding the class table to the <table> tag in src/components/Table.js:

      src/components/Table.js

      import React from 'react';
      
      const Table = ({ droplets }) => {
        return (
          <table className="table">
            <thead>
      ...
      

Next, finish styling your application by placing a header above your table with a title and the DigitalOcean logo. Click on Download Logos in the Brand Assets section of DigitalOcean's Press page to download a set of logos, pick your favorite from the SVG directory (this tutorial uses DO_Logo_icon_blue.svg), and add it to your project by copying the logo file into a new directory called assets within the src directory of your project. After adding the logo, import it into the header by adding the following lines to src/App.js:

      src/App.js

      import React, { Component } from 'react';
      import Table from './components/Table.js';
      import 'bootstrap/dist/css/bootstrap.min.css';
      import logo from './assets/DO_Logo_icon_blue.svg';
      
      class App extends Component {
      ...
        render() {
          return (
            <div className="App">
      <nav className="navbar navbar-light bg-light">
        <a className="navbar-brand" href="./">
                  <img src={logo} alt="logo" width="40" /> My Droplets
                </a>
              </nav>
              <Table droplets={ this.state.droplets } />
            </div>
          );
        }
      }
      
      export default App;
      

In the preceding code block, the classes within the nav tag apply Bootstrap's navbar styling to your header.

      Now that you have imported Bootstrap and applied its styling to your application, your data will show up in your web browser with an organized and legible display:

      The final version of your React Application

      Conclusion

      In this article, you've created a basic React application that fetches data from the DigitalOcean API through a secured proxy server and displays it with Bootstrap styling. Now that you are familiar with the React framework, you can apply the concepts you learned here to more complicated applications, such as the one found in How To Build a Modern Web Application to Manage Customer Information with Django and React on Ubuntu 18.04. If you want to find out what other actions are possible with the DigitalOcean API, have a look at the API documentation on DigitalOcean's website.




      How to Back Up, Import, and Migrate Your Apache Kafka Data on Ubuntu 18.04


      The author selected Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Backing up your Apache Kafka data is an important practice that will help you recover from unintended data loss or bad data added to the cluster due to user error. Data dumps of cluster and topic data are an efficient way to perform backups and restorations.

      Importing and migrating your backed up data to a separate server is helpful in situations where your Kafka instance becomes unusable due to server hardware or networking failures and you need to create a new Kafka instance with your old data. Importing and migrating backed up data is also useful when you are moving the Kafka instance to an upgraded or downgraded server due to a change in resource usage.

      In this tutorial, you will back up, import, and migrate your Kafka data on a single Ubuntu 18.04 installation as well as on multiple Ubuntu 18.04 installations on separate servers. ZooKeeper is a critical component of Kafka’s operation. It stores information about cluster state such as consumer data, partition data, and the state of other brokers in the cluster. As such, you will also back up ZooKeeper’s data in this tutorial.

      Prerequisites

      To follow along, you will need:

• An Ubuntu 18.04 server with at least 4GB of RAM and a non-root sudo user, set up by following the Initial Server Setup with Ubuntu 18.04 tutorial.
      • An Ubuntu 18.04 server with Apache Kafka installed, to act as the source of the backup. Follow the How To Install Apache Kafka on Ubuntu 18.04 guide to set up your Kafka installation, if Kafka isn’t already installed on the source server.
      • OpenJDK 8 installed on the server. To install this version, follow these instructions on installing specific versions of OpenJDK.

      • Optional for Step 7 — Another Ubuntu 18.04 server with Apache Kafka installed, to act as the destination of the backup. Follow the article link in the previous prerequisite to install Kafka on the destination server. This prerequisite is required only if you are moving your Kafka data from one server to another. If you want to back up and import your Kafka data to a single server, you can skip this prerequisite.

      Step 1 — Creating a Test Topic and Adding Messages

A Kafka message is the most basic unit of data storage in Kafka, and it is what you publish to and consume from Kafka. A Kafka topic is like a container for a group of related messages. When you subscribe to a particular topic, you will receive only messages that were published to that particular topic. In this section you will log in to the server that you would like to back up (the source server) and add a Kafka topic and a message so that you have some data populated for the backup.

This tutorial assumes you have installed Kafka in the home directory of the kafka user (/home/kafka/kafka). If your installation is in a different directory, modify the ~/kafka part of the following commands, and of the commands throughout the rest of this tutorial, to match your Kafka installation's path.

      SSH into the source server by executing:

      • ssh sammy@source_server_ip

Run the following command to log in as the kafka user:

• sudo -iu kafka

      Create a topic named BackupTopic using the kafka-topics.sh shell utility file in your Kafka installation's bin directory, by typing:

      • ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic BackupTopic

Publish the string "Test Message 1" to the BackupTopic topic by using the ~/kafka/bin/kafka-console-producer.sh shell utility script:

• echo "Test Message 1" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null

If you would like to add additional messages, you can do so now.

      The ~/kafka/bin/kafka-console-producer.sh file allows you to publish messages directly from the command line. Typically, you would publish messages using a Kafka client library from within your program, but since that involves different setups for different programming languages, you can use the shell script as a language-independent way of publishing messages during testing or while performing administrative tasks. The --topic flag specifies the topic that you will publish the message to.
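
For instance, piping several lines to the script publishes one message per line. Here is a quick sketch using printf (the message text is arbitrary and only for illustration):

• printf "Test Message 2\nTest Message 3\n" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null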

      Next, verify that the kafka-console-producer.sh script has published the message(s) by running the following command:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning

The ~/kafka/bin/kafka-console-consumer.sh shell script starts the consumer. Once started, it will subscribe to the topic that you published the "Test Message 1" message to in the previous command. The --from-beginning flag in the command allows consuming messages that were published before the consumer was started. Without the flag, only messages published after the consumer was started will appear. On running the command, you will see the following output in the terminal:

      Output

      Test Message 1

      Press CTRL+C to stop the consumer.

      You've created some test data and verified that it's persisted. Now you can back up the state data in the next section.

      Step 2 — Backing Up the ZooKeeper State Data

      Before backing up the actual Kafka data, you need to back up the cluster state stored in ZooKeeper.

      ZooKeeper stores its data in the directory specified by the dataDir field in the ~/kafka/config/zookeeper.properties configuration file. You need to read the value of this field to determine the directory to back up. By default, dataDir points to the /tmp/zookeeper directory. If the value is different in your installation, replace /tmp/zookeeper with that value in the following commands.
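
One quick way to check the value on your installation is with grep:

• grep dataDir ~/kafka/config/zookeeper.properties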

      Here is an example output of the ~/kafka/config/zookeeper.properties file:

      ~/kafka/config/zookeeper.properties

      ...
      ...
      ...
      # the directory where the snapshot is stored.
      dataDir=/tmp/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      # disable the per-ip limit on the number of connections since this is a non-production config
      maxClientCnxns=0
      ...
      ...
      ...
      

      Now that you have the path to the directory, you can create a compressed archive file of its contents. Compressed archive files are a better option over regular archive files to save disk space. Run the following command:

      • tar -czf /home/kafka/zookeeper-backup.tar.gz /tmp/zookeeper/*

You can safely ignore the command's output, tar: Removing leading / from member names.

      The -c and -z flags tell tar to create an archive and apply gzip compression to the archive. The -f flag specifies the name of the output compressed archive file, which is zookeeper-backup.tar.gz in this case.

      You can run ls in your current directory to see zookeeper-backup.tar.gz as part of your output.
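
Optionally, you can also list the files inside the archive to confirm that the snapshot data was captured, for example:

• tar -tzf /home/kafka/zookeeper-backup.tar.gz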

      You have now successfully backed up the ZooKeeper data. In the next section, you will back up the actual Kafka data.

      Step 3 — Backing Up the Kafka Topics and Messages

      In this section, you will back up Kafka's data directory into a compressed tar file like you did for ZooKeeper in the previous step.

      Kafka stores topics, messages, and internal files in the directory that the log.dirs field specifies in the ~/kafka/config/server.properties configuration file. You need to read the value of this field to determine the directory to back up. By default and in your current installation, log.dirs points to the /tmp/kafka-logs directory. If the value is different in your installation, replace /tmp/kafka-logs in the following commands with the correct value.
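
As with ZooKeeper's dataDir, you can check the value quickly with grep:

• grep log.dirs ~/kafka/config/server.properties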

      Here is an example output of the ~/kafka/config/server.properties file:

      ~/kafka/config/server.properties

      ...
      ...
      ...
      ############################# Log Basics #############################
      
      # A comma separated list of directories under which to store log files
      log.dirs=/tmp/kafka-logs
      
      # The default number of log partitions per topic. More partitions allow greater
      # parallelism for consumption, but this will also result in more files across
      # the brokers.
      num.partitions=1
      
      # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
      # This value is recommended to be increased for installations with data dirs located in RAID array.
      num.recovery.threads.per.data.dir=1
      ...
      ...
      ...
      

      First, stop the Kafka service so that the data in the log.dirs directory is in a consistent state when creating the archive with tar. To do this, return to your server's non-root user by typing exit and then run the following command:

      • sudo systemctl stop kafka

After stopping the Kafka service, log back in as your kafka user with:

• sudo -iu kafka

It is necessary to stop/start the Kafka and ZooKeeper services as your non-root sudo user because the Apache Kafka installation prerequisite had you restrict the kafka user as a security precaution. That step removes sudo access for the kafka user, so sudo commands run as the kafka user will fail to execute.

      Now, create a compressed archive file of the directory's contents by running the following command:

      • tar -czf /home/kafka/kafka-backup.tar.gz /tmp/kafka-logs/*

      Once again, you can safely ignore the command's output (tar: Removing leading / from member names).

      You can run ls in the current directory to see kafka-backup.tar.gz as part of the output.

If you do not want to restore the data immediately, you can start the Kafka service again by typing exit, to switch to your non-root sudo user, and then running:

      • sudo systemctl start kafka

Log back in as your kafka user:

• sudo -iu kafka

      You have successfully backed up the Kafka data. You can now proceed to the next section, where you will be restoring the cluster state data stored in ZooKeeper.

      Step 4 — Restoring the ZooKeeper Data

      In this section you will restore the cluster state data that Kafka creates and manages internally when the user performs operations such as creating a topic, adding/removing additional nodes, and adding and consuming messages. You will restore the data to your existing source installation by deleting the ZooKeeper data directory and restoring the contents of the zookeeper-backup.tar.gz file. If you want to restore data to a different server, see Step 7.

      You need to stop the Kafka and ZooKeeper services as a precaution against the data directories receiving invalid data during the restoration process.

      First, stop the Kafka service by typing exit, to switch to your non-root sudo user, and then running:

      • sudo systemctl stop kafka

      Next, stop the ZooKeeper service:

      • sudo systemctl stop zookeeper

Log back in as your kafka user:

• sudo -iu kafka

You can then safely delete the contents of the existing cluster data directory with the following command:

• rm -r /tmp/zookeeper/*

      Now restore the data you backed up in Step 2:

      • tar -C /tmp/zookeeper -xzf /home/kafka/zookeeper-backup.tar.gz --strip-components 2

The -C flag tells tar to change to the directory /tmp/zookeeper before extracting the data. You specify the --strip-components 2 flag to make tar extract the archive's contents in /tmp/zookeeper/ itself and not in another directory (such as /tmp/zookeeper/tmp/zookeeper/) inside of it.
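
To see why two path components are stripped, you can list the archive's contents: because tar removed the leading / when the backup was created, each entry begins with the two components tmp/zookeeper/. For example:

• tar -tzf /home/kafka/zookeeper-backup.tar.gz | head -3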

      You have restored the cluster state data successfully. Now, you can proceed to the Kafka data restoration process in the next section.

      Step 5 — Restoring the Kafka Data

      In this section you will restore the backed up Kafka data to your existing source installation (or the destination server if you have followed the optional Step 7) by deleting the Kafka data directory and restoring the compressed archive file. This will allow you to verify that restoration works successfully.

You can safely delete the contents of the existing Kafka data directory with the following command:

• rm -r /tmp/kafka-logs/*

      Now that you have deleted the data, your Kafka installation resembles a fresh installation with no topics or messages present in it. To restore your backed up data, extract the files by running:

      • tar -C /tmp/kafka-logs -xzf /home/kafka/kafka-backup.tar.gz --strip-components 2

The -C flag tells tar to change to the directory /tmp/kafka-logs before extracting the data. You specify the --strip-components 2 flag to ensure that the archive's contents are extracted in /tmp/kafka-logs/ itself and not in another directory (such as /tmp/kafka-logs/tmp/kafka-logs/) inside of it.

      Now that you have extracted the data successfully, you can start the Kafka and ZooKeeper services again by typing exit, to switch to your non-root sudo user, and then executing:

      • sudo systemctl start kafka

      Start the ZooKeeper service with:

      • sudo systemctl start zookeeper

Log back in as your kafka user:

• sudo -iu kafka

You have restored the Kafka data, so you can move on to verifying that the restoration is successful in the next section.

      Step 6 — Verifying the Restoration

      To test the restoration of the Kafka data, you will consume messages from the topic you created in Step 1.

      Wait a few minutes for Kafka to start up and then execute the following command to read messages from the BackupTopic:

      • ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning

      If you get a warning like the following, you need to wait for Kafka to start fully:

      Output

      [2018-09-13 15:52:45,234] WARN [Consumer clientId=consumer-1, groupId=console-consumer-87747] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

      Retry the previous command in another few minutes or run sudo systemctl restart kafka as your non-root sudo user. If there are no issues in the restoration, you will see the following output:

      Output

      Test Message 1

If you do not see this message, check whether you missed any commands in the previous sections and execute them.

Now that you have verified the restored Kafka data, you have successfully backed up and restored your data in a single Kafka installation. You can continue to Step 7 to see how to migrate the cluster and topic data to an installation on another server.

      Step 7 — Migrating and Restoring the Backup to Another Kafka Server (Optional)

      In this section, you will migrate the backed up data from the source Kafka server to the destination Kafka server. To do so, you will first use the scp command to download the compressed tar.gz files to your local system. You will then use scp again to push the files to the destination server. Once the files are present in the destination server, you can follow the steps used previously to restore the backup and verify that the migration is successful.

You are downloading the backup files locally and then uploading them to the destination server, instead of copying them directly from your source to your destination server, because the destination server will not have your source server's SSH key in its /home/sammy/.ssh/authorized_keys file and cannot connect to and from the source server. However, your local machine can connect to both servers, saving you the additional step of setting up SSH access from the source to the destination server.

Download the backup files to your local machine. First, fetch zookeeper-backup.tar.gz by executing:

      • scp sammy@source_server_ip:/home/kafka/zookeeper-backup.tar.gz .

      You will see output similar to:

      Output

      zookeeper-backup.tar.gz 100% 68KB 128.0KB/s 00:00

      Now run the following command to download the kafka-backup.tar.gz file to your local machine:

      • scp sammy@source_server_ip:/home/kafka/kafka-backup.tar.gz .

      You will see the following output:

      Output

      kafka-backup.tar.gz 100% 1031KB 488.3KB/s 00:02

Run ls in the current directory of your local machine, and you will see both of the files:

      Output

kafka-backup.tar.gz zookeeper-backup.tar.gz

Run the following command to transfer the zookeeper-backup.tar.gz file to /home/sammy/ on the destination server:

      • scp zookeeper-backup.tar.gz sammy@destination_server_ip:/home/sammy/zookeeper-backup.tar.gz

Now run the following command to transfer the kafka-backup.tar.gz file to /home/sammy/ on the destination server:

      • scp kafka-backup.tar.gz sammy@destination_server_ip:/home/sammy/kafka-backup.tar.gz

You have uploaded the backup files to the destination server successfully. Since the files are in the /home/sammy/ directory and the kafka user cannot access them there, you can move the files to the /home/kafka/ directory and change their owner.

      SSH into the destination server by executing:

      • ssh sammy@destination_server_ip

      Now move zookeeper-backup.tar.gz to /home/kafka/ by executing:

• sudo mv zookeeper-backup.tar.gz /home/kafka/zookeeper-backup.tar.gz

Similarly, run the following command to move kafka-backup.tar.gz to /home/kafka/:

      • sudo mv kafka-backup.tar.gz /home/kafka/kafka-backup.tar.gz

      Change the owner of the backup files by running the following command:

      • sudo chown kafka /home/kafka/zookeeper-backup.tar.gz /home/kafka/kafka-backup.tar.gz

      The previous mv and chown commands will not display any output.
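
You can optionally confirm the new owner with ls; both files should now be owned by kafka:

• ls -l /home/kafka/zookeeper-backup.tar.gz /home/kafka/kafka-backup.tar.gz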

      Now that the backup files are present in the destination server at the correct directory, follow the commands listed in Steps 4 to 6 of this tutorial to restore and verify the data for your destination server.

      Conclusion

      In this tutorial, you backed up, imported, and migrated your Kafka topics and messages from both the same installation and installations on separate servers. If you would like to learn more about other useful administrative tasks in Kafka, you can consult the operations section of Kafka's official documentation.

To store backed up files such as zookeeper-backup.tar.gz and kafka-backup.tar.gz remotely, you can explore DigitalOcean Spaces. If Kafka is the only service running on your server, you can also explore other backup methods such as full instance backups.




      How to Choose a Data Center


Updated and written by Linode

      Deploying your Linode to a geographically advantageous data center can make a big difference in connection speeds to your server. Ideally, your site or application should be served from multiple points around the world, with requests sent to the appropriate region based on client geolocation. On a smaller scale, deploying a Linode in the data center nearest to you will make it easier to work with than one in a different region or continent.

There are many things that can affect network congestion, connection speeds, and throughput, so you should never interpret one reading as the sole data point. Always perform tests in multiples of three or five for an average, and on both weekends and weekdays, for the most accurate information.

      This page is a quick guide for choosing and speed testing a data center (DC). Start by creating a Linode in the data center in or near your region, or several Linodes in multiple regions if you’re close to more than one DC. From there, use Linode’s Facilities Speedtest page for test domains to ping and files to download.

      Network Latency

      The Linux ping tool sends IPv4 ICMP echo requests to a specified IP address or hostname. Pinging a server is often used to check whether the server is up and/or responding to ICMP. Because ping commands also return the time it takes a request’s packet to reach the server, ping is commonly used to measure network latency.

      Ping a data center to test your connection’s latency to that DC:

      ping -c 5 speedtest.dallas.linode.com
      

      Use ping6 for IPv6:

      ping6 -c 5 speedtest.dallas.linode.com
      

      Note

Many internet connections still don't support IPv6, so don't be alarmed if ping6 commands don't work to your Linode from your local machine. They will, however, work from your Linode to other IPv6-capable network connections (for example, between two Linodes in different data centers).

      Download Speed

Download speed will be limited most heavily first by your internet service plan speed, and second by local congestion between you and your internet service provider. For example, if your plan is capped at 60 Mbps, you won't be able to download much faster than that from any server on the internet. Download speeds are described with several different units, so here are a few pointers to avoid confusion:

      • Residential internet connection packages are sold in speeds of megabits per second (abbreviated as Mbps, Mb/s, or Mbit/s).

      • One megabit per second (1 Mbps or 1 Mb/s) is 0.125 megabytes per second (0.125 MB/s). Desktop applications (ex: web browsers, FTP managers, Torrent clients) often display download speeds in MB/s.

      • Mebibytes per second is also sometimes used (MiB/s). One Mbps is also equal to 0.1192 MiB/s.
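
As a worked example of these conversions: a 60 Mbps plan tops out around 60 × 0.125 = 7.5 MB/s in a client that reports megabytes, or roughly 60 × 0.1192 ≈ 7.15 MiB/s in one that reports mebibytes.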

To test the download speed from the data center of your choice, use cURL or wget to download the test bin file from that data center. You can find the URLs on our Facilities Speedtest page.

      For example:

      curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
      wget http://speedtest.dallas.linode.com/100MB-dallas.bin
      

      Below you can see that each time cURL is run, a different average download speed is reported and each takes a slightly different amount of time to complete. This is to be expected, and you should analyze multiple data sets to get a real feel for how fast a certain DC will behave for you.

      root@debian:~# curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  100M  100  100M    0     0  11.4M      0  0:00:08  0:00:08 --:--:-- 12.0M
      
      root@debian:~# curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  100M  100  100M    0     0  10.8M      0  0:00:09  0:00:09 --:--:--  9.9M
      
      root@debian:~# curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  100M  100  100M    0     0  9189k      0  0:00:11  0:00:11 --:--:-- 10.0M
      
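To automate the repeated tests suggested above, you can script the downloads and average curl's reported speeds. Here is a minimal sketch using the same Dallas test file and curl's %{speed_download} write-out variable, which reports bytes per second:

# run five timed downloads, then print the average speed
for i in 1 2 3 4 5; do
    curl -s -o /dev/null -w '%{speed_download}\n' http://speedtest.dallas.linode.com/100MB-dallas.bin
done | awk '{ total += $1 } END { printf "average: %.2f MB/s\n", total / NR / 1000000 }'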


      This guide is published under a CC BY-ND 4.0 license.


