
      How To Display Data from the DigitalOcean API with Django


      The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

As demand for full-stack development continues to grow, web frameworks are making development workflows less cumbersome and more efficient; Django is one of those frameworks. Django has been used in major websites such as Mozilla, Pinterest, and Instagram. Unlike Flask, which is a minimalist micro-framework, the Django PyPI package includes everything you would need for full-stack development; no need to set up a database or control panel for development.

One common use case for Django is to display information from APIs (such as Instagram posts or GitHub repositories) in your own websites and web apps. While this is possible with other frameworks, Django’s “batteries included” philosophy means there will be less hassle and fewer packages required to create the same result.

In this tutorial you will build a Django project that will display your DigitalOcean account’s Droplet information using the DigitalOcean v2 API. Specifically, you will be creating a website that will display a table of Droplets listing each of their IP addresses, IDs, hosting regions, and resources. Your website will use Bulma CSS to style the page so you can focus on development while also having something nice to look at in the end.

      Once you complete this tutorial, you will have a Django project that can produce a webpage that looks like this:

[Image: Template with Table of Droplet Data]

      Prerequisites

      Before you begin this guide you’ll need the following:

      • A DigitalOcean account with at least one Droplet, and a personal access token. Make sure to record the token in a safe place; you’ll need it later on in this tutorial.
• Familiarity with making requests to APIs. For a comprehensive tutorial on working with APIs, take a look at How to Use Web APIs in Python 3.
      • A local virtual environment for Python for maintaining dependencies. In this tutorial we’ll use the name do_django_api for our project directory and env for our virtual environment.
      • Familiarity with Django’s template logic for rendering pages with API data.
• Familiarity with Django’s view logic for handling data received from the API and giving it to a template for rendering.

      Step 1 — Making a Basic Django Project

      From within the virtual environment env, install Django:
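
• pip install django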

      Now you can start a Django project and run some initial setup commands.

      Use django-admin startproject <name> to create a subdirectory in the project folder named after your Django project, then switch to that directory.

      • django-admin startproject do_django_project
      • cd do_django_project

Inside this subdirectory you will find manage.py, which is the usual way to interact with Django and run your project. Use migrate to update Django’s development database:

      • python3 manage.py migrate

      You’ll see output that looks like this as the database updates:

      Output

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying sessions.0001_initial... OK

      Next, use the runserver command to run the project so you can test it out:

      • python3 manage.py runserver

      The output will look like this as the server starts:

      Output

Watching for file changes with StatReloader
Performing system checks...

System check identified no issues (0 silenced).
September 22, 2019 - 22:57:07
Django version 2.2.5, using settings 'do_django_project.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

      You now have a basic Django project and a development server running. To view your running development server, visit 127.0.0.1:8000 in a browser. It will display the Django startup page:

[Image: Generic Django Start-Page]

      Next you’ll create a Django app and configure your project to run a view from that app so you’ll see something more interesting than the default page.

      Step 2 — Making a Basic Django App

      In this step, you’ll create the skeleton of the app that will hold your Droplet results. You’ll come back to this app later once you’ve set up the API call to populate it with data.

      Make sure you’re in the do_django_project directory, and create a Django app using the following command:

      • python3 manage.py startapp display_droplets

      Now you need to add the new app to INSTALLED_APPS in the settings.py file, so Django will recognize it. settings.py is a Django configuration file that’s located inside another subdirectory in the Django project and has the same name as the project folder (do_django_project). Django created both folders for you. Switch to the do_django_project directory:
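
• cd do_django_project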

      Edit settings.py in the editor of your choice:
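
• nano settings.py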

      Add your new app to the INSTALLED_APPS section of the file:

      do_django_api/do_django_project/do_django_project/settings.py

      INSTALLED_APPS = [
          'django.contrib.admin',
          'django.contrib.auth',
          'django.contrib.contenttypes',
          'django.contrib.sessions',
          'django.contrib.messages',
          'django.contrib.staticfiles',
          # The new app
          'display_droplets',
      ]
      

      Save and close the file when you’re done.

The GetDroplets View

Next you’ll create a view, GetDroplets, inside the display_droplets app’s views.py file. This view will render the template you’ll use to display Droplet data from the API, passed in as context. context is a dictionary that is used to take data from Python code and send it to an HTML template so it can be displayed in a web page.

      Switch to the display_droplets directory:

      • cd ..
      • cd display_droplets

      Open views.py for editing:
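
• nano views.py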

      Add the following code to the file:

      do_django_api/do_django_project/display_droplets/views.py

      from django.views.generic import TemplateView
      
      class GetDroplets(TemplateView):
          template_name = 'droplets.html'
          def get_context_data(self, *args, **kwargs):
              pass
      

      Save and close the file.

Later you will populate this view and create the droplets.html file, but first let’s configure urls.py to call the view when you visit the development server’s root URL (127.0.0.1:8000).

      Switch back to the do_django_project directory:

      • cd ..
      • cd do_django_project

      Open urls.py for editing:
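
• nano urls.py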

      Add an import statement for GetDroplets, then add an additional path to urlpatterns that will point to the new view.

      do_django_api/do_django_project/do_django_project/urls.py

      from django.contrib import admin
      from django.urls import path
      from display_droplets.views import GetDroplets
      
      urlpatterns = [
          path('admin/', admin.site.urls),
          path('', GetDroplets.as_view(template_name='droplets.html'), name='Droplet View'),
      ]
      

If you want to make your own custom paths, the first parameter is the URL (such as example.com/admin), the second parameter is the function to call to produce the web page, and the third is just a name for the path.

      Save and close the file.

      Droplets Template

      Next you’ll be working with templates. Templates are HTML files that Django uses to create web pages. In this case, you’ll use a template to construct an HTML page that displays the API data.

      Switch back to the display_droplets directory:

      • cd ..
      • cd display_droplets

Inside this directory, create a templates folder and switch to that directory:

      • mkdir templates
      • cd templates

      Create droplets.html and open it for editing:
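
• nano droplets.html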

To avoid having to write any CSS for this project, we’ll use Bulma CSS, a free and lightweight CSS framework that allows you to create clean-looking web pages just by adding a few class attributes to the HTML.

      Now let’s create a template with a basic navigation bar. Add the following code to the droplets.html file.

      do_django_api/do_django_project/display_droplets/templates/droplets.html

      <!DOCTYPE html>
      <html lang="en">
      <head>
          <meta charset="UTF-8">
          <title>DigitalOcean Droplets</title>
          <link crossorigin="anonymous"
                href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.4/css/bulma.min.css"
                integrity="sha256-8B1OaG0zT7uYA572S2xOxWACq9NXYPQ+U5kHPV1bJN4="
                rel="stylesheet"/>
          <link rel="shortcut icon" type="image/png" href="https://assets.digitalocean.com/logos/favicon.png"/>
      </head>
      <body>
      <nav aria-label="main navigation" class="navbar is-light" role="navigation">
          <div class="navbar-brand">
              <div class="navbar-item">
            <img alt="DigitalOcean" src="https://assets.digitalocean.com/logos/DO_Logo_icon_blue.png"
                 style="margin-right: 0.5em;">Droplets
              </div>
          </div>
      </nav>
      </body>
      </html>
      

      Save and close the file.

      This code imports Bulma into boilerplate HTML and creates a nav bar displaying “Droplets.”

      Refresh your browser tab to view the changes you made to the template.

[Image: Template with Basic Header]

      So far you haven’t touched anything related to APIs; you’ve created a foundation for the project. Next you’ll put this page to good use by making an API call and presenting the Droplet data.

Step 3 — Making the API Call

      In this step, you’ll set up an API call and send the Droplet data as context to the template to display in a table.

      Getting Droplet Data

      Navigate back to the display_droplets app directory:
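
• cd ..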

      Install the requests library so you can talk to the API:
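
• pip install requests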

      The requests library enables your code to request data from APIs and add headers (additional data sent along with our request).

Next, you’ll create a services.py file, which is where you’ll make the API call. This function will use requests to talk to https://api.digitalocean.com/v2/droplets and append each Droplet in the JSON response to a list.

      Open services.py for editing:
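
• nano services.py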

      Add the following code to the file:

      do_django_api/do_django_project/display_droplets/services.py

import os
import requests

def get_droplets():
    url = 'https://api.digitalocean.com/v2/droplets'
    # Replace access_token with your personal access token.
    r = requests.get(url, headers={'Authorization': 'Bearer %s' % 'access_token'})
    droplets = r.json()
    droplet_list = []
    for droplet in droplets['droplets']:
        droplet_list.append(droplet)
    return droplet_list
      

      Inside the get_droplets function, two things occur: a request is made and data is parsed. url contains the URL requesting Droplet data from the DigitalOcean API. r stores the requested data.

requests.get takes two arguments in this case: url and headers. If you want data from a different API, you’d replace the url value with the appropriate URL. headers sends DigitalOcean your access token, so they know you’re allowed to make the request and for what account the request is being made.

      droplets contains the information from the r variable, but now it has been converted from JSON, the format the API sends information in, into a dictionary which is easy to use in a for loop.

The next three lines create a list, droplet_list. Then a for loop iterates over the Droplets in droplets and appends each one to the list. All of the information taken from the API and stored in droplets can be found in DigitalOcean’s Developer Docs.
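
For reference, here is a heavily abbreviated sketch of the structure that droplets holds after r.json() runs. The values shown are illustrative placeholders; a real response includes many more fields per Droplet:

droplets = {
    'droplets': [
        {
            'id': 3164444,
            'name': 'example-droplet',
            'memory': 1024,
            'vcpus': 1,
            'disk': 25,
            'region': {'name': 'New York 3', 'slug': 'nyc3'},
            'networks': {'v4': [{'ip_address': '104.236.32.182'}]},
        },
        # ...one dictionary per Droplet on the account
    ],
    'links': {},
    'meta': {'total': 1},
}

These are exactly the fields the template will read later in this tutorial: name, id, region, memory, vcpus, disk, and the IPv4 addresses under networks.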

      Note: Don’t forget to replace access_token with your access token. Also, keep it safe and never publish that token online.

      Save and close the file.

      Protecting Your Access Token

You should always hide your access token, but if someone ever wanted to run your project, you should have an easy way for them to add their own access token without having to edit Python code. python-dotenv solves this: variables are kept in a .env file that can be conveniently edited.

      Navigate back to the do_django_project directory:
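
• cd ..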

      To start working with environment variables, install python-dotenv:

      • pip install python-dotenv

      Once it’s installed, you need to configure Django to handle environment variables, so you can reference them in code. To do that, you need to add a few lines of code to manage.py and wsgi.py.

      Open manage.py for editing:
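
• nano manage.py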

      Add the following code:

      do_django_api/do_django_project/manage.py

      
      """Django's command-line utility for administrative tasks."""
      import os
      import sys
      import dotenv
      
      def main():
          os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'do_django_project.settings')
          try:
              from django.core.management import execute_from_command_line
          except ImportError as exc:
              raise ImportError(
                  "Couldn't import Django. Are you sure it's installed and "
                  "available on your PYTHONPATH environment variable? Did you "
                  "forget to activate a virtual environment?"
              ) from exc
          execute_from_command_line(sys.argv)
      
      if __name__ == '__main__':
          main()
      
      dotenv.load_dotenv(
          os.path.join(os.path.dirname(__file__), '.env')
      )
      

Adding this to manage.py means that when you issue commands to Django in development, it will load environment variables from your .env file before running them.

      Save and close the file.

      If you ever need to handle environment variables in your production projects, you can do that from the wsgi.py file. Change to the do_django_project directory:
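
• cd do_django_project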

      And open wsgi.py for editing:
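
• nano wsgi.py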

      Add the following code to wsgi.py:

      do_django_api/do_django_project/do_django_project/wsgi.py

      
      import os
      import dotenv
      
      from django.core.wsgi import get_wsgi_application
      
      os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'do_django_project.settings')
      
      dotenv.load_dotenv(
          os.path.join(os.path.dirname(os.path.dirname(__file__)), '.env')
      )
      
      application = get_wsgi_application()
      

      This code snippet has an additional os.path.dirname() because wsgi.py needs to look two directories back to find the .env file. This snippet is not the same as the one used for manage.py.

      Save and close the file.

      Now you can use an environment variable in services.py instead of your access token. Switch back to the display_droplets directory:

      • cd ..
      • cd display_droplets

      Open services.py for editing:
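
• nano services.py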

      Now replace your access token with an environment variable:

do_django_api/do_django_project/display_droplets/services.py

import os
import requests

def get_droplets():
    url = "https://api.digitalocean.com/v2/droplets"
    # Read the access token from the DO_ACCESS_TOKEN environment variable.
    r = requests.get(url, headers={'Authorization': 'Bearer %s' % os.getenv('DO_ACCESS_TOKEN')})
    droplets = r.json()
    droplet_list = []
    for droplet in droplets['droplets']:
        droplet_list.append(droplet)
    return droplet_list
      

      Save and close the file.

      The next step is to create a .env file. Switch back to the do_django_project directory:
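
• cd ..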

Create a .env file and open the file for editing:
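
• nano .env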

      In .env, add your token as the variable DO_ACCESS_TOKEN:

      do_django_api/do_django_project/.env

      DO_ACCESS_TOKEN=access_token
      

      Save and close the file.

      Note: Add .env to your .gitignore file so it is never included in your commits.

      The API connection is now set up and configured, and you’ve protected your access token as well. It’s time to present the information you retrieved to the user.

      Step 4 — Handling Droplet Data in Views and Templates

Now that you can make API calls, you need to send the Droplet data to the template for rendering. Let’s return to the stub of the GetDroplets view you created earlier in views.py. In that view you’ll send droplet_list as context to the droplets.html template.

      Switch to the display_droplets directory:
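
• cd display_droplets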

      Open views.py for editing:
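
• nano views.py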

      Add the following code to views.py:

      do_django_api/do_django_project/display_droplets/views.py

      from django.shortcuts import render
      from django.views.generic import TemplateView
      from .services import get_droplets
      
      class GetDroplets(TemplateView):
          template_name = 'droplets.html'
          def get_context_data(self, *args, **kwargs):
              context = {
                  'droplets' : get_droplets(),
              }
              return context
      

Information sent to the droplets.html template is handled via the context dictionary. This is why droplets acts as a key and the list returned from get_droplets() acts as a value.

      Save and close the file.

      Presenting the Data in the Template

      Inside the droplets.html template you’ll create a table and populate it with the droplet data.

      Switch to the templates directory:
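
• cd templates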

      Open droplets.html for editing:
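
• nano droplets.html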

      Add the following code after the nav element in droplets.html:

      do_django_api/do_django_project/display_droplets/templates/droplets.html

      <table class="table is-fullwidth is-striped is-bordered">
          <thead>
          <tr>
              <th>Name</th>
              <th>IPv4 Address(es)</th>
              <th>Id</th>
              <th>Region</th>
              <th>Memory</th>
              <th>CPUs</th>
              <th>Disk Size</th>
          </tr>
          </thead>
          <tbody>
          {% for droplet in droplets %}
          <tr>
              <th>{{ droplet.name }}</th>
              {% for ip in droplet.networks.v4 %}
              <td>{{ ip.ip_address }}</td>
              {% endfor %}
              <td>{{ droplet.id }}</td>
              <td>{{ droplet.region.name }}</td>
              <td>{{ droplet.memory }}</td>
              <td>{{ droplet.vcpus }}</td>
              <td>{{ droplet.disk }}</td>
          </tr>
          {% endfor %}
          </tbody>
      </table>
      

{% for droplet in droplets %} ... {% endfor %} is a loop that iterates through the list of Droplets retrieved from views.py. Each Droplet is inserted in a table row. The various {{ droplet.<attribute> }} lines retrieve that attribute for each Droplet in the loop and insert it in a table cell.

      Save and close the file.

      Refresh your browser and you will see a list of Droplets.

[Image: Template with Table of Droplet Data]

      You can now handle the DigitalOcean API inside your Django projects. You’ve taken the data retrieved from the API and plugged it into the template you created earlier, to display the information in a readable and flexible manner.

      Conclusion

      In this article you built a Django project that can display Droplet information from the DigitalOcean API with Bulma CSS styling. You’ve learned three important skills by following this tutorial:

• How to handle API requests in Python using the requests library and its built-in JSON decoding.
      • How to display API data in a Django project using view and template logic.
      • How to safely handle your API tokens using dotenv in Django.

      Now that you’ve gotten an introduction to handling APIs in Django, you can create a project of your own using either another feature from the DigitalOcean API or another API altogether. You can also check out other Django tutorials or a similar tutorial with the React framework.






      How To Migrate Redis Data with Replication on Ubuntu 18.04


      Introduction

      Redis is an in-memory, key-value data store known for its flexibility, performance, wide language support, and built-in features like replication. Replication is the practice of regularly copying data from one database to another in order to have a replica that always remains an exact duplicate of the primary instance. One common use of Redis replication is to migrate an existing Redis data store to a new server, as one might do when scaling up their infrastructure for better performance.

      This tutorial outlines the process of using Redis’s built-in replication features to migrate data from one Ubuntu 18.04 server (the “source”) to another (the “target”). This involves making a few configuration changes to each server, setting the target server to function as a replica of the source, and then promoting the replica back to a primary after the migration is completed.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — (Optional) Loading Your Source Redis Instance with Sample Data

      This optional step involves loading your source Redis instance with some sample data so you can experiment with migrating data to your target instance. If you already have data that you want to migrate over to your target, you can move ahead to Step 2 which will go over how to back it up.

      To begin, connect to the Ubuntu server you’ll use as your source Redis instance as your non-root user:

      • ssh sammy@source_server_ip

      Then run the following command to access your Redis server:
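
• redis-cli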

      If you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:

      • auth source_redis_password

      Next, run the following commands. These will create a number of keys holding a few strings, a hash, a list, and a set:

      • mset string1 "Redis" string2 "is" string3 "fun!"
      • hmset hash1 field1 "Redis" field2 "is" field3 "fast!"
      • rpush list1 "Redis" "is" "feature-rich!"
      • sadd set1 "Redis" "is" "free!"

      Additionally, run the following expire commands to provide a few of these keys with a timeout. This will make them volatile, meaning that Redis will delete them after a specified amount of time (7500 seconds, in this case):

      • expire string2 7500
      • expire hash1 7500
      • expire set1 7500

      With that, you have some example data you can export to your target Redis instance. Keep the redis-cli prompt open for now, since we will run a few more commands from it in the next step to back this data up.

      Step 2 — Backing Up Your Source Redis Instance

      Any time you plan to move data from one server to another, there’s a risk that something could go wrong and you could lose data as a result. Even though this risk is small, we will use Redis’s bgsave command to create a backup of your source Redis database in case you encounter an error during the replication process.

      If you don’t already have it open, start by opening up the Redis command line interface:
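
• redis-cli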

      Also, if you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:
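
• auth source_redis_password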

      Next, run the bgsave command. This will create a snapshot of your current data set and export it to a dump file held in Redis’s working directory:
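
• bgsave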

      Note: You can take a snapshot of your Redis database with either the save or bgsave commands. The reason we use the bgsave command here, though, is that the save command runs synchronously, meaning it will block any other clients connected to the database. Because of this, the save command documentation recommends that you should almost never run it in a production environment.

Instead, it suggests using the bgsave command, which runs asynchronously. This will cause Redis to fork into two processes: the parent process will continue to serve clients while the child saves the database before exiting.

      Note that if clients add or modify data while the bgsave operation is running, these changes won’t be captured in the snapshot.

      Following that, you can close the connection to your Redis instance by running the exit command:
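
• exit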

      If you need it in the future, you can find the data dump file in your Redis instance’s working directory. Recall how in the prerequisite Redis installation tutorial you set your Redis instance to use /var/lib/redis as its working directory.

      List the contents of your Redis working directory to confirm that it’s holding the data dump file:
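
• sudo ls /var/lib/redis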

      If the dump file was exported correctly, you will see it in this command’s output. By default, this file is named dump.rdb:

      Output

      dump.rdb

      After confirming that your data was backed up correctly, you’re all set to configure your source Redis server to accept external connections and allow for replication.

      Step 3 — Configuring Your Source Redis Instance

      By default, Redis isn’t configured to listen for external connections, meaning that any replicas you configure won’t be able to sync with your source instance unless you update its configuration. Here, we will update the source instance’s configuration file to allow for external connections and also set a password which the target instance will use to authenticate once replication begins. After that, we’ll add a firewall rule to allow connections to the port on which Redis is running.

      Open up your source Redis instance’s configuration file with your preferred text editor. Here, we’ll use nano:

      • sudo nano /etc/redis/redis.conf

      Navigate to the line that begins with the bind directive. It will look like this by default:

      /etc/redis/redis.conf

      . . .
      bind 127.0.0.1
      . . .
      

      This directive binds Redis to 127.0.0.1, an IPv4 loopback address that represents localhost. This means that this Redis instance is configured to only listen for connections that originate from the same server as the one where it’s installed. To allow your source instance to accept any connection made to its public IP address, such as those made from your target instance, add your source Redis server’s IP address after the 127.0.0.1. Note that you shouldn’t include any commas after 127.0.0.1:

      /etc/redis/redis.conf

      . . .
      bind 127.0.0.1 source_server_IP
      . . .
      

      Next, if you haven’t already done so, use the requirepass directive to configure a password which users must enter before they can interact with the data on the source instance. Do so by uncommenting the directive and setting it to a complex password or passphrase:

      /etc/redis/redis.conf

      . . .
      requirepass source_redis_password
      . . .
      

      Be sure to take note of the password you set here, as you will need it when you configure the target server.

      Following that change, you can save and close the Redis configuration file. If you edited it with nano, do so by pressing CTRL+X, Y, then ENTER.

      Then, restart the Redis service to put these changes into effect:

      • sudo systemctl restart redis

      That’s all you need to do in terms of configuring Redis, but if you configured a firewall on your server it will continue to block any attempts by your target server to connect with the source. Assuming you configured your firewall with ufw, you could update it to allow connections to the port on which Redis is running with the following command. Note that Redis is configured to use port 6379 by default:
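
• sudo ufw allow 6379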

      After making that final change you’re all done configuring your source Redis server. Continue on to configure your target Redis instance to function as a replica of the source.

Step 4 — Configuring Your Target Redis Instance

      By this point you’ve configured your source Redis instance to accept external connections. However, because you’ve locked down access to the source by uncommenting the requirepass directive, your target instance won’t be able to replicate the data held on the source. Here, you will configure your target Redis instance to be able to authenticate its connection to the source, thereby allowing replication.

      Begin by connecting to your target Redis server as your non-root user:

      • ssh sammy@target_server_ip

      Next, open up your target server’s Redis configuration file:

      • sudo nano /etc/redis/redis.conf

      If you haven’t done so already, you should configure a password for your target Redis instance with the requirepass directive:

      /etc/redis/redis.conf

      . . .
      requirepass target_redis_password
      . . .
      

      Next, uncomment the masterauth directive and set it to your source Redis instance’s authentication password. By doing this, your target server will be able to authenticate to the source instance after you enable replication:

      /etc/redis/redis.conf

      . . .
      masterauth source_redis_password
      . . .
      

      Lastly, if you have clients writing information to your source instance, you will want to configure them to write data to your target instance as well. This way, if a client writes any data after you promote the target back to being a primary instance, it won’t get lost.

      To do this, though, you will need to adjust the replica-read-only directive. This is set to yes by default, which means that it’s configured to become a “read-only” replica which clients won’t be able to write to. Set this directive to no to allow clients to write to it:

      /etc/redis/redis.conf

      . . .
      replica-read-only no
      . . .
      

      Those are all the changes you need to make to the target’s configuration file, so you can save and close it.

      Then, restart the Redis service to put these changes into effect:

      • sudo systemctl restart redis

      After restarting the Redis service your target server will be ready to become a replica of the source. All you’ll need to do to turn it into one is to run a single command, which we’ll do shortly.

      Note: If you have any clients writing data to your source Redis instance, now would be a good time to configure them to also write data to your target.

      Step 5 — Starting and Verifying Replication

      By this point, you have configured your source Redis instance to accept connections from your target server and you’ve configured your target Redis instance to be able to authenticate to the source as a replica. With these pieces in place, you’re ready to turn your target instance into a replica of the source.

      Begin by opening up the Redis command line interface on your target Redis server:
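
• redis-cli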

      Run the auth command to authenticate the connection:
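
• auth target_redis_password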

      Next, turn the target instance into a replica of the source with the replicaof command. Be sure to replace source_server_ip with your source instance’s public IP address and source_port with the port used by Redis on your source instance:

      • replicaof source_server_ip source_port

      From the prompt, run the following scan command. This will return all the keys currently held by the replica:
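
• scan 0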

      If replication is working as expected, you will see all the keys from your source instance held in the replica. If you loaded your source with the sample data in Step 1, the scan command’s output will look like this:

      Output

      1) "0" 2) 1) "string3" 2) "string1" 3) "set1" 4) "string2" 5) "hash1" 6) "list1"

      Note: Be aware that this command may return the keys in a different order than what’s shown in this example.

      However, if this command doesn’t return the same keys held on your source Redis instance, it may be that there is an error in one of your servers’ configuration files preventing the target database from connecting to the source. In this case, close the connection to your target Redis instance, and double check that you’ve edited the configuration files on both your source and target Redis servers correctly.

      While you have the connection open, you can also confirm that the keys you set to expire are still volatile. Do so by running the ttl command with one of these keys as an argument:
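
• ttl string2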

      This will return the number of seconds before this key will be deleted:

      Output

(integer) 5430

      Once you’ve confirmed that the data on your source instance was correctly synced to your target, you can promote the target back to being a primary instance by running the replicaof command once again. This time, however, instead of following replicaof with an IP address and port, follow it with no one. This will cause the target instance to stop syncing with the source immediately:
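
• replicaof no one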

To confirm that the data replicated from the source persists on the target, rerun the scan command you entered previously:

• scan 0

      You should see the same keys in this command’s output as when you ran the scan command when the target was still replicating the source:

      Output

      1) "0" 2) 1) "string3" 2) "string1" 3) "set1" 4) "string2" 5) "hash1" 6) "list1"

      With that, you’ve successfully migrated all the data from your source Redis instance to your target. If you have any clients that are still writing data to the source instance, now would be a good time to configure them to only write to the target.

      Conclusion

      There are several methods besides replication you can use to migrate data from one Redis instance to another, but replication has the advantages of requiring relatively few configuration changes to work and only a single command to initiate or stop.

      If you’d like to learn more about working with Redis, we encourage you to check out our tutorial series on How To Manage a Redis Database. Also, if you want to move your Redis data to a Redis instance managed by DigitalOcean, follow our guide on how to do so.




      How To Migrate Redis Data to a DigitalOcean Managed Database


      Introduction

      There are a number of methods you can use to migrate data from one Redis instance to another, such as replication or snapshotting. However, migrations can get more complicated when you’re moving data to a Redis instance managed by a cloud provider, as managed databases often limit how much control you have over the database’s configuration.

      This tutorial outlines one method you can use to migrate data to a Redis instance managed by DigitalOcean. The method uses Redis’s internal migrate command to securely pass data through a TLS tunnel configured with stunnel. This guide will also go over a few other commonly-used migration strategies and why they’re problematic when migrating to a DigitalOcean Managed Database.

      Prerequisites

      To complete this tutorial, you will need:

      Note: To help keep things clear, this guide will refer to the Redis instance hosted on your Ubuntu server as the “source.” Likewise, it will refer to the instance managed by DigitalOcean as either the “target” or the “Managed Database.”

      Things To Consider When Migrating Redis Data to a Managed Database

      There are several methods you can employ to migrate data from one Redis instance to another. However, some of these approaches present problems when you’re migrating data to a Redis instance managed by DigitalOcean.

      For example, you can use replication to turn your target Redis instance into an exact copy of the source. To do this, you would connect to the target Redis server and run the replicaof command with the following syntax:

      • replicaof source_hostname_or_ip source_port

      This will cause the target instance to replicate all the data held on the source without destroying any data that was previously stored on it. Following this, you would promote the replica back to being a primary instance with the following command:
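
• replicaof no one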

However, Redis instances managed by DigitalOcean are configured to only become read-only replicas. If you have clients writing data to the source database, you won’t be able to configure them to write to the managed instance as it’s replicating data. This means you would lose any data sent by the clients after you promote the managed instance from being a replica and before you configure the clients to begin writing data to it, making replication a suboptimal migration solution.

      Another method for migrating Redis data is to take a snapshot of the data held on your source instance with either Redis’s save or bgsave commands. Both of these commands export the snapshot to a file ending in .rdb, which you would then transfer to the target server. Following that, you’d restart the Redis service so it can load the data.

However, many managed database providers — including DigitalOcean — don’t allow you to access the managed database server’s underlying file system. This means there’s no way to upload the snapshot file or make the necessary changes to the target database’s configuration file to allow Redis to import the data.

Because the configuration of DigitalOcean’s Managed Databases limits the efficacy of both replication and snapshotting as means of migrating data, this tutorial will instead use Redis’s migrate command to move data from the source to the target. The migrate command is designed to only move one key at a time, but we will use some handy command line tricks to move an entire Redis database with a single command.

      Step 1 — (Optional) Loading Your Source Redis Instance with Sample Data

      This optional step involves loading your source Redis instance with some sample data so you can experiment with migrating data to your Managed Redis Database. If you already have data that you want to migrate over to your target instance, you can move ahead to Step 2.

      To begin, run the following command to access your Redis server:
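
• redis-cli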

      If you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:
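
• auth source_password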

      Then run the following commands. These will create a number of keys holding a few strings, a hash, a list, and a set:

      • mset string1 "Redis" string2 "is" string3 "fun!"
      • hmset hash1 field1 "Redis" field2 "is" field3 "fast!"
      • rpush list1 "Redis" "is" "feature-rich!"
      • sadd set1 "Redis" "is" "free!"

      Additionally, run the following expire commands to provide a few of these keys with a timeout. This will make them volatile, meaning that Redis will delete them after the specified amount of time, 7500 seconds:

      • expire string2 7500
      • expire hash1 7500
      • expire set1 7500

      With that, you have some example data you can export to your target Redis instance. You can keep the redis-cli prompt open for now, since we will run a few more commands from it in the next step in order to back up this data.

      Step 2 — Backing Up Your Data

      Previously, we discussed using Redis’s bgsave command to take a snapshot of a Redis database and migrate it to another instance. While we won’t use bgsave as a means of migrating Redis data, we will use it here to back up the data in case we encounter an error during the migration process.

      If you don’t already have it open, start by opening up the Redis command line interface:
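
• redis-cli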

      Also, if you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:
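
• auth source_password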

      Next, run the bgsave command. This will create a snapshot of your current data set and export it to a dump file whose name ends in .rdb:
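
• bgsave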

      Note: As mentioned in the previous Things To Consider section, you can take a snapshot of your Redis database with either the save or bgsave commands. The reason we use the bgsave command here is that the save command runs synchronously, meaning it will block any other clients connected to the database. Because of this, the save command documentation recommends that this command should almost never be run in a production environment.

Instead, it suggests using the bgsave command, which runs asynchronously. This will cause Redis to fork into two processes: the parent process will continue to serve clients while the child saves the database before exiting.

      Note that if clients add or modify data while the bgsave operation is running or after it finishes, these changes won’t be captured in the snapshot.

      Following that, you can close the connection to your Redis instance by running the exit command:
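
• exit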

      If you need it in the future, you can find this dump file in your Redis installation’s working directory. If you’re not sure which directory this is, you can check by opening up your Redis configuration file with your preferred text editor. Here, we’ll use nano:

      • sudo nano /etc/redis/redis.conf

      Navigate to the line that begins with dbfilename. It will look like this by default:

      /etc/redis/redis.conf

      . . .
      # The filename where to dump the DB
      dbfilename dump.rdb
      . . .
      

      This directive defines the file to which Redis will export snapshots. The next line (after any comments) will look like this:

      /etc/redis/redis.conf

      . . .
      dir /var/lib/redis
      . . .
      

      The dir directive defines Redis’s working directory where any Redis snapshots are stored. By default, this is set to /var/lib/redis, as shown in this example.

      Close the redis.conf file. Assuming you didn’t make any changes to the file, you can do so by pressing CTRL+X.

      Then, list the contents of your Redis working directory to confirm that it’s holding the exported data dump file:
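
• sudo ls /var/lib/redis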

      If the dump file was exported correctly, you will see it in this command’s output:

      Output

      dump.rdb

      Once you’ve confirmed that you successfully backed up your data, you can begin the process of migrating it to your Managed Database.

      Step 3 — Migrating the Data

      Recall that this guide uses Redis’s internal migrate command to move keys one by one from the source database to the target. However, unlike the previous steps in this tutorial, we won’t run this command from the redis-cli prompt. Instead, we’ll run it directly from the server’s bash prompt. Doing so will allow us to use a few bash tricks to migrate all the keys on the source database with one command.

      Note: If you have clients writing data to your source Redis instance, now would be a good time to configure them to also write data to your Managed Database. This way, you can migrate the existing data from the source to your target without losing any writes that occur after the migration.

      Also, be aware that this migration command will not replace any existing keys on the target database unless one of the existing keys has the same name as a key you’re migrating.

      The migration will occur after running the following command. Before running it, though, we will break it down piece by piece:

      • redis-cli -n source_database -a source_password scan 0 | while read key; do redis-cli -n source_database -a source_password MIGRATE localhost 8000 "$key" target_database 1000 COPY AUTH managed_redis_password; done

      Let’s look at each part of this command separately:

      • redis-cli -n source_database -a source_password scan 0 . . .

      The first part of the command, redis-cli, opens a connection to the local Redis server. The -n flag specifies which of Redis’s logical databases to connect to. Redis has 16 databases out of the box (with the first being numbered 0, the second numbered 1, and so on), so source_database can be any number between 0 and 15. If your source instance only holds data on the default database (numbered 0), then you do not need to include the -n flag or specify a database number.

Next comes the -a flag and the source instance’s password, which together authenticate the connection. If your source instance does not require password authentication, then you do not need to include the -a flag.

It then runs Redis’s scan command, which iterates over the keys held in the data set and returns them as a list. scan requires that you follow it with a cursor: the iteration begins when the cursor is set to 0, and terminates when the server returns a 0 cursor. Hence, we follow scan with a cursor of 0. Note that a single scan call returns one batch of keys along with the next cursor; for a small data set like our example, one pass returns every key, but on a large database you would need to repeat scan with each returned cursor until the server hands back 0 (the sketch after this command breakdown illustrates a full cursor loop).

      • . . . | while read key; do . . .

      The next part of the command begins with a vertical bar (|). In Unix-like systems, vertical bars are known as pipes and are used to direct the output of one process to the input of another.

      Following this is the start of a while loop. In bash, as well as in most programming languages, a while loop is a control flow statement that lets you repeat a certain process, code, or command as long as a certain condition remains true.

      The condition in this case is the sub-command read key, which reads the piped input and assigns it to the variable key. The semicolon (;) signifies the end of the while loop’s conditional statement, and the do following it precedes the action to be repeated as long as the while expression remains true. Every time the do statement completes, the conditional statement will read the next line piped from the scan command and assign that input to the key variable.

      Essentially, this section says “as long as there is output from the scan command to be read, perform the following action.”

      • . . . redis-cli -n source_database -a source_password migrate localhost 8000 "$key" . . .

      This section of the command is what performs the actual migration. After another redis-cli call, it once again specifies the source database number with the -n flag and authenticates with the -a flag. You have to include these again because this redis-cli call is distinct from the one at the start of the command. Again, though, you do not need to include the -n flag or database number if your source Redis instance only holds data in the default 0 database, and you don’t need to include the -a flag if it doesn’t require password authentication.

      Following this is the migrate command. Any time you use the migrate command, you must follow it with the target database’s hostname or IP address and its port number. Here, we follow the convention established in the prerequisite stunnel tutorial and point the migrate command to localhost at port 8000.

      $key is the variable defined in the first part of the while loop, and represents the keys from each line of the scan command’s output.

      • . . . target_database 1000 copy auth managed_redis_password; done

      This section is a continuation of the migrate command. It begins with target_database, which represents the logical database on the target instance where you want to store the data. Again, this can be any number from 0 to 15.

      Next is a number representing a timeout. This timeout is the maximum amount of idle communication time between the two machines. Note that this isn’t a time limit for the operation, just that the operation should always make some level of progress within the defined timeout. Both the database number and timeout arguments are required for every migrate command.

Following the timeout is the optional copy flag. By default, migrate deletes each key from the source database after transferring it to the target; by including this option, though, you’re instructing the migrate command to merely copy the keys so they will persist on the source.

      After copy comes the auth flag followed by your Managed Redis Database’s password. This isn’t necessary if you’re migrating data to an instance that doesn’t require authentication, but it is necessary when you’re migrating data to one managed by DigitalOcean.

      Following this is another semicolon, indicating the end of the action to be performed as long as the while condition holds true. Finally, the command closes with done, indicating the end of the loop. The command checks the condition in the while statement and repeats the action in the do statement until it’s no longer true.

      All together, this command performs the following steps:

      • Scan a database on the source Redis instance and return every key held within it
      • Pass each line of the scan command’s output into a while loop
      • Read the first line and assign its content to the key variable
      • Migrate any key in the source database that matches the key variable to a database on the Redis instance at the other end of the TLS tunnel held on localhost at port 8000
      • Go back and read the next line, and repeat the process until there are no more keys to read
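
If it helps to see the cursor contract in code, below is a minimal sketch of a complete scan iteration written with the redis-py client. This client is not used elsewhere in this tutorial and is shown purely for illustration; the connection details are assumptions you would adjust to match your own instance:

import redis  # redis-py, assumed installed with: pip install redis

# Connect to the source instance; adjust host, port, and password as needed.
r = redis.Redis(host='localhost', port=6379, password='source_password')

cursor = 0
keys = []
while True:
    # Each scan call returns the next cursor and one batch of keys.
    cursor, batch = r.scan(cursor=cursor)
    keys.extend(batch)
    # The server signals that the iteration is complete by returning cursor 0.
    if cursor == 0:
        break

print(keys)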

      Now that we’ve gone over each part of the migration command, you can go ahead and run it.

      If your source instance only has data on the default 0 database, you do not need to include either of the -n flags or their arguments. If, however, you’re migrating data from any database other than 0 on your source instance, you must include the -n flags and change both occurrences of source_database to align with the database you want to migrate.

If your source database requires password authentication, be sure to change source_password to the Redis instance’s actual password. If it doesn’t, though, make sure that you remove both occurrences of -a source_password from the command. Also, change managed_redis_password to your own Managed Database’s password and be sure to change target_database to the number of whichever logical database on your target instance that you want to write the data to:

      Note: If you don’t have your Managed Redis Database’s password on hand, you can find it by first navigating to the DigitalOcean Control Panel. From there, click on Databases in the left-hand sidebar menu and then click on the name of the Redis instance to which you want to migrate the data. Scroll down to the Connection Details section where you’ll find a field labeled password. Click on the show button to reveal the password, then copy and paste it into the migration command — replacing managed_redis_password — in order to authenticate.

      • redis-cli -n source_database -a source_password scan 0 | while read key; do redis-cli -n source_database -a source_password MIGRATE localhost 8000 "$key" target_database 1000 COPY AUTH managed_redis_password; done

      You will see output similar to the following:

      Output

NOKEY
OK
OK
OK
OK
OK
OK

      Note: Notice the first line of the command’s output which reads NOKEY. To understand what this means, run the first part of the migration command by itself:

      • redis-cli -n source_database -a source_password scan 0

If you migrated the sample data added in Step 1, this command’s output will look like this:

      Output

      1) "0" 2) 1) "hash1" 2) "string3" 3) "list1" 4) "string1" 5) "string2" 6) "set1"

      The value "0" held in the first line is not a key held in your source Redis database, but a cursor returned by the scan command. Since there aren’t any keys on the server named “0”, there’s nothing there for the migrate command to send to your target instance and it returns NOKEY.

      However, the command doesn’t fail and exit. Instead, it continues on by reading and migrating the keys found in the next lines of the scan command’s output.

      To test whether the migration was successful, connect to your Managed Redis Database:

      • redis-cli -h localhost -p 8000 -a managed_redis_password

      If you migrated data to any logical database other than the default, connect to that database with the select command:
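
• select target_database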

      Run a scan command to see what keys are held there:
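
• scan 0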

If you completed Step 1 of this tutorial and added the example data to your source database, you will see output like this:

      Output

      1) "0" 2) 1) "set1" 2) "string2" 3) "hash1" 4) "list1" 5) "string3" 6) "string1"

      Lastly, run a ttl command on any key which you’ve set to expire in order to confirm that it is still volatile:
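
• ttl string2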

      Output

      (integer) 3944

This output shows that even though you migrated the key to your Managed Database, it is still set to expire based on the expire command you ran previously.

      Once you’ve confirmed that all the keys on your source Redis database were exported to your target successfully, you can close your connection to the Managed Database. If you have clients writing data to the source Redis instance and you’ve already configured them to send their writes to the target, you can at this point configure them to stop sending data to the source.

      Conclusion

      By completing this tutorial, you will have moved data from your self-managed Redis data store to a Redis instance managed by DigitalOcean. The process outlined in this guide may not be optimal in every case. For example, you’d have to run the migration command multiple times (once for every logical database holding data) if your source instance is using databases other than the default one. However, when compared to other methods like replication or snapshotting, it is a fairly straightforward process that works well with a DigitalOcean Managed Database’s configuration.

      Now that you’re using a DigitalOcean Managed Redis Database to store your data, you could measure its performance by running some benchmarking tests. Also, if you’re new to working with Redis, you could check out our series on How To Manage a Redis Database.


