
      Understanding Relational Databases


      Introduction

      Database management systems (DBMS) are computer programs that allow users to interact with a database. A DBMS allows users to control access to a database, write data, run queries, and perform any other tasks related to database management.

      In order to perform any of these tasks, though, the DBMS must have some kind of underlying model that defines how the data are organized. The relational model is one approach for organizing data that has found wide use in database software since it was first devised in the late 1960s, so much so that, as of this writing, four of the top five most popular DBMSs are relational.

      This conceptual article outlines the history of the relational model, how relational databases organize data, and how they’re used today.

      History of the Relational Model

      Databases are logically modelled clusters of information, or data. Any collection of data is a database, regardless of how or where it is stored. Even a file cabinet containing payroll information is a database, as is a stack of hospital patient forms, or a company’s collection of customer information spread across multiple locations. Before storing and managing data with computers was common practice, physical databases like these were the only ones available to government and business organizations that needed to store information.

      Around the middle of the 20th century, developments in computer science led to machines with more processing power, as well as greater local and external storage capacity. These advancements led computer scientists to start recognizing the potential these machines had for storing and managing ever larger amounts of data.

      However, there weren’t any theories for how computers could organize data in meaningful, logical ways. It’s one thing to store unsorted data on a machine, but it’s much more complicated to design systems that allow you to add, retrieve, sort, and otherwise manage that data in consistent, practical ways. The need for a logical framework for storing and organizing data led to a number of proposals for how to harness computers for data management.

      One early database model was the hierarchical model, in which data are organized in a tree-like structure, similar to modern-day filesystems. The following example shows how the layout of part of a hierarchical database used to categorize animals might look:

      Example Hierarchical Database: Animal categorization

      The hierarchical model was widely implemented in early database management systems, but it also proved to be somewhat inflexible. In this model, even though individual records can have multiple “children,” each record can only have one “parent” in the hierarchy. Because of this, these earlier hierarchical databases were limited to representing only “one-to-one” and “one-to-many” relationships. This lack of “many-to-many” relationships could lead to problems when you’re working with data points that you’d like to associate with more than one parent.

      In the late 1960s, Edgar F. Codd, a computer scientist working at IBM, devised the relational model of database management. Codd’s relational model allowed individual records to be associated with more than one table, thereby enabling “many-to-many” relationships between data points in addition to “one-to-many” relationships. This provided more flexibility than other existing models when it came to designing database structures, and meant that relational database management systems (RDBMSs) could meet a much wider range of business needs.

      Codd proposed a language for managing relational data, known as Alpha, which influenced the development of later database languages. Two of Codd’s colleagues at IBM, Donald Chamberlin and Raymond Boyce, created one such language inspired by Alpha. They called their language SEQUEL, short for Structured English Query Language, but because of an existing trademark they shortened the name of their language to SQL (referred to more formally as Structured Query Language).

      Due to hardware constraints, early relational databases were still prohibitively slow, and it took some time before the technology became widespread. But by the mid-1980s, Codd’s relational model had been implemented in a number of commercial database management products from both IBM and its competitors. These vendors also followed IBM’s lead by developing and implementing their own dialects of SQL. By 1987, both the American National Standards Institute and the International Organization for Standardization had ratified and published standards for SQL, solidifying its status as the accepted language for managing RDBMSs.

      The relational model’s wide use across multiple industries led to it becoming recognized as the standard model for data management. Even with the rise of various NoSQL databases in more recent years, relational databases remain the dominant tools for storing and organizing data.

      How Relational Databases Organize Data

      Now that you have a general understanding of the relational model’s history, let’s take a closer look at how the model organizes data.

      The most fundamental elements in the relational model are relations, which users and modern RDBMSs recognize as tables. A relation is a set of tuples, or rows in a table, with each tuple sharing a set of attributes, or columns:

      Diagram example of how relations, tuples, and attributes relate to one another

A column is the smallest organizational structure of a relational database, and represents one of the various facets that define the records in a table; hence columns' more formal name, attributes. You can think of each tuple as a unique instance of whatever type of people, objects, events, or associations the table holds. These instances might be things like employees at a company, sales from an online business, or lab test results. For example, in a table that holds employment records of teachers at a school, the tuples might have attributes like name, subjects, start_date, and so on.

      When creating columns, you specify a data type that dictates what kind of entries are allowed in that column. RDBMSs often implement their own unique data types, which may not be directly interchangeable with similar data types in other systems. Some common data types include dates, strings, integers, and Booleans.
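To make this concrete, here is a minimal SQL sketch of how the hypothetical teachers table mentioned above might be defined, with a data type assigned to each column. The table and column definitions are illustrative assumptions, and exact type names can vary between RDBMSs:

CREATE TABLE teachers (
    teacher_id   INT,
    name         VARCHAR(100),
    subjects     VARCHAR(255),
    start_date   DATE
);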

      In the relational model, each table contains at least one column that can be used to uniquely identify each row, called a primary key. This is important, because it means that users don’t need to know where their data is physically stored on a machine; instead, their DBMS can keep track of each record and return them on an ad hoc basis. In turn, this means that records have no defined logical order, and users have the ability to return their data in whatever order or through whatever filters they wish.

      If you have two tables that you’d like to associate with one another, one way you can do so is with a foreign key. A foreign key is essentially a copy of one table’s (the “parent” table) primary key inserted into a column in another table (the “child”). The following example highlights the relationship between two tables, one used to record information about employees at a company and another used to track the company’s sales. In this example, the primary key of the EMPLOYEES table is used as the foreign key of the SALES table:

      Diagram example of how the EMPLOYEE table's primary key acts as the SALES table's foreign key

      If you try to add a record to the child table and the value entered into the foreign key column doesn’t exist in the parent table’s primary key, the insertion statement will be invalid. This helps to maintain relationship-level integrity, as the rows in both tables will always be related correctly.
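As a sketch of how this might look in SQL, the following statements define simplified versions of the two tables from the diagram; the column names and types are assumptions for illustration. The FOREIGN KEY clause marks employee_id in the sales table as a reference to the primary key of employees:

CREATE TABLE employees (
    employee_id   INT PRIMARY KEY,
    name          VARCHAR(100)
);

CREATE TABLE sales (
    sale_id       INT PRIMARY KEY,
    employee_id   INT,
    amount        DECIMAL(10, 2),
    FOREIGN KEY (employee_id) REFERENCES employees (employee_id)
);

With these definitions in place, an INSERT into sales that references an employee_id not present in employees will be rejected, preserving the relationship-level integrity described above.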

      The relational model’s structural elements help to keep data stored in an organized way, but storing data is only useful if you can retrieve it. To retrieve information from an RDBMS, you can issue a query, or a structured request for a set of information. As mentioned previously, most relational databases use SQL to manage and query data. SQL allows you to filter and manipulate query results with a variety of clauses, predicates, and expressions, giving you fine control over what data will appear in the result set.
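For instance, a query against the hypothetical teachers table sketched earlier might filter and sort the result set like this; the WHERE clause restricts which rows are returned, and ORDER BY controls their order:

SELECT name, start_date
FROM teachers
WHERE subjects = 'Math'
ORDER BY start_date DESC;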

      Advantages and Limitations of Relational Databases

      With the underlying organizational structure of relational databases in mind, let’s consider some of their advantages and disadvantages.

Today, both SQL and the databases that implement it deviate from Codd’s relational model in several ways. For instance, Codd’s model dictates that each row in a table should be unique while, for reasons of practicality, most modern relational databases do allow for duplicate rows. Some argue that SQL databases aren’t true relational databases if they fail to adhere to each of Codd’s specifications for the relational model. In practical terms, though, any DBMS that uses SQL and at least somewhat adheres to the relational model is likely to be referred to as a relational database management system.

      Although relational databases quickly grew in popularity, a few of the relational model’s shortcomings started to become apparent as data became more valuable and businesses began storing more of it. For one thing, it can be difficult to scale a relational database horizontally. Horizontal scaling, or scaling out, is the practice of adding more machines to an existing stack in order to spread out the load and allow for more traffic and faster processing. This is often contrasted with vertical scaling which involves upgrading the hardware of an existing server, usually by adding more RAM or CPU.

      The reason it’s difficult to scale a relational database horizontally has to do with the fact that the relational model is designed to ensure consistency, meaning clients querying the same database will always retrieve the same data. If you were to scale a relational database horizontally across multiple machines, it becomes difficult to ensure consistency since clients may write data to one node but not the others. There would likely be a delay between the initial write and the time when the other nodes are updated to reflect the changes, resulting in inconsistencies between them.

      Another limitation presented by RDBMSs is that the relational model was designed to manage structured data, or data that aligns with a predefined data type or is at least organized in some predetermined way, making it easily sortable and searchable. With the spread of personal computing and the rise of the internet in the early 1990s, however, unstructured data — such as email messages, photos, videos, etc. — became more common.

      None of this is to say that relational databases aren’t useful. Quite the contrary, the relational model is still the dominant framework for data management after over 40 years. Their prevalence and longevity mean that relational databases are a mature technology, which is itself one of their major advantages. There are many applications designed to work with the relational model, as well as many career database administrators who are experts when it comes to relational databases. There’s also a wide array of resources available in print and online for those looking to get started with relational databases.

      Another advantage of relational databases is that almost every RDBMS supports transactions. A transaction consists of one or more individual SQL statements performed in sequence as a single unit of work. Transactions present an all-or-nothing approach, meaning that every SQL statement in the transaction must be valid; otherwise, the entire transaction will fail. This is very helpful for ensuring data integrity when making changes to multiple rows or tables.
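As a sketch of this all-or-nothing behavior, the following transaction transfers an amount between two rows of a hypothetical accounts table; if any statement within it fails, the transaction can be rolled back and neither balance changes:

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
COMMIT;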

      Lastly, relational databases are extremely flexible. They’ve been used to build a wide variety of different applications, and continue working efficiently even with very large amounts of data. SQL is also extremely powerful, allowing you to add and change data on the fly, as well as alter the structure of database schemas and tables without impacting existing data.

      Conclusion

      Thanks to their flexibility and design for data integrity, relational databases are still the primary way data are managed and stored more than fifty years after they were first conceived of. Even with the rise of various NoSQL databases in recent years, understanding the relational model and how to work with RDBMSs are key for anyone who wants to build applications that harness the power of data.

      To learn more about a few popular open-source RDBMSs, we encourage you to check out our comparison of various open-source relational SQL databases. If you’re interested in learning more about databases generally, we encourage you to check out our complete library of database-related content.




      How to Set Up a Scalable Laravel 6 Application using Managed Databases and Object Storage


      Introduction

      When scaling web applications horizontally, the first difficulties you’ll typically face are dealing with file storage and data persistence. This is mainly due to the fact that it is hard to maintain consistency of variable data between multiple application nodes; appropriate strategies must be in place to make sure data created in one node is immediately available to other nodes in a cluster.

A practical way of solving the consistency problem is by using managed databases and object storage systems. The former outsources data persistence to a managed database, and the latter provides a remote storage service where you can keep static files and variable content such as images uploaded by users. Each node can then connect to these services at the application level.

      The following diagram demonstrates how such a setup can be used for horizontal scalability in the context of PHP applications:

      Laravel at scale diagram

      In this guide, we will update an existing Laravel 6 application to prepare it for horizontal scalability by connecting it to a managed MySQL database and setting up an S3-compatible object store to save user-generated files. By the end, you will have a travel list application running on an Nginx + PHP-FPM web server:

      Travellist v1.0

      Note: this guide uses DigitalOcean Managed MySQL and Spaces to demonstrate a scalable application setup using managed databases and object storage. The instructions contained here should work in a similar way for other service providers.

      Prerequisites

      To begin this tutorial, you will first need the following prerequisites:

      • Access to an Ubuntu 18.04 server as a non-root user with sudo privileges, and an active firewall installed on your server. To set these up, please refer to our Initial Server Setup Guide for Ubuntu 18.04.
      • Nginx and PHP-FPM installed and configured on your server, as explained in steps 1 and 3 of How to Install LEMP on Ubuntu 18.04. You should skip the step where MySQL is installed.
      • Composer installed on your server, as explained in steps 1 and 2 of How to Install and Use Composer on Ubuntu 18.04.
      • Admin credentials to a managed MySQL 8 database. For this guide, we’ll be using a DigitalOcean Managed MySQL cluster, but the instructions here should work similarly for other managed database services.
      • A set of API keys with read and write permissions to an S3-compatible object storage service. In this guide, we’ll use DigitalOcean Spaces, but you are free to use a provider of your choice.
      • The s3cmd tool installed and configured to connect to your object storage drive. For instructions on how to set this up for DigitalOcean Spaces, please refer to our product documentation.

      Step 1 — Installing the MySQL 8 Client

      The default Ubuntu apt repositories come with the MySQL 5 client, which is not compatible with the MySQL 8 server we’ll be using in this guide. To install the compatible MySQL client, we’ll need to use the MySQL APT Repository provided by Oracle.

      Begin by navigating to the MySQL APT Repository page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Copy the link address and go back to your terminal window.

This link should point to a .deb package that will set up the MySQL APT Repository on your server. After installing it, you’ll be able to use apt to install the latest releases of MySQL. We’ll use curl to download this file into a temporary location.

      Go to your server’s tmp folder:
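• cd /tmp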

Now download the package with curl, using the URL you copied from the MySQL APT Repository page:

      • curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb

      After the download is finished, you can use dpkg to install the package:

      • sudo dpkg -i mysql-apt-config_0.8.13-1_all.deb

      You will be presented with a screen where you can choose which MySQL version you’d like to select as default, as well as which MySQL components you’re interested in:

      MySQL APT Repository Install

      You don’t need to change anything here, because the default options will install the repositories we need. Select “Ok” and the configuration will be finished.

      Next, you’ll need to update your apt cache with:
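• sudo apt update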

      Now we can finally install the MySQL 8 client with:

      • sudo apt install mysql-client

      Once that command finishes, check the software version number to ensure that you have the latest release:
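• mysql --version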

      You’ll see output like this:

      Output

      mysql Ver 8.0.18 for Linux on x86_64 (MySQL Community Server - GPL)

      In the next step, we’ll use the MySQL client to connect to your managed MySQL server and prepare the database for the application.

Step 2 — Creating a New MySQL User and Database

At the time of this writing, the native MySQL PHP library mysqlnd doesn’t support caching_sha2_password, the default authentication method for MySQL 8. We’ll need to create a new user with the mysql_native_password authentication method in order to be able to connect our Laravel application to the MySQL 8 server. We’ll also create a dedicated database for our demo application.

      To get started, log into your server using an admin account. Replace the highlighted values with your own MySQL user, host, and port:

      • mysql -u MYSQL_USER -p -h MYSQL_HOST -P MYSQL_PORT

      When prompted, provide the admin user’s password. After logging in, you will have access to the MySQL 8 server command line interface.

      First, we’ll create a new database for the application. Run the following command to create a new database named travellist:

      • CREATE DATABASE travellist;

Next, we’ll create a new user and set a password, using mysql_native_password as the default authentication method for this user. You are encouraged to replace the highlighted values with values of your own, and to use a strong password:

• CREATE USER 'travellist-user'@'%' IDENTIFIED WITH mysql_native_password BY 'MYSQL_PASSWORD';

Now we need to give this user permission over our application database:

• GRANT ALL ON travellist.* TO 'travellist-user'@'%';

      You can now exit the MySQL prompt with:
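• exit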

      You now have a dedicated database and a compatible user to connect from your Laravel application. In the next step, we’ll get the application code and set up configuration details, so your app can connect to your managed MySQL database.

      In this guide, we’ll use Laravel Migrations and database seeds to set up our application tables. If you need to migrate an existing local database to a DigitalOcean Managed MySQL database, please refer to our documentation on How to Import MySQL Databases into DigitalOcean Managed Databases.

      Step 3 — Setting Up the Demo Application

To get started, we’ll fetch the demo Laravel application from its GitHub repository. Feel free to inspect the contents of the application before running the next commands.

      The demo application is a travel bucket list app that was initially developed in our guide on How to Install and Configure Laravel with LEMP on Ubuntu 18.04. The updated app now contains visual improvements including travel photos that can be uploaded by a visitor, and a world map. It also introduces a database migration script and database seeds to create the application tables and populate them with sample data, using artisan commands.

To obtain the application code that is compatible with this tutorial, we’ll download the 1.1 release from the project’s repository on GitHub. We’ll save the downloaded zip file as travellist.zip inside our home directory:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/1.1.zip -o travellist.zip

      Now, unzip the contents of the application and rename its directory with:

      • unzip travellist.zip
      • mv travellist-laravel-demo-1.1 travellist

      Navigate to the travellist directory:
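• cd travellist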

Before going ahead, we’ll need to install a few PHP modules that are required by the Laravel framework, namely php-xml, php-mbstring, and php-bcmath. To install these packages, run:

• sudo apt install unzip php-xml php-mbstring php-bcmath

      To install the application dependencies, run:
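• composer install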

      You will see output similar to this:

      Output

Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 80 installs, 0 updates, 0 removals
  - Installing doctrine/inflector (v1.3.0): Downloading (100%)
  - Installing doctrine/lexer (1.1.0): Downloading (100%)
  - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%)
  - Installing erusev/parsedown (1.7.3): Downloading (100%)
...
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
Discovered Package: beyondcode/laravel-dump-server
Discovered Package: fideloper/proxy
Discovered Package: laravel/tinker
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Package manifest generated successfully.

      The application dependencies are now installed. Next, we’ll configure the application to connect to the managed MySQL Database.

      Creating the .env configuration file and setting the App Key

We’ll now create a .env file containing variables that will be used to configure the Laravel application on a per-environment basis. The application includes an example file that we can copy and then modify to reflect our environment settings.

      Copy the .env.example file to a new file named .env:
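• cp .env.example .env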

Now we need to set the application key. This key is used to encrypt session data, and should be set to a unique, 32-character-long string. We can generate this key automatically with the artisan tool:
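• php artisan key:generate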

      Let’s edit the environment configuration file to set up the database details. Open the .env file using your command line editor of choice. Here, we will be using nano:
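• nano .env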

      Look for the database credentials section. The following variables need your attention:

      DB_HOST: your managed MySQL server host.
      DB_PORT: your managed MySQL server port.
      DB_DATABASE: the name of the application database we created in Step 2.
      DB_USERNAME: the database user we created in Step 2.
      DB_PASSWORD: the password for the database user we defined in Step 2.

      Update the highlighted values with your own managed MySQL info and credentials:

      ...
      DB_CONNECTION=mysql
      DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      ...
      

      Save and close the file by typing CTRL+X then Y and ENTER when you’re done editing.

      Now that the application is configured to connect to the MySQL database, we can use Laravel’s command line tool artisan to create the database tables and populate them with sample data.

      Migrating and populating the database

      We’ll now use Laravel Migrations and database seeds to set up the application tables. This will help us determine if our database configuration works as expected.

      To execute the migration script that will create the tables used by the application, run:
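• php artisan migrate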

      You will see output similar to this:

      Output

Migration table created successfully.
Migrating: 2019_09_19_123737_create_places_table
Migrated:  2019_09_19_123737_create_places_table (0.26 seconds)
Migrating: 2019_10_14_124700_create_photos_table
Migrated:  2019_10_14_124700_create_photos_table (0.42 seconds)

      To populate the database with sample data, run:
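• php artisan db:seed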

      You will see output like this:

      Output

Seeding: PlacesTableSeeder
Seeded:  PlacesTableSeeder (0.86 seconds)
Database seeding completed successfully.

      The application tables are now created and populated with sample data.

      To finish the application setup, we also need to create a symbolic link to the public storage folder that will host the travel photos we’re using in the application. You can do that using the artisan tool:
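• php artisan storage:link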

      Output

      The [public/storage] directory has been linked.

      This will create a symbolic link inside the public directory pointing to storage/app/public, where we’ll save the travel photos. To check that the link was created and where it points to, you can run:
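• ls -la public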

      You’ll see output like this:

      Output

total 36
drwxrwxr-x  5 sammy sammy 4096 Oct 25 14:59 .
drwxrwxr-x 12 sammy sammy 4096 Oct 25 14:58 ..
-rw-rw-r--  1 sammy sammy  593 Oct 25 06:29 .htaccess
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 css
-rw-rw-r--  1 sammy sammy    0 Oct 25 06:29 favicon.ico
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 img
-rw-rw-r--  1 sammy sammy 1823 Oct 25 06:29 index.php
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 js
-rw-rw-r--  1 sammy sammy   24 Oct 25 06:29 robots.txt
lrwxrwxrwx  1 sammy sammy   41 Oct 25 14:59 storage -> /home/sammy/travellist/storage/app/public
-rw-rw-r--  1 sammy sammy 1194 Oct 25 06:29 web.config

      Running the test server (optional)

      You can use the artisan serve command to quickly verify that everything is set up correctly within the application, before having to configure a full-featured web server like Nginx to serve the application for the long term.

      We’ll use port 8000 to temporarily serve the application for testing. If you have the UFW firewall enabled on your server, you should first allow access to this port with:
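• sudo ufw allow 8000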

Now, to run the built-in PHP server that Laravel exposes through the artisan tool, run:

• php artisan serve --host=0.0.0.0 --port=8000

This command will block your terminal until interrupted with CTRL+C. It will use the built-in PHP web server to serve the application for test purposes on all network interfaces, using port 8000.

      Now go to your browser and access the application using the server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You will see the following page:

      Travellist v1.0

If you see this page, it means the application is successfully pulling data about locations and photos from the configured managed database. The image files are still stored on the local disk, but we’ll change this in a later step of this guide.

      When you are finished testing the application, you can stop the serve command by hitting CTRL+C.

      Don’t forget to close port 8000 again if you are running UFW on your server:
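• sudo ufw deny 8000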

      Step 4 — Configuring Nginx to Serve the Application

Although the built-in PHP web server is very useful for development and testing purposes, it is not intended to be used as a long-term solution to serve PHP applications. Using a full-featured web server like Nginx is the recommended way of doing that.

      To get started, we’ll move the application folder to /var/www, which is the usual location for web applications running on Nginx. First, use the mv command to move the application folder with all its contents to /var/www/travellist:

      • sudo mv ~/travellist /var/www/travellist

Now we need to give the web server user write access to the storage and bootstrap/cache folders, where Laravel stores application-generated files. We’ll set these permissions using setfacl, a command line utility that allows for more robust and fine-grained permission settings on files and folders.

To grant read, write, and execute (rwx) permissions to the web server user over the required directories, run:

      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/storage
      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/bootstrap/cache

      The application files are now in order, but we still need to configure Nginx to serve the content. To do this, we’ll create a new virtual host configuration file at /etc/nginx/sites-available:

      • sudo nano /etc/nginx/sites-available/travellist

      The following configuration file contains the recommended settings for Laravel applications on Nginx:

      /etc/nginx/sites-available/travellist

      server {
          listen 80;
          server_name server_domain_or_IP;
          root /var/www/travellist/public;
      
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Content-Type-Options "nosniff";
      
          index index.html index.htm index.php;
      
          charset utf-8;
      
          location / {
              try_files $uri $uri/ /index.php?$query_string;
          }
      
          location = /favicon.ico { access_log off; log_not_found off; }
          location = /robots.txt  { access_log off; log_not_found off; }
      
          error_page 404 /index.php;
      
    location ~ \.php$ {
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
              include fastcgi_params;
          }
      
    location ~ /\.(?!well-known).* {
              deny all;
          }
      }
      

      Copy this content to your /etc/nginx/sites-available/travellist file and adjust the highlighted values to align with your own configuration. Save and close the file when you’re done editing.

      To activate the new virtual host configuration file, create a symbolic link to travellist in sites-enabled:

      • sudo ln -s /etc/nginx/sites-available/travellist /etc/nginx/sites-enabled/

      Note: If you have another virtual host file that was previously configured for the same server_name used in the travellist virtual host, you might need to deactivate the old configuration by removing the corresponding symbolic link inside /etc/nginx/sites-enabled/.

      To confirm that the configuration doesn’t contain any syntax errors, you can use:
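• sudo nginx -t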

      You should see output like this:

      Output

      • nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      • nginx: configuration file /etc/nginx/nginx.conf test is successful

      To apply the changes, reload Nginx with:

      • sudo systemctl reload nginx

      If you reload your browser now, the application images will be broken. That happens because we moved the application directory to a new location inside the server, and for that reason we need to re-create the symbolic link to the application storage folder.

      Remove the old link with:

      • cd /var/www/travellist
      • rm -f public/storage

Now run the artisan command once again to generate the storage link:
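• php artisan storage:link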

      Now go to your browser and access the application using the server’s domain name or IP address, as defined by the server_name directive in your configuration file:

      http://server_domain_or_IP
      

      Travellist v1.0

      In the next step, we’ll integrate an object storage service into the application. This will replace the current local disk storage used for the travel photos.

      Step 5 — Integrating an S3-Compatible Object Storage into the Application

We’ll now set up the application to use an S3-compatible object storage service for storing the travel photos displayed on the index page. Because the application already has a few sample photos stored on the local disk, we’ll also use the s3cmd tool to upload the existing local image files to the remote object storage.

      Setting Up the S3 Driver for Laravel

      Laravel uses league/flysystem, a filesystem abstraction library that enables a Laravel application to use and combine multiple storage solutions, including local disk and cloud services. An additional package is required to use the s3 driver. We can install this package using the composer require command.

      Access the application directory:
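• cd /var/www/travellist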

      • composer require league/flysystem-aws-s3-v3

      You will see output similar to this:

      Output

Using version ^1.0 for league/flysystem-aws-s3-v3
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 8 installs, 0 updates, 0 removals
  - Installing mtdowling/jmespath.php (2.4.0): Loading from cache
  - Installing ralouphie/getallheaders (3.0.3): Loading from cache
  - Installing psr/http-message (1.0.1): Loading from cache
  - Installing guzzlehttp/psr7 (1.6.1): Loading from cache
  - Installing guzzlehttp/promises (v1.3.1): Loading from cache
  - Installing guzzlehttp/guzzle (6.4.1): Downloading (100%)
  - Installing aws/aws-sdk-php (3.112.28): Downloading (100%)
  - Installing league/flysystem-aws-s3-v3 (1.0.23): Loading from cache
...

      Now that the required packages are installed, we can update the application to connect to the object storage. First, we’ll open the .env file again to set up configuration details such as keys, bucket name, and region for your object storage service.

      Open the .env file:
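• nano .env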

      Include the following environment variables, replacing the highlighted values with your object store configuration details:

      /var/www/travellist/.env

      DO_SPACES_KEY=EXAMPLE7UQOTHDTF3GK4
      DO_SPACES_SECRET=exampleb8e1ec97b97bff326955375c5
      DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com
      DO_SPACES_REGION=ams3
      DO_SPACES_BUCKET=sammy-travellist
      

      Save and close the file when you’re done. Now open the config/filesystems.php file:

      • nano config/filesystems.php

      Within this file, we’ll create a new disk entry in the disks array. We’ll name this disk spaces, and we’ll use the environment variables we’ve set in the .env file to configure the new disk. Include the following entry in the disks array:

      config/filesystems.php

      
      'spaces' => [
         'driver' => 's3',
         'key' => env('DO_SPACES_KEY'),
         'secret' => env('DO_SPACES_SECRET'),
         'endpoint' => env('DO_SPACES_ENDPOINT'),
         'region' => env('DO_SPACES_REGION'),
         'bucket' => env('DO_SPACES_BUCKET'),
      ],
      
      

Still in the same file, locate the cloud entry and change it to set the new spaces disk as the default cloud filesystem disk:

config/filesystems.php

'cloud' => env('FILESYSTEM_CLOUD', 'spaces'),
      

      Save and close the file when you’re done editing. From your controllers, you can now use the Storage::cloud() method as a shortcut to access the default cloud disk. This way, the application stays flexible to use multiple storage solutions, and you can switch between providers on a per-environment basis.

      The application is now configured to use object storage, but we still need to update the code that uploads new photos to the application.

      Let’s first examine the current uploadPhoto route, located in the PhotoController class. Open the file using your text editor:

      • nano app/Http/Controllers/PhotoController.php

      app/Http/Controllers/PhotoController.php

      …
      
      public function uploadPhoto(Request $request)
      {
         $photo = new Photo();
         $place = Place::find($request->input('place'));
      
         if (!$place) {
             //add new place
             $place = new Place();
             $place->name = $request->input('place_name');
             $place->lat = $request->input('place_lat');
             $place->lng = $request->input('place_lng');
         }
      
         $place->visited = 1;
         $place->save();
      
         $photo->place()->associate($place);
         $photo->image = $request->image->store('/', 'public');
         $photo->save();
      
         return redirect()->route('Main');
      }
      
      

      This method accepts a POST request and creates a new photo entry in the photos table. It begins by checking if an existing place was selected in the photo upload form, and if that’s not the case, it will create a new place using the provided information. The place is then set to visited and saved to the database. Following that, an association is created so that the new photo is linked to the designated place. The image file is then stored in the root folder of the public disk. Finally, the photo is saved to the database. The user is then redirected to the main route, which is the index page of the application.

The store method call in this code is what we’re interested in. In that line, the image file is saved to disk using the store method, which can save files to any of the disks defined in the config/filesystems.php configuration file. In this case, it is using the default disk to store uploaded images.

We will change this behavior so that the image is saved to the object store instead of the local disk. In order to do that, we need to replace the public disk with the spaces disk in the store method call. We also need to make sure the uploaded file’s visibility is set to public instead of private.

      The following code contains the full PhotoController class, including the updated uploadPhoto method:

      app/Http/Controllers/PhotoController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Photo;
use App\Place;
use Illuminate\Support\Facades\Storage;
      
      class PhotoController extends Controller
      {
         public function uploadForm()
         {
             $places = Place::all();
      
             return view('upload_photo', [
                 'places' => $places
             ]);
         }
      
         public function uploadPhoto(Request $request)
         {
             $photo = new Photo();
             $place = Place::find($request->input('place'));
      
             if (!$place) {
                 //add new place
                 $place = new Place();
                 $place->name = $request->input('place_name');
                 $place->lat = $request->input('place_lat');
                 $place->lng = $request->input('place_lng');
             }
      
             $place->visited = 1;
             $place->save();
      
             $photo->place()->associate($place);
             $photo->image = $request->image->store('/', 'spaces');
             Storage::setVisibility($photo->image, 'public');
             $photo->save();
      
             return redirect()->route('Main');
         }
      }
      
      
      

      Copy the updated code to your own PhotoController so that it reflects the highlighted changes. Save and close the file when you’re done editing.

      We still need to modify the application’s main view so that it uses the object storage file URL to render the image. Open the travel_list.blade.php template:

      • nano resources/views/travel_list.blade.php

      Now locate the footer section of the page, which currently looks like this:

      resources/views/travel_list.blade.php

      @section('footer')
         <h2>Travel Photos <small>[ <a href="{{ route('Upload.form') }}">Upload Photo</a> ]</small></h2>
         @foreach ($photos as $photo)
             <div class="photo">
        <img src="{{ asset('storage') . '/' . $photo->image }}" />
                 <p>{{ $photo->place->name }}</p>
             </div>
         @endforeach
      
      @endsection
      

      Replace the current image src attribute to use the file URL from the spaces storage disk:

<img src="{{ Storage::disk('spaces')->url($photo->image) }}" />
      

If you go to your browser now and reload the application page, it will show only broken images. That happens because the image files for those travel photos are still only on the local disk. We need to upload the existing image files to the object storage, so that the photos already stored in the database can be successfully displayed on the application page.

      Syncing local images with s3cmd

      The s3cmd tool can be used to sync local files with an S3-compatible object storage service. We’ll run a sync command to upload all files inside storage/app/public/photos to the object storage service.

      Access the public app storage directory:

      • cd /var/www/travellist/storage/app/public

      To have a look at the files already stored in your remote disk, you can use the s3cmd ls command:

      • s3cmd ls s3://your_bucket_name

      Now run the sync command to upload existing files in the public storage folder to the object storage:

      • s3cmd sync ./ s3://your_bucket_name --acl-public --exclude=.gitignore

This will synchronize the current folder (storage/app/public) with the remote object storage’s root directory. You will get output similar to this:

      Output

upload: './bermudas.jpg' -> 's3://sammy-travellist/bermudas.jpg'  [1 of 3]
 2538230 of 2538230   100% in    7s   329.12 kB/s  done
upload: './grindavik.jpg' -> 's3://sammy-travellist/grindavik.jpg'  [2 of 3]
 1295260 of 1295260   100% in    5s   230.45 kB/s  done
upload: './japan.jpg' -> 's3://sammy-travellist/japan.jpg'  [3 of 3]
 8940470 of 8940470   100% in   24s   363.61 kB/s  done
Done. Uploaded 12773960 bytes in 37.1 seconds, 336.68 kB/s.

      Now, if you run s3cmd ls again, you will see that three new files were added to the root folder of your object storage bucket:

      • s3cmd ls s3://your_bucket_name

      Output

2019-10-25 11:49   2538230   s3://sammy-travellist/bermudas.jpg
2019-10-25 11:49   1295260   s3://sammy-travellist/grindavik.jpg
2019-10-25 11:49   8940470   s3://sammy-travellist/japan.jpg

      Go to your browser and reload the application page. All images should be visible now, and if you inspect them using your browser debug tools, you’ll notice that they’re all using URLs from your object storage.

      Testing the Integration

      The demo application is now fully functional, storing files in a remote object storage service, and saving data to a managed MySQL database. We can now upload a few photos to test our setup.

      Access the /upload application route from your browser:

      http://server_domain_or_IP/upload
      

      You will see the following form:

      Travellist  Photo Upload Form

      You can now upload a few photos to test the object storage integration. After choosing an image from your computer, you can select an existing place from the dropdown menu, or you can add a new place by providing its name and geographic coordinates so it can be loaded in the application map.

      Step 6 — Scaling Up a DigitalOcean Managed MySQL Database with Read-Only Nodes (Optional)

Because read-only operations are typically more frequent than write operations on database servers, it is a common practice to scale up a database cluster by setting up multiple read-only nodes. This will distribute the load generated by SELECT operations.

      To demonstrate this setup, we’ll first add 2 read-only nodes to our DigitalOcean Managed MySQL cluster. Then, we’ll configure the Laravel application to use these nodes.

      Access the DigitalOcean Cloud Panel and follow these instructions:

      1. Go to Databases and select your MySQL cluster.
      2. Click Actions and choose Add a read-only node from the drop-down menu.
      3. Configure the node options and hit the Create button. Notice that it might take several minutes for the new node to be ready.
4. Repeat steps 2 and 3 one more time so that you have 2 read-only nodes.
      5. Note down the hosts of the two nodes as we will need them for our Laravel configuration.

      Once you have your read-only nodes ready, head back to your terminal.

      We’ll now configure our Laravel application to work with multiple database nodes. When we’re finished, queries such as INSERT and UPDATE will be forwarded to your primary cluster node, while all SELECT queries will be redirected to your read-only nodes.

      First, go to the application’s directory on the server and open your .env file using your text editor of choice:

      • cd /var/www/travellist
      • nano .env

      Locate the MySQL database configuration and comment out the DB_HOST line:

      /var/www/travellist/.env

      DB_CONNECTION=mysql
      #DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      

Save and close the file when you’re done. Now open the config/database.php file in your text editor:
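• nano config/database.php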

      Look for the mysql entry inside the connections array. You should include three new items in this configuration array: read, write, and sticky. The read and write entries will set up the cluster nodes, and the sticky option set to true will reuse write connections so that data written to the database is immediately available in the same request cycle. You can set it to false if you don’t want this behavior.

/var/www/travellist/config/database.php

      ...
            'mysql' => [
               'read' => [
                 'host' => [
                    "http://www.digitalocean.com/READONLY_NODE1_HOST',
                    "http://www.digitalocean.com/READONLY_NODE2_HOST',
                 ],
               ],
               'write' => [
                 'host' => [
                   "http://www.digitalocean.com/MANAGED_MYSQL_HOST',
                 ],
               ],
             'sticky' => true,
      ...
      

      Save and close the file when you are done editing. To test that everything works as expected, we can create a temporary route inside routes/web.php to pull some data from the database and show details about the connection being used. This way we will be able to see how the requests are being load balanced between the read-only nodes.

      Open the routes/web.php file:
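• nano routes/web.php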

      Include the following route:

/var/www/travellist/routes/web.php

      ...
      
      Route::get('/mysql-test', function () {
  $places = App\Place::all();
        $results = DB::select( DB::raw("SHOW VARIABLES LIKE 'server_id'") );
      
        return "Server ID: " . $results[0]->Value;
      });
      

      Now go to your browser and access the /mysql-test application route:

      http://server_domain_or_IP/mysql-test
      

      You’ll see a page like this:

      mysql node test page

      Reload the page a few times and you will notice that the Server ID value changes, indicating that the requests are being randomly distributed between the two read-only nodes.

      Conclusion

      In this guide, we’ve prepared a Laravel 6 application for a highly available and scalable environment. We’ve outsourced the database system to an external managed MySQL service, and we’ve integrated an S3-compatible object storage service into the application to store files uploaded by users. Finally, we’ve seen how to scale up the application’s database by including additional read-only cluster nodes in the app’s configuration file.

The updated demo application code containing all modifications made in this guide can be found within the 2.1 tag in the application’s repository on GitHub.

      From here, you can set up a Load Balancer to distribute load and scale your application among multiple nodes. You can also leverage this setup to create a containerized environment to run your application on Docker.




      How To Manage Redis Databases and Keys


      Introduction

      Redis is an open-source, in-memory key-value data store. A key-value data store is a type of NoSQL database in which keys serve as unique identifiers for their associated values. Any given Redis instance includes a number of databases, each of which can hold many different keys of a variety of data types. In this tutorial, we will go over how to select a database, move keys between databases, and manage and delete keys.

      How To Use This Guide
      This guide is written as a cheat sheet with self-contained examples. We encourage you to jump to any section that is relevant to the task you’re trying to complete.

      The commands and outputs shown in this guide were tested on an Ubuntu 18.04 server running Redis version 4.0.9. To obtain a similar setup, you can follow Step 1 of our guide on How To Install and Secure Redis on Ubuntu 18.04. We will demonstrate how these commands behave by running them with redis-cli, the Redis command line interface. Note that if you’re using a different Redis interface — Redli, for example — the exact outputs of certain commands may differ.

      Alternatively, you could provision a managed Redis database instance to test these commands, but note that depending on the level of control allowed by your database provider, some commands in this guide may not work as described. To provision a DigitalOcean Managed Database, follow our Managed Databases product documentation. Then, you must either install Redli or set up a TLS tunnel in order to connect to the Managed Database over TLS.

      Managing Databases

      Out of the box, a Redis instance supports 16 logical databases. These databases are effectively siloed off from one another, and when you run a command in one database it doesn’t affect any of the data stored in other databases in your Redis instance.

      Redis databases are numbered from 0 to 15 and, by default, you connect to database 0 when you connect to your Redis instance. However, you can change the database you’re using with the select command after you connect:
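• select 15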

      If you’ve selected a database other than 0, it will be reflected in the redis-cli prompt:
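127.0.0.1:6379[15]>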

      To swap all the data held in one database with the data held in another, use the swapdb command. The following example will swap the data held in database 6 with that in database 8, and any clients connected to either database will be able to see changes immediately:
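• swapdb 6 8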

      swapdb will return OK if the swap is successful.

      If you want to move a key to a different Redis instance, you can run migrate. This command ensures the key exists on the target instance before deleting it from the source instance. When you run migrate, the command must include the following elements in this order:

      • The hostname or IP address of the destination database
      • The target database’s port number
      • The name of the key you want to migrate
      • The database number where you want to store the key on the destination instance
      • A timeout, in milliseconds, which defines the maximum amount of idle communication time between the two machines. Note that this isn’t a time limit for the operation, just that the operation should always make some level of progress within the defined length of time

      To illustrate:

      • migrate 203.0.113.0 6379 key_1 7 8000

      Additionally, migrate allows the following options which you can add after the timeout argument:

      • COPY: Specifies that the key should not be deleted from the source instance
      • REPLACE: Specifies that if the key already exists on the destination, the migrate operation should delete and replace it
• KEYS: Instead of providing a single specific key to migrate, you can enter an empty string ("") as the key argument and then list every key you want to migrate after the KEYS option, as shown in the example after this list. For more information on working with keys, see our tutorial on How To Troubleshoot Issues in Redis.
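For example, to migrate the keys key_1 and key_2 while keeping copies on the source instance and overwriting any existing copies on the destination, you could combine these options like this (a sketch reusing the hypothetical host and database numbers from above):

• migrate 203.0.113.0 6379 "" 7 8000 COPY REPLACE KEYS key_1 key_2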

      Managing Keys

      There are a number of Redis commands that are useful for managing keys regardless of what type of data they hold. We’ll go over a few of these in this section.

      rename will rename the specified key. If it’s successful, it will return OK:
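• rename old_key new_key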

      You can use randomkey to return a random key from the currently selected database:
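• randomkey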

      Output

      "any_key"

      Use type to determine what type of data the given key holds. This command’s output can be either string, list, hash, set, zset, or stream:
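• type key_1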

      Output

      "string"

      If the specified key doesn’t exist, type will return none instead.

      You can move an individual key to another database in your Redis instance with the move command. move takes the name of a key and the database where you want to move the key as arguments. For example, to move the key key_1 to database 8, you would run the following:
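• move key_1 8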

      move will return OK if moving the key was successful.

      Deleting Keys

      To delete one or more keys of any data type, use the del command followed by one or more keys that you want to delete:
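• del key_1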

      If this command deletes the key(s) successfully it will return (integer) 1. Otherwise, it will return (integer) 0.

The unlink command performs a similar function to del, with the difference being that del blocks the client as the server reclaims the memory taken up by the key. If the key being deleted is associated with a small object, the amount of time it takes for del to reclaim the memory is very small and the blocking time may not even be noticeable.

      However, it can become inconvenient if, for example, the key you’re deleting is associated with many objects, such as a hash with thousands or millions of fields. Deleting such a key can take a noticeably long time, and you’ll be blocked from performing any other operations until it’s fully removed from the server’s memory.

unlink, however, first determines the cost of deallocating the memory taken up by the key. If it’s small, then unlink functions the same way as del, deleting the key immediately while also blocking the client. However, if there’s a high cost to deallocate memory for a key, unlink will delete the key asynchronously by creating another thread and incrementally reclaiming memory in the background without blocking the client:
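• unlink key_1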

      Since it runs in the background, it’s generally recommended that you use unlink to remove keys from your server to reduce errors on your clients, though del will also suffice in many cases.

      Warning: The following two commands are considered dangerous. The flushdb and flushall commands will irreversibly delete all the keys in a single database and all the keys in every database on the Redis server, respectively. We recommend that you only run these commands if you are absolutely certain that you want to delete all the keys in your database or server.

      It may be in your interest to rename these commands to something with a lower likelihood of being run accidentally.

      To delete all the keys in the selected database, use the flushdb command:
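• flushdb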

      To delete all the keys in every database on a Redis server (including the currently selected database), run flushall:
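• flushall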

      Both flushdb and flushall accept the async option, which allows you to delete all the keys on a single database or every database in the cluster asynchronously. This allows them to function similarly to the unlink command, and they will create a new thread to incrementally free up memory in the background.
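For example, to clear every key in the currently selected database asynchronously, you could run:

• flushdb async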

      Backing Up Your Database

      To create a backup of the currently selected database, you can use the save command:
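• save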

      This will export a snapshot of the current dataset as an .rdb file, which is a database dump file that holds the data in an internal, compressed serialization format.

      save runs synchronously and will block any other clients connected to the database. Hence, the save command documentation recommends that this command should almost never be run in a production environment. Instead, it suggests using the bgsave command. This tells Redis to fork the database: the parent will continue to serve clients while the child process saves the database before exiting:
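• bgsave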

      Note that if clients add or modify data while the bgsave operation is occurring, these changes won’t be captured in the snapshot.

      You can also edit the Redis configuration file to have Redis save a snapshot automatically (known as snapshotting or RDB mode) after a certain amount of time if a minimum number of changes were made to the database. This is known as a save point. The following save point settings are enabled by default in the redis.conf file:

      /etc/redis/redis.conf

      . . .
      save 900 1
      save 300 10
      save 60 10000
      . . .
      dbfilename "nextfile.rdb"
      . . .
      

      With these settings, Redis will export a snapshot of the database to the file defined by the dbfilename parameter every 900 seconds if at least 1 key is changed, every 300 seconds if at least 10 keys are changed, and every 60 seconds if at least 10000 keys are changed.

      You can use the shutdown command to back up your Redis data and then close your connection. This command will block every client connected to the database and then perform a save operation if at least one save point is configured, meaning that it will export the database in its current state to an .rdb file while preventing clients from making any changes.

      Additionally, the shutdown command will flush changes to Redis’s append-only file before quitting if append-only mode is enabled. The append-only file mode (AOF) involves creating a log of every write operation on the server in a file ending in .aof after every snapshot. AOF and RDB modes can be enabled on the same server, and using both persistence methods is an effective way to back up your data.

      In short, the shutdown command is essentially a blocking save command that also flushes all recent changes to the append-only file and closes the connection to the Redis instance:
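• shutdown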

      Warning: The shutdown command is considered dangerous. By blocking your Redis server’s clients, you can make your data unavailable to users and applications that depend on it. We recommend that you only run this command if you are testing out Redis’s behavior or you are absolutely certain that you want to block all your Redis server’s clients.

      In fact, it may be in your interest to rename this command to something with a lower likelihood of being run accidentally.

If you’ve not configured any save points but still want Redis to perform a save operation, append the save option to the shutdown command:
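• shutdown save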

      If you have configured at least one save point but you want to shut down the Redis server without performing a save, you can add the nosave argument to the command:
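• shutdown nosave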

      Note that the append-only file can grow to be very long over time, but you can configure Redis to rewrite the file based on certain variables by editing the redis.conf file. You can also instruct Redis to rewrite the append-only file by running the bgrewriteaof command:
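• bgrewriteaof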

      bgrewriteaof will create the shortest set of commands needed to bring the database back to its current state. As this command’s name implies, it will run in the background. However, if another persistence command is running in a background process already, that command must finish before Redis will execute bgrewriteaof.

      Conclusion

      This guide details a number of commands used to manage databases and keys. If there are other related commands, arguments, or procedures you’d like to see in this guide, please ask or make suggestions in the comments below.

      For more information on Redis commands, see our tutorial series on How to Manage a Redis Database.


