
      How to Choose a Data Center



      Deploying your Linode to a geographically advantageous data center can make a big difference in connection speeds to your server. Ideally, your site or application should be served from multiple points around the world, with requests sent to the appropriate region based on client geolocation. On a smaller scale, deploying a Linode in the data center nearest to you will make it easier to work with than one in a different region or continent.

Many things can affect network congestion, connection speeds, and throughput, so you should never treat a single reading as the sole data point. Always run tests in sets of three or five and average the results, and test on both weekends and weekdays for the most accurate information.

This page is a quick guide to choosing and speed testing a data center (DC). Start by creating a Linode in the data center in or near your region, or several Linodes in multiple regions if you’re close to more than one DC. From there, use Linode’s Facilities Speedtest page to find test domains to ping and files to download.

      Network Latency

      The Linux ping tool sends IPv4 ICMP echo requests to a specified IP address or hostname. Pinging a server is often used to check whether the server is up and/or responding to ICMP. Because ping commands also return the time it takes a request’s packet to reach the server, ping is commonly used to measure network latency.

      Ping a data center to test your connection’s latency to that DC:

      ping -c 5 speedtest.dallas.linode.com
      

      Use ping6 for IPv6:

      ping6 -c 5 speedtest.dallas.linode.com
      

      Note

Many internet connections still don’t support IPv6, so don’t be alarmed if ping6 commands don’t work from your local machine to your Linode. They will, however, work from your Linode to other IPv6-capable network connections (for example, between two Linodes in different data centers).
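To compare several regions at once, a small loop can average the ping results for you. This is only a sketch: the Dallas hostname comes from the command above, while the other hostnames are assumptions based on the same naming pattern, so substitute the hosts listed on the Facilities Speedtest page:

for host in speedtest.dallas.linode.com speedtest.newark.linode.com speedtest.fremont.linode.com; do
    echo -n "$host: "
    # The last line of ping's summary is "rtt min/avg/max/mdev = ..."; field 5 is the average
    ping -c 5 -q "$host" | awk -F'/' 'END { print $5 " ms avg" }'
done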

      Download Speed

Download speed is limited first and most heavily by your internet service plan’s speed, and second by local congestion between you and your internet service provider. For example, if your plan is capped at 60 Mbps, you won’t be able to download much faster than that from any server on the internet. Download speeds are described with several different units, so here are a few pointers to avoid confusion (a quick conversion example follows the list):

      • Residential internet connection packages are sold in speeds of megabits per second (abbreviated as Mbps, Mb/s, or Mbit/s).

      • One megabit per second (1 Mbps or 1 Mb/s) is 0.125 megabytes per second (0.125 MB/s). Desktop applications (ex: web browsers, FTP managers, Torrent clients) often display download speeds in MB/s.

• Mebibytes per second (MiB/s) are also sometimes used. One Mbps is equal to about 0.1192 MiB/s.
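As a quick sanity check on these conversions, here is a minimal awk one-liner; the 60 Mbps figure is only an example plan speed, so substitute your own:

awk 'BEGIN { mbps=60; printf "%.2f MB/s, %.2f MiB/s\n", mbps/8, mbps*1000000/8/1048576 }'

For 60 Mbps this prints 7.50 MB/s and about 7.15 MiB/s, which is the most you could expect from any download on that plan.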

To test the download speed from your data center of choice, use cURL or wget to download a test file from that data center. You can find the URLs on our Facilities Speedtest page.

      For example:

      curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
      wget http://speedtest.dallas.linode.com/100MB-dallas.bin
      

Below you can see that each time cURL is run, a different average download speed is reported, and each run takes a slightly different amount of time to complete. This is to be expected, and you should analyze multiple data sets to get a real feel for how a certain DC will perform for you.

      root@debian:~# curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  100M  100  100M    0     0  11.4M      0  0:00:08  0:00:08 --:--:-- 12.0M
      
      root@debian:~# curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  100M  100  100M    0     0  10.8M      0  0:00:09  0:00:09 --:--:--  9.9M
      
      root@debian:~# curl -O http://speedtest.dallas.linode.com/100MB-dallas.bin
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  100M  100  100M    0     0  9189k      0  0:00:11  0:00:11 --:--:-- 10.0M
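
If you want to repeat the measurement without keeping the downloaded files, a minimal loop like the following uses cURL’s speed_download write-out variable and discards the output; it reports the average speed of each run in bytes per second:

for i in 1 2 3; do
    # -s silences the progress meter, -o /dev/null discards the file
    curl -s -o /dev/null -w "Run $i: %{speed_download} bytes/s\n" http://speedtest.dallas.linode.com/100MB-dallas.bin
done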
      


      This guide is published under a CC BY-ND 4.0 license.




      How To Move a PostgreSQL Data Directory to a New Location on Ubuntu 18.04


      Introduction

      Databases grow over time, sometimes outgrowing the space on their original file system. When they’re located on the same partition as the rest of the operating system, this can also potentially lead to I/O contention.

      RAID, network block storage, and other devices can offer redundancy and improve scalability, along with other desirable features. Whether you’re adding more space, evaluating ways to optimize performance, or looking to take advantage of other storage features, this tutorial will guide you through relocating PostgreSQL’s data directory.

      Prerequisites

To complete this guide, you will need an Ubuntu 18.04 server with PostgreSQL installed and a non-root user with sudo privileges.

      In this example, we’re moving the data to a block storage device mounted at /mnt/volume_nyc1_01. If you are using Block Storage on DigitalOcean, this guide can help you mount your volume before continuing with this tutorial.

      Regardless of what underlying storage you use, though, the following steps can help you move the data directory to a new location.

      Step 1 — Moving the PostgreSQL Data Directory

Before we get started with moving PostgreSQL’s data directory, let’s verify the current location by starting an interactive PostgreSQL session. In the following command, psql is the command to enter the interactive monitor and -u postgres tells sudo to execute psql as the system’s postgres user:

• sudo -u postgres psql

Once you have the PostgreSQL prompt opened up, use the following command to show the current data directory:

• SHOW data_directory;

      Output

data_directory
------------------------------
/var/lib/postgresql/10/main
(1 row)

This output confirms that PostgreSQL is configured to use the default data directory, /var/lib/postgresql/10/main, so that’s the directory we need to move. Once you've confirmed the directory on your system, type \q and press ENTER to close the PostgreSQL prompt.
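If you prefer not to open an interactive session, the same check can also be run as a one-off command; this is just an equivalent shortcut using psql’s -c flag, not a separate step in this tutorial:

• sudo -u postgres psql -c 'SHOW data_directory;'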

      To ensure the integrity of the data, stop PostgreSQL before you actually make changes to the data directory:

      • sudo systemctl stop postgresql

      systemctl doesn't display the outcome of all service management commands. To verify that you’ve successfully stopped the service, use the following command:

      • sudo systemctl status postgresql

      The final line of the output should tell you that PostgreSQL has been stopped:

      Output

      . . . Jul 12 15:22:44 ubuntu-512mb-nyc1-01 systemd[1]: Stopped PostgreSQL RDBMS.
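As an optional extra check, systemctl can also report just the service’s state; once PostgreSQL has stopped, the following prints inactive:

• sudo systemctl is-active postgresql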

      Now that the PostgreSQL server is shut down, we’ll copy the existing database directory to the new location with rsync. Using the -a flag preserves the permissions and other directory properties, while -v provides verbose output so you can follow the progress. We’re going to start the rsync from the postgresql directory in order to mimic the original directory structure in the new location. By creating that postgresql directory within the mount-point directory and retaining ownership by the PostgreSQL user, we can avoid permissions problems for future upgrades.

      Note: Be sure there is no trailing slash on the directory, which may be added if you use tab completion. If you do include a trailing slash, rsync will dump the contents of the directory into the mount point instead of copying over the directory itself.

      The version directory, 10, isn’t strictly necessary since we’ve defined the location explicitly in the postgresql.conf file, but following the project convention certainly won’t hurt, especially if there’s a need in the future to run multiple versions of PostgreSQL:

      • sudo rsync -av /var/lib/postgresql /mnt/volume_nyc1_01
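Before renaming anything, you can optionally confirm that the copy preserved ownership; with the mount point used in this example, the new postgresql directory should be owned by the postgres user:

• ls -ld /mnt/volume_nyc1_01/postgresql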

      Once the copy is complete, we'll rename the current folder with a .bak extension and keep it until we’ve confirmed that the move was successful. This will help to avoid confusion that could arise from having similarly-named directories in both the new and the old location:

      • sudo mv /var/lib/postgresql/10/main /var/lib/postgresql/10/main.bak

      Now we’re ready to configure PostgreSQL to access the data directory in its new location.

      Step 2 — Pointing to the New Data Location

      By default, the data_directory is set to /var/lib/postgresql/10/main in the /etc/postgresql/10/main/postgresql.conf file. Edit this file to reflect the new data directory:

      • sudo nano /etc/postgresql/10/main/postgresql.conf

      Find the line that begins with data_directory and change the path which follows to reflect the new location. In the context of this tutorial, the updated directive will look like this:

      /etc/postgresql/10/main/postgresql.conf

      . . .
      data_directory = '/mnt/volume_nyc1_01/postgresql/10/main'
      . . .
      

      Save and close the file by pressing CTRL + X, Y, then ENTER. This is all you need to do to configure PostgreSQL to use the new data directory location. All that’s left at this point is to start the PostgreSQL service again and check that it is indeed pointing to the correct data directory.
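If you’d like to double-check the edit before restarting, a quick grep of the same configuration file shows the active data_directory line:

• grep data_directory /etc/postgresql/10/main/postgresql.conf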

      Step 3 — Restarting PostgreSQL

After changing the data_directory directive in the postgresql.conf file, go ahead and start the PostgreSQL server using systemctl:

      • sudo systemctl start postgresql

      To confirm that the PostgreSQL server started successfully, check its status by again using systemctl:

      • sudo systemctl status postgresql

      If the service started correctly, you will see the following line at the end of this command’s output:

      Output

. . . Jul 12 15:45:01 ubuntu-512mb-nyc1-01 systemd[1]: Started PostgreSQL RDBMS. . . .

Lastly, to make sure that the new data directory is indeed in use, open the PostgreSQL command prompt again:

• sudo -u postgres psql

Check the value for the data directory again:

• SHOW data_directory;

      Output

data_directory
-----------------------------------------
/mnt/volume_nyc1_01/postgresql/10/main
(1 row)

This confirms that PostgreSQL is using the new data directory location. Following this, take a moment to ensure that you’re able to access your database as well as interact with the data within; a quick check follows.
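Exactly how you verify this depends on your data, but as a minimal sketch, listing your databases and running a trivial query against one of them is usually enough; mydb below is only a placeholder for one of your own database names:

• sudo -u postgres psql -c '\l'
• sudo -u postgres psql -d mydb -c 'SELECT 1;'

Once you’ve verified the integrity of any existing data, you can remove the backup data directory: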

      • sudo rm -Rf /var/lib/postgresql/10/main.bak

      With that, you have successfully moved your PostgreSQL data directory to a new location.

Conclusion

      If you’ve followed along, your database should be running with its data directory in the new location and you’ve completed an important step toward being able to scale your storage. You might also want to take a look at 5 Common Server Setups For Your Web Application for ideas on how to create a server infrastructure to help you scale and optimize web applications.




      World Backup Day 2018: The Best Advice and Resources for Protecting Your Business’s Critical Data


      Celebrated since 2011 on March 31, World Backup Day is an annual clarion call to consumers and businesses alike to protect their most important information and data by making secure, accessible copies.

      The call to action has never been more important. The amount of data being produced and stored continues to exponentially grow in the digital era, and not coincidentally, cybersecurity threats are evolving in sophistication and volume.

      So even though backups shouldn’t be a focus just one day a year, especially for businesses, we fully endorse any trending Twitter hashtag that raises awareness.   

Over the past few years, my colleagues and I at SingleHop’s ThinkIT blog have advocated numerous ways to take control. Below are a few of my favorite pieces. If you haven’t read them yet, we hope you learn a thing or two. And if you’d rather just start backing up today, we’ve got a 30-day free trial of one of the best cloud backup services on the market ready and waiting.

      If you’re new to the world of cloud-based backup services, start with my overview of BaaS.

      In this blog, SingleHop’s Director of Solutions Architecture, Paul Painter, outlines the considerations for crafting a backup and retention policy based on business needs and criticality of data. A must read if you’re looking to kick your business continuity strategy up a notch.

      SingleHop’s very own Veeam Vanguard, Eugene K., shares his tips, tricks and go-to tools for calculating and planning storage requirements.

While we’re already a quarter of the way through 2018, the Veeam-supported backup methods detailed in this blog provide a great footing for any new World Backup Day resolutions.

      Finally, see how one of our clients transformed their business by implementing a Veeam Cloud Connect solution.

      Back Up to the Cloud with SingleHop & Veeam

      Off-site backups are critical. In just minutes, you could be backing up to the cloud with our free trial — no credit card required. Get Started.


