
      Network Route Optimization Made Easy with Performance IP (Demo)


      Latency. It’s the mortal enemy of virtual dragon slayers, the bane of digital advertisers and the adversary of online retailers. Every end user has experienced the negative effects of latency, and even though they don’t always understand the intricacies of routing traffic through a global network, their responses to that latency can have a lasting impact on the companies whose networks aren’t functioning at peak performance.

Consider this: More than seven in 10 online gamers will play a lagging game for less than 10 minutes before quitting. As much as 78 percent of end users will go to a competitor's site due to poor performance. And a one-second delay can cause an 11 percent drop in page views, a seven percent drop in conversions and a 16 percent drop in customer satisfaction. For online merchants, even the big boys like Amazon, each one-second delay in page load time can lead to losses of $1.6 billion annually.

Milliseconds matter. Anyone focused on network optimization knows this. But did you know that Border Gateway Protocol (BGP) routes traffic through the best-performing path only around 18 percent of the time? The lowest number of hops does not equate to the fastest route, and yet seeking the path with the fewest hops is BGP's default behavior.

      What if there was a better way to find the lowest latency route to reach your end users?

      Find the Fastest Network Route with Performance IP®

      With INAP, finding the lowest latency route doesn’t require you to lift a finger. Customers in our data centers are connected to our robust global network and proprietary route optimization engine. Performance IP® enhances BGP by assessing the best-performing routes in real time.

This technology makes a daily average of nearly 500 million optimizations across our global network to automatically put your outbound traffic on the best-performing route. And with the meshed infrastructure of Tier 1 ISPs and our global network, you don't have to choose between reliability, connectivity and speed. You can download the data sheet on Performance IP® here.

      “In online games, lag kills,” said Todd Harris, COO of Hi-Rez Studios, an INAP customer. “To deliver the best experience, we have to make sure that gamers are able to play on the best network while using the most efficient route. INAP delivers all of that.”

      Skeptical about what Performance IP® can do for you? Let’s run a destination test. Below, we’ll take you through the test step by step so you can get the most out of the demo when you try it for yourself.

      Breaking Down the Performance IP® Demo

      You can access the demo from the INAP homepage or the Performance IP® page. Get started by entering your website URL or any destination IP. We’ll use ca.gov for our test purposes.

      Performance IP Homepage

      Next, choose your source location. The locations in the drop-down menu represent INAP’s data centers and network points of presence where you can take advantage of the Performance IP® service. Each market has a different blend of Tier 1 ISPs. Performance IP® measures all carrier routes out of the data center and optimizes your traffic on the fastest route to your target address.

      Here, we’re running the test out of our Atlanta flagship data center, but you can test out all of our markets with the demo. We’ll run the route optimization test to our sample website, which is located in California. Once you have all your information entered, click “Run Destination Test.”

      Destination test

As you can see from the result of our test above, the shortest distance is not the lowest latency path. Each Greek letter on the chart represents an autonomous system (AS). The Performance IP® service looked at seven carriers in this scenario and was able to optimize the route so that our traffic gets to its destination 21.50 percent (16.017 ms) faster than the slowest carrier.

      Destination Test Summary

In the traceroute chart above, we can study the latency for each carrier more closely. Although the best-performing carrier in this scenario passed through three autonomous systems while all of the other carriers passed through only two, it was still the fastest. Note that default BGP routing could have sent us through any of the other carriers, including the slowest route through Carrier 3.
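You can run a rough version of this comparison yourself. On most Linux hosts, traceroute can annotate each hop with its autonomous system number, which shows how many AS boundaries your current default path crosses (we're assuming the same ca.gov sample destination used in the demo):

      • traceroute -A ca.gov

This won't measure alternate carriers the way the Performance IP® engine does, but it makes the hops-versus-latency trade-off easy to see on your own connection.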

      Once you’ve had time to adequately study the outcome of the test, click “Continue” to see carrier performance over the last month. This chart measures the percentage of carrier prefixes originating from our Atlanta POP that had the best and worst performing routes for any given day of the month. While individual carrier performance can vary radically, if you’re a Performance IP® customer this won’t be a concern for you. Since the engine measures network paths millions of times a day, Performance IP® sends outbound traffic along the lowest latency path virtually 100 percent of the time.

      The final tab of the demo allows you to study our product line-up and open a chat to get a quote. Performance IP® is available for INAP colocation customers and is included with INAP Cloud products. If you’re not interested in these infrastructure solutions, you can still purchase Performance IP® from one of our data centers and connect it to your environment.

      Run the test for yourself, or chat with us now to get a quote.

      Explore the INAP Performance IP® Demo.


      Laura Vietmeyer






      How to Implement Microsoft SQL Servers in a Private Cloud for Maximum Performance


There are many considerations to take into account when implementing a Microsoft SQL Server in a private cloud environment. Today's SQL-dependent applications have differing performance and high availability (HA) requirements. As a solutions architect, my goal is to provide our customers with the best-performing, highly available designs while managing budgetary concerns, scalability, supportability and total cost of ownership. Like so many tasks in IT infrastructure strategy, success is all about planning. There are many moving pieces, and balancing everything to reach our goal becomes a challenge if we don't ask the right questions up front.

In this two-part series, I'll share my approach to scoping, sizing and designing private cloud infrastructure capable of migrating or standing up new Microsoft SQL Server environments. In part one, I'll identify performance considerations and provide real-world examples to make sure that your SQL Server environment is ready to meet the application, growth and DR requirements of your organization. In part two, I'll focus on SQL Server deployment options with high availability.

If you need a review of SQL Server basics before we dive into the private cloud design, you can brush up here.

Here's what we'll cover in this post:

      SQL Server Performance Considerations: RAM, IOPS, CPU and More

      What follows are the basic performance considerations to take into account when designing a Microsoft SQL Server environment.

      RAM—These requirements are based on database size and developer recommendations. Ideally, you’ll have enough RAM to put the entire database into RAM. However, that’s not always possible with large DB sizes. RAM is delicious to SQL and the server will eat it up, so be generous if your budget allows. Leave 20 percent of RAM reserved on the server for OS and other services.
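As a concrete sketch of that guidance, the commands below cap SQL Server at 80 percent of a hypothetical 64GB dedicated server (52428 MB), leaving the rest for the OS. sp_configure and sqlcmd are standard tools, but the memory figure is just this example's assumption and should be sized to your own server:

      • sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
      • sqlcmd -Q "EXEC sp_configure 'max server memory (MB)', 52428; RECONFIGURE;"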

IOPS—The SQL Server is mostly a read/write machine, and its performance is dependent on disk IO and storage latency. With SSDs becoming more affordable, many SQL DBAs now prefer the high IOPS provided by SSD and NVMe drives. In the past, arrays of many 15K SAS disks in RAID10 were the norm for data volumes.

      Low latency is essential to SQL performance. By today’s standards, keeping latency below 5ms per IO is the norm. Sub 1ms latency is very common with local SSD storage. However, 10ms is still a good response time for most medium performance applications.
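If you want to verify what your storage can actually deliver before committing to a design, Microsoft's free DiskSpd utility is one common way to measure IOPS and latency. The run below is a sketch, not a tuning recipe: the 10GB test file, its E: path and the 70/30 read/write mix (-w30) are assumptions to adapt to your environment, while -b8K approximates SQL Server's 8KB page IO and -L reports the latency statistics you can check against the guidance above:

      • diskspd.exe -c10G -d60 -r -w30 -t4 -o8 -b8K -L E:\sqltest.dat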

CPU—Bare metal CPU allocation is easy: your server has a number of CPUs, and all of them can be allocated to SQL with no negative consideration other than licensing costs. Allocating CPU to a SQL Server in a VMware private cloud environment, however, should be done with caution. Licensing is based on CPU cores, and vCPU capacity that is assigned but sits unused can negatively impact the performance of a VMware VM. Rightsizing is key.

The SQL Server is not normally a CPU hog. Unexpected and prolonged high CPU usage during production hours means something is not right with the database or the SQL code, and adding CPU in those cases may not solve the issue. These unexpected conditions should be checked by SQL DBAs and developers. Higher than normal CPU utilization is to be expected during maintenance windows, however. In instances where a database seems to require high CPU utilization as a norm, a developer or SQL DBA should check the database for missing indexes or other issues before adding more vCPUs to virtual servers.
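One quick way to distinguish genuine CPU pressure from a merely busy server is the sys.dm_os_schedulers DMV: a runnable_tasks_count that stays above zero means tasks are waiting on CPU rather than IO. A minimal check, run here through sqlcmd as a sketch:

      • sqlcmd -Q "SELECT scheduler_id, current_tasks_count, runnable_tasks_count FROM sys.dm_os_schedulers WHERE scheduler_id < 255;"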

      SQL as a VM

      Based on the above considerations, when designing a SQL Server environment, budgetary and licensing considerations may start leaning your design toward SQL as a VM on a private cloud with a restricted number of assigned vCPUs. Running SQL on a private cloud is a great way to save on licensing costs. It’s also a good performer for most application loads, and because SQL is more of an IO-dependent system, its performance will greatly depend on the latency and IO availability of your storage system.

      Storage Systems IO Availability

SAN storage will normally provide great amounts of IO based on the number of disks and the disk types in the SAN. The norm to expect from many SAN systems is 5ms to 15ms of latency per IO. When your application demands sub-1ms latency, as gaming, financial or ad-tech systems often do, a local SSD RAID10 disk array will save the day. These local disk arrays are easy to set up and use with both VM and bare metal server implementations. You may ask, "What about the HA and extra redundancy features of using a SAN instead of local disk?" I will be discussing high availability in a performance-demanding environment in Part 2: Deploying Microsoft SQL Servers in a Private Cloud with High Availability. Check back soon.

      In the end, your application’s performance requirements and budget will drive your decision whether to run a private cloud or a bare metal SQL deployment. Both VM and bare metal deployments can be configured with high availability and will easily integrate into your private cloud environments.

With these performance considerations in mind, let's discuss the other scoping and sizing metrics we'll need to design our high-performing and future-proofed SQL deployment.

      Scoping and Sizing for Microsoft SQL Deployments in a Private Cloud

      As a Solution Engineer at INAP, I collect numerous measurements during the SQL scoping process. Before delving into numbers, I start by evaluating pain points clients may have with their current SQL Servers. I want to know what hurts.

      Asking these and other questions helps me understand how to resolve issues and future proof your next SQL deployment:

      • Is it performing to your expectation?
      • Are you meeting your SQL database maintenance time window?
• Do your users complain that their SQL-dependent application freezes sometimes for no reason?
      • When was the last time you restored a database from your backup and how long did that take?
      • What’s your failover plan in case your production database server dies?
      • What backup software is being used for SQL?
      • Are your databases running in simple or full restore mode?

      Once I know the exact problem, or if I’m designing for a new application, I start by collecting specific technical details. The most common specific measurements and requirements collected during the scoping phase are:

      • DB sizes for all databases per SQL instance
      • Instances (Is there more than one SQL instance, and why?)
      • Growth requirements (Usually measured in daily change rate to help with other considerations like backups and replication for DR purposes)
      • RAM requirements & CPU requirements
• Performance requirements, such as per-IO latency and IOPS requirements
      • HA requirements (Can your customer base wait for you to restore your SQL Server in case of an outage? How long does it take to restore?)
      • Regulatory/Compliance requirements such as HIPAA and PCI
• Maintenance schedule (Do the maintenance jobs complete on time every day?)
      • Replication requirements
• Reporting requirements (Does this deployment need a reporting server so as to not interrupt production workloads?)
• Backup (What is used today to back up production data? Has it been tested? What are the challenges?)

In environments requiring high-performance database response times, we utilize native Windows Server tools, such as perfmon, to collect very detailed performance metrics from existing SQL Servers to identify performance bottlenecks and other considerations that help us resolve these issues in current or future database deployments. Because a SQL Server's performance is heavily dependent on the disk subsystems, we will go deeper into disk design recommendations for private clouds, VMs and bare metal deployments.
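perfmon counters can also be collected from the command line with typeperf, which is convenient for gathering a baseline over a set window. The counter set below is a minimal sketch; the 15-second interval, 240 samples (one hour) and output file name are this example's assumptions, and the SQLServer counter path assumes a default instance:

      • typeperf "\Processor(_Total)\% Processor Time" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" "\SQLServer:Buffer Manager\Page life expectancy" -si 15 -sc 240 -f CSV -o sqlbaseline.csv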



      SQL Server Disk Layout for Performance, HA and DR

      Separating database files into different disks is a best practice. It helps performance and helps your DBA easily identify performance issues when troubleshooting. For example, you could have a runaway query beating up your TempDB. If your TempDB is on a separate disk from your prod database files, you can easily identify that the TempDB disk is being thrashed and that same TempDB issue is not stepping on other workloads in your environments.

      For high performance requirements, SSDs are recommended. The following is a basic disk layout for performance:

      • Data (MDF, NDF files): Fast read/write disk, many drives in an array preferred for best performance
      • Index (NDF files, not often used): Fast read/write disk, many drives in an array preferred for best performance
      • Log (LDF files): Fast write performance, not much reading happens here
      • TempDB (Temp database to crunch numbers and formulas): Should be a fast disk. SSD is preferred. Do not combine with other data or log files.
      • Page File (NTFS): Keep this on a separate disk LUN or Array if possible. C: drive is not a good place for a page file on a SQL Server.
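As a small sketch of putting one piece of this layout into practice, the T-SQL below (run via sqlcmd) relocates TempDB to its own drive. tempdev and templog are the default logical file names; the T: drive letter is this example's assumption, and the move takes effect after the next SQL Server restart:

      • sqlcmd -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');"
      • sqlcmd -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');"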

In a SQL deployment that requires DR, the above disk layout allows you to granularly replicate just the SQL data and OS volumes that need to be replicated, leaving TempDB and the page file out of the replication design, since those files are reset during reboot. TempDB and the page file also produce lots of noisy IO; replicating those files results in unnecessarily heavy bandwidth and disk utilization.

      Check back soon for Part 2: Deploying Microsoft SQL Servers in a Private Cloud with High Availability

      Rob Lerner






      How to Benchmark the Performance of a Redis Server on Ubuntu 18.04


      Introduction

      Benchmarking is an important practice when it comes to analyzing the overall performance of database servers. It’s helpful for identifying bottlenecks as well as opportunities for improvement within those systems.

Redis is an in-memory data store that can be used as a database, cache or message broker. It supports data structures ranging from the simple to the complex, including hashes, strings, sorted sets, bitmaps and geospatial data, among other types. In this guide, we'll demonstrate how to benchmark the performance of a Redis server running on Ubuntu 18.04, using a few different tools and methods.

      Prerequisites

To follow this guide, you'll need a server running Ubuntu 18.04 with Redis installed.

      Note: The commands demonstrated in this tutorial were executed on a dedicated Redis server running on a 4GB DigitalOcean Droplet.

Using the redis-benchmark Tool

      Redis comes with a benchmark tool called redis-benchmark. This program can be used to simulate an arbitrary number of clients connecting at the same time and performing actions on the server, measuring how long it takes for the requests to be completed. The resulting data will give you an idea of the average number of requests that your Redis server is able to handle per second.

      The following list details some of the common command options used with redis-benchmark:

      • -h: Redis host. Default is 127.0.0.1.
      • -p: Redis port. Default is 6379.
      • -a: If your server requires authentication, you can use this option to provide the password.
      • -c: Number of clients (parallel connections) to simulate. Default value is 50.
      • -n: How many requests to make. Default is 100000.
      • -d: Data size for SET and GET values, measured in bytes. Default is 3.
      • -t: Run only a subset of tests. For instance, you can use -t get,set to benchmark the performance of GET and SET commands.
      • -P: Use pipelining for performance improvements.
      • -q: Quiet mode, shows only the average requests per second information.

For instance, if you want to check the average number of requests per second that your local Redis server can handle, you can use:

      • redis-benchmark -q -n 100000

      You will get output similar to this, but with different numbers:

      Output

PING_INLINE: 85178.88 requests per second
      PING_BULK: 83056.48 requests per second
      SET: 72202.16 requests per second
      GET: 94607.38 requests per second
      INCR: 84961.77 requests per second
      LPUSH: 78988.94 requests per second
      RPUSH: 88652.48 requests per second
      LPOP: 87950.75 requests per second
      RPOP: 80971.66 requests per second
      SADD: 80192.46 requests per second
      HSET: 84317.03 requests per second
      SPOP: 78125.00 requests per second
      LPUSH (needed to benchmark LRANGE): 84175.09 requests per second
      LRANGE_100 (first 100 elements): 52383.45 requests per second
      LRANGE_300 (first 300 elements): 21547.08 requests per second
      LRANGE_500 (first 450 elements): 14471.78 requests per second
      LRANGE_600 (first 600 elements): 9383.50 requests per second
      MSET (10 keys): 71225.07 requests per second

      You can also limit the tests to a subset of commands of your choice using the -t parameter. The following command shows the averages for the GET and SET commands only:

      • redis-benchmark -t set,get -q

      Output

SET: 76687.12 requests per second
      GET: 82576.38 requests per second

      The default options will use 50 parallel connections to create 100000 requests to the Redis server. If you want to increase the number of parallel connections to simulate a peak in usage, you can use the -c option for that:

      • redis-benchmark -t set,get -q -c 1000

      Because this will use 1000 concurrent connections instead of the default 50, you should expect a decrease in performance:

      Output

SET: 69444.45 requests per second
      GET: 70821.53 requests per second

      If you want detailed information in the output, you can remove the -q option. The following command will use 100 parallel connections to run 1000000 SET requests on the server:

      • redis-benchmark -t set -c 100 -n 1000000

      You will get output similar to this:

      Output

====== SET ======
        1000000 requests completed in 11.29 seconds
        100 parallel clients
        3 bytes payload
        keep alive: 1

      95.22% <= 1 milliseconds
      98.97% <= 2 milliseconds
      99.86% <= 3 milliseconds
      99.95% <= 4 milliseconds
      99.99% <= 5 milliseconds
      99.99% <= 6 milliseconds
      100.00% <= 7 milliseconds
      100.00% <= 8 milliseconds
      100.00% <= 8 milliseconds
      88605.35 requests per second

The default settings use 3 bytes for values. You can change this with the option -d. The following command will benchmark GET and SET commands using 1MB values:

      • redis-benchmark -t set,get -d 1000000 -n 1000 -q

      Because the server is working with a much bigger payload this time, a significant decrease of performance is expected:

      Output

SET: 1642.04 requests per second
      GET: 822.37 requests per second

      It is important to realize that even though these numbers are useful as a quick way to evaluate the performance of a Redis instance, they don't represent the maximum throughput a Redis instance can sustain. By using pipelining, applications can send multiple commands at once in order to improve the number of requests per second the server can handle. With redis-benchmark, you can use the -P option to simulate real world applications that make use of this Redis feature.

      To compare the difference, first run the redis-benchmark command with default values and no pipelining, for the GET and SET tests:

      • redis-benchmark -t get,set -q

      Output

SET: 86281.27 requests per second
      GET: 89847.26 requests per second

      The next command will run the same tests, but will pipeline 8 commands together:

      • redis-benchmark -t get,set -q -P 8

      Output

SET: 653594.81 requests per second
      GET: 793650.75 requests per second

      As you can see from the output, there is a substantial performance improvement with the use of pipelining.

      Checking Latency with redis-cli

If you'd like a simple measurement of the average time a request takes to receive a response, you can use the Redis client to check for the average server latency. In the context of Redis, latency is a measure of how long it takes for a ping command to receive a response from the server.

The following command will show real-time latency stats for your Redis server:

      • redis-cli --latency

      You'll get output similar to this, showing an increasing number of samples and a variable average latency:

      Output

      min: 0, max: 1, avg: 0.18 (970 samples)

This command will keep running indefinitely. You can stop it with CTRL+C.

      To monitor latency over a certain period of time, you can use:

      • redis-cli --latency-history

      This will track latency averages over time, with a configurable interval that is set to 15 seconds by default. You will get output similar to this:

      Output

min: 0, max: 1, avg: 0.18 (1449 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.16 (1449 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.17 (1444 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.17 (1446 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.16 (1444 samples) -- 15.00 seconds range
      min: 0, max: 1, avg: 0.17 (1445 samples) -- 15.01 seconds range
      min: 0, max: 1, avg: 0.16 (1445 samples) -- 15.01 seconds range
      ...

Because the Redis server in our example is idle, there's not much variation between latency samples. If you have a peak in usage, however, this should be reflected as an increase in latency within the results.

If you'd like to measure the system latency only, you can use --intrinsic-latency for that. The intrinsic latency is inherent to the environment and depends on the hardware, the kernel, server neighbors and other elements that aren't controlled by Redis.

      You can see the intrinsic latency as a baseline for your overall Redis performance. The following command will check for the intrinsic system latency, running a test for 30 seconds:

      • redis-cli --intrinsic-latency 30

      You should get output similar to this:

      Output

      … 498723744 total runs (avg latency: 0.0602 microseconds / 60.15 nanoseconds per run). Worst run took 22975x longer than the average latency.

Comparing both latency tests can be helpful for identifying hardware or system bottlenecks that could affect the performance of your Redis server. Mind the units when comparing, though: redis-cli --latency reports averages in milliseconds, so the total latency for a request to our example server averaged 0.18 ms, while the intrinsic latency test reported an average of 0.06 microseconds. In this case the intrinsic latency is a negligible fraction of the total request time, which suggests the environment itself is not the bottleneck; if the intrinsic latency had been a large share of the total, that would point to time spent in processes that aren't controlled by Redis.

Benchmarking with memtier_benchmark

      Memtier is a high-throughput benchmark tool for Redis and Memcached created by Redis Labs. Although very similar to redis-benchmark in various aspects, Memtier has several configuration options that can be tuned to better emulate the kind of load you might expect on your Redis server, in addition to offering cluster support.

      To get Memtier installed on your server, you'll need to compile the software from source. First, install the dependencies necessary to compile the code:

      • sudo apt-get install build-essential autoconf automake libpcre3-dev libevent-dev pkg-config zlib1g-dev

Next, go to your home directory and clone the memtier_benchmark project from its GitHub repository:

      • cd
      • git clone https://github.com/RedisLabs/memtier_benchmark.git

      Navigate to the project directory and run the autoreconf command to generate the application configuration scripts:

      • cd memtier_benchmark
      • autoreconf -ivf

Run the configure script in order to generate the application artifacts required for compiling:

      • ./configure

      Now run make to compile the application:

      • make

      Once the build is finished, you can test the executable with:

      • ./memtier_benchmark --version

      This will give you the following output:

      Output

memtier_benchmark 1.2.17
      Copyright (C) 2011-2017 Redis Labs Ltd.
      This is free software. You may redistribute copies of it under the terms of the GNU General Public License <http://www.gnu.org/licenses/gpl.html>. There is NO WARRANTY, to the extent permitted by law.

      The following list contains some of the most common options used with the memtier_benchmark command:

      • -s: Server host. Default is localhost.
      • -p: Server port. Default is 6379.
      • -a: Authenticate requests using the provided password.
      • -n: Number of requests per client (default is 10000).
      • -c: Number of clients (default is 50).
      • -t: Number of threads (default is 4).
      • --pipeline: Enable pipelining.
      • --ratio: Ratio between SET and GET commands, default is 1:10.
      • --hide-histogram: Hides detailed output information.

Most of these options are very similar to the options present in redis-benchmark, but Memtier tests performance in a different way. To better simulate common real-world environments, the default benchmark performed by memtier_benchmark will test for GET and SET requests only, at a ratio of 1 to 10. With 10 GET operations for each SET operation in the test, this arrangement is more representative of a common web application using Redis as a database or cache. You can adjust the ratio value with the option --ratio.
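For instance, to emulate a write-heavy workload you could run one SET for every GET; the 1:1 value below is just an illustration:

      • ./memtier_benchmark --ratio=1:1 --hide-histogram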

      The following command runs memtier_benchmark with default settings, while providing only high-level output information:

      • ./memtier_benchmark --hide-histogram

Note: If you have configured your Redis server to require authentication, you should provide the -a option along with your Redis password to the memtier_benchmark command:

      • ./memtier_benchmark --hide-histogram -a your_redis_password

      You'll see output similar to this:

      Output

...
      4         Threads
      50        Connections per thread
      10000     Requests per client

      ALL STATS
      =========================================================================
      Type         Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
      -------------------------------------------------------------------------
      Sets          8258.50         ---          ---      2.19800       636.05
      Gets         82494.28    41483.10     41011.18      2.19800      4590.88
      Waits            0.00         ---          ---      0.00000          ---
      Totals       90752.78    41483.10     41011.18      2.19800      5226.93

      According to this run of memtier_benchmark, our Redis server can execute about 90 thousand operations per second in a 1:10 SET/GET ratio.

      It's important to note that each benchmark tool has its own algorithm for performance testing and data presentation. For that reason, it's normal to have slightly different results on the same server, even when using similar settings.

      Conclusion

In this guide, we demonstrated how to perform benchmark tests on a Redis server using two distinct tools: the included redis-benchmark, and the memtier_benchmark tool developed by Redis Labs. We also saw how to check the server latency using redis-cli. Based on the data obtained from these tests, you'll have a better understanding of what to expect from your Redis server in terms of performance, and where the bottlenecks of your current setup lie.


