
      Networks and Online Gaming: 3 Ways to Improve Performance and Retain Your Audience


      What makes or breaks the technical success of a new multiplayer video game? Or for that matter, the success of any given online gaming session or match? There are a lot of reasons, to be sure, but success typically boils down to factors outside of the end users’ control. At the top of the list, arguably, is network performance.

      In June 2018, Fortnite experienced a network interruption that caused world-famous streamer Ninja to swap mid-stream to Hi-Rez's Realm Royale. Ninja gave the game rave reviews, prompting a huge number of users to jump over to Realm Royale. And just this month, the launch of Wolcen: Lords of Mayhem was marred by infrastructure issues when the servers couldn't handle the number of users flocking to the game. While neither popular game may have suffered long-term damage, ongoing issues like these can push users toward a competitor's game or drive them away for good.

      Low latency is so vital that, in a 2019 survey, seven in 10 gamers said they will play a laggy game for less than 10 minutes before quitting. And nearly three in 10 said that what matters most about an online game is a seamless experience without lag. What can game publishers do to prevent lag, increase network performance and increase the chances that their users won't "rage quit"?

      Taking Control of the Network to Avoid Log Offs

      There are a few different ways to answer that question and avoid the scenario outlined above, but some solutions are stronger than others.

      Increase Network Presence with Edge Deployments

      One option is to spread nodes across multiple geographical presences to reduce the distance a user must traverse to connect. Latency starts as a physics problem, so the shorter the distance between data centers and users, the lower the latency.
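Because propagation delay is bounded by the speed of light in fiber (roughly two-thirds of c, or about 200 km per millisecond), you can sketch the best-case effect of an edge deployment with a back-of-envelope calculation. The distances and constant below are illustrative assumptions, not measured figures:

```python
# Rough round-trip propagation delay over fiber, ignoring routing,
# queuing and equipment overhead. Light in fiber covers ~200 km/ms.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical best-case round-trip time for a given one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Cross-country (~4,000 km) vs. a nearby edge node (~100 km)
print(round(min_rtt_ms(4000), 1))  # ~40 ms floor before any processing delay
print(round(min_rtt_ms(100), 1))   # ~1 ms floor from a nearby edge deployment
```

Real-world latency is always higher than this floor, which is exactly why the physical and logical issues described next still matter.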

      This approach isn't always the best answer, however: on any given day there can be both physical and logical network issues between a user and a host just miles apart. These problems can mean a difference of tens to thousands of milliseconds across a single carrier.

      Games are also increasingly global. You can put a server in Los Angeles to be close to users on the West Coast, but they’re going to want to play with their friends on the East Coast, or somewhere even further away.

      Connect Through the Same Carriers as the End Users

      Another answer is to purchase connectivity to some of the same networks end users will connect from, such as Comcast, AT&T, Time Warner, Telecom, Verizon, etc.

      A drawback of this option, though, stems from the abolishment of Net Neutrality. Carriers don't necessarily need to honor best-route methodology anymore, meaning they can prioritize cost efficiency over performance in their network configurations. I've personally observed traffic going from Miami to Tampa being routed all the way to Houston and back, as shown in the images below.

      Network routing
      The traffic on the left follows best-route methodology, while the traffic on the right going from Miami to Tampa is being routed through Houston. This is one consequence of the abolishment of Net Neutrality.

      Purchasing connectivity that gets you directly into the homes of end users may seem like the best method to reduce latency, but bottlenecks or indirect routing inside these large carriers' networks can cause issues. A major metro market in the United States can also have three to four incumbent consumer carriers providing residential services to gamers, necessitating an IP blend to effectively reach end users. However, startups and gaming companies don't want to build their own blended IP solution in every market where they build out.

      Choose a Host with a Blended Carrier Agreement

      The best possible solution to the initial scenario is to host with a carrier that has a blended carrier agreement, with a network route optimization technology to algorithmically traverse all of those carriers.

      Take, for example, INAP's Performance IP® solution. This technology makes a daily average of nearly 500 million optimizations across INAP's global network to automatically put a customer's outbound traffic on the best-performing route. It reduces latency by upwards of 44 percent and mitigates packet loss, sparing users the lag that can change the fate of a game's commercial success. You can explore our IP solution by running your own performance test.
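The core idea behind this kind of route optimization can be sketched in a few lines: continuously measure latency to a destination over each carrier in the blend, then place outbound traffic on the current best performer. This is an illustrative simplification, not INAP's actual engine; the carrier names and numbers are made up:

```python
# Illustrative sketch of blended-carrier route selection: given current
# latency measurements (ms) per carrier for a destination prefix, route
# outbound traffic over the best-performing carrier.

def pick_best_carrier(measurements: dict) -> str:
    """Return the carrier with the lowest measured latency."""
    return min(measurements, key=measurements.get)

latencies_ms = {"carrier_a": 42.1, "carrier_b": 35.6, "carrier_c": 58.3}
print(pick_best_carrier(latencies_ms))  # carrier_b
```

A production engine would do this per destination prefix, millions of times a day, and push the results into the routing tables automatically.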

      Taking Control When Uncontrollable Factors are at Play

      There will be times that game play is affected by end user hardware. It makes a difference, and it always will, but unfortunately publishers can’t control the type of access their users have to the internet. In some regions of the world, high speed internet is just a dream, while in others it would be unfathomable to go without high-speed internet access.

      Inline end user networking equipment can also play a role in network behavior. Modems, switches, routers and carrier equipment can cause poor performance. Connectivity being switched through an entire neighborhood, throughput issues during peak neighborhood activities, satellite dishes angled in an unoptimized position limiting throughput—there’s a myriad of reasons that user experience can be impacted.

      With these scenarios, end users often understand what they are working with and make mental allowances to cope with any limitations. Or they’ll upgrade their internet service and gaming hardware accordingly.

      The impact of network performance on streaming services and game play can't be overstated. Most end users will make what corrections they can to optimize game play and connectivity. The rest is up to the publisher.

      Explore INAP’s Global Network.

      LEARN MORE

      Dan Lotterman






      Check This Overlooked Setting to Troubleshoot ‘Strange’ Microsoft SQL Server Performance Issues


      As a SQL DBA or a system admin of highly transactional, performance-demanding SQL databases, you may often find yourself perplexed by "strange" performance issues reported by your user base. By strange, I mean any issue where you are out of ideas, having exhausted standard troubleshooting tactics, and where spending money on all-flash storage is just not in the budget.

      Working under pressure from customers or clients to resolve performance issues is not easy, especially when C-Level, sales and end users are breathing down your neck to solve the problem immediately. Contrary to popular belief from many end users, we all know that these types of issues are not resolved with a magic button or the flip of a switch.

      But what if there was a solution that came close?

      Let’s review the typical troubleshooting process, and an often-overlooked setting that may just be your new “magic button” for resolving unusual SQL server performance issues.

      Resolving SQL Server Performance Issues: The Typical Process

      Personally, I find troubleshooting SQL related performance issues very interesting. In my previous consulting gigs, I participated in many white boarding sessions and troubleshooting engagements as a highly paid last-resort option for many clients. When I dug into their troubleshooting process, I found a familiar set of events happening inside an IT department specific to SQL Server performance issues.

      Here are the typical steps:

      • Review monitoring tools for CPU, RAM, IO, Blocks and so on
      • Start a SQL Profiler to collect possible offending queries and get a live view of the slowness
      • Check underlying storage for latency per IO, and possible bottle necks
      • Check if anyone else is running any performance intensive processes during production hours
      • Find possible offending queries and stop them from executing
      • DBAs check their SQL indexes and other settings

      When nothing is found from the above process, the finger pointing starts. “It’s the query.” “No, it’s the index.” “It’s the storage.” “Nope. It’s the settings in your SQL server.” And so it goes.

      Sound familiar?

      An Often-Forgotten Setting to Improve SQL Server Performance

      Based on the typical troubleshooting process, IT either implements a solution to prevent identical issues from coming back or hopes to fix the issue by adding all-flash storage and other expensive resources. These solutions have their place and are all equally important to consider.

      There is, however, an often-forgotten setting that you should check first—the block allocation size of your NTFS partition in the Microsoft Windows Server.

      The block allocation setting of the NTFS partition is set at formatting time, which happens very early in the process and is often performed by a sysadmin building the VM or bare metal server well before Microsoft SQL is installed. In my experience, this setting is left as the default (4K) during the server build process and is never looked at again.

      Why is 4K a bad setting? A Microsoft SQL page is 8KB in size. With a 4K block, you are creating two IO operations for every page request. This is a big deal. The Microsoft recommended block size for SQL server is 64K. This way, the page is collected in one IO operation.
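The arithmetic behind this recommendation is simple: the number of block-level IOs needed per page read is the page size divided by the block size, rounded up. A quick sketch makes the 4K-versus-64K difference concrete:

```python
import math

def ios_per_page(page_size_kb: int, block_size_kb: int) -> int:
    """Block-level IO operations needed to read one SQL Server page."""
    return max(1, math.ceil(page_size_kb / block_size_kb))

SQL_PAGE_KB = 8  # a Microsoft SQL Server page is 8KB

print(ios_per_page(SQL_PAGE_KB, 4))   # 2 IOs per page with the 4K default
print(ios_per_page(SQL_PAGE_KB, 64))  # 1 IO per page with the recommended 64K
```

Halving the IO operations per page request is why this single formatting choice can have such an outsized effect on IO-bound workloads.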

      In bench tests of highly transactional databases on 64K block allocation in the NTFS partition, I frequently observe database performance improve by as much as 50 percent or more. The more IO intensive your DB is, the more this setting helps. Assuming your SQL server's drive layout is otherwise sound, for many "strange performance" issues this setting was the magic button. So, if you are experiencing unexplained performance issues, this simple formatting setting may be just what you are looking for.

      A word of caution: don't confuse this NTFS block allocation with your underlying storage blocks. Storage should be set to the manufacturer's recommended block size. For example, as of this writing, Nimble storage block allocation at 8K provided the best results with medium and large database sizes. This could change depending on the storage vendor and other factors, so be sure to check with your storage vendor prior to creating LUNs for SQL servers.

      How to Check the NTFS Block Allocation Setting

      Here is a simple way to check which block allocation size is being used by your Windows Server NTFS partition:

      Open the command prompt as administrator and run the following command, replacing C: with the drive letter of your database data files. Repeat this step for the drives containing your logs and TempDB files:

      • fsutil fsinfo ntfsinfo c:

      Look for the value "Bytes Per Cluster." If it's set to 4096, that is the undesirable 4K setting.
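If you are auditing many servers, you can capture the `fsutil` output and check it programmatically. The helper below is a hypothetical sketch: the function name is my own, and the sample string only imitates the relevant lines of real `fsutil fsinfo ntfsinfo` output:

```python
# Hypothetical helper: parse "Bytes Per Cluster" from captured
# `fsutil fsinfo ntfsinfo <drive>` output and flag the 4K default.

def cluster_size_bytes(fsutil_output: str) -> int:
    """Return the NTFS cluster size reported in fsutil output."""
    for line in fsutil_output.splitlines():
        if "Bytes Per Cluster" in line:
            # Value appears after the colon, possibly with separators
            return int(line.split(":")[1].strip().replace(",", ""))
    raise ValueError("Bytes Per Cluster not found in fsutil output")

sample = "Bytes Per Sector  :  512\nBytes Per Cluster :  4096\n"
size = cluster_size_bytes(sample)
print("needs 64K reformat" if size == 4096 else "ok")  # needs 64K reformat
```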

      The fix is easy but could be time consuming with large database sizes. If you have an AlwaysOn SQL cluster, this can be done with no downtime. If you don’t have an AlwaysOn MSSQL cluster, then a downtime window will be required. Or, perhaps it’s time to build an AlwaysOn SQL cluster and kill two birds with one stone.

      To address the issue, you will want to re-format the disks containing SQL data with 64K blocks.

      Concluding Thoughts

      If your NTFS block setting is at 4K right now, moving the DB files to 64K formatted disks will immediately improve performance. Don’t wait to check into this one.

      Explore INAP Cloud.

      LEARN MORE

      Rob Lerner






      Network Route Optimization Made Easy with Performance IP (Demo)


      Latency. It’s the mortal enemy of virtual dragon slayers, the bane of digital advertisers and the adversary of online retailers. Every end user has experienced the negative effects of latency, and even though they don’t always understand the intricacies of routing traffic through a global network, their responses to that latency can have a lasting impact on the companies whose networks aren’t functioning at peak performance.

      Consider this: More than seven in 10 online gamers will play a lagging game for less than 10 minutes before quitting. As much as 78 percent of end users will go to a competitor's site due to poor performance. And a one-second delay can cause an 11 percent drop in page views, a seven percent drop in conversions and a 16 percent drop in customer satisfaction. For online merchants, even the big boys like Amazon, each one-second delay in page load time can lead to losses of $1.6 billion annually.

      Milliseconds matter. Anyone focused on network optimization knows this. But did you know that Border Gateway Protocol (BGP) routes traffic through the best-performing path only around 18 percent of the time? The lowest number of hops does not equate to the fastest route, and yet seeking the path with the fewest hops is the default.
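The hop-count-versus-latency mismatch is easy to illustrate with a toy example. The route names and numbers below are invented for illustration; they mirror the kind of result the demo later in this post produces:

```python
# Toy illustration: the route with the fewest hops (the kind of path
# default BGP selection prefers) is not always the lowest-latency route.

routes = [
    {"name": "route_a", "hops": 2, "latency_ms": 74.5},
    {"name": "route_b", "hops": 3, "latency_ms": 58.5},
]

fewest_hops = min(routes, key=lambda r: r["hops"])
lowest_latency = min(routes, key=lambda r: r["latency_ms"])

print(fewest_hops["name"])     # route_a: what hop-count selection picks
print(lowest_latency["name"])  # route_b: the actually faster path
```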

      What if there was a better way to find the lowest latency route to reach your end users?

      Find the Fastest Network Route with Performance IP®

      With INAP, finding the lowest latency route doesn’t require you to lift a finger. Customers in our data centers are connected to our robust global network and proprietary route optimization engine. Performance IP® enhances BGP by assessing the best-performing routes in real time.

      This technology makes a daily average of nearly 500 million optimizations across our global network to automatically put your outbound traffic on the best-performing route. And with the meshed infrastructure of Tier 1 ISPs and our global network, you don't have to choose between reliability, connectivity and speed. You can download the Performance IP® data sheet here.

      “In online games, lag kills,” said Todd Harris, COO of Hi-Rez Studios, an INAP customer. “To deliver the best experience, we have to make sure that gamers are able to play on the best network while using the most efficient route. INAP delivers all of that.”

      Skeptical about what Performance IP® can do for you? Let’s run a destination test. Below, we’ll take you through the test step by step so you can get the most out of the demo when you try it for yourself.

      Breaking Down the Performance IP® Demo

      You can access the demo from the INAP homepage or the Performance IP® page. Get started by entering your website URL or any destination IP. We’ll use ca.gov for our test purposes.

      Performance IP Homepage

      Next, choose your source location. The locations in the drop-down menu represent INAP’s data centers and network points of presence where you can take advantage of the Performance IP® service. Each market has a different blend of Tier 1 ISPs. Performance IP® measures all carrier routes out of the data center and optimizes your traffic on the fastest route to your target address.

      Here, we’re running the test out of our Atlanta flagship data center, but you can test out all of our markets with the demo. We’ll run the route optimization test to our sample website, which is located in California. Once you have all your information entered, click “Run Destination Test.”

      Destination test
      Click to view full-size image.

      As you can see from the results of our test above, the shortest distance is not the lowest latency path. Each Greek letter on the chart represents an autonomous system (AS). The Performance IP® service looked at seven carriers in this scenario and optimized the route so that our traffic gets to its destination 21.50 percent (16.017 ms) faster than it would via the slowest carrier.
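As a back-of-envelope check on those demo numbers, if a 16.017 ms saving corresponds to 21.50 percent of the slowest carrier's latency (my reading of "faster than the slowest carrier"), the implied absolute latencies work out as follows:

```python
# Back-of-envelope check on the demo figures, assuming the 16.017 ms
# saving is 21.50 percent of the slowest carrier's latency.
saving_ms = 16.017
saving_fraction = 0.2150

slowest_ms = saving_ms / saving_fraction
optimized_ms = slowest_ms - saving_ms

print(round(slowest_ms, 1))    # ~74.5 ms via the slowest carrier
print(round(optimized_ms, 1))  # ~58.5 ms via the optimized route
```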

      Destination Test Summary
      Click to view full-size image.

      In the traceroute chart above, we can study the latency for each carrier more closely. Although the best-performing carrier in this scenario passed through three autonomous systems while all of the other carriers passed through only two, it was still the fastest. Note that default BGP would have sent us through any of the other carriers, including the slowest route through Carrier 3.

      Once you’ve had time to adequately study the outcome of the test, click “Continue” to see carrier performance over the last month. This chart measures the percentage of carrier prefixes originating from our Atlanta POP that had the best and worst performing routes for any given day of the month. While individual carrier performance can vary radically, if you’re a Performance IP® customer this won’t be a concern for you. Since the engine measures network paths millions of times a day, Performance IP® sends outbound traffic along the lowest latency path virtually 100 percent of the time.

      The final tab of the demo allows you to study our product line-up and open a chat to get a quote. Performance IP® is available for INAP colocation customers and is included with INAP Cloud products. If you’re not interested in these infrastructure solutions, you can still purchase Performance IP® from one of our data centers and connect it to your environment.

      Run the test for yourself, or chat with us now to get a quote.

      Explore the INAP Performance IP® Demo.

      LEARN MORE

      Laura Vietmeyer




