
      Check This Overlooked Setting to Troubleshoot ‘Strange’ Microsoft SQL Server Performance Issues


      As a SQL DBA or a system admin responsible for highly transactional, performance-demanding SQL databases, you may often find yourself perplexed by “strange” performance issues reported by your user base. By strange, I mean any issue that leaves you out of ideas: you have exhausted the standard troubleshooting tactics, and spending money on all-flash storage is just not in the budget.

      Working under pressure from customers or clients to resolve performance issues is not easy, especially when C-level executives, sales and end users are breathing down your neck to solve the problem immediately. Contrary to what many end users believe, these types of issues are not resolved with a magic button or the flip of a switch.

      But what if there was a solution that came close?

      Let’s review the typical troubleshooting process, and an often-overlooked setting that may just be your new “magic button” for resolving unusual SQL Server performance issues.

      Resolving SQL Server Performance Issues: The Typical Process

      Personally, I find troubleshooting SQL-related performance issues very interesting. In my previous consulting gigs, I participated in many whiteboarding sessions and troubleshooting engagements as a highly paid last-resort option for many clients. When I dug into their troubleshooting processes, I found a familiar set of events playing out inside IT departments whenever SQL Server performance issues arose.

      Here are the typical steps:

      • Review monitoring tools for CPU, RAM, IO, blocking and so on
      • Start a SQL Profiler trace to collect possible offending queries and get a live view of the slowness
      • Check underlying storage for latency per IO and possible bottlenecks
      • Check if anyone else is running performance-intensive processes during production hours
      • Find possible offending queries and stop them from executing (a query for this follows the list)
      • Have DBAs check SQL indexes and other settings
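
      As a lighter-weight alternative to a full Profiler trace for spotting offending queries, a DMV-based sketch like the one below can help. It assumes the SqlServer PowerShell module is installed, and 'localhost' is only a placeholder instance name:

          # Top 10 queries by cumulative CPU time, with a snippet of their text.
          # Assumes Install-Module SqlServer; 'localhost' is a placeholder.
          $query = "
          SELECT TOP (10)
                 qs.total_worker_time,
                 qs.execution_count,
                 SUBSTRING(st.text, 1, 200) AS query_text
          FROM   sys.dm_exec_query_stats AS qs
          CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
          ORDER BY qs.total_worker_time DESC;
          "
          Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query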

      When nothing is found from the above process, the finger pointing starts. “It’s the query.” “No, it’s the index.” “It’s the storage.” “Nope. It’s the settings in your SQL server.” And so it goes.

      Sound familiar?

      An Often-Forgotten Setting to Improve SQL Server Performance

      Based on the typical troubleshooting process, IT either implements a solution to prevent identical issues from coming back or hopes to fix the issue by adding all-flash storage and other expensive resources. These solutions have their place and are all worth considering.

      There is, however, an often-forgotten setting that you should check first—the block allocation size of your NTFS partition in the Microsoft Windows Server.

      The block allocation size of an NTFS partition is set at format time, which happens very early in the build process and is often done by a sysadmin standing up the VM or bare metal server, well before Microsoft SQL Server is installed. In my experience, this setting is left at the default (4K) during the server build and is never looked at again.

      Why is 4K a bad setting? A Microsoft SQL Server page is 8KB in size. With a 4K block, you are creating two IO operations for every page request, and that is a big deal. The Microsoft-recommended block size for SQL Server is 64K, which lets a page be collected in a single IO operation and aligns with a 64KB extent (eight 8KB pages).
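
      To make that arithmetic concrete, here is a small, purely illustrative PowerShell sketch; the inputs are just the page and extent sizes named above, not measured values:

          # Illustrative arithmetic only: reads needed for one 8KB page and one
          # 64KB extent (eight pages) at 4K vs. 64K NTFS cluster sizes.
          $pageKB   = 8
          $extentKB = 64
          foreach ($clusterKB in 4, 64) {
              $ioPerPage   = [math]::Ceiling($pageKB / $clusterKB)
              $ioPerExtent = [math]::Ceiling($extentKB / $clusterKB)
              "{0}K cluster: {1} IO per page, {2} IO per extent" -f $clusterKB, $ioPerPage, $ioPerExtent
          }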

      In bench tests of highly transactional databases on a 64K block allocation in the NTFS partition, I have frequently observed database performance improve by as much as 50 percent or more. The more IO-intensive your database is, the more this setting helps. Assuming your SQL server’s drive layout is otherwise sound, this setting has been the “magic button” for many “strange performance” issues. So, if you are experiencing unexplained performance issues, this simple formatting setting may be just what you are looking for.

      A word of caution: don’t confuse this NTFS block allocation with your underlying storage blocks. The storage should be set to the manufacturer’s recommended block size. For example, as of this writing, Nimble storage block allocation at 8K provided the best results with medium and large database sizes. This can change depending on the storage vendor and other factors, so be sure to check with your storage vendor prior to creating LUNs for SQL servers.

      How to Check the NTFS Block Allocation Setting

      Here is a simple way to check which block allocation size is being used by your Windows Server NTFS partition:

      Open the command prompt as administrator and run the following command, replacing C: with the drive letter of your database data files. Repeat this step for the drives containing your log and TempDB files:

      • fsutil fsinfo ntfsinfo c:

      Look for the reading “Bytes Per Cluster.”  If it’s set to 4096, that is the undesirable 4K setting.
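
      If you have several drives to check, a quick PowerShell sketch (assuming Windows Server 2012 or later, where the Get-Volume cmdlet is available) lists the cluster size of every lettered NTFS volume at once:

          # List the allocation unit (cluster) size of each NTFS volume.
          # 4096 is the undesirable 4K default; 65536 is the recommended 64K.
          Get-Volume |
              Where-Object { $_.FileSystem -eq 'NTFS' -and $_.DriveLetter } |
              Select-Object DriveLetter, FileSystemLabel, AllocationUnitSize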

      The fix is easy but can be time consuming with large databases. If you have an AlwaysOn SQL cluster, this can be done with no downtime. If you don’t have an AlwaysOn MSSQL cluster, a downtime window will be required. Or perhaps it’s time to build an AlwaysOn SQL cluster and kill two birds with one stone.

      To address the issue, you will want to re-format the disks containing SQL data with 64K blocks.
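
      As a sketch of that step in PowerShell (E: is a hypothetical drive letter; formatting erases the volume, so take the databases offline and move the files elsewhere first):

          # Reformat a SQL data drive with 64K (65536-byte) allocation units.
          # WARNING: this destroys all data on the volume.
          Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData'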

      Concluding Thoughts

      If your NTFS block setting is at 4K right now, moving the DB files to 64K formatted disks will immediately improve performance. Don’t wait to check into this one.


      Rob Lerner



      Microsoft Licensing Changes: How Will Bring Your Own License in AWS Be Affected?


      The option to deploy “on-premises” Microsoft software on any provider’s cloud without Software Assurance and License Mobility rights is now a thing of the past. In August, Microsoft updated its licensing terms, which went into effect on October 1, 2019. Under the updated terms, Windows licenses purchased without Software Assurance and License Mobility rights cannot be deployed on services offered by several public cloud providers.

      “The emergence of dedicated hosted cloud services has blurred the line between traditional outsourcing and cloud services and has led to the use of on-premises licenses on cloud services,” Microsoft said in a statement when the changes were first announced. “As a result, we’re updating the outsourcing terms for Microsoft on-premises licenses.”

      Microsoft noted that these updated terms create a clearer distinction between on-premises/traditional outsourcing and cloud services. Another goal of the changes is to create consistency in licensing terms across multitenant and dedicated cloud services.

      The changes, according to Microsoft, will allow the firm to better compete with dedicated offerings from other hyperscale cloud companies. The licensing change applies to the following providers:

      • Microsoft
      • Alibaba
      • Amazon (including VMware Cloud on AWS)
      • Google

      What does this mean for you?

      As of October 1, 2019, you can no longer bring your own Windows Licensing to the above providers without Software Assurance and License Mobility rights.

      This includes, but is not limited to, the following:

      • Windows Server Standard/Enterprise
      • SQL Server Standard/Enterprise
      • System Center Server

      Options for Bringing Licenses to AWS

      Do you have Microsoft licenses you’ve brought to AWS, or are you currently considering bringing your own license? Consider your options:

      Without Software Assurance, with licenses purchased prior to October 1, 2019—Amazon EC2 Dedicated Hosts give you access to hardware dedicated to your use, which makes it possible to bring Microsoft software licenses without Software Assurance or License Mobility benefits. However, these licenses must have been purchased prior to October 1, 2019, and cannot be upgraded to versions released after that date if they are to remain eligible.

      With Software Assurance—With Microsoft License Mobility through Software Assurance, you’re allowed to bring many Microsoft software licenses into the AWS Cloud, e.g., for use with Amazon EC2 instances.

      Purchase licensing via AWS—Based on a cost analysis using publicly available pricing, this is the least cost-effective option, due to recurring monthly subscription fees. However, it is convenient: not only does Amazon manage licensing compliance, it also supports many legacy versions of Microsoft software.

      Future Proofing Your Hybrid Cloud

      As Microsoft’s licensing changes go into effect, it’s worth doing a cost analysis for future projects. At INAP, we offer a variety of cloud solutions, and our solution engineers are experienced at creating cost-effective, full-featured clouds that best meet your workload and application needs.

      With our dedicated private cloud (DPC), virtual private cloud (VPC) and managed AWS and Azure offerings, we can run a cost analysis of multiple solutions, including hybrid DPC/VPC and AWS/Azure. No matter which solution is best for your applications, our goal is to ensure you’re able to meet resiliency, scalability and security requirements without sacrificing speed and power.


      Sean Cannata



      Deploying Microsoft SQL Servers in a Private Cloud with High Availability


      There are many considerations to account for when implementing a Microsoft SQL server in a private cloud environment. Today’s SQL-dependent applications have differing performance and high availability (HA) requirements. In Part 1 of this series, we covered implementing Microsoft SQL Servers in a private cloud for maximum performance. Here in Part 2, we’ll explore deployment in a private cloud with HA.

      HA Options Available Within Microsoft SQL Server

      When designing Microsoft SQL servers for HA, we must consider the requirements and features of SQL-dependent applications to ensure they work with highly available deployments. Below are a few HA options natively available with Microsoft SQL Server, organized by deployment type.

      Physical SQL Server Deployment with Single SAN and SQL Clustering

      This option pairs Microsoft Windows Server native clustering and SQL clustering with a single SAN or direct-attached storage subsystem. Here, we run two SQL Server nodes with a single copy of the databases residing on the SAN. Only one instance of SQL is active and attached to the databases at any given time.

      Pros: This option protects against a single SQL node failure. The surviving node will start its SQL instance and attach to the same databases to continue serving client requests. It offers the best compute performance and good storage performance.

      Cons: This option does not protect against database corruption or storage subsystem issues; both will impact the entire cluster. Licensing costs may be an issue when licensing per CPU.
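
      To confirm how an existing instance is deployed, a quick sketch along these lines (assuming the SqlServer PowerShell module is installed; 'localhost' is a placeholder instance name) shows whether the instance is clustered and which node currently owns it:

          # Ask the instance whether it is clustered and which physical node is active.
          Invoke-Sqlcmd -ServerInstance 'localhost' -Query "SELECT SERVERPROPERTY('IsClustered') AS IsClustered, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS ActiveNode;"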

      Virtual SQL Server Deployment with SQL on SAN and VMware HA

      Standard Windows clustering is not recommended in this option. VMware and other hypervisors provide a level of HA that protects against hypervisor node failure by restarting the SQL Server VM on other nodes.

      Pros: This is an easy way to implement HA without having to configure and support Windows Server clustering. Per-CPU SQL licensing costs are reduced. This option offers good compute performance and good storage performance.

      Cons: This option does not protect against database corruption or storage subsystem issues. Either problem will cause an outage of the SQL server.

      Virtual SQL Server Deployment with SQL on Local SSD Disk and SQL Native AlwaysOn HA

      In this option, we run two SQL Server virtual machines on local SSDs configured in RAID 10 directly on the hypervisor nodes. Each SQL VM runs on a separate node with its own local storage. The database that requires HA is protected using SQL AlwaysOn features, which maintain two copies of the database on two separate VMs and two separate, high-speed storage subsystems.

      Pros: This option offers strong HA protection against single SQL Server VM failure, single hardware node failure and single storage system failure. Automatic database page corruption protection is provided by the AlwaysOn technology, which keeps the two separate copies of the database in sync. It offers good compute performance and the best storage performance available in a virtual environment. The virtualized SQL server saves on per-CPU licensing costs by assigning just the amount of CPU that your SQL server needs, and only the active SQL Server instance requires licensing.

      Cons: VMware vMotion should not be used while the SQL VM is powered on. In this design, however, vMotion is not needed, since the AlwaysOn-protected database server does not need to vMotion during failover. By design, other VMware HA and resource management services will not be used in this option. Local storage has to be scoped with growth in mind; my rule of thumb is to scope for three years of required growth in local storage per server.
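
      To verify that AlwaysOn is actually keeping both copies in sync, a health-check sketch like the following can help (assuming the SqlServer PowerShell module; 'SQLVM1' is a placeholder for your primary replica):

          # Check replica roles and synchronization health for an availability group.
          $query = "
          SELECT ar.replica_server_name,
                 rs.role_desc,
                 rs.synchronization_health_desc
          FROM   sys.dm_hadr_availability_replica_states AS rs
          JOIN   sys.availability_replicas AS ar ON ar.replica_id = rs.replica_id;
          "
          Invoke-Sqlcmd -ServerInstance 'SQLVM1' -Query $query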

      Physical SQL Server Deployment with Local SSD Storage and SQL Native AlwaysOn HA

      We utilize two physical SQL servers in this option. Each hardware server has SSD RAID 10 for local storage, and each SQL server runs on a separate bare metal server. The database that requires HA is protected using SQL AlwaysOn features, which maintain two copies of the database on two separate bare metal servers and two separate, high-speed storage subsystems.

      Pros: This option provides strong HA protection against single SQL Server failure and single storage system failure. Automatic database page corruption protection is provided by the AlwaysOn technology, which keeps the two separate copies of the database in sync. This option offers the best compute performance and the best storage performance. Only the active SQL Server instance requires licensing.

      Cons: Per-CPU licensing costs could get pricey depending on CPU core count. Local storage must be scoped with growth in mind; my rule of thumb is to scope for three years of required local storage growth per server.
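
      As a back-of-the-envelope sketch of that rule of thumb (the 500 GB starting size and 30 percent yearly growth rate are made-up inputs; substitute your own):

          # Rough local-storage sizing for three years of growth.
          # $currentGB and $yearlyGrowth are hypothetical inputs.
          $currentGB    = 500
          $yearlyGrowth = 0.30
          $targetGB = [math]::Ceiling($currentGB * [math]::Pow(1 + $yearlyGrowth, 3))
          "Provision at least $targetGB GB of local SSD per server"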



      Closing Thoughts on Microsoft SQL Server HA Options

      Compare your workload requirements with the capabilities of each option and your budgetary considerations to determine what works best for you. Development or test SQL servers and lighter production workloads can easily run inside virtualized environments with SAN storage. Some of your more demanding production workloads may need to be placed into virtual or physical environments with local SSD-based storage for the best performance and HA.

      Integrating these options into your private cloud environment is simple and can save on costs down the line. When working with local storage, be sure to provision for disk space growth the first time. Future-proofing your local storage will save on maintenance headaches and costs in the long run.

      In this series, we have looked at basic SQL server concepts and the performance factors to consider when designing Microsoft SQL server deployments for HA and high performance. Download the whitepaper to get the series in its entirety.

      High performance is measured differently for different applications and depends greatly on end users’ expectations as they interact with your supported application. By collecting these simple measurements and requirements up front, you will be able to right-size the environment for your end users and stick to your budget.

      Read Part 1: How to Implement Microsoft SQL Servers in a Private Cloud for Maximum Performance

      Read: The Basics of a Microsoft SQL Server Architecture


      Laura Vietmeyer

