
      Check This Overlooked Setting to Troubleshoot ‘Strange’ Microsoft SQL Server Performance Issues


As a SQL DBA or a system admin of highly transactional, performance-demanding SQL databases, you may often find yourself perplexed by “strange” performance issues reported by your user base. By strange, I mean any issue where you are out of ideas, having exhausted the standard troubleshooting tactics, and where spending money on all-flash storage is just not in the budget.

Working under pressure from customers or clients to resolve performance issues is not easy, especially when C-level executives, sales and end users are breathing down your neck to solve the problem immediately. Contrary to what many end users believe, these types of issues are not resolved with a magic button or the flip of a switch.

      But what if there was a solution that came close?

Let’s review the typical troubleshooting process and an often-overlooked setting that may just be your new “magic button” for resolving unusual SQL Server performance issues.

      Resolving SQL Server Performance Issues: The Typical Process

Personally, I find troubleshooting SQL-related performance issues very interesting. In my previous consulting gigs, I participated in many whiteboarding sessions and troubleshooting engagements as a highly paid last-resort option for many clients. When I dug into their troubleshooting processes, I found a familiar set of events happening inside IT departments when it comes to SQL Server performance issues.

      Here are the typical steps:

• Review monitoring tools for CPU, RAM, IO, blocking and so on
• Start a SQL Profiler trace to collect possible offending queries and get a live view of the slowness
• Check the underlying storage for latency per IO and possible bottlenecks
      • Check if anyone else is running any performance intensive processes during production hours
      • Find possible offending queries and stop them from executing
      • DBAs check their SQL indexes and other settings

      When nothing is found from the above process, the finger pointing starts. “It’s the query.” “No, it’s the index.” “It’s the storage.” “Nope. It’s the settings in your SQL server.” And so it goes.

      Sound familiar?

      An Often-Forgotten Setting to Improve SQL Server Performance

Based on the typical troubleshooting process, IT either implements a solution to prevent identical issues from coming back or hopes to fix the issue by adding all-flash storage and other expensive resources. These solutions have their place and are all equally important to consider.

There is, however, an often-forgotten setting that you should check first: the block allocation size (allocation unit size) of the NTFS partition on your Microsoft Windows Server.

The block allocation setting of an NTFS partition is set at formatting time, which happens very early in the build process and is often performed by a sysadmin building the VM or bare metal server well before Microsoft SQL Server is installed. In my experience, this setting is left at the default (4K) during the server build and is never looked at again.

Why is 4K a bad setting? A Microsoft SQL Server page is 8KB in size. With 4K blocks, every page spans two clusters, so you are creating two IO operations for every page request, and on a busy database that overhead adds up fast. The Microsoft-recommended block size for SQL Server volumes is 64K. This way, the page is collected in one IO operation, and the setting also aligns with SQL Server's 64KB extents (eight contiguous 8KB pages).

In bench tests of highly transactional databases on a 64K-formatted NTFS partition, I frequently see database performance improve by as much as 50 percent or more. The more IO intensive your database is, the more this setting helps. Assuming your SQL Server’s drive layout is otherwise sound, this setting has been the magic button for many “strange performance” issues. So, if you are experiencing unexplained performance issues, this simple formatting setting may be just what you are looking for.

A word of caution: Don’t confuse the NTFS block allocation with the block size of your underlying storage. Storage should be set to the manufacturer’s recommended block size. For example, at the time of writing, Nimble storage block allocation of 8K provided the best results with medium and large database sizes. This can change depending on the storage vendor and other factors, so be sure to check with your storage vendor prior to creating LUNs for SQL servers.

      How to Check the NTFS Block Allocation Setting

Here is a simple way to check which block allocation size is being used by your Windows Server NTFS partition:

Open the command prompt as administrator and run the following command, replacing C: with the drive letter of the volume holding your database data files. Repeat this step for the drives containing your log and TempDB files:

      • fsutil fsinfo ntfsinfo c:

Look for the “Bytes Per Cluster” value. If it’s set to 4096, that is the undesirable 4K setting.
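If you would rather survey every volume at once, the same information is exposed by the Win32_Volume WMI class, where BlockSize reports the allocation unit size in bytes. A minimal PowerShell sketch, run from an elevated prompt:

# List the allocation unit size for every mounted NTFS volume
Get-CimInstance -ClassName Win32_Volume |
    Where-Object { $_.FileSystem -eq "NTFS" } |
    Select-Object DriveLetter, Label, FileSystem, BlockSize

# BlockSize 4096 = the default 4K clusters; 65536 = the recommended 64K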

The fix is easy but can be time-consuming with large databases. If you have an AlwaysOn SQL cluster, the change can be made with no downtime. If you don’t have an AlwaysOn MSSQL cluster, then a downtime window will be required. Or perhaps it’s time to build an AlwaysOn SQL cluster and kill two birds with one stone.

      To address the issue, you will want to re-format the disks containing SQL data with 64K blocks.
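On Windows Server 2012 and later, the reformat itself is a one-liner. A hedged sketch, assuming D: is a data volume whose files have already been moved elsewhere (formatting erases everything on the volume):

# Re-format the emptied data volume with 64K allocation units
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"

# Legacy command prompt equivalent:
# format D: /FS:NTFS /A:64K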

      Concluding Thoughts

If your NTFS block setting is at 4K right now, moving the database files to 64K-formatted disks will immediately improve performance. Don’t wait to check into this one.


      Rob Lerner



      Deploying Microsoft SQL Servers in a Private Cloud with High Availability


There are many considerations to account for when implementing a Microsoft SQL Server in a private cloud environment. Today’s SQL-dependent applications have different performance and high availability (HA) requirements. In Part 1 of this series, we covered implementing Microsoft SQL Servers in a private cloud for maximum performance. Here in Part 2, we’ll explore deployment in a private cloud with HA.

      HA Options Available Within Microsoft SQL Server

When designing Microsoft SQL Servers for HA, we must consider the requirements and features of SQL-dependent applications to ensure they work with highly available deployments. Below are a few HA options natively available with Microsoft SQL Server, organized by deployment type.

      Physical SQL Server Deployment with Single SAN and SQL Clustering

This HA option pairs native Windows Server failover clustering with SQL Server clustering on a single SAN or direct-attached storage system. In this option, we run two SQL Server nodes against a single copy of the databases residing on the SAN. Only one SQL instance is active and attached to the databases at any given time.

      Pros: This option protects against a single SQL node failure. The surviving node will start its SQL instance and attach to the same databases to continue serving client requests. It offers the best compute performance and good storage performance.

      Cons: This option does not protect against database corruption or storage sub-system issues. Both corruption and SAN issues will impact the entire cluster. Licensing costs may be an issue if licensing by CPU.

Virtual SQL Server Deployment with SQL on SAN and VMware HA

Standard Windows clustering is not recommended in this option. VMware and other hypervisors provide a level of HA that protects against hypervisor node failure by restarting the SQL Server VM on other nodes.

Pros: This is an easy way to implement HA without having to configure and support Windows Server clustering. Per-CPU SQL licensing costs are reduced. This option offers good compute performance and good storage performance.

      Cons: This option does not protect against database corruption or storage sub-system issues. Both problems will cause an outage of the SQL server.

      Virtual SQL Server Deployment with SQL on Local SSD Disk and SQL Native AlwaysOn HA

      In this option, we utilize two SQL virtual machines running on top of local SSD drives configured in a RAID10 directly on the hypervisor nodes. Each SQL VM is running on separate nodes with local storage. The database, which requires HA, is being protected using SQL AlwaysOn features by maintaining two copies of the database on two separate VMs and two separate, high-speed sub-storage systems.

Pros: This option offers strong HA protection against single SQL Server VM failure, single hardware node failure and single storage system failure. Automatic database page corruption protection is provided by the AlwaysOn technology, which keeps two separate copies of the database in sync. It offers good compute performance and the best storage performance in a virtual environment. The virtualized SQL Server saves on per-CPU licensing costs by assigning just the amount of CPU your SQL Server needs. Only the active SQL Server instance requires licensing.

Cons: VMware vMotion should not be used while the SQL VM is powered on. In this design, however, vMotion is not needed, since the AlwaysOn-protected database server will not need to vMotion during a failover. By design, other VMware HA and resource management services are not used in this option. Local storage has to be scoped with growth in mind; my rule of thumb is to scope three years of required growth for local storage per server.
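Once an availability group is configured, it is worth confirming that both replicas are healthy and synchronizing. A minimal sketch using Invoke-Sqlcmd (it assumes the SqlServer PowerShell module is installed; the instance name SQL01 is a placeholder):

# Check AlwaysOn replica roles and synchronization health
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar
    ON ag.group_id = ar.group_id
JOIN sys.dm_hadr_availability_replica_states ars
    ON ar.replica_id = ars.replica_id;
"@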

      Physical SQL Server Deployment with Local SSD Storage and SQL Native AlwaysOn HA

      We utilize two physical SQL servers in this option. Each hardware server has SSD RAID10 for local storage. Each SQL server is running on separate bare metal servers. The database, which requires HA, is being protected using SQL AlwaysOn features by maintaining two copies of the database on two separate bare metal servers, and two separate, high-speed sub-storage systems.

Pros: This option provides strong HA protection against single SQL Server failure and single storage system failure. Automatic database page corruption protection is provided by the AlwaysOn technology, which keeps two separate copies of the database in sync. This option offers the best compute performance and the best storage performance. Only the active SQL Server instance requires licensing.

Cons: Per-CPU licensing costs can get pricey depending on the CPU core count. Local storage must be scoped with growth in mind; my rule of thumb is to scope three years of required local storage growth per server.



      Closing Thoughts on Microsoft SQL Server HA Options

      Compare your workload requirements with the abilities of each option and your budgetary considerations to determine what works best for you. Development or test SQL servers and production workloads can easily run inside virtualized environments with SAN storage. Some of your more demanding production workloads may need to be placed into virtual or physical environments with local SSD-based storage for best performance and HA needs.

Integrating these options into your private cloud environment is simple and can save on costs down the line. When working with local storage, be sure to future-proof your disk capacity for growth the first time; doing so will save on maintenance headaches and costs in the long run.

In this series, we have looked at basic SQL Server concepts and the performance factors to consider when designing Microsoft SQL Server deployments for HA and high performance. Download the whitepaper to get this series in its entirety.

High performance is measured differently for different applications and depends greatly on end users’ expectations as they interact with your supported application. By collecting these simple measurements and requirements up front, you will be able to right-size the environment for your end users and stick to your budget.

      Read Part 1: How to Implement Microsoft SQL Servers in a Private Cloud for Maximum Performance

      Read: The Basics of a Microsoft SQL Server Architecture


      Laura Vietmeyer



      The Basics of a Microsoft SQL Server Architecture


Microsoft SQL Server is one of the market leaders in database technology. It’s a relational database management system that supports a number of applications, including business intelligence, transaction processing and analytics. Microsoft SQL Server is built on SQL, a programming language used to manage databases and query data.

SQL Server organizes data into row-based tables, allowing related data to be connected while maintaining its security and consistency. Checks built into the server’s relational model ensure that database transactions are processed consistently.

Microsoft SQL Server also allows for simple installation and automatic updates, customization to meet your business needs and straightforward maintenance of your database. Below is a quick overview of how SQL Server manages data and how data is retrieved and modified.

      SQL Server Data Management

A SQL database consists of one or more data files (.mdf/.ndf) and one transaction log file (.ldf). The data files contain the schema and data, and the log file contains recent changes and additions. Data is organized into pages (like a book); each page is 8KB.
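You can see this layout by querying the sys.master_files catalog view, which lists every data and log file along with its size (stored as a count of 8KB pages). A minimal sketch, assuming the SqlServer PowerShell module and a local default instance:

# List data (.mdf/.ndf) and log (.ldf) files with sizes in MB
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
SELECT DB_NAME(database_id) AS database_name,
       name, type_desc, physical_name,
       size * 8 / 1024 AS size_mb   -- size is counted in 8KB pages
FROM sys.master_files;
"@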


      A SQL Server manages this data in three ways:

      • Reads
      • Writes
      • Modifies (Delete, replace, etc.)

      Data Retrieval with SQL

SQL Server accesses data by pulling the entire 8KB page from disk into memory (the buffer pool). Pages stay in memory until they are no longer needed. Often, the same page will be modified or read repeatedly as SQL Server works with the same data set.
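This caching behavior can be observed through the sys.dm_os_buffer_descriptors DMV, which exposes one row for every 8KB page currently held in memory. A minimal sketch, same assumptions as above:

# Count cached 8KB pages per database and convert to MB
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;
"@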


      Data Modification with SQL

SQL Server changes data by deleting or modifying existing rows, or by writing new data. All modifications are first written to the transaction log (which sits on disk, where it is safe) in case the SQL Server loses power before it writes the data pages back to disk. This scheme is known as write-ahead logging.

The modified 8KB page is written back to disk after it has gone unused for a certain period (or when a checkpoint occurs). Once a transaction is written to the data files (.mdf/.ndf), it is marked as written in the transaction log. After a power outage, SQL Server can retrieve completed transactions that were not yet written and apply them to the database files once it is back in operation.
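Log usage is easy to inspect, and a manual CHECKPOINT forces dirty pages out to the data files (something SQL Server normally does on its own). A minimal sketch; the database name YourDatabase is a placeholder:

# Check transaction log usage for a database, then force a checkpoint
Invoke-Sqlcmd -ServerInstance "localhost" -Database "YourDatabase" -Query @"
SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_bytes / 1048576 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
CHECKPOINT;  -- flush dirty pages to disk; normally automatic
"@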

      Taking Steps to Implement a Microsoft SQL Server

Today’s SQL-dependent applications have different performance and high availability requirements, meaning there are many factors to consider during implementation. Thinking about implementing a Microsoft SQL Server, or want to make sure yours is properly meeting your needs? INAP’s solutions architects can help with this process, and your SQL Servers can be hosted and managed on Bare Metal or Private Cloud.

      INAP’s latest managed cloud solution, Intelligent Monitoring, supports SQL Servers, monitoring for core application metrics. Get transparency and control over your servers with the support of INAP’s experts.

      Rob Lerner

