Top 10 Tips for SQL Server Performance and Resiliency

This article is part 8 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #8: Default TempDB Settings

By default, when you install SQL Server, the TempDB database is not optimized. For SQL Servers that use TempDB even moderately, I recommend optimizing it for the best performance. TempDB is the storage area SQL Server uses to hold data for short periods of time; the contents of TempDB include temporary tables and data being sorted for query results. The data in TempDB is so temporary that each time the SQL Server service restarts, TempDB is recreated.

Due to the specific nature of the data in TempDB and the frequency with which it is used, it is important to ensure you have the most optimal configuration.

Optimizing TempDB is so important that Microsoft is changing how it is installed with SQL Server 2016 to encourage the optimization of TempDB. How do you optimize TempDB? It’s not difficult at all if you follow these few pointers:

Place your TempDB on the fastest storage you have. Ultimately this storage should be pounding out a latency of less than 5 milliseconds. Does your server have access to SSD storage? If so, that is a great place for TempDB.

A great number of studies have been done to determine the ideal number of files you should split your TempDB over. In my experience, I tend to create one TempDB data file for each processor core on the server; however, I don’t do this until I find there is some contention in TempDB.
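If you suspect contention, you can look for PAGELATCH waits on TempDB allocation pages before adding files. Here is a minimal sketch; the file path, logical name, and sizes in the ALTER DATABASE statement are examples only, not recommendations for your hardware:

    -- Look for allocation contention in TempDB: PAGELATCH waits whose
    -- resource starts with database_id 2 (TempDB) point at PFS/GAM/SGAM pages.
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE N'PAGELATCH%'
      AND resource_description LIKE N'2:%';

    -- If contention shows up, add a data file (path and size are examples).
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdb2.ndf',
              SIZE = 4GB, FILEGROWTH = 512MB);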

Grow your TempDB to the size you need it. Your TempDB is going to be recreated each time the service is restarted, so if your default database size is smaller than the normal operational size, you are going to go through a number of growth events. Speaking of growth events, it is better to have controlled growth rather than a number of little growth events, so we recommend reviewing the autogrowth size.
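Here is a quick sketch of pre-sizing TempDB, assuming the default logical file names tempdev and templog; the sizes and growth increments are examples you would adjust to your own workload:

    -- Pre-size the data and log files and set a fixed growth increment so
    -- any growth that does happen is controlled rather than incremental.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 256MB);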

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans


This article is part 7 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #7: Disaster Recovery Plans

Often people hear “disaster recovery plan” and the first reaction is to start worrying about the costs. Disaster recovery plans don’t have to be expensive; expensive disaster recovery plans come from strict requirements.

About 10 years ago, when I started as an independent consultant, one of my first clients contacted me to help build out a disaster recovery plan. After our initial discussion I learned some consulting firms had forecasted hundred-thousand-dollar solutions. Many large companies would look at that number and determine it was a bargain; however, this client’s company made less than $50k a year. The data changed about once a year, and if the database was down a week or two, it was questionable whether anyone would even notice. It was easy to see that the hundred-thousand-dollar solution was extremely over-engineered.

Don’t ignore the basics

Disaster recovery solutions should start with two basic questions: what is the recovery point objective, and what is the recovery time objective?

  • RPO – Recovery Point Objective – To what point must the database be restored after a disaster? Another way to ask this question would be: how much data can be lost?
  • RTO – Recovery Time Objective – How much time can elapse after the disaster has occurred? Or, how long can your system be down?

Depending on these answers, additional questions will arise; however, these two questions can help determine which potential solutions will work. SQL Server offers a number of solutions, from transaction log shipping to AlwaysOn Availability Groups.
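As a simple illustration of how the RPO drives the solution: with log shipping, the frequency of your transaction log backups sets your worst-case data loss. A sketch, with a hypothetical database name and backup share:

    -- If this runs every 15 minutes, roughly 15 minutes of data is at risk;
    -- a tighter RPO means a tighter schedule (or a different technology).
    BACKUP LOG YourDatabase
    TO DISK = N'\\DRServer\LogBackups\YourDatabase.trn'
    WITH COMPRESSION, CHECKSUM;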

Pay Attention to the Details

Whenever I visit a datacenter for a client, I make sure to take some time to review how the cages are wired. On more than one occasion I have seen servers with redundant power supplies that have both power cords plugged into one circuit. This configuration will protect you if one of the power supplies goes bad; however, if the circuit goes down, the redundant power supply isn’t any help.

When executing a disaster recovery plan, ensure all the small details are checked. If there is a single point of failure in the system, Murphy is going to find it.

Test

I can tell you the most common mistake I see on a regular basis with disaster recovery solutions is the lack of testing. Some testing is better than no testing, but the best testing mimics actual disasters. If there is a power outage for your servers and you have five minutes to get everything moved, do you know the steps to complete before the uninterruptible power supply loses its charge? What steps must you take if you don’t have the five minutes? I was working with the chief technology officer of a major education facility, and he had a vendor telling him he was safe and didn’t have to worry about it. His contract was for a 15-minute recovery point, so we reached out to the vendor and asked them to prove it.

The lesson here is to perform regular, realistic tests; if they don’t work, find out why and fix it.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History

You might have been following along with my recent series of posts on a number of resiliency tips for SQL Server. I have compiled these tips based on my observations over the last 20 or so years. Over the summer I have the opportunity to present a session based on these tips. This week I am in New York to present at SQL Saturday #380. Attendance is free, with a small fee to help cover the cost of lunch.

June 6th I will be in Colorado Springs, CO, presenting the same session at SQL Saturday #415.

I believe there are a couple more SQL Saturdays I will be at this summer. I hope to see many old friends and meet many new ones. Tip #7 should be posted Friday 5/29. If you are looking for a list of the tips that have already been posted, you can find them here:

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History

Tip #6 – Change History

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 6 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #6: Not Keeping a Change History

Business today moves fast; companies change interfaces to keep content relevant and continue to offer new services to their client base. In a perfect world, code is tested 10 ways to Sunday and is never released with bugs or design issues; however, we don’t live in that world. I mean no disrespect to the developers who spend hours upon hours trying to provide the best changes possible. There are just so many variables that making a release perfect every time is not a reality.

There is an inherent risk with any change, whether to software, hardware, or configuration, and SQL Server is no exception.

When a change is being planned, no matter how small, it should be documented. Chances are most DBAs have heard this before; however, simple changes are often overlooked. Documenting the change is a great start; however, keep in mind that it is critical that the documented changes are easy to research. In my experience, most issues with SQL Server can be traced back to the date an issue started. Because of this, I recommend storing documents so they can be searched by date.

Stability increases when you embrace change history and take change management a couple of steps further. Consider setting up a change process, implementing policies such as not allowing changes to be made to SQL Server unless they are made via scripts. There are some great benefits to only allowing changes via script. Consider a small table change; here are a few of the benefits you will see by using a script over the user interface (a quick sketch follows the list):

  • When changes are made via script, and the script has been saved, it is easy to see exactly what was executed. When changes are made via the user interface, you have to assume you know what the exact change was.
  • Changes made via the user interface are susceptible to the “fat finger,” otherwise known as a typo. You can review your change with your own eyes, but having others review your change is more difficult. If you mistype a name, you are left having to make a second change to correct your mistake.
  • When using a script, you can share what the change is going to look like before the change is made. You can send the new table layout to other teams in the company, and they can see the full scope of the change.
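For example, the entire change might be nothing more than a saved script like this; the table and column names are hypothetical:

    -- The saved script is the exact record of what was executed.
    ALTER TABLE dbo.Customer
    ADD MiddleName nvarchar(50) NULL;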

When making changes it is also important to have a rollback script. This is likely the most overlooked part of change management. A rollback script is not often needed, and when everything is rolling out as planned, no thought is given to having to roll back the changes. But when a change is made and it isn’t going as well as planned, someone may make the decision to cancel the change. A restore can often lead to extended hours of downtime and unexpected impacts to customers. These are the times when the extra effort put into a rollback script proves to be priceless.
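A matching rollback script for the hypothetical change above can be written, reviewed, and saved before the change is ever deployed:

    -- Rollback for the change above; written before deployment, used only
    -- if the change has to be backed out.
    ALTER TABLE dbo.Customer
    DROP COLUMN MiddleName;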


Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory

This article is part 5 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #5: Not Setting a Maximum Memory

Default configurations are not always the best configuration for your server. The maximum memory allocated to SQL Server is a great example of this. If this setting is left unchanged, there is a chance your SQL Server could be starving the operating system of memory. When SQL Server starves the operating system, the OS cannot provide a stable platform. A struggling OS will have a difficult time retrieving data from disk, swapping processes, and completing standard OS functions, to name a few issues.

There are a number of ways to see if the operating system is running into memory issues. One of the fastest is to simply look at Task Manager and see how much memory is available. If you review the available memory when the server is not performing well and the amount of available memory is low, there is a good chance the OS is being starved. With tasks like file copies, backups, and virus scans, the operating system will look for memory to help complete the process. If SQL Server has that memory reserved, the OS will not have the memory it needs to work efficiently.

Proper SQL Server memory configuration can take some fine-tuning, as a number of variables need to be taken into consideration. For example, if there are two SQL Server instances installed, you will need to leave the memory configured for each instance low enough that the operating system has enough to work with. And don’t forget processes like copying large files to different media, such as a file copy for a backup. Finding the balance can often leave DBAs going in circles. When you add virtual servers into the mix, the variables increase even more, and sometimes the memory configurations go against SQL Server best practices (VMware recommends min and max memory be set to the same value). There are many formulas for recommended initial max memory settings.

So where do you start? A general rule of thumb is to leave the operating system 20% of the memory. With that being said, I hesitate to leave the operating system anything less than 2 GB. After making a memory setting adjustment, pay close attention to the memory counters in Performance Monitor and the DMVs in SQL Server to ensure the operating system has enough memory to perform well without fighting with SQL Server.
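As a sketch, here is how the setting is changed with sp_configure; the number shown assumes a server with 16 GB of RAM running a single SQL Server instance, so treat it as an illustration rather than a recommendation:

    -- Leave the OS roughly 20% of 16 GB; 16384 MB * 0.8 is about 13107 MB.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 13107;
    RECONFIGURE;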

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline

Tip #4 – Not having a Baseline

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 4 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is related to the most common errors I see and is not all-inclusive. If you like this post, check out the other tips already posted:

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance

Most common mistake #4: Not having a Baseline

Baselines can come in many different forms, but none of them are overly complex to understand. The core principle of a baseline is having a known set of metrics that can be used to make a comparison. A common and effective troubleshooting technique when something goes wrong is to review what has changed; however, if you don’t have a starting point showing what the system was like before, finding what has changed is much more difficult.

Performance Baseline

I am often reviewing systems after they have started to exhibit performance issues. My goal with these types of engagements is to return the server to normal, or better than normal, performance as quickly as I can. Not having a performance baseline of how the system runs under normal circumstances leaves whoever is troubleshooting without a valuable tool.

At the most basic level, SQL Server performance issues can be grouped into three categories: hardware, operating system, and SQL Server. Consider running a Performance Monitor collection that includes some basic counters relevant to each category:

  • Memory\Available Bytes
  • Memory\Pages/sec
  • PhysicalDisk(*)\Avg. Disk sec/Read
  • PhysicalDisk(*)\Avg. Disk sec/Write
  • PhysicalDisk(*)\Current Disk Queue Length
  • PhysicalDisk(*)\Avg. Disk Queue Length
  • Processor(_Total)\% Processor Time
  • SQLServer:Access Methods\Full Scans/sec
  • SQLServer:Access Methods\Index Searches/sec
  • SQLServer:Buffer Manager\Page life expectancy
  • SQLServer:General Statistics\User Connections
  • SQLServer:Latches\Average Latch Wait Time (ms)
  • SQLServer:Locks(_Total)\Lock Wait Time (ms)
  • SQLServer:Locks(_Total)\Lock Waits/sec
  • SQLServer:Locks(_Total)\Average Wait Time (ms)
  • SQLServer:Memory Manager\Memory Grants Pending
  • SQLServer:Memory Manager\Target Server Memory (KB)
  • SQLServer:Memory Manager\Total Server Memory (KB)
  • SQLServer:Plan Cache(_Total)\Cache Hit Ratio
  • SQLServer:SQL Statistics\Batch Requests/sec
  • SQLServer:SQL Statistics\SQL Compilations/sec
  • SQLServer:SQL Statistics\SQL Re-Compilations/sec
  • System\Processor Queue Length
  • System\Context Switches/sec

When you leave tasks to be run manually, such as taking a baseline each month, there is a risk of someone forgetting or the task being put aside for higher priorities. Consider scheduling your Performance Monitor data collections.
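Part of the baseline can even be captured from inside SQL Server on a SQL Server Agent schedule. A minimal sketch, where the table name and the handful of counters are just examples:

    -- One-time setup: a table to hold the samples.
    CREATE TABLE dbo.PerfBaseline (
        capture_time  datetime2     NOT NULL DEFAULT SYSDATETIME(),
        object_name   nvarchar(128) NOT NULL,
        counter_name  nvarchar(128) NOT NULL,
        instance_name nvarchar(128) NULL,
        cntr_value    bigint        NOT NULL
    );

    -- Scheduled step: snapshot a few counters from the DMV.
    INSERT INTO dbo.PerfBaseline (object_name, counter_name, instance_name, cntr_value)
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Page life expectancy',
                           N'Batch Requests/sec',
                           N'Memory Grants Pending');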

Configuration Baseline

The baseline you keep shouldn’t stop at Performance Monitor. Capture the configuration of your SQL Server as well, along with a document explaining any non-standard settings.

Recently I was working on a performance problem for a client who had had their SQL Server online for years. The server had been slowing down over time, and the client needed the performance to be better. When we were looking at the database configuration, we noticed the compatibility level on the database was set to SQL Server 2000, even though the SQL Server instance was 2008. Everyone who worked in the IT department when the server was configured had moved on a few years prior. The server had a number of settings configured that were not default, and some of the settings were not best practice. The only way we could determine whether a setting was correct, or whether it was just something someone overlooked, was to change it back to the default and complete regression testing to see the impact. This added a lot of time and delay to correcting the overall performance issue. A configuration baseline would have saved a lot of time and money. I recommend doing a health check/baseline on your SQL Servers once a year.
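A configuration snapshot doesn’t take much: a couple of queries, run once a year and saved with a date, would have caught the compatibility level issue above. A minimal sketch:

    -- Instance-level settings, including max server memory and the other
    -- sp_configure values.
    SELECT name, value, value_in_use
    FROM sys.configurations
    ORDER BY name;

    -- Database-level settings, including the compatibility level.
    SELECT name, compatibility_level, recovery_model_desc
    FROM sys.databases;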

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 3 of 10 in a series on the most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is related to the most common errors I see and is not all-inclusive.

#1 Most common mistake – Incomplete Backups

#2 Most common mistake – Check Your Security

Most common mistake #3: Improper Maintenance

When it comes to maintenance, SQL Server is no different than an automobile; both require regular maintenance to keep them running at peak performance. In similar fashion, the greater the load, the more often the maintenance is required. Neglecting maintenance is the third biggest mistake that I frequently see impacting SQL Server’s performance.

Fragmented Indexes

Indexes in SQL Server are very similar to indexes in a book. The primary difference is that the indexes contained within the database are much easier to rebuild when new information is inserted. Imagine you had a book with 100 pages of text. If the book were indexed, finding specific information wouldn’t be too difficult and shouldn’t take much time. Now add 50 pages of new text in the middle of the book. Finding specific text after the 50 pages have been added would be difficult: all the page numbers would be off, and the new text wouldn’t be indexed.

To solve this problem in SQL Server, we have the option to rebuild or reorganize an index. This should be done on a regular basis. It ensures data location accuracy, which eases the workload on SQL Server and allows it to find the specific data it needs quickly and efficiently.
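A common way to decide between the two is a fragmentation check against the index DMV. In the sketch below, the 5% and 30% thresholds are widely used guidance rather than hard rules, and the table and index names in the ALTER INDEX statements are hypothetical:

    -- Find fragmented indexes in the current database.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5
      AND i.name IS NOT NULL;

    -- Lightly fragmented (5-30%): reorganize. Heavily fragmented (>30%): rebuild.
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;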

Stale Statistics

Sometimes people have difficulty understanding SQL Server statistics. Think of it this way: if you managed a sports team and it was your job to find new talent, you would want to see how much potential that talent has. You may want to know how strong they are or how fast they can run. Trying to pick up a new quarterback without that information would be difficult. What if the player’s statistics you were looking at were five years old? Maybe the statistics were from when they were in the best shape of their career, and now they are a little past their prime. If you didn’t know the statistics were out of date, you might hire a player based on bad information, and they may not be the star performer you were looking for.

SQL Server keeps statistics for you as well. This information keeps SQL Server aware of the data in the database and helps it decide the fastest way to retrieve that data. However, there are times when this information needs to be refreshed. Keeping your statistics up to date keeps SQL Server aware of what shape your data is in.
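Refreshing statistics is a one-liner; the table name below is hypothetical, and sp_updatestats is the broader brush for sweeping the whole database:

    -- Refresh statistics on one table with a full scan of the data.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- Or update any statistics in the database that need it.
    EXEC sp_updatestats;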

Consistency Check

Isn’t it nice when your car starts to warn you before something really bad happens? Sometimes it can be something as simple as changing the oil in the engine or changing the battery.

SQL Server has warning signs as well; DBCC commands, or database consistency checks, should be executed to catch when something may be amiss in your data. This doesn’t mean they can catch everything, but this layer of protection can save you some serious pain down the road.
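The check itself is a single command; the database name here is a placeholder:

    -- Run a full consistency check and report only the errors.
    DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;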

Monitoring

Children being watched at a child care center or by their parents are a lot less likely to get themselves into trouble. SQL Server is no different: a monitored SQL Server is less likely to have critical, non-recoverable errors than a server that is not monitored. For example, pretend you have a database that requires 100 GB of space for a backup file. If that storage falls below 100 GB of available space, your SQL Server backup is going to fail. What would happen if you lost the storage due to a hardware failure and now needed to restore from a backup? You could lose a tremendous amount of data and time, assuming you are able to recover at all.

Now consider the same situation with a monitored SQL Server. A monitored SQL Server should inform you of the backup failure, and if the backup is scheduled via a SQL Server Agent job, you can receive an alert from the SQL Server itself. In addition, there would be entries in the error log you would see as well. A monitored SQL Server gives you an opportunity to correct the backup issue before you need to restore the database.

It’s getting easier to complete a basic install of SQL Server with each and every version. It takes very little time to have a database up and running. It is so simple that many organizations install their SQL Server and forget about it – at least until there is a problem with it. Most of the time when I am asked to perform an emergency fix or restore of a database, it is on a server that isn’t being monitored.

Database Mail, Alerts and Operators not configured

As mentioned in the monitoring section of this post, SQL Server can be configured to send emails in a number of situations, such as when a job fails or when errors are raised with specific severities. These emails are sent to operators configured within SQL Server. So when a backup fails, SQL Server can send you an email, allowing you to fix the issue.
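Here is a sketch of the wiring, assuming Database Mail itself is already configured; the operator name, email address, and job name are all hypothetical:

    -- Create an operator to receive the alerts.
    EXEC msdb.dbo.sp_add_operator
         @name = N'DBA Team',
         @email_address = N'dba-team@example.com';

    -- Have a backup job email that operator when it fails (notify level 2).
    EXEC msdb.dbo.sp_update_job
         @job_name = N'Nightly Full Backup',
         @notify_level_email = 2,
         @notify_email_operator_name = N'DBA Team';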

When my son started to learn how to drive a car, we spent a number of hours talking about maintenance. We reviewed how he should keep sand in his trunk to help get traction on ice, how his oil needs to be changed, and how his tires need to be rotated, all to ensure his safety on the road. In the same way, a well-maintained SQL Server can give your organization a stable and well-performing database.