
Top 10 Tips for SQL Server Performance and Resiliency

This article is part 10 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #10: Storage

For the final post in the top 10 items that influence the performance and resiliency of your databases, we will talk about storage.  Storage is by far the number one cause of performance issues I have experienced, and over the last 20 years of my career the storage landscape has changed dramatically. When I first started as a professional, we were using internal spinning disks; we then quickly moved up to internal RAID arrays. Five or so years later, we moved to external RAID cages.  Eventually the road took us to SAN and NAS storage and, most recently, SSD.  With all these changes, it is easy to see why we have administrators who focus on nothing but storage.  If you are fortunate enough to have a Storage Administrator, do you as a DBA still need to understand what the storage is doing?

How can you identify whether you have a storage bottleneck?  There are a number of indicators that can provide evidence that your database is having a storage issue. A common indicator, used for a number of years, is storage latency.  Storage latency information can be collected with Performance Monitor in Windows by adding the counters Avg. Disk sec/Read and Avg. Disk sec/Write for the relevant disks.

The storage latency can be monitored in real time or the data can be recorded by starting a data collection.

According to Microsoft best practices, latency on the disk where the log file resides should be less than 5 milliseconds, and data file latency should be less than 20 milliseconds.  In my experience, I have seen log file latency climb as high as 10 milliseconds, and sometimes a little higher during spikes, without any end-user impact.   In addition, take note of the location of the TempDB database, as we talked about in Tip 8. You will want to ensure you keep the TempDB on the fastest storage you can.
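If you would rather check from inside SQL Server, the sys.dm_io_virtual_file_stats DMV can give you an approximate average latency per database file. Here is a minimal sketch; keep in mind these numbers are cumulative since the last service restart, so treat them as a rough cross-check rather than a real-time view.

-- Approximate average read/write latency (ms) per database file,
-- accumulated since the last SQL Server service restart.
SELECT DB_NAME(vfs.database_id) AS [database],
       mf.physical_name AS [file],
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
     JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id = vfs.file_id
ORDER BY avg_write_latency_ms DESC;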

There are additional tools you may want to consider using to dig deeper into the performance of your storage systems, such as SQLIO and SQLIOSim, both from Microsoft.  A couple of popular third-party tools include IOMeter and CrystalDiskMark.

Please do not miss my other blog posts regarding this topic.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB
  9. AutoShrink


Top 10 Tips for SQL Server Performance and Resiliency

This article is part 9 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #9: Automatically Shrinking Your Database

This is a topic that has been written about frequently, and most often I try not to re-hash what many people have already blogged about.  However, as often as I see this, I would be remiss if I did not add auto shrink to the list.

IT professionals often approach the same task from different angles.  If you were a systems admin and you needed some additional storage on a server, you might send the storage admin a request for an additional 50 gigs, or whatever amount you need.  As a database professional, you would be wise to include not only the amount of storage you need but also the performance specifications you require.  As DBAs, we need to understand that SQL Server management may not always translate well to other types of systems management.  Granted, this should be no surprise; we do not all approach things the same way, and where this comes into play is in understanding that we all have different backgrounds.  We became DBAs by way of different career paths.

If you are new to being a Database Administrator, or the primary focus of your job is not to be a DBA, you may see benefits in shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem when doing this.
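Before we look at why, it is worth checking whether any of your databases already have this option turned on. A minimal sketch using sys.databases; the database name in the ALTER statement is just an example:

-- List databases that currently have auto shrink enabled.
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn the option off for a specific database (name is just an example).
ALTER DATABASE [ChicagoCrimeStats] SET AUTO_SHRINK OFF;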

When you shrink a data file, SQL Server goes in and recovers all the unused pages, and during the process it gives that space back to the OS so it can be used somewhere else.  The downstream effect is that your indexes are going to become fragmented.  This can be demonstrated in a simple test.

I have a database in my lab based on the Chicago Crime Stats.  I have been doing a lot of testing in the database with an automated indexing script that has me inserting and deleting a large number of rows at different times.  Over time this database has become rather large for my small lab, so it is time to shrink it down to a more manageable size.  The first thing to do is check the status of my indexes.

This is a simple query that will return all the indexes in the database along with their fragmentation levels.

SELECT DB_NAME() AS [database],
       OBJECT_NAME(ps.object_id) AS [table],
       i.name AS Index_Name,
       ROUND(ps.avg_fragmentation_in_percent, 0) AS Frag
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS ps
     JOIN sys.indexes AS i
          ON ps.object_id = i.object_id
         AND ps.index_id = i.index_id;


The results look like this:

[Image: index fragmentation results before the shrink]

More or less the indexes are looking good; there is not a lot of fragmentation except in the one table (that is a discussion for a later topic). What happens if I shrink the whole database, including not only the log file but the data file as well?


Use the following T-SQL:

DBCC SHRINKDATABASE ([ChicagoCrimeStats]);

Rerunning the index fragmentation script, I now receive these results:

[Image: index fragmentation results after the shrink]

If I have queries that use the IDX_Crimes_Frag_XCORD_Clustered index, there is a very good chance the performance of those queries is going to degrade.

There are times when you may need to shrink a file; for example, after a large delete of records, or after you have archived much of the data out of the database.  These sorts of operations remove data, leaving your databases with a lot of free space.  This free space can be reclaimed by using the DBCC SHRINKFILE or DBCC SHRINKDATABASE T-SQL commands; however, be aware that you should re-index after those statements are run.

It is not a bad thing to shrink a database, as long as you do it in a controlled manner with proper maintenance afterwards.
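As a rough illustration of what that controlled approach can look like, here is a minimal sketch; the logical file name, target size, and table name are assumptions based on my lab database, so adjust them for your own environment:

-- Run in the ChicagoCrimeStats database.
-- Shrink one data file to a target size in MB, rather than the whole database
-- (the logical file name and target size below are assumptions).
DBCC SHRINKFILE (ChicagoCrimeStats, 10240);

-- Then rebuild the indexes the shrink fragmented; the table name here is a
-- placeholder, and in practice you would run your normal index maintenance.
ALTER INDEX IDX_Crimes_Frag_XCORD_Clustered ON dbo.Crimes REBUILD;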

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 8 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #8: Default TempDB Settings

By default, when you install SQL Server, the TempDB database is not optimized.  For SQL Servers that use the TempDB even moderately, I recommend optimizing it for the best performance.  The TempDB is the storage area that SQL Server uses to store data for short periods of time; information that can be found in the TempDB includes temporary tables and data that is being sorted for query results. The data in the TempDB is so temporary that each time the SQL Server service restarts, the TempDB is recreated.

Due to the specific nature of the data in TempDB and the frequency of its use, it is important to ensure you have an optimal configuration.

Optimizing the TempDB is so important that Microsoft is changing how it is installed with SQL Server 2016 to encourage this optimization.  How do you optimize TempDB?  It's not difficult at all if you follow these few pointers:

Place your TempDB on the fastest storage you have.  Ultimately this storage should be delivering latency of less than 5 milliseconds.  Does your server have access to SSD storage?  If so, that is a great place for the TempDB.

A great number of studies have been done to determine the ideal number of files you should split your TempDB over.  In my experience, I tend to create one TempDB data file for each processor core on the server; however, I don't do this until I find there is some contention in the TempDB.

Grow your TempDB to the size you need.  Your TempDB is going to be recreated each time the service is restarted, so if the default file size is smaller than the normal operational size, you are going to have to go through some growth events.  Speaking of growth events, it is better to have controlled growth rather than a number of little growth events, so we recommend reviewing the auto-growth size.  The sketch below shows what these settings can look like in T-SQL.
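As a minimal sketch of the pointers above, the following T-SQL pre-sizes the primary TempDB data file, sets a fixed auto-growth increment, and adds a second data file.  The file path, sizes, and number of files are assumptions; adjust them for your own cores, storage, and workload.

-- Pre-size the primary TempDB data file and use a fixed growth increment
-- instead of a series of small growth events (sizes are assumptions).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 1024MB);

-- Add an additional data file on fast storage (path, size, and number of
-- files are assumptions; I add files per core once I see TempDB contention).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 8192MB, FILEGROWTH = 1024MB);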

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans