Posts Tagged ‘Best Practices’

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam.  There are going to be times when you need to run maintenance on your server.  The tricky part is determining when to start the maintenance so that it completes before the busy time.  The common approach is to figure out how long a job takes to execute (often by trial and error) and then adjust the start time to give the job enough room to finish.  There is another way…

SQL Server has a number of system stored procedures that you can use to perform tasks you might otherwise do in the user interface.  For example, if you want to stop a job you can open SQL Server Management Studio, navigate to the job, right-click it, and stop it.  Here is where the system-supplied stored procedure comes into play.  What if your busy time of day starts at 6:00 AM and you want to make sure the indexing has finished by 5:00 AM so that the system is ready to take on the day?  Do you really want to wake up at 5:00 AM just to right-click and stop the job, on the chance that it is still running?

Simply schedule a job that will execute at 5:00 AM (the time you want to make sure the maintenance job is done by), and create a step that will stop the job.

EXEC msdb.dbo.sp_stop_job @job_name = N'My Job Name';

Not too complex.  But what if you want to add some logic so that, rather than blindly trying to stop the job, it first checks whether the job is actually executing?  And while we are looking at the options, we should add a line of code that emails us whenever the maintenance job has run long and had to be stopped.

-- Is the maintenance job currently executing?
SELECT j.name
FROM msdb..sysjobs j
JOIN msdb..sysjobactivity a
  ON j.job_id = a.job_id
 AND j.name = N'My Job Name'
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;

IF @@ROWCOUNT > 0
BEGIN
    -- The job is still running past the cutoff: stop it and send a notification.
    EXEC msdb.dbo.sp_stop_job @job_name = N'My Job Name';

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'MyMailProfile',
         @recipients   = 'Me@xtivia.com',
         @body         = 'The Indexing Rebuild Job had to be stopped due to long run time.',
         @subject      = 'Index Rebuild';
END
ELSE
    RETURN;

I hope this tip has helped you in one fashion or another.  If you would like my list of Top 10 Tips for SQL Server Performance and Resiliency, it starts here with Tip #1.

 


Top 10 Tips for SQL Server Performance and Resiliency

This article is part 10 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #10: Storage

For the final post in the top 10 items that influence the performance and resiliency of your databases, we will talk about storage.  Storage is by far the number one cause of performance issues I have experienced, and over the last 20 years of my career the changes in storage have been huge. When I first started as a professional, we were using internal spinning disks; we then quickly moved to internal RAID arrays. Five or so years later, we moved to external RAID cages.  Eventually the road took us to SAN and NAS storage, and most recently SSD.  With all these changes, it is easy to see why we have administrators who focus on nothing but storage.  But if you are fortunate enough to have a storage administrator, do you as a DBA still need to understand what the storage is doing?

How can you identify whether you have a storage bottleneck?  There are a number of indicators that can provide evidence your database is having a storage issue. A common indicator, used for many years, is storage latency.  Storage latency information can be collected with Performance Monitor in Windows by adding the counters Avg. Disk sec/Read and Avg. Disk sec/Write.

Storage latency can be monitored in real time, or the data can be recorded by starting a data collection.
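If you prefer to check latency from inside SQL Server rather than Performance Monitor, the sys.dm_io_virtual_file_stats DMV exposes cumulative stall times per database file. Here is a minimal sketch; the column aliases are my own, and the numbers are averages since the last service restart rather than a point-in-time reading.

-- Average read/write latency (ms) per database file since the last restart
SELECT DB_NAME(vfs.database_id)                    AS [database],
       mf.physical_name                            AS [file],
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END   AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id     = mf.file_id
ORDER BY avg_write_latency_ms DESC;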

According to Microsoft best practices, latency on the disk where the log file resides should be less than 5 milliseconds, and data file latency should be less than 20 milliseconds.  In my experience, I have seen log file latency climb as high as 10 milliseconds, and sometimes a little higher during spikes, without any end-user impact.   In addition, take note of the location of the TempDB database, as we talked about in Tip 8; you will want to keep TempDB on the fastest storage you can.

There are additional tools you may want to consider using to dig deeper into the performance of your storage systems, such as SQLIO and SQLIOSim, both from Microsoft.  A couple of popular third-party tools are IOMeter and CrystalDiskMark.

Please do not miss my other blogs regarding this topic.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB
  9. AutoShrink

 

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 9 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #9: Automatically Shrinking Your Database

This is a topic that has been written about frequently, and most often I try not to rehash what many people have already blogged about.  However, as often as I see this, I would be remiss if I did not add auto shrink to the list.

Often you will see IT professionals approaching their tasks from different angles.  If you were a systems admin and you knew you needed additional storage on a server, you might send the storage admin a request for an extra 50 GB, or whatever amount you need.  As a database professional, you would be wise to include not only the amount of storage you need but also the performance specifications you require.  As DBAs, we need to understand that SQL Server management may not always translate well to other types of systems management.  Granted, this should be no surprise; it is understood that we do not all approach things the same way.  Where this comes into play is in recognizing that we all have different backgrounds and became DBAs by different career paths.

If you are new to being a database administrator, or the primary focus of your job is not being a DBA, you may see benefits in shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem with doing this.

When you shrink a data file, SQL Server recovers the unused pages and, in the process, gives that space back to the OS so it can be used somewhere else.  The downstream effect is that your indexes become fragmented.  This can be demonstrated in a simple test.

I have a database in my lab based on the Chicago Crime Stats.  I have been doing a lot of testing in the database with an automated indexing script that has me inserting and deleting a large number of rows at different times.  Over time this database has become rather large for my small lab, so it is time to shrink it down to a more manageable size.  The first thing to do is check the status of my indexes.

This is a simple query that will return all the indexes in the database along with their fragmentation levels.

SELECT db_name() AS [database],
       Object_Name(ps.object_id) AS [table],
       i.name AS index_name,
       ROUND(ps.avg_fragmentation_in_percent, 0) AS frag
FROM sys.dm_db_index_physical_stats(db_id(), NULL, NULL, NULL, NULL) ps
JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id;

 

The results look like this:

[Image: index fragmentation results before the shrink]

 

More or less, the indexes are looking good; there is not a lot of fragmentation except in one table (that is a discussion for a later topic). What happens if I shrink the whole database, including not only the log file but the data file as well?

 

Use the following T-SQL:

DBCC SHRINKDATABASE ([ChicagoCrimeStats]);

Rerunning the index fragmentation script, I now receive these results:

[Image: index fragmentation results after the shrink]

 

If I have queries that use the IDX_Crimes_Frag_XCORD_Clustered index, there is a very good chance the performance of those queries is going to degrade.

There are times when you may need to shrink a file, for example after a large delete of records or after archiving much of the data out of the database.  These sorts of operations remove data and leave your databases with a lot of free space.  This free space can be reclaimed by using the DBCC SHRINKFILE or DBCC SHRINKDATABASE commands; however, be aware that you should re-index after those statements are run, as shown in the sketch below.
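Here is a minimal sketch of that sequence, using the lab database from above with a hypothetical data file name and table name; substitute your own before running anything.

USE [ChicagoCrimeStats];
GO
-- Reclaim free space after the large delete or archive (target size in MB is illustrative)
DBCC SHRINKFILE (N'ChicagoCrimeStats_Data', 10240);
GO
-- The shrink fragments indexes, so rebuild them afterwards (repeat per affected table)
ALTER INDEX ALL ON dbo.Crimes REBUILD;
GO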

It is not a bad thing to shrink a database, as long as you do it in a controlled manner with proper maintenance afterwards.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB

Tip # 6 – Change History

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 6 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #6: Not Keeping a Change History

Business today moves fast: companies change interfaces to keep content relevant and continue to offer new services to their client base. In a perfect world, code is tested ten ways to Sunday and never released with bugs or design issues; however, we don’t live in that world. I mean no disrespect to the developers who spend hours upon hours trying to provide the best changes possible. There are just so many variables that making a release perfect every time is not a reality.

There is an inherent risk with any change, whether to software, hardware, or configuration, and SQL Server is no exception.

When a change is being planned, no matter how small, it should be documented. Chances are most DBAs have heard this before; however, simple changes are often overlooked. Documenting the change is a great start, but keep in mind that it is critical for the documented changes to be easily researched. In my experience working with SQL Server, most issues can be traced back to the date the issue started. Because of this, I recommend storing documents so they can be searched by date.

Stability increases when you embrace change history and take change management a couple of steps further. Consider setting up a change process and implementing policies such as not allowing changes to be made to SQL Server unless they are made via scripts. There are some great benefits to only allowing changes via script. Consider a small table change; here are a few of the benefits you will see by using a script over the user interface:

  • When changes are made via script and the script has been saved, it is easy to see exactly what was executed. When changes are made via the user interface, you have to assume you know what the exact change was.
  • Changes made via the user interface are susceptible to the “fat finger”, otherwise known as a typo. You can review your change with your own eyes, but having others review your change is more difficult. If you mistype a name, you are left having to make a second change to correct your mistake.
  • When using a script, you can share what the change is going to look like before it is made. You can send the new table layout to other teams in the company, and they can see the change exactly and in its entirety.

When making changes, it is also important to have a rollback script. This is probably the most overlooked part of change management. A rollback script is not often needed, and when everything rolls out as planned, no one gives a thought to having to roll back the changes. But when a change is not going as well as planned, someone may make the decision to cancel it. A database restore can lead to extended hours of downtime and unexpected impacts to customers. These are the times when the extra effort put into a rollback script proves to be priceless.
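As a sketch of what this looks like in practice, here is a hypothetical table change paired with the rollback script that would be written and reviewed alongside it before the release.

-- Change script (hypothetical table and column)
ALTER TABLE dbo.Customer ADD MiddleName varchar(50) NULL;

-- Rollback script (run only if the change is cancelled)
ALTER TABLE dbo.Customer DROP COLUMN MiddleName;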

 

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory

Tip # 2 – Check your Security

Top 10 Tips for SQL Server Performance and Resiliency

This article is Part 2 in a series on the Top 10 Most Common Mistakes: SQL Server Performance and Resiliency. Neglecting security is the second biggest mistake I frequently see impacting SQL Server’s performance. In case you have missed the prior posts:

#1 Most common mistake – Incomplete Backups

I can’t tell you how many times I have been told “We have a really good firewall so the internal security we have in place is not as important” or “We like to operate with a level of trust, and Chris, you need to learn how to trust people”. The second quote came from an IT Director at a public company. My response to the trust argument is that security has nothing to do with trust; security is about protecting the data not only from things people may set out to do intentionally, but also from things people can do by accident.

I am not ashamed to admit I like to have my SQL Servers secure. I consider it not only my job but a core responsibility of my career. If you have any doubt about how important security is today, just take a few moments and do a search on cyber-attacks. With recent headlines where major organizations have been victims of cyber-attacks, hopefully this post becomes all the more important to you. Below are a few practices and procedures that can help secure your data and also protect it from accidental mistakes made by people without proper knowledge of database practices and procedures:

Not everyone should be a SA

SA is short for system administrator. This account has permissions to everything inside SQL Server. When a user requests SA, they may be referring to a couple of things: they may be asking for the password of the SA account, or they may want you to add their login to the sysadmin server role. In my opinion, there are very few people who need this level of access to a SQL Server. Usually, these people have a job title that says “Database Administrator”. Keep in mind: the more people who have this level of access to your SQL Server, the more risk you take on from people who can make really big mistakes.

To be secure, make sure the SA account has a complex password. I recommend setting the SA password to a complex password and then disabling the account to be a bit more secure. Administrators should have their own accounts. Treat the SA account as a hidden back door into your system and only use it when nothing else works.
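A minimal sketch of that approach; the password shown is a placeholder.

-- Give sa a long, complex password, then disable the login entirely;
-- re-enable it deliberately only when nothing else works.
ALTER LOGIN sa WITH PASSWORD = N'<long, complex password here>';
ALTER LOGIN sa DISABLE;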

Not everyone should be a DBO

In like fashion to SA access, not everyone needs DBO access to a database. DBO is short for database owner. Accounts with this access have permissions to everything inside the database; they can delete users, tables, stored procedures, and even data. Since SQL Server cannot tell the difference between a DELETE statement executed by accident and one executed on purpose, it is imperative that anyone with DBO or SA access knows what they are doing.

I often see people treat security levels as a negotiation between two parties, almost like what you would experience when purchasing a car. The requestor asks for SA-level access, and the DBA might respond by haggling them down to db_datareader access, only to have the requestor push for something in the middle, like DBO access. As funny as this sounds, some DBAs agree to it, the theory being that DBO is more restrictive than SA, so logically security is intact. The end result of this hypothetical bargain is DBO access, which many people don’t need. Check the accounts and find out what is actually required to complete the work; don’t give employees any more room to break things than they need.
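As a sketch of the least-privilege alternative, with a hypothetical database and login name, grant only the database role the work actually requires:

USE [SalesDB];  -- hypothetical database
GO
-- Map the login into the database and give it read-only access
CREATE USER [DOMAIN\ReportWriter] FOR LOGIN [DOMAIN\ReportWriter];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\ReportWriter];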

Securing your backups

The larger the organization, the more difficult public relations become after a data breach. The last thing any DBA wants to be part of is a letter to customers letting them know the data they are responsible for has been compromised.

Protect your database backups: encrypt them, and manage the permissions on your backup locations as if they contained your personal Social Security number. The protection of your database backups doesn’t stop there; make sure the offsite location is secure as well.
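On newer versions of SQL Server (2014 and later), native backup encryption uses a server certificate or asymmetric key rather than a simple password. A minimal sketch, assuming a certificate named BackupCert already exists in master; the database name and path are hypothetical.

-- Encrypted, compressed full backup using an existing server certificate
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);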

Linked Servers

Linked servers can add tremendously to the ease of retrieving data from your SQL Servers, and people often get pretty excited when they learn how easy they are to use and configure. However, a big mistake I often see has to do with the security configuration of the link. When a linked server is configured, the security settings should be set to protect the data on the server being linked to. I often see configurations that allow a specific named user, who may have only db_datareader on the primary server, to connect to the linked server using the SA account.
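One way to tighten that up is to map only specific local logins to a low-privilege remote login instead of falling through to SA. A minimal sketch with hypothetical server and login names, using sp_addlinkedsrvlogin:

-- Map one local login to a low-privilege login on the remote server
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'ReportingServer',      -- existing linked server name
     @useself     = N'FALSE',
     @locallogin  = N'DOMAIN\ReportWriter',  -- the one local login allowed through
     @rmtuser     = N'report_reader',        -- low-privilege login on the remote side
     @rmtpassword = N'<strong password>';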

BUILTIN\Administrators

In older versions of SQL Server, if you had an administrator account on the Windows machine, such as local admin access, you would by default inherit SA access to the SQL Server through the BUILTIN\Administrators login. This can be very helpful in small shops where a Windows admin may also be the SQL Server admin. The issue is that sometimes the Windows admin shouldn’t have that level of access to the data or configuration of SQL Server.
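If your instance has been upgraded over the years, it may be worth checking whether that old login is still present. A minimal sketch:

-- Remove the legacy BUILTIN\Administrators login if it still exists
IF EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'BUILTIN\Administrators')
    DROP LOGIN [BUILTIN\Administrators];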

No Policy Management for SQL Server Accounts

SQL Server authenticated accounts, or accounts that reside only in SQL Server, have the option to have the Windows password policy enforced. Sometimes logins are configured without this enforcement because people don’t want to change the password for SQL Server or worry about having a strong enough password, often on the theory that if the server is behind a firewall it is protected enough. It is a good idea to have your SQL Server passwords follow the same policy that your Windows accounts follow.
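A minimal sketch of creating a SQL Server authenticated login with the Windows policy and expiration enforced; the login name and password are placeholders.

-- Enforce the Windows password policy and expiration on a SQL login
CREATE LOGIN app_login
WITH PASSWORD = N'<long, complex password here>',
     CHECK_POLICY = ON,
     CHECK_EXPIRATION = ON;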

This is just a short list of some of the common security mishaps I have encountered. There are others you should be aware of, like applying service packs, staying on top of when people leave your company, and auditing your group memberships. When you look at your database and believe you have secured it enough, take another look. Consider creating a table called My Personal Info, and in that table insert your credit card, your debit card (don’t forget the PIN), your Social Security number, and your banking information. Once that is all inserted, how comfortable do you feel? Your data deserves to be secure, and every day people are relying on you, the data owner, to keep it that way.

SQL Server Best Practices

Posted: August 29, 2008 in Uncategorized

I had a new client call today; I know, that is two days in a row. Either way, I got to talking with this client on the phone for a while and started to really learn a few things about him and his system. The more we talked, the more I thought the conversation would make a good blog entry, and I told him so. We were talking about best practices, and he wants me to review a server for him to double-check whether there are any knobs we can turn or twist that would help his performance. As I mentioned in my previous post, there are some standard questions I need answered before we get started.

After we talked about the system and the hardware, he mentioned that he did not follow all the best practices when it came to database design. It appears he has an extremely high number of inserts and selects occurring on the database, and the queries need sub-second response times. He wanted to make sure that when I looked at the system I was aware the tables were not normalized, even though he knew normalization is a best practice. They recognized that, based on how they would have to present the data and how quickly it needed to be done, the right choice for them was to leave the tables as flat and wide as they could.

The way I look at it, recommended best practices are just that: recommended. There are going to be times when you don’t need to follow them, and maybe, for the better performance of your database, it is okay to step outside of those “best practices” and take a new look at what you are storing and how you are retrieving it. If you can do it faster by not following a best practice, you need the speed, and disk space is not a concern, then maybe that is the way you should do it. In most cases, though, best practices are the best way to do it; they are recommended because most databases will work best with those practices.