Posts Tagged ‘Microsoft SQL Server’

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance jobs so that they finish before the busy time. For example, what if you need to back up your database, then re-index it, and follow that up with a consistency check?

The common approach to this problem is to simply determine how long a job takes to execute (often by trial and error) and then adjust the start time of each job to give it enough time to finish before the next job starts. The problem with this method is that you are hoping the first job in the chain completes on time before the second job starts. The common workaround is to leave gaps so that one long-running job does not step on the next. However, there are options…

Option 1

If you are using a maintenance plan, you can keep all the tasks that are scheduled to run at the same time in the same sub-plan. Sometimes this does not provide the flexibility that individuals want, but it is an effective method.

Option 2

You can create multiple steps in a single job. If we use the example above, where you want to run a backup, then a re-index, and then DBCC, you can create three different steps; this way, as soon as one step completes, the next step is executed. This method removes the need to guess when one job will finish and the next should start.
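As a rough sketch, creating such a job with the msdb job procedures might look like the following. The job, database, and table names here are made up for illustration, and scheduling the job is a separate call (sp_add_jobschedule) that is omitted for brevity:

USE msdb;
GO

-- Create the job that will run all three maintenance tasks in order
EXEC dbo.sp_add_job @job_name = N'Nightly Maintenance';

-- Step 1: backup; on success, go to the next step
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly Maintenance',
    @step_name = N'Backup',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE [MyDatabase] TO DISK = N''X:\Backups\MyDatabase.bak'';',
    @on_success_action = 3;  -- 3 = go to the next step

-- Step 2: re-index; on success, go to the next step
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly Maintenance',
    @step_name = N'Re-index',
    @subsystem = N'TSQL',
    @command = N'ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;',
    @on_success_action = 3;

-- Step 3: consistency check; on success, quit reporting success
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly Maintenance',
    @step_name = N'DBCC',
    @subsystem = N'TSQL',
    @command = N'DBCC CHECKDB ([MyDatabase]);',
    @on_success_action = 1;  -- 1 = quit with success

-- Tie the job to this server so the Agent will run it
EXEC dbo.sp_add_jobserver @job_name = N'Nightly Maintenance';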

Option 3

Each task could have its own job, and the last step of each job would start the next job. This adds a lot of flexibility to your maintenance. I like to use this approach in a couple of different situations:

  1. If your maintenance is done using multiple tools, for example a Red Gate backup, a custom re-indexing plan, and a simple T-SQL script to run a consistency check.
  2. If your maintenance is done across multiple servers. If you have 3 servers that all back up to the same network share, you could have one server execute at a time so you do not clog up the network and the storage.

Adding a step to execute the next job is pretty simple.

exec msdb.dbo.sp_start_job @job_name = N'My Job Name'

If you need to schedule this to occur across servers, you can simply make the call to the other server using a linked server.
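For example, assuming a linked server named [ServerB] that has RPC Out enabled and a job by that name defined on it, the call might look like this:

exec [ServerB].msdb.dbo.sp_start_job @job_name = N'My Job Name'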

I hope this tip has helped you in one fashion or another.

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance so that it completes before the busy time. The common approach to this problem is to simply determine how long a job takes to execute (often by trial and error) and then adjust the start time to give the job enough time to finish. There is another way…

SQL Server has a number of system stored procedures that you can use to perform tasks you might otherwise do in the user interface. For example, if you want to stop a job, you can open SQL Server Management Studio, navigate to the job, right-click, and stop it. Here is where the system supplied stored procedures come into play. What if your busy time of the day starts at 6:00 AM, and you want to make sure that the indexing has finished by 5:00 AM so that the system is ready to take on the day? Do you really want to wake up at 5:00 AM just to right-click and stop the job, on the chance that it is still running?

Simply schedule a job to execute at 5:00 AM (the time you want the maintenance job to be done by), and create a step that stops the job.

exec msdb.dbo.sp_stop_job @job_name = N'My Job Name'

Not too complex. But what if you want to add some logic so that the job does not just blindly try to stop the other job, but first checks whether it is actually executing? And while we are looking at the options, we should add a line of code that emails us whenever the maintenance job has run long and had to be stopped.

-- Is the job currently executing? (a row with a start time but no stop time)
SELECT name
FROM msdb..sysjobs j
    JOIN msdb..sysjobactivity a ON j.job_id = a.job_id AND j.name = 'My Job Name'
WHERE start_execution_date IS NOT NULL AND stop_execution_date IS NULL

IF @@ROWCOUNT > 0
BEGIN
    -- Stop the long-running job...
    EXEC msdb.dbo.sp_stop_job @job_name = 'My Job Name';

    -- ...and send a notification that it had to be stopped
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MyMailProfile',
        @recipients = 'Me@xtivia.com',
        @body = 'The Indexing Rebuild Job had to be stopped due to long run time.',
        @subject = 'Index Rebuild';
END
ELSE
    RETURN

I hope this tip has helped you in one fashion or another. If you would like my list of Top 10 Tips for SQL Server Performance and Resiliency, it can be found here, starting with Tip #1.

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 9 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #9: Automatically Shrinking Your Database

This is a topic that has been written about frequently, and most often I try not to re-hash what many people have already blogged about. However, as often as I see this, I would be remiss if I did not add auto shrink to the list.

Often you will see IT professionals approaching their tasks from different angles. Consider: if you were a systems admin and you knew you needed some additional storage on a server, you might send the storage admin a request for an additional 50 gigs, or whatever amount you need. As a database professional, you would be wise to include not only the amount of storage you need but also the performance specifications you require. As DBAs, we need to understand that SQL Server management may not always translate well to other types of systems management. Granted, this should be no surprise; it is understood that we do not approach all things the same way. Where this comes into play is in understanding that we all have different backgrounds; we became DBAs from different career paths.

If you are new to being a database administrator, or if the primary focus of your job is not to be a DBA, you may see benefits in shrinking a database automatically. If the database shrinks by itself, it might be considered self-management; however, there is a problem when doing this.

When you shrink a data file, SQL Server recovers the unused pages and gives that space back to the OS so it can be used somewhere else. To do this, it moves pages from the end of the file toward the front of the file, and the downstream effect is that your indexes become fragmented. This can be demonstrated in a simple test.
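If you want to see whether any of your databases have the auto shrink setting turned on, a quick check and fix along these lines should work; the database name is illustrative:

-- List databases that will shrink themselves automatically
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn the setting off for a given database
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;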

I have a database in my lab based on the Chicago crime stats. I have been doing a lot of testing in this database with an automated indexing script that has me inserting and deleting a large number of rows at different times. Over time, this database has become rather large for my small lab, and it is time to shrink it down to a more manageable size. The first thing to do is check the status of my indexes.

This is a simple query that will return all the indexes in the database along with their fragmentation levels.

SELECT db_name() AS [database],
       Object_Name(ps.object_id) AS [table],
       i.name AS Index_Name,
       round(avg_fragmentation_in_percent, 0) AS Frag
FROM sys.dm_db_index_physical_stats(db_id(), NULL, NULL, NULL, NULL) ps
    JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id

The results look like this:

[Screenshot: index fragmentation results before the shrink]

More or less, the indexes are looking good; there is not a lot of fragmentation, except in one table (that is a discussion for a later topic). What happens if I shrink the whole database, including not only the log but the data file as well?

Use the following T-SQL:

DBCC SHRINKDATABASE ([ChicagoCrimeStats])

Rerunning the index fragmentation script, I now receive these results:

[Screenshot: index fragmentation results after the shrink]

If I have queries that use the IDX_Crimes_Frag_XCORD_Clustered index, there is a very good chance the performance of those queries is going to degrade.

There are times when you may need to shrink a file, for example after a large delete of records, or after you have archived much of the data out of the database. These sorts of operations remove data, leaving your database with a lot of free space. This free space can be reclaimed by using the DBCC SHRINKFILE or DBCC SHRINKDATABASE T-SQL commands; however, be aware that you should re-index after those statements are run.
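A minimal sketch of that sequence, assuming a logical data file named MyDatabase_Data and a table named dbo.MyLargeTable (both hypothetical), might look like this:

-- Shrink the data file to a 10 GB target (target size is in MB)
DBCC SHRINKFILE (N'MyDatabase_Data', 10240);

-- Then rebuild the indexes the shrink just fragmented
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;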

It is not a bad thing to shrink a database, as long as you do it in a controlled manner with proper maintenance afterwards.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB

Tip # 6 – Change History

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 6 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #6: Not Keeping a Change History

Business today moves fast: companies change interfaces to keep content relevant and continue to offer new services to their client base. In a perfect world, code is tested ten ways to Sunday and is never released with bugs or design issues; however, we don't live in that world. I mean no disrespect to the developers who spend hours upon hours trying to provide the best changes possible. There are just so many variables that making a release perfect every time is simply not a reality.

There is an inherent risk with any change, whether to software, hardware, or configuration, and that includes SQL Server.

When a change is being planned, no matter how small, it should be documented. Chances are most DBAs have heard this before; however, simple changes are often overlooked. Documenting the change is a great start; keep in mind, though, that it is critical that the documented changes are easy to research. In my experience, most issues with SQL Server can be traced back to a date when the issue started. Because of this, I recommend storing documents so they can be searched by date.

Stability increases when you embrace change history and take change management a couple of steps further. Consider setting up a change process and implementing policies such as not allowing changes to be made to the SQL Server unless they are made via scripts. There are some great benefits to allowing changes only via script. Consider a small table change; here are a few of the benefits you will see by using a script over the user interface:

  • When changes are made via script, and the script has been saved, it is easy to see exactly what was executed. When changes are made via the user interface, you have to assume you know what the exact change was.
  • Changes made via the user interface are susceptible to the "fat finger," otherwise known as a typo. You can review your change with your own eyes, but having others review your change is more difficult. If you mistype a name, you are left having to make a second change to correct your mistake.
  • When using a script, you can share what the change is going to look like before it is made. You can send the new table layout to other teams in the company, and they can see the totality of the change.

When making changes, it is also important to have a rollback script. This is probably the most overlooked part of change management: a rollback script is not often needed, and when everything rolls out as planned, no one gives a thought to having to roll back the changes. But when a change is made and it isn't going as well as planned, someone may make the decision to cancel it. A restore can lead to extended hours of downtime and unexpected impacts to customers. These are the times when the extra effort put into a rollback script proves to be priceless.
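As a simple illustration, a scripted change and its rollback script might be as small as this (the table and column names are hypothetical):

-- Change script: add the new column
ALTER TABLE dbo.Customers ADD MiddleName nvarchar(50) NULL;

-- Rollback script: saved with the change, run only if the release is cancelled
ALTER TABLE dbo.Customers DROP COLUMN MiddleName;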


Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory

This article is part 5 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #5: Not Setting a Maximum Memory

Default configurations are not always the best configuration for your server. The maximum memory allocated to SQL Server is a great example of this. If this setting is left unchanged, there is a chance your SQL Server could be starving the operating system of memory. When SQL Server starves the operating system, the OS cannot provide a stable platform, and a struggling OS will have a difficult time retrieving data from disk, swapping processes, and completing standard OS functions, to name a few issues.

There are a number of ways to see if the operating system is running into memory issues. One of the fastest is to simply look at Task Manager and see how much memory is available. If you check the available memory when the server is not performing well and the amount is low, there is a good chance the OS is being starved. With tasks like file copies, backups, and virus scans, the operating system will look for memory to help complete the process. If SQL Server has that memory reserved, the OS will not have the memory it needs to work efficiently.

Proper SQL Server memory configuration can take some fine-tuning, as a number of variables need to be taken into consideration. For example, if there are two SQL Server instances installed, you will need to set the memory configured for each instance low enough that the operating system has enough to work with. And don't forget processes like copying large files to different media, such as a file copy for a backup. Finding the balance can often leave DBAs going in circles. When you add virtual servers into the mix, the variables increase even more, and sometimes the memory configurations go against SQL Server best practices (VMware, for example, recommends that min and max memory be set to the same value). There are many formulas for recommended initial max memory settings.

So where do you start? A general rule of thumb is to leave the operating system 20% of the memory. That said, I hesitate to leave the operating system anything less than 2 gigs. After making a memory adjustment, pay close attention to the memory counters in Performance Monitor and the DMVs in SQL Server to ensure the operating system has enough memory to perform well without fighting with SQL Server.
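As a sketch, on a server with 16 gigs of physical memory, leaving the OS 20% works out to roughly 13107 MB for SQL Server; the numbers here are purely illustrative and should be tuned to your own server:

-- 'max server memory (MB)' is an advanced option
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 13107;
RECONFIGURE;

-- One quick DMV check on how the operating system is doing on memory
SELECT available_physical_memory_kb, system_memory_state_desc
FROM sys.dm_os_sys_memory;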

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline

In 1995 or 1996, I had to make a decision about what direction I wanted to take with my career. I could not have been more than 24 or 25 at the time, but I was working as an operations director for a small anti-freeze recycling company in Southern California. Anyhow, I was left with a decision: do I learn Oracle or Microsoft SQL Server? The discussion was with a new employer, and that is a whole different story for another time; however, I was making the decision not only for me but for a company as well, because they were looking for my advice and I was going to be the DBA. The decision was made, and I started to read books and head to a few training classes.

My first task was to create a database and then import a whole ton of data into it. None of those classes prepared me for one of my first tasks as a DBA beyond the basics of installing SQL Server and creating the database and the tables I would need. Don't get me wrong, I needed those classes, and without them I would have had a much more difficult time than I did. If I remember correctly, I had 7 books on my shelf containing what we now know as Books Online. These instruction manuals had a good amount of information that told me how to complete a specific task, but no reference for the best way to approach a task. Only one book on my shelf had a chapter about moving data around, and it had about a page on an application called BCP, the Bulk Copy Program (this is all from memory, so don't hold me to the specifics). I had found my answer.

Here it is, some 19 years later, and I found myself using BCP again just the other day. I can tell you that many DBAs are not aware of this tool; it doesn't come with an awesome-looking UI, or any UI at all, and it doesn't have the strength of DTS or SSIS as far as manipulating data goes. But the tool works, and it is fast. If you take some time to think outside the basic parameters, there are some really great uses for BCP, things like archiving data or generating an export for a customer. I do want to inform you that to get the most out of BCP, just like the other features in SQL Server, you need to take into consideration all the downstream effects. Think about indexes and how they can slow down an insert, or about an identity column. If you prepare correctly, I can assure you that you will not be disappointed.

A few tips along the way that I learned with BCP, include:

  • Use a format file. If you are having problems creating one, because they are not very forgiving of mistakes, just BCP the data out of the table you are trying to BCP data into, and the BCP out statement can create a format file for you. I can't tell you how frustrating it used to be to try to create a format file from scratch and have my package fail because of a simple mistake.
  • Use batch files to call your BCP IN or BCP OUT statements (see the sketch after this list). This makes it a whole lot easier to ensure you have all the flags set just the way you want them; in addition, you can add tasks like copying the file or importing it elsewhere to automate what you are doing even more.
  • Native mode is great if you are going from SQL Server to SQL Server.
  • If you have mail configured, you can even add a call to send mail in your batch file to notify you when the data copy has completed.
  • Write the errors out to a file; the errors returned by BCP are not always the friendliest.
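To make those tips concrete, a batch file wrapping BCP might look something like the sketch below. The server, database, table, and file names are all made up, and only documented bcp switches are used (-T trusted connection, -S server, -n native mode, -f format file, -e error file):

rem Generate a format file from the destination table instead of writing one by hand
bcp MyDb.dbo.MyTable format nul -n -f MyTable.fmt -T -S MyServer

rem Export the table in native mode (best for SQL Server to SQL Server)
bcp MyDb.dbo.MyTable out MyTable.dat -n -T -S MyServer

rem Import on the destination server, writing any rejected rows to an error file
bcp MyDb.dbo.MyTable in MyTable.dat -f MyTable.fmt -e errors.txt -T -S MyServer2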

In the 19 years I have been working with BCP, it has had some changes, but it is still around and still works like a champ. The changes have been pretty minor, and most of them are to keep up with new features added to SQL Server. So if you reach a point where you need to do something in SQL Server, think about all the tools you have at hand.