
Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance jobs so they finish before the busy time. For example, what if you need to back up your database, then re-index it, and follow that up with a consistency check?

The common approach to this problem is to determine how long each job takes to execute (often by trial and error) and then adjust the start times so each job has enough time to finish before the next one begins. The problem with this method is that you are hoping the first job in the chain completes on time before the second job starts. The common workaround is to leave large gaps so one long-running job does not step on the next one. However, there are better options…

Option 1

If you are using a maintenance plan, you can keep all the tasks that are scheduled to run at the same time in the same sub-plan. Sometimes this does not provide the flexibility that individuals want, but it is an effective method.

Option 2

You can create multiple steps in a single job. Using the example above, where you want to run a backup, then re-index, and then DBCC, you can create three different steps; as soon as one step completes, the next step executes. This removes the need to guess when one job will finish and the next should start. A sketch of setting this up with the msdb job procedures follows.
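
Here is a minimal sketch using the msdb job stored procedures. The job name, database, backup path, and step commands are placeholders, and you would still need sp_add_jobserver and a schedule to make the job runnable:

USE msdb;

EXEC dbo.sp_add_job @job_name = N'Nightly Maintenance';

-- Step 1: backup; @on_success_action = 3 means "go to the next step"
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Backup', @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE [MyDB] TO DISK = N''X:\Backups\MyDB.bak'';',
    @on_success_action = 3;

-- Step 2: re-index, then continue to the next step
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Re-index', @subsystem = N'TSQL',
    @command = N'ALTER INDEX ALL ON dbo.MyBigTable REBUILD;',
    @on_success_action = 3;

-- Step 3: consistency check; @on_success_action = 1 means "quit with success"
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'DBCC', @subsystem = N'TSQL',
    @command = N'DBCC CHECKDB ([MyDB]);',
    @on_success_action = 1;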

Option 3

Each task could have its own job, and the last step of each job would start the next job. This adds a lot of flexibility to your maintenance. I like to use this in a couple of different situations:

  1. If your maintenance is done using multiple tools, for example a Red Gate backup, a custom re-indexing plan, and a simple T-SQL script to run a consistency check.
  2. If your maintenance is done across multiple servers. If you have three servers that all back up to the same network share, you could have one server execute at a time so you do not clog up the network and the storage.

Adding a step to execute the next job is pretty simple.

EXEC msdb.dbo.sp_start_job @job_name = N'My Job Name';


If you need this to occur across servers, you can simply make the call to the other server using a linked server.
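
For example, assuming a linked server named [OtherServer] is already configured (with RPC Out enabled), the call is the same procedure addressed by its four-part name:

EXEC [OtherServer].msdb.dbo.sp_start_job @job_name = N'My Job Name';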

I hope this tip has helped you in one fashion or another.


Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance so that it completes before the busy time. The common approach to this problem is to determine how long a job takes to execute (often by trial and error) and then adjust the start time to give the job enough time to finish. There is another way…

SQL Server has a number of system stored procedures that you can use to perform tasks you might otherwise do in the user interface. For example, if you want to stop a job, you can open SQL Server Management Studio, navigate to the job, right-click, and stop it. Here is where the system-supplied stored procedures come into play. What if your busy time of day starts at 6:00 AM, and you want to make sure that the indexing has finished by 5:00 AM so the system is ready to take on the day? Do you really want to wake up at 5:00 AM just to right-click and stop the job, on the chance that it is still running?

Simply schedule a job that executes at 5:00 AM (the time you want the maintenance job to be done by), and create a step that stops the job:

EXEC msdb.dbo.sp_stop_job @job_name = N'My Job Name';

Not too complex. But what if you want to add some logic so the job first checks whether the maintenance job is actually executing before trying to stop it? And while we are looking at the options, we should add a line of code that emails us whenever the maintenance job has run long and had to be stopped.

SELECT j.name
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobactivity a
    ON j.job_id = a.job_id
   AND j.name = N'My Job Name'
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;

IF @@ROWCOUNT > 0   -- the job is still running past its window
BEGIN
    EXEC msdb.dbo.sp_stop_job @job_name = N'My Job Name';

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'MyMailProfile',
        @recipients = N'Me@xtivia.com',
        @subject = N'Index Rebuild',
        @body = N'The index rebuild job had to be stopped due to long run time.';
END
ELSE
    RETURN;

I hope this tip has helped you in one fashion or another. If you would like my list of Top 10 Tips for SQL Server Performance and Resiliency, it can be found here, starting with Tip #1.


Top 10 Tips for SQL Server Performance and Resiliency

This article is part 10 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #10: Storage

For the final post in the top 10 items that influence the performance and resiliency of your databases, we will talk about storage. Storage is by far the number one cause of performance issues I have experienced, and over the last 20 years of my career the changes in storage have been huge. When I first started as a professional, we were using internal spinning disks; we then quickly moved up to internal RAID arrays. Five or so years later, we moved to external RAID cages. Eventually the road took us to SAN and NAS storage and, most recently, SSD. With all these changes, it is easy to see why we have administrators who focus on nothing but storage. If you are fortunate enough to have a storage administrator, do you as a DBA still need to understand what the storage is doing?

How can you identify whether you have a storage bottleneck? There are a number of indicators that can provide you with evidence that your database is having a storage issue. A common indicator used for a number of years is storage latency. Storage latency information is collected by using Performance Monitor in Windows; add the counters Avg. Disk sec/Read and Avg. Disk sec/Write under the PhysicalDisk (or LogicalDisk) object.

Storage latency can be monitored in real time, or the data can be recorded by starting a data collector set.
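
You can also measure latency from inside SQL Server. Here is a minimal sketch using the sys.dm_io_virtual_file_stats DMV; keep in mind the numbers are cumulative since the instance last started, so for current behavior compare two samples taken over an interval:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_write_latency_ms DESC;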

According to Microsoft best practices, latency on the disk where the log file resides should be less than 5 milliseconds, and data file latency should be less than 20 milliseconds. In my experience, I have seen log file latency climb as high as 10 milliseconds, and sometimes a little higher during spikes, without any end-user impact. In addition, take note of the location of the TempDB database, as we talked about in Tip 8; you will want to keep TempDB on the fastest storage you can.

There are additional tools you may want to consider using to dig deeper into the performance of your storage systems, such as SQLIO and SQLIOSim, both from Microsoft. A couple of popular third-party tools include IOMeter and CrystalDiskMark.

Please do not miss my other blogs regarding this topic.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB
  9. AutoShrink


Top 10 Tips for SQL Server Performance and Resiliency

This article is part 9 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #9: Automatically Shrinking Your Database

This is a topic that has been written about frequently, and most often I try not to re-hash what many people have already blogged about. However, as often as I see this, I would be remiss if I did not add auto shrink to the list.

Often you will see IT professionals approaching their tasks from different angles. If you were a systems admin and you needed some additional storage on a server, you might send the storage admin a request for an additional 50 gigs, or whatever amount you need. As a database professional, you would be wise to include not only the amount of storage you need but also the performance specifications you require. As DBAs, we need to understand that SQL Server management may not always translate well to other types of systems management. Granted, this should be no surprise; it is understood that we do not approach all things the same way. Where this comes into play is in understanding that we all have different backgrounds and became DBAs by different career paths.

If you are new to being a database administrator, or the primary focus of your job is not to be a DBA, you may see benefits in shrinking a database automatically. If the database shrinks by itself, it might be considered self-management; however, there is a problem when doing this.

When you shrink a data file, SQL Server recovers the unused pages and gives that space back to the OS so it can be used somewhere else. To do that, it moves pages from the end of the file into whatever free space is available earlier in the file, and the downstream effect is that your indexes become fragmented. This can be demonstrated in a simple test.

I have a database in my lab based on the Chicago Crime Stats. I have been doing a lot of testing in the database with an automated indexing script that has me inserting and deleting a large number of rows at different times. Over time this database has become rather large for my small lab, so it is time to shrink it down to a more manageable size. The first thing to do is check the status of my indexes.

This is a simple query that will return all the indexes in the database along with their fragmentation levels:

SELECT DB_NAME() AS [database],
       OBJECT_NAME(ps.object_id) AS [table],
       i.name AS index_name,
       ROUND(ps.avg_fragmentation_in_percent, 0) AS frag
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ps
JOIN sys.indexes i
    ON ps.object_id = i.object_id
   AND ps.index_id = i.index_id;


The results look like this:

[Screenshot: index fragmentation results before the shrink]

More or less, the indexes are looking good; there is not a lot of fragmentation except in one table (that is a discussion for a later topic). What happens if I shrink the whole database, including not only the log file but the data file as well?


Use the following T-SQL:

DBCC SHRINKDATABASE ([ChicagoCrimeStats]);

Rerunning the index fragmentation script, I now receive these results:

[Screenshot: index fragmentation results after the shrink]

If I have queries that use the IDX_Crimes_Frag_XCORD_Clustered index, there is a very good chance the performance of those queries is going to degrade.

There are times when you may need to shrink a file, for example after a large delete of records or after archiving much of the data out of the database. These sorts of operations remove data, leaving your database with a lot of free space. This free space can be reclaimed by using the DBCC SHRINKFILE or DBCC SHRINKDATABASE T-SQL commands; however, be aware that you should re-index after those statements are run, as in the sketch below.
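
Here is a minimal sketch of that sequence; the database, file, and table names and the target size are placeholders:

USE [MyDatabase];

-- Reclaim the free space left by the delete/archive; the target size is in MB.
DBCC SHRINKFILE (N'MyDatabase_Data', 10240);

-- The shrink fragments the indexes, so rebuild them afterwards
-- (repeat per table, or run your regular index maintenance job).
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;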

It is not a bad thing to shrink a database, as long as you do it in a controlled manner with proper maintenance afterwards.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 8 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #8: Default TempDB Settings

By default, when you install SQL Server, the TempDB database is not optimized. For SQL Servers that use TempDB even moderately, I recommend optimizing it for the best performance. TempDB is the storage area SQL Server uses to hold data for short periods of time; information found in TempDB includes temporary tables and data being sorted for query results. The data in TempDB is so temporary that each time the SQL Server service restarts, TempDB is recreated.

Due to the specific nature of the data in TempDB and the frequency of its use, it is important to ensure you have an optimal configuration.

Optimizing TempDB is so important that Microsoft is changing how it is installed with SQL Server 2016 to encourage this optimization. How do you optimize TempDB? It's not difficult at all if you follow these few pointers:

Place your TempDB on the fastest storage you have. Ultimately this storage should be delivering latency of less than 5 milliseconds. Does your server have access to SSD storage? If so, that is a great place for TempDB.

A great number of studies have been done to determine the ideal number of files to split your TempDB across. In my experience, I tend to create one TempDB data file for each processor core on the server; however, I don't do this until I find there is some contention in TempDB.

Grow your TempDB to the size you need it. TempDB is recreated each time the service is restarted, so if the default file size is smaller than its normal operational size, you are going to sit through a series of growth events. Speaking of growth events, it is better to have controlled growth than a number of little growth events, so review the auto-growth size as well. A sketch of both settings follows.
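
Here is a minimal sketch of pre-sizing TempDB and adding a second data file; the file names, sizes, growth increment, and path are illustrative placeholders, not prescriptions:

-- Pre-size the primary TempDB data file and set a fixed growth increment.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 512MB);

-- Add additional data files as needed (for example, working up toward
-- one per core once you actually see TempDB contention).
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = N'T:\TempDB\tempdev2.ndf',
              SIZE = 8192MB, FILEGROWTH = 512MB);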

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans


What a Great Weekend

Posted: April 14, 2014 in Uncategorized

SQL Saturday #297


On April 12th we had the 3rd SQL Saturday hosted in Colorado Springs, CO. The event met and exceeded all of our goals, though our goals may not line up with some of the traditional SQL Saturday goals you may have heard of. We share the familiar goals of reaching the community and providing quality SQL Server education for little or no cost. However, we get a bit more specific: we want to magnify the importance of networking within the community, and this year in particular we were hoping that if attendees were presented with an opportunity to give back to the community, they would respond. The SQL Saturday attendees responded with a statement that exceeded the wildest imagination of the organizers.

The Food Drive

When touring potential facilities, we found that our number one choice shared space with a food bank. We saw this as an opportunity for the SQL Server community to give back to the community hosting the event. The plan was to ask speakers, attendees, and sponsors to donate food items for the food bank. Two sponsors offered SWAG gifts that were above and beyond anything we had hoped for: Xtivia was fast to react and donated an autographed AC/DC guitar, and Dell was quick to respond with a new Dell tablet. Valorem Consulting, which was not a sponsor, donated a gift card as well. We even had SQL Server user groups in neighboring cities show support with donations, and people who could not be at the event shipped us food to donate.

The end result? Early, conservative numbers shared with me by the Westside Cares food pantry show over 1,000 pounds of food donated. That is a hard number to get a hold of, so we have attempted to put it into easier-to-understand terms:

  • There was so much food that it will be spread across four food banks in Colorado Springs.
  • It averages out to 5 pounds of food per attendee.
  • The weight is equivalent to 4,000 quarter-pound cheeseburgers.

I hope to post a blog in the near future about the food drive specifically. I think if we as organizers give attendees an opportunity to give back to the communities where we live and work, we can make great events even better. I just don't think there is enough I can say about how well this went.

What happened to the Guitar?

Remember, our goal was to show attendees the importance of networking. We had an attendee who works at a local office supply store. She is close to 22 years old and is working while attending a local college to get a degree in either database technology or networking technology. This young lady showed one of our attendees a level of customer service that ended with an invite to the SQL Saturday. She came to the event looking to network with others who work with SQL Server and to learn about the certification tests. She was the winner of the guitar. My personal hope is that she received a lot more than just a guitar; I hope she left with connections that can help her all the way through her career, from learning about SQL Server to getting her first full-time DBA job to retirement.

Who do we need to thank?

Attendees – What an outstanding response to the food drive.

Speakers – Most of the speakers travel to these events all around the world at their own cost. They don't get reimbursed or sponsored for these trips. They simply want to give back to the community and improve their speaking skills.

Sponsors – The reality is that these events cost a lot of money to put on. There are adjustments we can make if we don't have a good budget, but the support from our sponsors helps pay for the facility, lunch, giveaways, cleanup, projectors, coffee, afternoon snacks, and so many other costs. Without the sponsors, this event just would not have been the same.

PASS – One of the reasons we can have an event like this is the support we receive from PASS. Not only do they provide us with a website, but also countless management tools that remove some of the tasks each SQL Saturday has to do. This year we were lucky to have PASS's Karla Landrum and her husband come out to help us. They were a huge part of the success of our event.

What we did well

The food drive – I am sure there are things we could have done differently to make the food drive better, but honestly, I just don't think there is any argument that 1,000 pounds of food is anything but an awesome response.

Speaker dinner – Our SQL Saturday has been fortunate enough to have Data Realized as a top sponsor; they have been with us for all of the SQL Saturdays that Colorado Springs has hosted. What they contribute to the event cannot be appreciated enough. This year, not only did they provide backpacks for each attendee, but they also hosted the speaker dinner. When we were talking about what to do for the speaker dinner, we wanted to make sure our speakers had an evening that gave everyone exposure to our local area. Data Realized wanted to make sure it was nice and showed that we really do appreciate all the speakers do for us.

We ended up at a local restaurant and theater called the Iron Springs Chateau, and the facility did a great job hosting us. The 50-year-old restaurant was a great choice of venue, and the melodrama we saw was a lot of fun for everyone.

Our venue – Our event was at a 100-year-old facility that started life as an elementary school. For the cost of the facility, compared with everything we received, plus a staff that was happy to help us, it was hard to beat.

Set up and tear down – I cannot say enough about all the volunteers. We had two hours to set up before we had to head over to the speaker dinner and the facility closed for the night. There were a number of very strategic items that had to happen in that window, and we were worried that not everything would be completed in time. I was shocked when we were done setting up an hour early. If you are looking into hosting a SQL Saturday, I cannot urge you enough to work closely with all the volunteers.


Time to Vote

Posted: September 23, 2013 in Uncategorized

Some of you may know that I am pretty passionate about the PASS community. The community has had its ups and downs over the last few years; maybe this is why I have become fascinated with who is serving on the Board of Directors (BOD). I have considered running for the board a couple of times in the past, but I have determined that as much as I think I would enjoy serving on the BOD, I just don't think I am the right person. To make that determination I had to do a lot of soul searching. I had a lot of fears, such as not knowing whether I could even get past the vetting stage, but many of my fears, I believe, come down to the reality that it just isn't good timing for me. However, this post isn't about me. The reason I tell you about my path of self-review in considering a run for the BOD is so that you might consider a few things before you cast your vote. During that self-consideration I determined that certain traits should be considered:

  • A BOD member needs to be a person who instills trust. There are a lot of items discussed in the board room, and sometimes we may not know all the facts around what was discussed. The board is asked to vote on many items during the year, and it is important to me that the people we have in office are those I know will consider all the facts, review the positives and negatives of each decision, and then cast a vote in the direction that is best for the health of the PASS organization as a whole. I don't want a board member who is out for their own gain.
  • A BOD member is NOT a lemming. Each of our board members needs to make decisions on their own. Voting the way everyone else is going to vote does not make the vote right, nor does it make it wrong. A BOD member is someone who determines what is best for the community and uses that as the driving factor in how they vote.
  • A BOD member knows how to listen. If you watch them, and watch them closely, the board members are talking to the PASS members; they want to know what issues are important to us. They may not always be able to comment on different topics, but that should never discourage you from telling them what is important to you as a member of PASS.
  • A BOD member is in it for the long haul. If you look into the history of what they have done, you should see many interactions with the community. The people who have been vetted have a lot of experience with PASS and the SQL community.

I am sure there are a number of other traits I could add to this list, and I am sure you have a few of your own that you might add. The important thing is that you take the time to identify what those traits are and what they mean to you. When I see the list, I know almost all the names, and a couple of them jump off the page as people I think would make great board members…

Allen Kinsel

When I first started working with the Colorado Springs SQL Server users group, our relationship with PASS was a little unknown; we were not sure how the relationship worked. After a lot of growing, not only by PASS but by our local users group, I had the opportunity to work with Allen. He was the board member responsible for working with the chapters. I can tell you personally, I never saw as much growth in the way PASS works with the local chapters as I did when Allen held that role. Granted, he had a lot of help from Karla Landrum (one of my favorite people involved with PASS) and a number of other people as well, but it was under Allen's watch that I saw the most progress. When I heard Allen was not going to be back working with the chapters, it felt like a body punch delivered by Tyson himself. Now, this has a happy ending, because the next board member to work with the chapters was a good friend, Wendy Pastrick, and I believe she carried the torch well. When I look at what Allen has done, and I watched as he walked away with his head held high, I knew that if his name ever came on the ballot again, I was going to do everything I could to support him. I can tell you without hesitation that Allen meets all the criteria and expectations I have for a board member. Allen, you've got my vote; you earned it.

Amy Lewis

When I saw Amy on the ballot, I thought about all the interactions I have had with her. I tried and tried to come up with some catchy way to tell you how she has influenced the community and motivated me, but every time I keep coming back to the same thing. When I look at how I have organized user group meetings or SQL Saturdays, I always feel like there is something else I should have done, things like making sure the speakers know how much they are appreciated, or keeping up consistent communication with the chapter. Amy does that; she has some of the best follow-through I have ever seen. Then I look at how she is involved: her dedication to Women in Technology (WIT), or serving as a committee chair for the PASS Summit. She just doesn't stop. There are a number of what I would call example chapters around the world: chapters that are strong, chapters that are growing, chapters that you can use as an example to make things better in your own chapter. I can say without a doubt that the Phoenix chapter is one of those, and I believe a lot of it has to do with Amy. When she is involved in something, no matter what it is, I think that something just got a whole lot better. I would be proud to have her as part of any event I worked on.

It is important to me that I say I support each of the candidates that are running. I applaud the Nomination Committee for providing us with such qualified candidates. This is going to be a very difficult decision, and I don’t want to take away from anyone who is on the ballot. To all the candidates, I wish you good luck, I know you will do well. We have had many great leaders in the past, and we have great leadership on the board now. I am excited to see what the coming years have in store for us.


For the next couple days

Posted: April 13, 2012 in Uncategorized

Just so you are aware, if you visit my blog over the next few days, it may look a little crazy. I am working on making it a bit more usable and friendly. If you see something that looks out of whack, by all means let me know.

–Chris

Not a lot of time this week (I'm working on taking the SQL Server 2012 certification exams), but I wanted to make sure this made it onto the blog. Have you signed up for SQL Rally yet? TJay Belt and I are working on a pre-con together, and we are having fun putting it together. TJay (T|B) and I were talking about the interview question they had presented us, and as we were kicking around ideas for a billboard, I thought of the poster you see attached. A quick discussion with my good friend Tom Roush and the poster became reality.

The other piece of news I have this week has to do with the Microsoft MVP Award. This is an award that one needs to earn each year, so every April 1st I sit around and worry about whether I will receive another award; yesterday I found out that I get to put my worries aside for one more year. Each time the award is handed out, I spend some time looking to see who the new award winners are. This April it looks like Jes Borland (T|B) received her first award. Of the large number of people I have met over the years working with SQL Server, Jes is one of those who left a long-lasting impression. She is an active runner, and, well, if you get a chance to meet her, be careful… she has a lot of energy. The other new MVP that I am aware of is Kendra Little (T|B). Both Jes and Kendra are great resources. If I missed any other new MVPs, I am sorry; it is a little difficult to keep up with Twitter sometimes, and that is where I get the announcements. Congrats to all the renewed MVPs and all the new MVPs.

I must say that I really enjoy working with a product from cradle to grave, and the news I have today is something I announce with mixed emotions. However, I am convinced that I am making the right decision, not only for myself but for my family and my career.

For nearly the last two years I have been employed by a company named Alcohol Monitoring Systems (AMS). Together we have taken a number of steps, not only learning new ways to do things with the technology but also moving the environment forward to the next steps in high availability with SQL Server. The product they have is one of the coolest I have seen, and the technology is unmatched by their peers. One of the things I really enjoyed at AMS was that each day I came into work I had the opportunity to work on different aspects of SQL Server; some days I would work on administrative features, and others on performance monitoring. I made a number of friends that I will not soon forget, and I wish them success in the future.

I am not sure how much I can say about my new job. I will be working in Colorado Springs at one of the Air Force bases. There is a lot of attraction for me in being involved with the military again. There are always new challenges when starting a new position, and each one of those challenges provides me with a new opportunity to learn something new about SQL Server, my RDBMS of choice.

Thanks to everyone I have had the opportunity to meet along my career path to this point.