SQL Saturday San Antonio #550

Posted: August 8, 2016 in Events


This weekend I am heading north to the great city of San Antonio, TX.  I am excited to see what the organizers have put together for us.  This is the first time San Antonio is hosting a SQL Saturday and from what I have seen so far it should be a great event.  I was looking at the lineup of speakers and sessions and I know from experience this is an event well worth attending.

At this event I will be speaking about common mistakes that I have seen clients make when it comes to their SQL Server environments, specifically how these mistakes impact the servers. Here is the full title and abstract.

Tips for SQL Server Performance and Resiliency

After working 20 years in the industry on hundreds of SQL Servers, clear patterns emerge. These patterns and best practices directly relate to the health of the server. During this session we will review the patterns of healthy SQL Servers and identify unhealthy behavior that clearly shows servers at risk.

We will review shared patterns, configurations, and practices: which of these repeating patterns represent healthy servers, and which are shared with servers that are having issues daily.

These patterns are so clear, in fact, that many SQL Servers will benefit from the application of at least one, if not more, of the tips we will cover. At the end of this session attendees will have the tools to identify and correct common issues that impact SQL Server Performance and Resiliency.

If you are in town on Friday I recommend you check out one of the Pre-Con sessions. I know both Kalen and Tim, who will be presenting the topics below, and I can assure you both of these sessions are money well spent investing in your future.

What the Hekaton? SQL Server’s In-memory OLTP

Presented By Microsoft MVP Kalen Delaney

SQL Server In-Memory OLTP will completely change the way you think about data management. But once you make the change, you won’t be sorry! In this session, you’ll find out how Microsoft is able to give up to 40X performance improvement!

Early bird pricing has ended!

Register Now for $199
 

Building Better SSIS Packages

Presented By Microsoft MVP Tim Mitchell

There’s nothing magical about building rock-solid SSIS packages, but it does take some discipline, experience, and a library of best practices. That is exactly the aim of this course: to demonstrate a set of proven practices that help frame the development of enterprise-ready SSIS packages.

In this full-day presentation, we will walk through each of these five facets of well-built packages, discussing and then demonstrating ways to apply these practices to design better SSIS packages.

Register Now for $169

SQL Saturday Changes

Posted: July 27, 2016 in Events, News

If you keep up on all the news coming out of PASS then you might have seen the recent changes that are going to be made to the SQL Saturday offering starting Jan 1st, 2017.  If you haven’t had a chance to see the news yet, you can take a look at it here:  Making SQL Saturday Sustainable.

In short, there are a couple of things PASS is addressing with this post. One is related to how PASS will sponsor SQL Saturdays in the future; the second has to do with event timing. Taken from the PASS blog, the statement says:

“The event timing restriction means that events within the 600-mile radius cannot occur on the same weekend or one weekend prior or after.”

The timing restriction is what concerns me. I have been working with SQL Saturdays for some time now in a few different roles: I have been an Organizer, a Sponsor, a Speaker, and an Attendee. My thoughts are pretty jumbled up (much like this blog post), so I decided to look at each of these roles and consider what my goals were at SQL Saturday from each perspective.

As an Organizer – I wanted to bring SQL Saturday to Colorado Springs.  Denver is not far away (60 miles or so); however, we are considered a large city and have a number of SQL Server professionals in the area, so we could have had our own event. My goal here may have been somewhat selfish, but I wanted an event that was local to my city, an event where we could learn, network, and have a bit of fun.  If you ever attended one of the early Colorado Springs SQL Saturdays, I think you would agree we tried to bring all those things together.

As a Speaker – I have a few reasons why I speak at these events. Speaking is a great way to improve myself professionally. Being able to address an audience has helped me a lot in my career. I am also a firm believer in the best way to know a subject is to teach it.

As an Attendee – I think this one is self-explanatory: I go to the events as an attendee because I want to continue to grow my skill set.

As a Sponsor – I had two goals when I sponsored SQL Saturday events.  First, I wanted to generate new sales leads. Second, I wanted to get my branding out in front of potential customers. Developing new leads was a very short-term goal, while brand recognition was a longer-term goal.

 

In each of these roles I faced different types of challenges. These vary from role to role, but here are my experiences.

 

Organizer Challenges

There are many challenges that face the organizer today: volunteers, space, and time to work on the event.  However, given the policy change, in my opinion the biggest challenge is the money. There are so many things that need to be paid for: speaker shirts, speaker gifts, the space, lunch, swag, and even insurance. Granted, your event does not require you to have a speaker shirt or a speaker gift, but you do need a space to hold the event, and there are other management costs that need to be considered. Currently, there are only a couple of ways an organizer can bring in money: charging for lunch and finding sponsors.

 

Speaker Challenges

As a speaker, my biggest financial concern is getting to the event.  There are a few ways I can get to an event as a speaker.  I could pay for my own travel; however, if I were to do that it would really limit how many of these events I could attend.  My company could pick up the travel, but much like an individual, the money bucket isn’t bottomless.  There is the option of presenting a pre-con, but then there is a risk that there may not be enough attendees to cover my travel expenses.

Granted, as a speaker, I am one of many people willing to go to an event, so if I can’t afford to go, there are others that might be able to.  The end result is, as a speaker, I can reach my goals. I would like to go to more events, but there are enough that are close enough to me that I can stay pretty active.

 

Attendee Challenges

In the time that SQL Saturday has been around there have been over 400 events. In my mind it is really easy to see why: the platform delivers quality education at little to no cost to the attendees.

Consider this: a new movie is about to be released, and the quality of the entertainment is top notch, with great actors, writers, and directors. Would you go?  Maybe not if you don’t like movies, or if you are not a fan of the actors in the movie, but if the movie was free, would you go then?

I think when you look at the fact that quality education is being delivered for little to no cost, you would have to consider the event a success.  You know what they call a free movie with great actors?  Television, and even if you pay for television, you are going to get the commercials.

 

Sponsor Challenges

As a sponsor I want to be involved with as many events as I can; however, very few companies have a marketing budget that allows for sponsorship of every event.  This means that sponsors have to pick and choose.  As a sponsor with a limited budget, it is my responsibility to the company I work for to ensure I get the biggest bang for my buck.  If I know there were three events in Colorado last year, I would expect at least three events this year.  If the attendee demographics of each event are close to the same, why not pick the largest event and sponsor that one?

The reality is these events cost money.  The more I think about this, the more I think that PASS may have taken what they believe to be a proactive step.  This move is going to force events to plan more with other nearby events (not a bad idea anyway), but I am not sure this is the right way to go about it.

If we go back and look at the movie example that I mentioned before, what PASS is doing is saying we can’t run the same program on channels near each other at the same time.  I am not sure I understand that rationale.  In like fashion, does this mean they should consider moving the dates of the PASS Summit so that it does not overlap, in distance or timing, with other conferences that may be going on at the same time?

I am sure there was a lot of discussion about this before making the decision, but I can’t help but think there are options other than making this policy change.  Some of the items that come quickly to my mind:

  1. Why not charge a small fee for each attendee? If you had a SQL Saturday with 150 people and charged $25.00 for the event, this would add an additional $3,750.00 to the budget. That alone could pay for an event if the belt is tightened a notch or two. Add in a few sponsors and you have plenty of money. With a small fee, each event could be run without the need for so many sponsors, yet the goal of SQL Saturday is still reached by providing low-cost education to the technical community.
  2. Maybe there are ideas out there to help get sponsors more visibility.  As a sponsor I might be willing to pay more for “SQL Saturday brought to you by XYZ Company.” Look at how many organizations are embracing new ideas to get their company name out there.  The Broncos used to play at Mile High Stadium, but not any longer.  All I know about basketball is the Lakers play at the Staples Center.  Maybe it is time for events to help the sponsors engage with the attendees better.  Yes, I know the organizers have a lot on their plate already, but companies have to pay the bills.
  3. Maybe we just tone the whole thing down a bit.  Remember the point of a SQL Saturday? Education, not sales and free gifts.  Go with smaller events, meet at free locations, and don’t provide a lunch. Who says a SQL Saturday can’t have two speakers and twenty attendees?  Would an event that small be considered a failure if the cost was small to nothing and everyone learned one new thing they could use at work on Monday?

I have to say, I disagree with PASS on this one.  Each of these events needs to look at what its goals are and what it considers a success.  I can see why there might be an event in Denver and an event in Las Vegas on the same day. I think we may all need to come back to why we do this and what our goal is.

 

Career Decision Time

Posted: June 28, 2016 in Career

They say that you cannot have the best of both worlds.  They say that you cannot have your cake and eat it too.  They say that the grass isn’t really greener on the other side.

For all of you who say things like that, you are wrong.

A number of years ago I was faced with a career path decision.  Do I remain a full-time employee as a DBA for a company, or do I focus on becoming a consultant?  There are benefits both to being a SQL Server consultant and to being a full-time employee (FTE) with an organization.  As an FTE, most of the time when you go into work each morning you have a good idea of what you are going to be working on.  You know the servers and the little quirks that need to be managed to keep your systems running at full speed.  There is a lot to be said for building a long-term relationship with a company that you can work with for years to come, maybe even retire from.

The downside as an FTE is that new projects don’t come around every day, and companies are not always adopting the newest SQL Server technology, so as an FTE DBA your experience with SQL Server becomes limited by how the organization you work for uses SQL Server.  In like fashion, as an FTE, if the database technology is stable and isn’t progressing, the need for a DBA to work on the same systems for an extended period may just not be there.

I have found that I can have my cake and eat it too.  I have found that, yeah, it’s greener, and in short, I’ve arrived.

I have accepted a position at Fortified Data.  This position is going to allow me to build a relationship with an established professional services organization that specializes in complex SQL Server environments while offering them managed services at the same time.

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam.  There are going to be times when you need to run maintenance on your server.  The tricky part is determining when to start the maintenance jobs so they finish before the busy time. For example, what if you need to back up your database, then re-index it, and follow that up with a consistency check?

The common approach to this problem is to simply determine how long each job takes to execute (often by trial and error) and then adjust the start time of each job to give the previous one enough time to finish.  The problem with this method is you are hoping the first job in the chain completes on time before the second job starts, so you end up leaving large gaps so one long-running job does not step on the next.  However, there are options…

Option 1

If you are using a maintenance plan, you can keep all the tasks that are scheduled to run together in the same sub-plan.  Sometimes this does not provide the flexibility individuals want, but it is an effective method.

Option 2

You can create multiple steps in a single job.  If we use the example above, where you want to run a backup, then a re-index, and then DBCC, you can create three different steps; this way, as soon as one step completes, the next step is executed. This method removes the need to guess when one job will finish and the next should start.
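If you would rather script this than build it in the UI, here is a minimal sketch using the SQL Server Agent stored procedures in msdb. The job name, step commands, database name, and backup path are all placeholders, and you would still attach a schedule with sp_add_jobschedule.

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Nightly Maintenance';
-- Step 1: backup; on success, go to the next step (@on_success_action = 3)
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Backup', @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE [MyDatabase] TO DISK = N''X:\Backups\MyDatabase.bak''',
    @on_success_action = 3;
-- Step 2: re-index; on success, go to the next step
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Re-Index', @subsystem = N'TSQL',
    @command = N'ALTER INDEX ALL ON dbo.MyTable REBUILD;',
    @on_success_action = 3;
-- Step 3: consistency check; on success, quit reporting success (@on_success_action = 1)
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'DBCC', @subsystem = N'TSQL',
    @command = N'DBCC CHECKDB ([MyDatabase]);',
    @on_success_action = 1;
-- Register the job on the local server
EXEC dbo.sp_add_jobserver @job_name = N'Nightly Maintenance';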

Option 3

Each task could have its own job, and the last step of each job would start the next job.  This adds a lot of flexibility to your maintenance.  I like to use this in a couple of different situations:

  1. If your maintenance is done using multiple tools, for example a Red Gate backup, a custom re-indexing plan, and a simple T-SQL script to run a consistency check.
  2. If your maintenance is done across multiple servers. If you have three servers that all back up to the same network share, you could have one server execute at a time so you don’t clog up the network and the storage.

Adding a step to execute the next job is pretty simple.

EXEC msdb.dbo.sp_start_job @job_name = N'My Job Name';

 

If you need to schedule this to occur across servers, you can simply make the call to the other server using a linked server.
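As a rough sketch, assuming a linked server named [RemoteServer] already exists and has the RPC Out option enabled, the remote call looks something like this:

-- One-time setup: allow remote procedure calls through the linked server
EXEC sp_serveroption @server = N'RemoteServer', @optname = 'rpc out', @optvalue = 'true';

-- Start a job on the remote server's SQL Server Agent
EXEC [RemoteServer].msdb.dbo.sp_start_job @job_name = N'My Job Name';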

I hope this tip has helped you in one fashion or another.


Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam.  There are going to be times when you need to run maintenance on your server.  The tricky part is determining when to start the maintenance so that it completes before the busy time.  The common approach to this problem is to simply determine how long a job takes to execute (often by trial and error) and then adjust the start time to give the job enough time to finish.  There is another way…

SQL Server has a number of system stored procedures you can use to perform tasks you might otherwise do in the user interface. For example, if you want to stop a job, you can open SQL Server Management Studio, navigate to the job, right-click, and stop it.  Here is where the system-supplied stored procedures come into play.  What if your busy time of the day starts at 6:00 AM, and you want to make sure the indexing has finished by 5:00 AM so the system is ready to take on the day?  Do you really want to wake up at 5:00 AM just to right-click and stop the job, on the chance that it is still running?

Simply schedule a job to execute at 5:00 AM (the time you want the maintenance job to be done by) and create a step that will stop the job.

EXEC msdb.dbo.sp_stop_job @job_name = N'My Job Name';

Not too complex. But what if you want to add some logic so that, rather than blindly trying to stop the job, it first checks whether the job is actually executing?  And while we are looking at the options, we should put a line of code in there that will email us whenever the maintenance job has run long and had to be stopped.

-- Find the job if it is currently executing (started but not yet stopped)
SELECT name
FROM msdb.dbo.sysjobs j
     JOIN msdb.dbo.sysjobactivity a
       ON j.job_id = a.job_id
      AND j.name = 'My Job Name'
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;

IF @@ROWCOUNT > 0
BEGIN
    -- The job is still running: stop it and send a notification email
    EXEC msdb.dbo.sp_stop_job @job_name = 'My Job Name';

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'MyMailProfile',
         @recipients = 'Me@xtivia.com',
         @body = 'The Indexing Rebuild Job had to be stopped due to long run time.',
         @subject = 'Index Rebuild';
END
ELSE
    RETURN;

I hope this tip has helped you in one fashion or another.  My list of Top 10 Tips for SQL Server Performance and Resiliency can be found here, starting with Tip #1.

 

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 10 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #10: Storage

For the final post in the top 10 items that influence the performance and resiliency of the databases, we will talk about storage.  Storage is by far the number one cause of performance issues I have experienced, and over the last 20 years of my career the changes in storage have been huge. When I first started as a professional, we were using internal spinning disks; we then quickly moved up to internal RAID arrays. Five or so years later, we moved to external RAID cages.  Eventually the road took us to SAN and NAS storage and, most recently, SSD.  With all these changes, it is easy to see why we have administrators who focus on nothing but storage.  If you are fortunate enough to have a storage administrator, do you as a DBA still need to understand what the storage is doing?

How can you identify if you are having some sort of storage performance bottleneck?  There are a number of indicators that can provide you with evidence your database is having a storage issue. A common indicator used for a number of years is storage latency.  Storage latency information is collected using Performance Monitor in Windows: add the counters Avg. Disk sec/Read and Avg. Disk sec/Write under the LogicalDisk or PhysicalDisk object.

The storage latency can be monitored in real time or the data can be recorded by starting a data collection.

According to Microsoft best practices, the latency on the disk where the log file resides should be less than five milliseconds, and data file latency should be less than 20 milliseconds.  In my experience, I have seen log file latency climb as high as 10 milliseconds, and sometimes a little higher during spikes, without any end-user impact.   In addition, take note of the location of the TempDB database, as we talked about in Tip 8. You will want to ensure you are keeping TempDB on the fastest storage you can.
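If you would rather check latency from inside SQL Server instead of Performance Monitor, a query against the sys.dm_io_virtual_file_stats DMV will get you in the ballpark. A minimal sketch; keep in mind these numbers are averages since the last restart, not a real-time view:

-- Average read/write latency (ms) per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS [database],
       mf.physical_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
     JOIN sys.master_files mf
       ON vfs.database_id = mf.database_id
      AND vfs.file_id = mf.file_id
ORDER BY avg_write_ms DESC;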

There are additional tools you may want to consider using to dig deeper into the performance of your storage systems, such as SQLIO and SQLIOSim, both from Microsoft.  A couple of popular third-party tools include IOMeter and CrystalDiskMark.

Please do not miss my other blogs regarding this topic.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB
  9. AutoShrink

 

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 9 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #9: Automatically Shrinking Your Database

This is a topic that has been written about frequently, and most often I try not to re-hash what many people have already blogged about.  However, as often as I see this, I would be remiss if I did not add auto shrink to the list.

Often you will see IT professionals approaching their tasks from different angles.  Consider: if you were a systems admin and you needed some additional storage on a server, you might send the storage admin a request for an additional 50 gigs, or whatever amount you need.  As a database professional, you would be wise to include not only the size of storage that you need but also the performance specifications that you require.  As DBAs, we need to understand that SQL Server management may not always translate well to other types of systems management.  Granted, this should be no surprise; it is understood we do not all approach things the same way, but where this comes into play is in recognizing that we all have different backgrounds.  We became DBAs from different career paths.

If you are new to being a database administrator, or the primary focus of your job is not to be a DBA, you may see benefits in shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem with doing this.

When you shrink a data file, SQL Server recovers the unused pages and gives that space back to the OS so it can be used somewhere else.  To do this, it moves pages from the end of the file into whatever free space it can find toward the front, and that page shuffling is what leaves your indexes fragmented.  This can be demonstrated in a simple test.

I have a database in my lab based on the Chicago Crime Stats.  I have been doing a lot of testing in this database with an automated indexing script that has me inserting and deleting a large number of rows at different times.  Over time this database has become rather large for my small lab, so it is time to shrink it down to a more manageable size.  The first thing to do is check the status of my indexes.

This is a simple query that will return all the indexes in the database along with their fragmentation levels.

SELECT db_name() as [database],
      Object_Name(ps.object_id) as [table],
      i.name as Index_Name,
      round(avg_fragmentation_in_percent, 0) as Frag
FROM sys.dm_db_index_physical_stats(db_id(), null, null, NULL, NULL) ps
            Join sys.indexes i on ps.Object_ID = i.object_ID and ps.index_id = i.index_id

 

The results look like this:

[Image: index fragmentation results before the shrink]

 

More or less the indexes are looking good; there is not a lot of fragmentation, except in one table (that is a discussion for a later topic). What happens if I shrink the whole database, including not only the log but the data file as well?

 

Use the following T-SQL:

DBCC ShrinkDatabase ([ChicagoCrimeStats])

Rerunning the index fragmentation script, I now receive these results:

[Image: index fragmentation results after the shrink]

 

If I have queries that use the IDX_Crimes_Frag_XCORD_Clustered index, there is a really good chance the performance of those queries is going to degrade.

There are times when you may need to shrink a file, for example after a large delete of records or after archiving much of the data out of the database.  These sorts of operations remove data, leaving your databases with a lot of free space.  This free space can be reclaimed by using the DBCC SHRINKFILE or DBCC SHRINKDATABASE T-SQL commands; however, be aware you should re-index after those statements are run.
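As a sketch of what that controlled sequence might look like, assuming a data file with the logical name ChicagoCrimeStats_Data and a table named dbo.Crimes (both placeholders), you could shrink to a target size and then rebuild the affected indexes. Keep in mind the rebuild itself needs free space to work in, so don’t shrink the file to the bone:

USE [ChicagoCrimeStats];
GO
-- Shrink the data file to a target size in MB, rather than reclaiming every last page
DBCC SHRINKFILE (N'ChicagoCrimeStats_Data', 10240);
GO
-- Rebuild the indexes the shrink just fragmented (repeat for each affected table)
ALTER INDEX ALL ON dbo.Crimes REBUILD;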

It is not a bad thing to shrink a database, as long as you do it in a controlled manner with proper maintenance afterwards.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB

Top 10 Tips for SQL Server Performance and Resiliency

This article is part 8 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #8: Default TempDB Settings

By default, when you install SQL Server, the TempDB database is not optimized.  For SQL Servers that use TempDB even moderately, I recommend optimizing it for the best performance.  TempDB is the storage area SQL Server uses to hold data for short periods of time; information found in TempDB includes temporary tables and data being sorted for query results. The data in TempDB is so temporary that each time the SQL Server service restarts, TempDB is recreated.

Due to the specific nature of the data in TempDB and the frequency of its use, it is important to ensure you have the most optimal configuration.

Optimizing TempDB is so important that Microsoft is changing how it is installed with SQL Server 2016 to encourage its optimization.  How do you optimize TempDB?  It’s not difficult at all if you follow these few pointers:

Place your TempDB on the fastest storage you have.  Ultimately this storage should be pounding out latency of less than 5 milliseconds.  Does your server have access to SSD storage?  If so, that is a great place for TempDB.

A great number of studies have been done to determine the ideal number of files you should split your TempDB over.  In my experience, I tend to create one TempDB data file for each processor core on the server; however, I don’t do this until I find there is some contention in TempDB.
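If you are not sure whether you are seeing contention, one quick check I have seen used is to look for sessions currently waiting on page latches in database 2 (TempDB) while the server is under load. A minimal sketch:

-- Sessions currently waiting on pages in TempDB (database_id 2)
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
  AND resource_description LIKE N'2:%';

If this regularly returns rows pointing at TempDB pages, that is a good hint that additional TempDB files may help.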

Grow your TempDB to the size you need.  TempDB is recreated each time the service is restarted, so if the default file size is smaller than its normal operational size, you are going to go through some growth events.  Speaking of growth events, it is better to have controlled growth than a large number of little growth events, so also review the auto-growth size.
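Putting those pointers into T-SQL, the pre-sizing and growth settings look something like this. The sizes, growth increment, and file path below are placeholders to adjust for your own workload; tempdev is the default logical name of the first TempDB data file:

-- Pre-size the existing TempDB data file and set a fixed growth increment
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 512MB);

-- Add a second data file; repeat as needed, keeping all files the same size
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 8192MB, FILEGROWTH = 512MB);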

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans


This article is part 7 in a series on the top 10 most common mistakes that I have seen impacting SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #7: Disaster Recovery Plans

Often people hear “disaster recovery plan” and the first reaction is to start worrying about the cost. Disaster recovery plans don’t have to be expensive; expensive disaster recovery plans come from strict requirements.

About 10 years ago, when I started as an independent consultant, one of my first clients contacted me to help build out a disaster recovery plan. After our initial discussion, I learned some consulting firms had forecasted hundred-thousand-dollar solutions. Many large companies would look at that number and consider it a bargain; however, this client’s company made less than $50k a year. The data changed about once a year, and if the database was down for a week or two, it was questionable whether anyone would even notice. It was easy to see that a hundred-thousand-dollar solution was extremely over-engineered.

Don’t ignore the basics

Disaster recovery solutions should start with two basic questions: what is the recovery point objective, and what is the recovery time objective?

  • RPO – Recovery Point Objective – To what point must the database be restored after a disaster? Another way to ask this question: how much data can be lost?
  • RTO – Recovery Time Objective – How much time can elapse after the disaster has occurred? Or, how long can your system be down?

Depending on these answers, additional questions will arise; however, these two questions help determine which potential solutions will work. SQL Server offers a number of solutions, from transaction log shipping to AlwaysOn Availability Groups.
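One low-tech way to see how you are actually doing against an RPO today is to check how stale your most recent backups are. A sketch against the msdb backup history tables (this assumes the history has not been purged):

-- Most recent backup of each type per database; the gap from now is your worst-case data loss
SELECT database_name,
       type, -- D = full, I = differential, L = log
       MAX(backup_finish_date) AS last_backup
FROM msdb.dbo.backupset
GROUP BY database_name, type
ORDER BY database_name, type;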

Pay Attention to the Details

Whenever I visit a datacenter for a client, I make sure I take some time to review how the cages are wired. On more than one occasion I have seen servers with redundant power supplies that have both power cords plugged into one circuit. This configuration will protect you if one of the power supplies goes bad; however, if the circuit goes down, the redundant power supply isn’t any help.

When executing a disaster recovery plan, ensure all the small details are checked. If there is a single point of failure in the system, Murphy is going to find it.

Test

I can tell you the most common mistake I see on a regular basis with disaster recovery solutions is the lack of testing. Some testing is better than no testing, but the best testing mimics actual disasters. If there is a power outage for your servers and you have 5 minutes to get everything moved, do you know the steps to complete before the uninterruptible power supply loses its charge? What steps must you take if you don’t have the 5 minutes? I was working with the chief technology officer of a major education facility, and he had a vendor telling him he was safe and didn’t have to worry about it. His contract was for a 15-minute recovery point, so we reached out to the vendor and asked them to prove it.

The lesson here is to perform regular, realistic tests; if they don’t work, find out why and fix it.

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History

You might have been following along with my recent series of posts on a number of resiliency tips for SQL Server. I have compiled these tips based on my observations over the last 20 or so years. Over the summer I have the opportunity to present a session based on these tips. This week I am in New York to present at SQL Saturday #380. Attendance is free, with a small fee to help cover the cost of lunch.

On June 6th I will be in Colorado Springs, CO, presenting the same session at SQL Saturday #415.

I believe there are a couple more SQL Saturdays I will be at this summer. I hope to see many old friends and meet many new ones. Tip #7 should be posted Friday, 5/29. If you are looking for a list of the tips that have already been posted, you can find them here:

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History