Archive for the ‘SQL Server 2005’ Category

Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance jobs so they finish before the busy time. For example, what if you need to back up your database, then re-index it, and follow that up with a consistency check?

The common approach to this problem is to determine how long a job takes to run (often by trial and error) and then adjust the start time of each job to give it enough time to finish before the next job starts. The problem with this method is that you are hoping the first job in the chain completes on time before the second job begins, so the common workaround is to leave large gaps so that one long-running job does not step on the next. However, there are better options…

Option 1

If you are using a maintenance plan, you can keep all the tasks that are scheduled to run at the same time in the same sub-plan. Sometimes this does not provide the flexibility that individuals want, but it is an effective method.

Option 2

You can create multiple steps in a single job. If we use the example above, where you want to run a backup, then re-index, and then DBCC, you can create three different steps; that way, as soon as one step completes, the next step is executed. This method removes the need to guess when one job will finish and the next should start.
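If you prefer to script this rather than build it in the job designer, a rough sketch of a three-step job might look like the following. The job name, step commands, database, and file path are all placeholders; substitute whatever backup, re-indexing, and DBCC routines you actually use.

USE msdb;
GO

-- Create one job that runs the three maintenance tasks back to back.
EXEC dbo.sp_add_job @job_name = N'Nightly Maintenance';

-- Step 1: backup (placeholder command)
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Backup',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE MyDatabase TO DISK = N''D:\Backups\MyDatabase.bak'';',
    @on_success_action = 3;   -- 3 = go to the next step

-- Step 2: re-index (placeholder command)
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Re-index',
    @subsystem = N'TSQL',
    @command = N'ALTER INDEX ALL ON MyDatabase.dbo.MyTable REBUILD;',
    @on_success_action = 3;

-- Step 3: consistency check (placeholder command)
EXEC dbo.sp_add_jobstep @job_name = N'Nightly Maintenance',
    @step_name = N'Consistency check',
    @subsystem = N'TSQL',
    @command = N'DBCC CHECKDB (MyDatabase);',
    @on_success_action = 1;   -- 1 = quit reporting success

-- Tie the job to this server so the Agent will run it.
EXEC dbo.sp_add_jobserver @job_name = N'Nightly Maintenance', @server_name = N'(local)';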

Option 3

Each task could have its own job, and the last step of each job would start the next job. This adds a lot of flexibility to your maintenance. I like to use this in a couple of different kinds of situations:

  1. If your maintenance is done using multiple tools, for example a Red Gate backup, a custom re-indexing plan, and a simple T-SQL script to run a consistency check.
  2. If your maintenance is done across multiple servers. If you have three servers that all back up to the same network share, you could have one server execute at a time so you do not clog up the network and the storage.

Adding a step to execute the next job is pretty simple.

exec msdb.dbo.sp_start_job @job_name = N'My Job Name'


If you need to schedule this to occur across servers, you can simply make the call to the other server using a linked server.
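For example, assuming a linked server named ServerB has already been created (with RPC Out enabled), the last step of the job on the first server could simply call the second server's job:

EXEC [ServerB].msdb.dbo.sp_start_job @job_name = N'My Job Name';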

I hope this tip has helped you in one fashion or another.


Maintenance in general is a necessity for SQL Server, no different than changing the oil in your car or going to the doctor for an annual exam. There are going to be times when you need to run maintenance on your server. The tricky part is determining when to start the maintenance so that it completes before the busy time. The common approach to this problem is to determine how long a job takes to run (often by trial and error) and then adjust the start time to give the job enough time to execute. There is another way…

SQL Server has a number of system stored procedures that you can use to perform tasks that you might otherwise do in the user interface. For example, if you want to stop a job you can open SQL Server Management Studio, navigate to the job, right click, and stop the job. Here is where the system supplied stored procedures come into play. What if your busy time of the day starts at 6:00 AM, and you want to make sure that the indexing has finished by 5:00 AM so that the system is ready to take on the day? Do you really want to wake up at 5:00 AM just to right click and stop the job, on the chance that it is still running?

Simply schedule a job that will execute at 5:00 AM (the time you want the maintenance job to be done by), and create a step that will stop the job.

exec msdb.dbo.sp_stop_job @job_name = N'My Job Name'

Not too complex. But what if you want to add some logic so that the job does not just try to stop the other job, but first checks whether it is actually executing? And while we are looking at the options, we should add a line of code that will email us whenever the maintenance job has run long and had to be stopped.

SELECT name
FROM msdb..sysjobs j
JOIN msdb..sysjobactivity a ON j.job_id = a.job_id AND j.name = 'My Job Name'
WHERE start_execution_date IS NOT NULL AND stop_execution_date IS NULL

IF @@ROWCOUNT > 0
BEGIN
    EXEC msdb.dbo.sp_stop_job @job_name = 'My Job Name'
    EXEC msdb.dbo.sp_send_dbmail @profile_name = 'MyMailProfile', @recipients = 'Me@xtivia.com',
        @body = 'The Indexing Rebuild Job had to be stopped due to long run time.', @subject = 'Index Rebuild';
END
ELSE
    RETURN

I hope this tip has helped you in one fashion or another. If you would like my list of TOP 10 TIPS FOR SQL SERVER PERFORMANCE AND RESILIENCY, it can be found here, starting with Tip #1.


Have you heard, and do you understand, why running a 'select *' against any table can have a negative impact on performance? While I was doing some demo prep for my upcoming PASS Summit pre-conference session, I saw this behavior at its worst.
Before we jump into why this has such a negative impact on your performance, I should tell you about my test environment. I have a table with 4.6 million records in it. The table has the following structure:
CREATE TABLE [dbo].[Crimes](
[ID] [varchar](500) NULL,
[CaseNumber] [varchar](500) NULL,
[Date] [varchar](500) NULL,
[Block] [varchar](500) NULL,
[IUCR] [varchar](500) NULL,
[PrimaryType] [varchar](500) NULL,
[Description] [varchar](1000) NULL,
[LocationDescription] [varchar](500) NULL,
[Arrest] [varchar](500) NULL,
[Domestic] [varchar](500) NULL,
[Beat] [varchar](500) NULL,
[District] [varchar](500) NULL,
[Ward] [varchar](500) NULL,
[CommunityArea] [varchar](500) NULL,
[FBICode] [varchar](500) NULL,
[XCoordinate] [varchar](500) NULL,
[YCoordinate] [varchar](500) NULL,
[Year] [varchar](500) NULL,
[UpdatedOn] [varchar](500) NULL,
[Latitude] [varchar](500) NULL,
[Longitude] [varchar](500) NULL,
[Location] [varchar](500) NULL,
[BeatInt] [int] NULL
) ON [PRIMARY]

(One thing that you may have already noticed is all the varchar columns. The reason all the data types are variable characters is due to the poor source file I was working from. Data that I would expect to be all numeric had some text in it.)

I then added an index to improve query performance.
Create Index beat on crimes (beat)

If I want to get all the crimes in a beat, I can do this in a couple of ways. First, I can try the Select * method. The '*' indicates I want all the columns returned.
Select * from Crimes
where beat like '1533'

My test execution took 17 seconds. I then changed the query to return only the columns that I needed, replacing the * with the specific column names:
Select PrimaryType,Description from Crimes
where beat like '1533'

The results were returned in 11 seconds rather than 17 (roughly a 35% improvement). One reason for this is simply that the amount of data that needs to be read from storage is trimmed down. Both queries returned approximately 27 thousand rows.
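If you want to verify the difference on your own system rather than relying on the clock, SET STATISTICS IO and SET STATISTICS TIME will report the reads and the CPU/elapsed time for each version of the query. This is just the two test queries from above wrapped in the measurement commands:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- All columns
SELECT * FROM Crimes WHERE beat LIKE '1533';

-- Only the columns that are needed
SELECT PrimaryType, Description FROM Crimes WHERE beat LIKE '1533';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;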
Let us consider one more test. This time we will explicitly list out each of the columns in our select clause. This is more or less doing the exact same thing as Select *.
Select
[ID] ,[CaseNumber],[Date] ,[Block] ,[IUCR] ,[PrimaryType] ,[Description],[LocationDescription] ,[Arrest] ,
[Domestic] ,[Beat] ,[District] ,[Ward] ,
[CommunityArea] ,[FBICode] ,[XCoordinate] ,[YCoordinate] ,[Year] ,
[UpdatedOn] ,[Latitude] ,[Longitude] ,[Location] ,[BeatInt]
from Crimes
where beat like '1533'

When each of the columns is listed out, the query takes just as long as it does with a Select *. Even though the two queries are comparable, I prefer to see the columns listed out. One of the reasons has to do with future compatibility: if sometime down the road a column gets added to the table, I don't have to worry about sending more data to an application front end than it needs.
The cause of the performance increase is simple: there is less data to read from storage and less data to transfer to the client. When retrieving data from SQL Server, use the query to limit your results to just the data you need, and the overall performance should be better than if you were to bring back all the data.
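As a side note, if the narrow version of this query is one you run all the time, a covering index is one way to cut the reads even further, because PrimaryType and Description can then be returned straight from the index without touching the rest of the row. This was not part of the test above; it is only a sketch of the idea, using the INCLUDE clause that SQL Server 2005 and later support:

-- Sketch only: an index on Beat that also carries the two columns the query returns
CREATE INDEX beat_covering ON dbo.Crimes (Beat) INCLUDE (PrimaryType, Description);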
These examples show drastic differences. It is easy to see how performance can be improved when you are looking at a result set of 27,000 records. Other areas where you will see improved performance by removing the 'Select *' (that we have not talked about in this post) include:

• Less data to be transferred over internal networks
• Less data to load into applications
• Less data to send over the internet
This is one of many small adjustments that can make a big difference in how your SQL Server performs.

On December 2, 2013 I posted a blog titled SQL Server Error Log. Often the things I post about are the items that I am using that day, or have used recently. Many of the topics are simply based on how I have made using SQL Server a bit easier for me in one way or another. In the SQL Server Error Log post I mentioned a few of the basic things I do when configuring a server to make the error log a little easier to navigate, or to get the information I need quicker. It occurred to me, after a question from a well-respected community member, that an example of how I use xp_readerrorlog could add to the usefulness of the post. Thanks for the suggestion.

In the post I recommend tabling your filtered error logs. I am fond of basing all my work on the negative (I will explain more of that in a later post). Working with the negative, for me, means working on the things that I am not expecting. For example, I expect that every hour my transaction logs are going to be backed up; I don't need an alert or an email each time a transaction log backup completes without any issues. I do, however, want the alarms blasting every time they don't execute as planned. I apply the same logic to my error logs; however, even if I could remove entries from my error logs, I always want them as they are. In order to have the best of both worlds, I like to take all the entries that I am not expecting and store them in a table so I can filter them with a query, join them to other tables to get other relevant events that were occurring at that time, and perform other tasks that may be a bit difficult with the error logs as they stand.

Where to start?

When I start working with a new server, if I have the opportunity I like to create a database I call dbUtilities. The Tag on this post will show you some other posts where I have used this database. After that database has been created, or you have picked another location to store your table, start with a table that is similar to this one. I create this table so I can store my information long term. You may want to add some additional fields such as a PK.

On a quick side note, you should customize this code to meet your needs; the posted version here is in the simplest form. Make sure you look at the added notes at the bottom of this post before executing this script on your servers.

CREATE TABLE MyErrorlog
    (LogDate datetime,
     ProcessorInfo VARCHAR(100),
     ErrorMSG VARCHAR(2000))

Once I have my final destination created, I want to insert all my errors that exist in the error log into a temp table where I can start the filtering process.

DECLARE @Errorlog TABLE
    (LogDate datetime,
     ProcessorInfo VARCHAR(100),
     ErrorMSG VARCHAR(2000))

INSERT INTO @Errorlog
EXEC sp_executesql N'xp_readerrorlog'

From this point I start to remove the entries that I expect to see; the filtering could be different for each server this is on. I leave each of the delete statements as an individual statement for ease of understanding and customizing. If performance is of any concern, it would be better to combine the statements.

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%Log was backed up%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%Setting database option COMPATIBILITY_LEVEL%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%were backed up%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%DBCC TRACEON%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%without errors%';

Once the filtering is done, I insert these rows into a final table. (I create this all as one stored procedure and execute it once a day; I am really only concerned with the last day's worth of information.)

INSERT INTO MyErrorlog
SELECT LogDate,
       'Error Log',
       SUBSTRING(ErrorMSG, 1, 2000)
FROM @Errorlog
WHERE LogDate > DATEADD(dd, -1, GETDATE())

Some additional notes:

  • In the post SQL Server Error Log, I mentioned cycling your error log to keep it small and manageable. This can be key when executing xp_readerrorlog: large logs can impact the performance of your system, and the way I use this code, I am expecting a smaller log.
  • In my case, I normally complete an additional step where I insert the rows into a table that I truncate and rebuild each day. The purpose of this table is to serve as a base for a Reporting Services report; I pull information from that table and then subscribe to the report. This way I can also include other items such as failed jobs, replication status, and job executions that have failed, to name a few. This is why I use the 'ProcessorInfo' column in the tables: it gives me the space to identify the source of the data in my final table.
  • Don't forget to have an archive plan for your old data. I am a pack rat and hate the idea of getting rid of information; I often get value from trends. So I will copy the data on occasion and store it somewhere offline if I have to. A quick example of trimming the permanent table follows this list.
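As a small example of the retention point above, a scheduled statement like this keeps the permanent table from growing forever once the older data has been copied off (the 60-day window is just a placeholder):

-- Example only: trim MyErrorlog to the last 60 days
DELETE FROM MyErrorlog
WHERE LogDate < DATEADD(dd, -60, GETDATE());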

Often when I am requested to look at an issue with SQL Server I have to spend some time in the error log.

On a side note, I think the log itself is a commonly overlooked tool for troubleshooting. Often it can be a bit cumbersome to work with if the default settings are used.

When SQL Server is installed, the error log is configured to start a new log when the service is restarted, and only a handful of logs are retained long term. In some cases where troubleshooting has led people to restart the service, or the server, a number of times, valuable information can be lost simply because of the number of error logs that are retained.

On the other side of the spectrum, if the server has been online for quite some time, the error log can grow to an extremely large size. Depending on a few settings that may have been made elsewhere, many of these log entries are not much assistance in working through issues. Imagine having an error log with half a million rows; even though those rows are sorted by date and time, you still have to scan the log to see when problems started. And while you are considering how you might filter the log, or copy it to a format that is easier to manage and review, don't forget that a log that size will need to be copied or loaded, and sometimes both. Many times this has to be done while there is an issue of some sort that may well be causing production problems, and time is critical (nothing like having management wanting a problem solved while you wait for a huge error log file to load). The good news is there are easier ways to do this. Here are a few tips I like to stick to with my error logs.

  • Cycle your error log – Large error logs can be avoided by taking a couple of steps, one of which is simply cycling the error log. Executing sp_cycle_errorlog will simply start a new log, and I like to schedule this to happen on a regular basis, for example every Sunday night (see the sketch after this list). On servers that have a lot of error log entries, because of security logging or frequent transaction log backups, I may even consider cycling the error log more often.
  • Change the retention – By default you are going to have 7 error logs. If you are considering cycling your error log (I believe you should), I would recommend also changing the retention of your error logs; often I will keep 60 days' worth of logs, but your requirements may be different. The easiest way to do this is to right click the Error Logs folder in Management Studio, select Configure, and increase the maximum number of error logs before they are recycled.
  • Table filtered logs – One thing I like to do, purely for troubleshooting on super busy systems, is to use the stored procedure xp_readerrorlog to insert the rows of data into a table. When I do this, I can then filter out the items that I don't need to see. This is also an easy way to compile logs from multiple servers into one table and, used in combination with a mail profile, send yourself the results. This will keep you from having to review many error logs on many servers. Often what I do is use the stored procedure to insert into a temp table, and since this is a filtered error log, I remove the known entries that I am aware of, items like successful transaction log backups and logon attempts (I often count those before I remove them to make sure the failed ones are not indicating another issue) and other repeated entries. Once I filter the temp table of everything I am aware of, I then insert the log into a permanent table for safekeeping. The permanent table is then managed to a retention time as well. Remember this table is a filtered view; I use it in place of the error log for quick access so I can filter it as I would any other table, but it does not replace having the detailed error log. It just gives me more filtering options (like joining to job history when a job keeps failing).
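As a rough sketch of the first bullet, cycling the log is a single call that can be dropped into a weekly SQL Agent job step; the schedule itself is whatever fits your environment:

-- Start a new SQL Server error log (scheduled weekly, for example every Sunday night)
EXEC master.dbo.sp_cycle_errorlog;

-- The SQL Server Agent log can be cycled the same way if it gets noisy
EXEC msdb.dbo.sp_cycle_agent_errorlog;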

The end result is that when the log grows to a size like that, it is frustrating to use it to find the information that is critical. Managing your error log is important, and it will make your systems easier to manage and troubleshoot.

** Note: Fixed a typo; changed sp_readerrorlog to xp_readerrorlog.

In the United States of America you are innocent until found guilty. I want to make sure that I am very clear on this fact, because one of the many databases involved in protecting this country came under fire recently. I have done a bit of research, and there is not a lot of information that has been released about this. But I can tell you that there is an individual who, in his own description, calls himself a Systems Analyst at the Department of Homeland Security; I will call him "Bob" to protect his rights.

From what I have gathered, "Bob," who worked with the TSA database, found out that he was being let go. It appears that "Bob" was not too happy about this and tried to take the database down. According to a few articles that I have found, it looks as if the database was the one used to protect us as we take flights around this country…

So I have to ask: are we being serious enough about permissions? Is our data secure? Is your company's data secure? This is not a joke, and the data that we keep nowadays is more important than ever. But I am always surprised at how many databases are left open. In this case "Bob" may have been a DBA and had all the permissions that he needed.

But I ask you to remember:

  • When someone turns in two weeks' notice as a DBA, don't be surprised when they are walked out the door that second.
  • Next time you are audited and the auditors are being a real pain, hope they are the ones that audit the database where your data is stored.
  • If you wonder why developers are not in production and only Admins and Change Managers can change production, it is to limit liability; it's not a matter of trust.

Just in case you were wondering, "Bob" is up on two charges of the Federal kind. These are the kind of charges that can put you in a very bad place located underground. According to the LinkedIn page that I found, "Bob" has been working as a consultant for the last 6 months. (I found "Bob's" LinkedIn page by name, city, and past employment.)

Have you checked your consultants recently?

Story in EarthTimes

Denver Post

Computer World


So I started to do some research on the issue that I was looking at yesterday (by the way, I believe we are still seeing the problem, but that has not been validated by the client). This all got me thinking that I needed to take a new approach. The interesting thing here is that I pulled from 3 sources, none of them Books Online. The thought process that started all this was a server upgrade. As I mentioned earlier, there are a lot of memory problems on this server. The big thing I failed to mention is that before the migration we were on a 32 bit system, and we upgraded it to a 64 bit system. Here is where I am not sure I understand all of this, and I am not sure how to approach it. Let's start with the server configuration.

  • 64 bit server
  • 8 Gigs of memory
  • AWE switch is on
  • SQL Server Enterprise
  • SQL Server is Clustered
  • Windows 2003 Enterprise.

Ok so here is the big question…

We are on a 64 bit system, which, if I remember off the top of my head, will support 32 gigs of memory, rather than the 2 that comes with the 32 bit system. So why did the hosting company configure it to use AWE memory? Is it needed? SQL Server will use it if it sees it. Now I know there is more behind the AWE option than just turning it on, but here is what I found.

  • One article from a SQL Server MVP says that you should not turn it on; it increases the amount of overhead that you do not need.
  • One blog from a Microsoft SQL Server engineer says leave it on.
  • One article says leave it on, but then give only 75% of the memory to SQL Server because the OS will need the rest.

I find that there is not as much information as I would like on how to monitor SQL Server's memory and on how SQL Server uses its memory. I am going to try to get in touch with a couple of people that I know today, but I think my recommendation at this point is to turn off AWE for 24 hours and see how it runs. This goes back to the baseline that I was talking about; this way I can see if it made a positive difference, a negative difference, or no difference at all.
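For anyone who wants to look at the same settings on their own server, sp_configure will show (and change) both the AWE flag and the memory caps. The statements below are only an example of that check, not a recommendation for any particular value:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Current values
EXEC sp_configure 'awe enabled';
EXEC sp_configure 'max server memory (MB)';

-- Example only: turn AWE off for the 24-hour test described above
-- (this setting does not take effect until the SQL Server service is restarted)
EXEC sp_configure 'awe enabled', 0;
RECONFIGURE;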

At this point I can understand the client's concern. I shared with them that if there is a code problem, upgrading to a bigger and better server may not help at all. At best it will speed the system up, but when the load increases in proportion to the system, we may see the problem again.

I am interested in what you think. If you have time drop a comment in here for me.

Thanks,

Chris


I have read it about a thousand times in Books Online but never really understood why, or even really seen, the impact of memory management in a cluster with SQL Server. Books Online, at least for SQL Server 2000, says that if you use AWE memory you should not let SQL Server dynamically manage your memory. I understand how the min memory setting works and I have a good understanding of the max memory setting. However, I have seen clusters configured both to dynamically manage the memory and to have a defined min and max. I had never seen the difference and the impact until this week, or at least I believe this could be part of the problem.

I was working on a cluster this week, and the max memory was set at that huge 64 gig (I think) default. The server was a SQL Server 2005 cluster. Memory was being used to the max, and there was a lot of paging occurring. The page file was set to 6 gigs, 1.5 times the memory in the server. The interesting thing that I found was that once the server started to hit that full 6 gigs' worth of page file usage, the server would fail over. After some discussion with the hosting company's SQL Server team, I noticed the 64 gig max memory setting. I talked to an MVP friend of mine, and it was clear that this configuration could be what was causing the failovers. SQL Server thought it had more memory, but the OS was not having any of that.

This weekend the server was upgraded to 8 gigs of memory. So far there are no memory problems; however, we still have not set the max allotment, since we are waiting for the client to approve.
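When the client does approve, capping the memory is a quick change. The value below is only an example for an 8 gig server (leaving roughly 2 gigs for the operating system); size it for the actual workload:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: cap SQL Server at 6 GB on an 8 GB server
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;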