Archive for the ‘dbUtilities’ Category

Sometimes managing multiple filegroups, and the amount of data in them, can cause a few logistical issues. Recently I found myself needing to know which tables are in which filegroups and how big those tables are. With the size of the database and the number of objects in it, I needed a script. The goal of this script is only to list the tables, the sizes of the tables and their indexes, and which filegroup each table is on (not the location of the indexes). Consider it a quick reference. I have not tested it against partitioned tables yet, but I plan to in the future.

To get the script below, I started a Profiler trace and then executed the standard “Disk Usage By Table” report. Once I found the query the report executed, I copied it and added some basic filegroup data collection to it.

SELECT
       --(row_number() over(order by a3.name, a2.name))%2 as l1,
       a3.name AS [schemaname],
       a2.name AS [tablename],
       a5.FileGroup as [filegroup],
       a1.rows as row_count,
       (a1.reserved + ISNULL(a4.reserved,0))* 8 /1024 AS 'reserved (mb)',
       a1.data * 8 /1024 AS 'data (mb)',
       (CASE WHEN (a1.used + ISNULL(a4.used,0)) > a1.data THEN (a1.used + ISNULL(a4.used,0)) - a1.data ELSE 0 END) * 8 /1024 AS 'index_size (mb)',
       (CASE WHEN (a1.reserved + ISNULL(a4.reserved,0)) > a1.used THEN (a1.reserved + ISNULL(a4.reserved,0)) - a1.used ELSE 0 END) * 8 / 1024 AS 'unused (mb)'
FROM
       (SELECT
              ps.object_id,
              SUM (
                     CASE
                           WHEN (ps.index_id < 2) THEN row_count
                           ELSE 0
                     END
                     ) AS [rows],
              SUM (ps.reserved_page_count) AS reserved,
              SUM (
                     CASE
                           WHEN (ps.index_id < 2) THEN (ps.in_row_data_page_count + ps.lob_used_page_count + ps.row_overflow_used_page_count)
                           ELSE (ps.lob_used_page_count + ps.row_overflow_used_page_count)
                     END
                     ) AS data,
              SUM (ps.used_page_count) AS used
       FROM sys.dm_db_partition_stats ps
       GROUP BY ps.object_id) AS a1
LEFT OUTER JOIN
       (SELECT
              it.parent_id,
              SUM(ps.reserved_page_count) AS reserved,
              SUM(ps.used_page_count) AS used
       FROM sys.dm_db_partition_stats ps
       INNER JOIN sys.internal_tables it ON (it.object_id = ps.object_id)
       WHERE it.internal_type IN (202,204)
       GROUP BY it.parent_id) AS a4 ON (a4.parent_id = a1.object_id)
INNER JOIN sys.all_objects a2 ON ( a1.object_id = a2.object_id)
INNER JOIN sys.schemas a3 ON (a2.schema_id = a3.schema_id)
INNER JOIN
       (SELECT
              indx.groupid,
              ds.data_space_id,
              indx.id,
              ds.name AS 'FileGroup',
              ob.name AS 'tblName'
       FROM sysindexes indx
       INNER JOIN sys.data_spaces ds ON (indx.groupid = ds.data_space_id)
       INNER JOIN sysobjects ob ON (ob.id = indx.id)
       WHERE (indx.indid = 0 OR indx.indid = 1)
              AND ob.type = 'U'
       ) AS a5 ON (a2.name = a5.tblName)
WHERE a2.type <> N'S' and a2.type <> N'IT'
ORDER BY a3.name, a2.name
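
As a quick companion check (this is my addition, not part of the report query), I sometimes also want the size of the physical files behind each filegroup. A minimal sketch, run in the database you are reviewing:

SELECT
       fg.name AS [filegroup],
       df.name AS [logical_file],
       df.physical_name,
       df.size * 8 / 1024 AS 'file_size (mb)'
FROM sys.filegroups fg
INNER JOIN sys.database_files df ON (df.data_space_id = fg.data_space_id)
ORDER BY fg.name, df.name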

On December 2, 2013, I posted a blog titled SQL Server Error Log. Often the things I post about are the items that I am using that day, or recently. Many of the topics are simply based on how I have made SQL Server a bit easier for me to use in one way or another. In the SQL Server Error Log post I mentioned a few of the basic things I do when configuring a server to make the Error Log a little easier to navigate, or to get the information I need quicker. It occurred to me, after a question from a well-respected community member, that an example of how I use xp_readerrorlog could add to the usefulness of that post. Thanks for the suggestion.

In that post I recommend tabling your filtered error logs. I am fond of basing all my work on the negative (I will explain more about that in a later post). Working with the negative, for me, means working on the things that I am not expecting. For example, I expect that every hour my transaction logs are going to be backed up; I don't need an alert or an email each time a transaction log backup completes without any issues. I do, however, want the alarms blasting every time they don't execute as planned. I apply the same logic to my error logs, although even if I could remove entries from my error logs I would always want them as they are. To have the best of both worlds, I like to take all the entries that I am not expecting and store them in a table, so I can filter them with a query, join them to other tables to find other relevant events that were occurring at the same time, and perform other tasks that would be a bit difficult with the error logs as they stand.

Where to start?

When I start working with a new server, if I have the opportunity, I like to create a database I call dbUtilities. The tag on this post will show you some other posts where I have used this database. After that database has been created (or you have picked another location to store your table), start with a table similar to this one. I create this table so I can store my information long term. You may want to add some additional fields, such as a primary key.

On a quick side note: you should customize this code to meet your needs; the version posted here is in its simplest form. Make sure you look at the added notes at the bottom of this post before executing this script on your servers.

CREATE TABLE MyErrorlog
(
       LogDate datetime,
       ProcessorInfo VARCHAR(100),
       ErrorMSG VARCHAR(2000)
)
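
If you do add that primary key, one possible variant (my sketch, adjust to your needs) uses an identity column:

CREATE TABLE MyErrorlog
(
       MyErrorlogID INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key; the column name is my choice
       LogDate datetime,
       ProcessorInfo VARCHAR(100),
       ErrorMSG VARCHAR(2000)
)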

Once I have my final destination created, I want to insert all my errors that exist in the error log into a temp table where I can start the filtering process.

DECLARE @Errorlog TABLE
(
       LogDate datetime,
       ProcessorInfo VARCHAR(100),
       ErrorMSG VARCHAR(2000)
)

INSERT INTO @Errorlog
EXEC sp_executesql N'xp_readerrorlog'
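
As a side note, xp_readerrorlog also accepts a set of undocumented, version-dependent parameters (archive number, log type, and search strings among them). I am not relying on them here, but reading only the current SQL Server log looks something like this:

-- Archive 0 = current log; log type 1 = SQL Server (2 = SQL Server Agent)
EXEC xp_readerrorlog 0, 1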

From this point I start to remove the entries that I expect to see; for each server this is on, the filtering could be different. I leave each of the delete statements as an individual statement, for ease of understanding and customizing. If performance is a concern, it would be better to combine the statements (a combined version follows the individual deletes below).

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%Log was backed up%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%Setting database option COMPATIBILITY_LEVEL%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%were backed up%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%DBCC TRACEON%';

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%without errors%';
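
If you do decide to combine the statements as mentioned above, one equivalent form is a single DELETE with the same filters OR'd together:

DELETE FROM @Errorlog
WHERE ErrorMSG LIKE '%Log was backed up%'
       OR ErrorMSG LIKE '%Setting database option COMPATIBILITY_LEVEL%'
       OR ErrorMSG LIKE '%were backed up%'
       OR ErrorMSG LIKE '%DBCC TRACEON%'
       OR ErrorMSG LIKE '%without errors%';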

Once the filtering is done, I insert the remaining rows into the final table. (I create all of this as one stored procedure and execute it once a day; I am really only concerned with the last day's worth of information. A rough sketch of that procedure follows the insert statement below.)

INSERT INTO MyErrorlog
SELECT
       LogDate,
       'Error Log',
       SUBSTRING(ErrorMSG, 1, 2000)
FROM @Errorlog
WHERE LogDate > DATEADD(dd, -1, GETDATE())
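
To give an idea of that once-a-day procedure, here is a rough sketch of the wrapper; the procedure name is mine, and the individual delete statements shown above go where the comment sits:

CREATE PROCEDURE dbo.usp_TableErrorLog
AS
BEGIN
       SET NOCOUNT ON;

       DECLARE @Errorlog TABLE
       (
              LogDate datetime,
              ProcessorInfo VARCHAR(100),
              ErrorMSG VARCHAR(2000)
       )

       INSERT INTO @Errorlog
       EXEC sp_executesql N'xp_readerrorlog'

       -- The delete statements that remove the expected entries go here

       INSERT INTO MyErrorlog
       SELECT LogDate, 'Error Log', SUBSTRING(ErrorMSG, 1, 2000)
       FROM @Errorlog
       WHERE LogDate > DATEADD(dd, -1, GETDATE())
END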

Some additional notes:

  • In the post SQL Server Error Log I mentioned cycling your error log to keep it small and manageable. This can be key when executing xp_readerrorlog: large logs could impact the performance of your system, and the way I use this code I am expecting a smaller log. (A one-line example of cycling the logs follows this list.)
  • In my case, I normally complete an additional step where I insert the rows into a table that I truncate and rebuild each day. I use that table as the base for a Reporting Services report, pull the information from it, and then subscribe to the report. This way I can include other items such as failed jobs, replication status, and failed job executions, to name a few. This is why I use the 'ProcessorInfo' column in the tables; it gives me the space to identify the source of the data in my final table.
  • Don't forget to have an archive plan for your old data. I am a pack rat and hate the idea of getting rid of information; I often get value from trends. So I will copy the data on occasion and store it somewhere offline if I have to.
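
As promised above, cycling the logs is a one-liner for each log; sp_cycle_errorlog starts a new SQL Server error log, and msdb's sp_cycle_agent_errorlog does the same for the Agent log:

EXEC sp_cycle_errorlog;
EXEC msdb.dbo.sp_cycle_agent_errorlog;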

In 1995 or 1996 I had to make a decision about what direction I wanted to take with my career. I could not have been more than 24 or 25 at the time, and I was working as an Operations Director for a small anti-freeze recycling company in Southern California. Anyhow, I was left with a decision: do I learn Oracle or Microsoft SQL Server? The discussion was with a new employer, and that is a whole different story for another time; however, I was making the decision not only for me but for the company as well, because they were looking for my advice and I was going to be the DBA. The decision was made and I started to read books and head to a few training classes. My first task was to create a database and then import a whole ton of data into it. None of those classes prepared me for that first task beyond the basics of installing SQL Server and creating the database and the table I would need. Don't get me wrong, I needed those classes, and without them I would have had a much more difficult time than I did. If I remember correctly, I had seven books on my shelf containing what we now know as Books Online. Those instruction manuals had a good amount of information telling me how to complete a specific task, but no reference for the best way to approach a task. Only one book on my shelf had a chapter about moving data around, and it had about a page on an application called the Bulk Copy Program, or BCP (this is all from memory, so don't hold me to the specifics). I had found my answer.

Here we are some 19 years later, and I found myself using BCP again just the other day. I can tell you that I know many DBAs who are not aware of this tool; it doesn't come with an awesome-looking UI (or any UI at all), and it doesn't have the strength of DTS or SSIS when it comes to manipulating data. But the tool works, and it is fast. If you take some time to think outside the basic parameters, there are some really great uses for BCP, things like archiving data or generating an export for a customer. I do want to make sure I inform you that to get the most out of BCP, just like the other features in SQL Server, you need to take into consideration all the downstream effects. Think about indexes and how they can slow down an insert, or about an identity column. If you prepare correctly, I can assure you that you will not be disappointed.

A few tips that I learned along the way with BCP include:

  • Use a format file. If you are having problems creating one (they are not very forgiving of mistakes), just BCP out the table that you are trying to BCP data into, and use that BCP out step to create a format file for you. I can't tell you how frustrating it used to be to build a format file from scratch and have my package fail because of a simple mistake.
  • Use batch files to call your BCP IN or BCP OUT statements. This makes it a whole lot easier to ensure that you have all the flags set just the way you want them; in addition, you can add tasks like copying the file or importing it elsewhere to automate what you are doing even more. (A rough sketch of such a batch file follows this list.)
  • Native mode is great if you are going from a SQL Server to a SQL Server.
  • If you have mail configured, you can even add a call to send mail in your batch file to notify you when the data copy has completed.
  • Write the errors out to a file; the errors returned by BCP are not always the friendliest.
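
To illustrate the batch file tip above, here is a rough sketch; the server, database, table, and file names are all placeholders, and the flags shown (-T trusted connection, -S server, -n native mode, -f format file, -e error file) are the ones I lean on most:

@echo off
rem Sketch only -- replace the server, table, and file names with your own.
set SRV=MyServer
set TBL=MyDatabase.dbo.MyTable

rem Build a format file from the existing table instead of writing one by hand.
bcp %TBL% format nul -f MyTable.fmt -n -T -S %SRV%

rem Export in native mode, writing any row errors to a file.
bcp %TBL% out MyTable.dat -n -T -S %SRV% -e bcp_out.err

rem Import the file elsewhere using the same format file.
bcp %TBL% in MyTable.dat -f MyTable.fmt -T -S %SRV% -e bcp_in.err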

In the last 19 years that I have been working with BCP it has had some changes, but it is still around and still works like a champ. The changes have been pretty minor and most of them are to keep up with new features that have been added to SQL Server. So if you reach a point where you need to do something in SQL Server, think about all the tools that you have at hand.

I was poking around today on a new tool that I have been using, Remote Desktop Manager by Devolutions. (I posted a blog about it a few months back if you are interested in seeing it.)

My Issue

I have this one server that processes information and plays a critical role in the success of the company. If the database is not performing well and data is not processing as fast as it should, there are SLAs with customers that might not be met. In normal day-to-day business I rely on a stored procedure that I execute in my dbUtilities database; the procedure gets the status, records it, and then evaluates whether I need to be notified. There are times, however, when I am notified that something may not be moving as fast as it could and I should check on it. My reaction is to connect to the server and run the procedure that checks on the status. With time being a critical component of this, I have found that if I was not already connected, my response is delayed by having to create a connection.

My Solution

There are a number of things that I could do to make this a quick and easy check on my server. SQL Server wise, all I am doing is calling a procedure, so I could use many tools to complete the task at hand. However, I spend much of my day going from server to server, and I am already working in Remote Desktop Manager. So I thought it would be really cool if I had a link there that would give me a shortcut to the information I was looking for. I was surprised at how much of a shortcut this turned out to be.

First I created a new session. Under the “Other” section I found a Data Report link. I figured I would give it a shot.

In the screenshot for this case, you will notice a couple of things. For the query section, for security reasons, I created a very generic stored procedure in my test database that looks like this:

USE MSDB
GO

CREATE PROCEDURE EnabledJobs
AS
    SELECT Name
    FROM sysjobs_view
    WHERE enabled = 1

I will call that stored procedure as an example.
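
For reference, calling it is a one-liner, and that same line is exactly what goes into the Data Report query box:

EXEC msdb.dbo.EnabledJobs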

Once I had selected the Data Report, a new box appeared with all the info I needed to make my connection. I named the session, created a new group for it to reside in, and even changed the default image so I could tell that this was a report I use on-demand, not for my day-to-day checks. The server connection dialog box takes the standard connection string information that you have probably seen many times before. In the Query section I added the execution of the stored procedure, which in this example is called EnabledJobs.

The finished output?

 

I now have a new session in my tree, in a folder that I created. Each time I start the session, the window displays the results of the query. In this case it took me less than a second to connect to the server, run the stored procedure, and get the results I needed. That means, in the original case I started with, I either know I have an issue I need to work on, or I know my servers are fine and I can continue on with my day.

As I mentioned in the last post on this tool, there is a lot more that I am not using. If you spend your day going from one server to the next, you might want to give it a try. I have been surprised so far at how much it has made my life easier.

In the new Pro SQL Server 2012 Practices, chapter 9 is all about SQL Server compliance and auditing. It was written by a friend of mine, Jonathan Gardner (B|T), who is based out of New Orleans, LA. So if you don't hold that against him, I think you can find some great information about working with the SQL Server Audit features. This chapter in particular sits well with me because I don't know if DBAs understand how helpful a good audit can be. When I first heard the feature name, the first thing that came to mind was a feature to assist administrators who are working within one of the current audit guidelines, such as Sarbanes-Oxley (SOX), HIPAA, or PCI DSS. Granted, it can do that, but it can do so much more as well. I have even tied an audit into my dbUtilities database, but that is a story for another time.

The second thing that came to mind is the all-too-often-heard question I get when I am talking about SQL Server features: “Is this only available in Enterprise Edition?” Jonathan has been careful to include not only how to set up and configure your audit, but also what has changed with 2012. So if you think you need a copy of Enterprise Edition to make this relevant to you and the organization you work with, I think you are in for a surprise.

The chapter starts with a review of what some of the common audits are and how they impact you as an administrator, then walks through some of the configuration options that you will want to know about. He then walks you through setting up an audit. The chapter comes full circle by ending with some tips on auditing that I had not considered before. There is even a section covering audit group areas that is a must-have if you are working under HIPAA.
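
For a sense of scale, a basic server-level audit is only a few statements. This is just an illustrative sketch (the names and file path are my placeholders); the chapter goes into the real options and audit action groups in far more depth:

USE master;
GO
CREATE SERVER AUDIT Audit_FailedLogins
    TO FILE (FILEPATH = N'C:\AuditLogs\');
GO
CREATE SERVER AUDIT SPECIFICATION AuditSpec_FailedLogins
    FOR SERVER AUDIT Audit_FailedLogins
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
GO
ALTER SERVER AUDIT Audit_FailedLogins WITH (STATE = ON);
GO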

The more I consider the audit features in SQL Server, the harder it is to think of a reason someone might not want to use them. I see benefits in many areas, including troubleshooting. Jonathan does a great job of presenting the information, and the chapter is a great addition to the book.

It is that time of the year when everyone is starting to think about what they need to do for next year. Many of us have time between Christmas and New Year's Day when our systems are not working as hard as they do the rest of the year. I like to take advantage of that time to get a lot of my maintenance completed, when the impact to the end users is even less likely to occur. One of the tasks I make sure to complete that week is my SQL Server health check. There are many aspects to these health checks:

Backups/Restores – I validate that all the backups are working as they should. It is one thing to make sure the backups are still kicking off like they should be, but it is just as important to validate that the backups meet the current recovery requirements. I validate the maximum potential data loss and ensure it meets business requirements.

Growth – Often SQL Server databases are hosted on hardware that just meets the requirements for performance and redundancy. As a database grows and more users rely on it, the requirements may change. It is important to validate that the current host meets the current requirements.

Performance Baselines – Something that I prefer to do during busy times on the server. Perf Mon counters and monitoring products give you part of the picture, but make sure you understand the system as a whole. Talk with the end users and watch performance indicators inside the database, such as how fast the stored procedures are running. It is critical to know what these indicators are, what they mean, and when they are considered good or bad. If you are not sure which indicators to look for, think back to when someone last called to tell you the system was slow: what were they using to make that determination?

Alerts/Notifications – Are you watching for all the alerts that you should be: things like free disk space, database growth, and severity errors over 17? What happens when these alerts fire? Are you notified via email, and is the email configuration correct? (A small sketch of this kind of alert follows the list below.)

Security – Have the permissions of your users changed? What about new logins or group membership?

Documentation – One of the more helpful tools is documentation; it can help when recovering a server, troubleshooting problems, and much more. Yet it is so easy for it to become outdated and forgotten. In your health check, make sure that your documentation is current.

Configurations – Validate the current configuration. Have there been any unexpected changes? Are you seeing anything that could improve or hurt your performance?

POCs – A current list of all your points of contact is helpful when trying to complete tasks quickly. Do you have all the information you need to have backup tapes returned from off-site storage? Do you know the technical support numbers for all the products that you use?

Service Packs and Patches – Are you testing these and staying on top of keeping your systems updated with the most recent releases?
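
As an example of the alert setup I mentioned in the Alerts/Notifications item above, here is a minimal sketch using msdb's sp_add_alert and sp_add_notification; the alert and operator names are placeholders, and it assumes Database Mail and an operator are already configured:

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 017 Errors',
    @severity = 17,
    @delay_between_responses = 300,
    @include_event_description_in = 1;  -- 1 = include the error text in the e-mail

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 017 Errors',
    @operator_name = N'DBA Team',       -- placeholder operator
    @notification_method = 1;           -- 1 = e-mail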

I spend many hours each year performing health checks for many different clients, and this is just a quick insight into how I complete these studies. Yet I am always surprised at how many problems could have been avoided just by performing these simple checks. I hope you enjoy your holidays.

 

The New Book

Posted: October 18, 2012 in dbUtilities, SQLServerPedia Syndication

A number of months ago, Oracle ACE Kellyn Pot’Vin (T|B) talked with me about being an author for an upcoming book about SQL Server best practices. After a number of discussions with the editors, and a lot of deliberation on my part about what I should write about, I joined the team of authors. I have to tell you, this is a great team of authors, and I am honored to even be mentioned in the same discussions as them. Here is the list that I pulled from the Apress site: Bradley Ball, TJay Belt, Glenn Berry, Jes Borland, Carlos Bossy, Louis Davidson, Ben DeBow, Grant Fritchey, Jonathan Gardner, Jeremy Lowell, Jesper Johansen, Wendy Pastrick, Kellyn Pot’vin, Mladen Prajdić, Herve Roggero, Gail Shaw, Jason Strate, and then there is me… Chris Shaw.

One of the biggest challenges for me in committing to work on a book is the amount of time it takes to complete the task. There are a number of authors who can write complete books, continue with other commitments, and still do a great job. For me, it is difficult to make sure that the information I am trying to pass along is clear and easy to read. I end up re-writing so many parts that I think it takes me much longer than other authors. This is why a book like this is so attractive to me: I get to work on one chapter focused on something I am passionate about, without the stress of many more chapters looming over me.

So I am proud to announce that the Pro SQL Server 2012 Practices book is complete. If you have pre-ordered it, I imagine it will not be long before your copy is sent over to you. The book is described as:

“Expert SQL Server Practices 2012 Edition is an anthology of high-end wisdom from a group of accomplished database administrators and developers who are quietly but relentlessly pushing the performance and feature envelope of Microsoft SQL Server 2012.”

I hope you like it, and I look forward to hearing your comments about it. If you haven't ordered it yet, I think you can do so on Amazon here. If you like this style of book, with many authors working on topics they are passionate about, I also worked on the SQL Server MVP Deep Dives Vol. 2 book. All the profits of the Deep Dives book are donated to Operation Smile.