It was at the PASS Summit 2011, sitting in a session Jason Strate was presenting on extended events, that I made a mental note: why the heck am I not utilizing the DMVs as much as I could be? Then last week I was reading an article on Steve Jones' blog about two DMVs that were released with Microsoft's SQL Server 2008 R2 Service Pack 1.
Dynamic Management Views and Functions return information on the state of the server. They can be used to monitor the health of a server, assist in diagnosing problems, or just help with everyday performance tuning. It is important to note that the views can be removed or altered from one release to the next.
Since the Dynamic Management Views and Functions are broken out into server-scoped and database-scoped objects, you will need to take two permissions into account when running queries against these views and functions:
- VIEW SERVER STATE
- VIEW DATABASE STATE
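Granting those permissions looks something like this (the login and database names here are hypothetical):

```sql
-- Server-scoped DMVs/DMFs require VIEW SERVER STATE
GRANT VIEW SERVER STATE TO [DOMAIN\MonitoringLogin];

-- Database-scoped DMVs/DMFs require VIEW DATABASE STATE in that database
USE [SalesDB];
GRANT VIEW DATABASE STATE TO [DOMAIN\MonitoringLogin];
```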
Querying a view or function is pretty straightforward; an example would be:
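A minimal sketch ~ the DMV is real; the TOP clause and ordering are just my choices:

```sql
-- Top 10 cumulative wait types on the instance (a server-scoped DMV)
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```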
I think Microsoft has done a pretty good job of busting these out into groups, and I have noted them below. The two that were released with Service Pack 1 are for checking the service account.
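If memory serves, the two Service Pack 1 additions are sys.dm_server_services and sys.dm_server_registry; checking the service accounts looks roughly like:

```sql
-- Which account each SQL Server service runs under
-- (sys.dm_server_services was added in 2008 R2 SP1)
SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;
```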
I get a chuckle sometimes at the off-the-wall answers I get when an end user approaches us as DBAs and says, "The whole system is down, what is going on, fix it now." I think that for most of us as DBAs this has happened at some point in our careers. I've learned over my 12 years of experience that the DBA has to dive into a problem and become an investigator. We should already have a pretty good working knowledge of how the front end and back end interact, along with the structure of each. What's the first thing you check when there is a problem?
One of the main things I have done for myself is to keep a checklist I run through when something within the system goes awry. You may ask, "Why do you have this if you consider yourself a strong DBA?" While I am confident in my abilities, we are all human and we make mistakes. A checklist covers the most common mishaps before I have to dive into a bigger investigation, such as going through the plan cache or the error logs. I learned early on to "keep it simple, stupid" ~ just start going through your checklist and rule out the obvious first and foremost.
Another thing I have learned over the years is to treat people with true professionalism. There is an age-old adage about the DBA and the DBA team: developers will always blame the DBAs, and the DBAs will always blame the developers. One of my main goals is to bridge that gap at my current employment. I recently took a poll among my peers ~ developers, business analysts, QA personnel ~ asking what they think of when they hear the name "DBA Team." Some of the responses were surprising and some were not, but I left with the takeaway that the team I'm on has some work to do in bridging gaps. How you treat people when a problem occurs goes a long way, and not everyone has the ability to communicate.
I hope that if you don't have a checklist you'll get one, and I hope that when situations do arise you, as a DBA, treat everyone with true professionalism. In the end we all want the same goal: a good system that performs to the best of its ability.
I enjoy reading what other people in the industry have to say. I might do something one way whereas someone else might do the same thing totally differently, but equally well if not better. I try to read from some of the top DBAs in the industry ~ I have picked up so much from them, including different techniques that have enhanced my capabilities in my own work situations.
That said, I happened to be reading Steve Jones' blog the other night and he laid down a challenge. Well… maybe not a challenge per se, but I think he hit the nail on the head and it challenged me. Many of us have many types of tools at our disposal; I just so happen to have some that Steve mentioned in his blog, from Red Gate. Basically, it is easy for us to get stuck in a rut and rely on the same tools that we've been using for months, years, or even decades.
I personally have the Red Gate Tool Belt at my disposal. Have I used all of the tools? Nope. Have I even opened all of them? Nope. So in reading Steve's blog I felt challenged to test drive each product in the tool belt. Well, you know what ~ IT PAID OFF.
I will try to lay out the scenario in which this has helped me. I have many .SQL files stored in TFS in various directories for replication, along with corresponding index files. When having to blow away replication, or when updates have been made to the files and a deployment is needed, my normal routine would be to go to each directory, pull what I need, place it into an analyzer, and execute my code.
This was a painstakingly slow process. So instead I did the following:
- Created two SSIS packages ~ I was dealing with .SQL files for publications and then files for indexes. Now I get the latest from TFS, execute the first package to grab all my publication files and dump them into one directory (I'll explain why in a second), and the second package grabs the index files and pushes them out to the server in any of my environments.
- I go back to the directory with all my publications and, because I use SQLCMD variables in them, I open each one up in sequential order and uncomment the variables I need. Then I execute each one in an analyzer window, so the scripts run simultaneously.
- I then use the SQL Multi Script utility from Red Gate, which allows me to select the .SQL files I created to start each agent by generating a snapshot automatically, instead of having to start each one manually from the replication monitor.
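The SQLCMD variables mentioned in the second step look roughly like this ~ the variable, database, and publication names here are hypothetical:

```sql
-- Hypothetical publication script using SQLCMD variables.
-- Uncomment the :setvar lines for the environment being deployed.
:setvar PublisherDB "SalesDB"
:setvar Publication "SalesPub"

USE [$(PublisherDB)];

-- Create the publication using the variable values above
EXEC sp_addpublication
    @publication = N'$(Publication)',
    @status = N'active';
```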
Needless to say, I took about a 45-minute process and turned it into a 2-minute process. Using the tools you have available pays off; I'm making a point to go through what I have, even if I've never used it before, and see if I can shore up any other processes.
It’s that time again; SQL Saturday is coming to the Louisville, KY area July 21, 2012. The event will be held at:
The University of Louisville School of Business
2301 3rd Street
Louisville, KY 40202
If you have never been to a SQL Saturday, please check it out. This is a free one-day SQL Server training event for the SQL community that focuses on local speakers, providing a variety of high-quality technical sessions, and making it all happen through the efforts of volunteers. Check out the link below for further details and, if you are feeling giddy enough, ways for you to volunteer. Let's get the word out and make this one of the best SQL Saturdays that has come to Louisville.
For those of you who have read my posts, you will know I am very big on Utility Databases. I like them because there is a plethora of information that can be gathered, which you can trend, baseline, troubleshoot against, etc.
While browsing some forums I came across a good article regarding the Data Collector within SQL Server. I decided in some late-night reading to study up on it, and I found it to be something pretty beneficial in our line of business. This component of SQL Server 2008 collects different data sets. I like it for many reasons; one perk that stood out to me was configuring the different collections on a scheduled basis. You may ask what the difference is between this and the Utility Databases I always speak of. I am glad you asked: my Utility Databases are custom fitted to meet my exact needs; the Data Collector component comes out of the box, you just have to configure it.
I've started utilizing both; configuring the Data Collector takes a matter of minutes, and you can set schedules and retention periods for how long you want to keep your data. I basically set up an empty database shell; the configuration then takes care of creating the necessary stored procedures and schema to support it. Another feature I liked is the three reports that come with it out of the box ~ disk usage, query stats history, and server stats history. Granted, all of this can be gathered via scripts, but why not take advantage of something already built for you.
This link by Brad McGehee provides information on how to set up and configure the Data Collector.
This link on the MSDN site will provide a basic understanding of what the Data Collector does.
After researching, I found that this resides best on a CMS server if you have one. If you are unfamiliar with the CMS (Central Management Server), check out John Sterret's blog. I sat in on one of his sessions at PASS and really enjoyed it.
If you aren't collecting data, you're missing out; if you are collecting data and aren't using these two approaches, take some time to review them ~ one might just meet your needs.
How many times does it happen: you have this masterful plan of what you are going to do as soon as you get into the office and, bam ~ you haven't even made it out of the car and the emails start exploding in.
I would gather that for most of us this occurrence is not out of the norm. In fact, it probably happens more often than not. Today such an issue came up ~ I was anticipating working on some VLFs when a process decided to time out, a process that, we shall say, is of the utmost importance. I will do my best to lay out the scenario.
The issue was narrowed down to one procedure. That procedure had one parameter being passed in. Here are the initial steps that were thought of:
- Check indexes (found one to be missing, so it was added; I will say that Aaron Bertrand's blog, where I found the SQL Sentry utility a few months ago, was very helpful in looking at the estimated and actual execution plans).
This did not improve performance. The app's timeout threshold is 30 seconds; we were consistently hitting over 2 minutes.
- Check the stored procedure ~ nothing really stood out. It selected from some views, but they were pretty straightforward.
- Check for any blocking (this was done multiple times throughout the whole assessment process).
- Check for any other system abnormalities.
So, with the procedure still timing out and not many other options remaining, one of the team members recalled a session they had sat in on ~ parameter sniffing. The parameter, after being read in, was assigned to a local variable inside the stored procedure, and that local variable was then used throughout the procedure where needed.
Upon executing the proc it took a mere second!
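The workaround described above ~ copying the parameter into a local variable so the optimizer can't sniff the specific value passed in ~ can be sketched like this (the procedure, table, and column names are hypothetical):

```sql
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- Copy the parameter into a local variable; the plan is then
    -- compiled against the variable (an unknown value, estimated
    -- from density) instead of the sniffed @CustomerId value.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;
```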
I've done a little bit of reading on parameter sniffing and find it very interesting. I would suggest reading Greg Larson's article ~ he has laid it out very nicely.
This is one aspect I enjoy about being part of a core DBA team. All backgrounds come together and bring different perspectives to solutions. I look forward to the next problem and the next unique solution.