Webcast by Brent Ozar: “How to prove it’s a SAN Problem”

I just found a very cool webcast by Brent Ozar (blog|twitter) in which he shows how we can really dig out the ugly truth around SAN storage and its performance. You probably all know the standard answer — the issue is not in the SAN, it is in your SQL Server… at least that was the answer before this brilliant video (and the scripts provided in the post).

The post and the video can be checked out here.

So much said in just 45 minutes – this webcast is just amazing. I will definitely use it as a starting point whenever I have similar issues.

Enjoy the video and make sure to bookmark all resources provided in the post!


Performance dashboard in SQL Server Management Studio

Amazingly enough, I have just learned of a new, very convenient way of investigating bad performance. Usually when someone says “My SQL is running very slowly”, I log on to the server and start looking at the usual suspects – Task Manager, Activity Monitor and, of course, a bunch of queries against the system DMVs, decoding whatever info I find relevant. Well… shame on me. There are quite cool SQL Server management reports published just for that purpose, and they go under the name of SQL Server Performance Dashboard reports. They are released for SQL Server 2005 and you can download them from this link.

You can also use them with SQL Server 2008, but with some additional edits to the setup script – and it is all very well documented here.
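If you prefer the command line over opening the script in Management Studio, the setup script can be run with sqlcmd. A minimal sketch only – the server name and install path below are examples and will differ on your machine:

```shell
:: Run the Performance Dashboard setup script against the target instance
:: (server name and path are examples - adjust to your environment)
sqlcmd -S MYSERVER\MYINSTANCE -E -i "C:\Program Files\Microsoft SQL Server\90\Tools\PerformanceDashboard\setup.sql"
```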

And for those of you who do not have enough time to play with it on your own, there is a video that shows pretty much everything you need to know about the Performance Dashboard – from setting it up, through using it, to performing analysis.

Of course, these reports just provide you with the data. You will have to interpret it yourself and turn it into proper information so you can react adequately to the situation.

Happy reporting and troubleshooting!

SQL Server performance and troubleshooting tools, tips and tricks

I’ve been planning to write a series of posts on the different tools you can use for performance analysis and troubleshooting, but tonight I found that such a series has already been posted. I have to say it is pretty neat and has a lot of amazing things to read and get to know. It covers RML Utilities, XPerf, Process Monitor, Debug Diagnostics and crash rules. So instead of covering pretty much the same ground, I’d like to share the round-up post of the series so you can enjoy it.

Troubleshooting SQL: Tools Tips and Tricks round-up

I can only add to this that you can also consider PAL 2.0 (PAL@Codeplex), which can help you directly analyze performance counter logs and prepare a template for counter collection. There is one more thing I completely love about this tool – almost all counters have thresholds defined according to best practices, plus an explanation with reference links for each counter. The tool is pretty straightforward, and after the analysis it provides you with a report highlighting the problematic areas (I/O, CPU, memory or network).
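As a rough sketch of the collection side, the counter template that PAL exports can be turned into a Perfmon data collector set with logman. The collector and file names here are made up, and the exact logman syntax varies between Windows versions:

```shell
:: Create a data collector set from a PAL-exported counter template
:: and start collecting (collector and file names are hypothetical)
logman import PAL_Collection -xml C:\temp\PAL_template.xml
logman start PAL_Collection
```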

So read through the post I shared with you and all the tools, tips and tricks it provides, and have a look at the PAL 2.0 tool, which I think you will find very useful!

PSSDIAG and SQLDIAG Manager

Today I am going to share a completely new tool that was released in the last couple of weeks. The tool is called “PSSDIAG/SQLDIAG Manager”. It aims to provide a graphical interface for configuring custom collection sets for the SQLDIAG and PSSDIAG utilities, which Microsoft uses for collecting performance and statistics data. I will talk more about those utilities, their purpose and how to use them, but that is going to be another post. For now I will stick to the new tool I am completely thrilled by.

I have experienced a lot of difficulties configuring any of the predefined .XML files used for SQLDIAG collection, and now that I have a graphical interface, life just got easier. So let’s start with a quick introduction. The tool has a single screen where you can configure everything:

[screenshot: the main configuration screen]

On the top left we have the menu for choosing the platform type – x86, x64 or Itanium. At the top of the screen you can choose the version of SQL Server for which you are preparing the configuration. So far the choices are SQL Server 7.0, 2000, 2005 and 2008. Hopefully we will soon have SQL 11 as well.

I was kind of amazed to see in the top left corner a small edit box for the MS support case that the configuration is prepared for. This will not be used by the general DBA, but it surely shows that the tool is the same one the MS engineers are using (as-is or modified).

Next we have the connection part:

[screenshot: the connection section]

The important thing here is the machine name and instance name configuration. It is important to enter a specific instance name if you are troubleshooting a server with several instances. The machine name comes in useful when you are collecting information for a clustered SQL Server instance.

The other three sections of the screen are for configuring performance counter collection, predefined diagnostics, and traces for the DB Engine and the Analysis Services engine. I still have not played with those enough to share the how and what, but I believe it is going to be easy enough for you to check them out yourself.

Once you have the configuration in place you can proceed with saving it. The screen is as follows:

[screenshot: the save configuration screen]

You can either save just the XML configuration file and use it directly, or generate a CAB file which includes everything needed for immediate execution and starting of the collection on the specified instance. Personally, I prefer the CAB file, as it comes with several other scripts/configurations.

Download the SQLDIAG Manager tool.

Well, from now on you can just enjoy configuring SQLDIAG and generating your own investigation templates and scripts. Even if you are not that keen on altering the defaults, you can still use the tool to enter the machine and instance names if you need to.
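For reference, starting a collection with the generated configuration is a one-liner. A minimal sketch, assuming you saved the configuration as PSSDIAG.XML (the output folder is just an example):

```shell
:: Start a SQLDIAG collection using the generated configuration
:: /I = input configuration file, /O = output folder
sqldiag /I PSSDIAG.XML /O C:\DiagOutput
```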

I have to say that with this tool Microsoft now has a complete set of performance analysis tools that are easy to use, configure and analyze with:

– Configuration of analysis tools – SQLDIAG Manager, PAL Tool

– Analysis – PAL and SQL Nexus

I am planning to do another post on those tools so you can have the big picture in place (if you don’t already have it). So stay tuned!

Error: 30064, Severity: 17, State: 1 in SQL Server Error log

Recently I came across a strange message in SQL Server Error Log:

“SQL Server failed to set security information on the full-text FilterData directory in the FTData folder. Full-text indexing of some types of documents may fail until this issue is resolved. You will need to repair the SQL Server installation.”

 

The error number was 30064 with severity 17. After a quick check on the web I did not find any information on troubleshooting this error, and I was definitely not up for “repairing” my SQL Server installation as the message suggested.

To resolve the problem, check the FTData folder path set in the registry – is the path valid, does the SQL Server service account have enough permissions on that folder, etc. The path to the registry key is:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\<instance id>\Setup\FullTextDefaultPath
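If you would rather check the value from within SQL Server instead of regedit, the undocumented xp_instance_regread extended procedure can read the instance-relative key. Treat this as a sketch only – the procedure is unsupported:

```sql
-- Read FullTextDefaultPath for the current instance
-- (xp_instance_regread is undocumented/unsupported - use with care)
DECLARE @ftpath nvarchar(512);
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Microsoft\MSSQLServer\Setup',
     N'FullTextDefaultPath',
     @ftpath OUTPUT;
-- Verify this folder exists and the service account can access it
SELECT @ftpath AS FullTextDefaultPath;
```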

In my case the value contained an invalid path. Unfortunately, I have no idea how that value got there, as all the other paths were valid. Anyway, if you encounter such an error, do not hurry to repair SQL Server. There might be a simpler solution.

Extracting query plans and statements with execution details

In this article I will share with you a script that extracts the longest running and most CPU-consuming query statements and their plans. As a starting point I used this article. It explains in detail how to extract query statements from a plan. As you can see, it is very useful, but imagine a situation where you do have the query, yet you are not allowed to play with it (or even execute it), while you still need information on the plan, the operations it includes and some optimizer info such as optimization level, cost, estimated number of rows, etc. As this info is already stored in SQL Server and exposed through DMVs, it would be good to be able to take a deeper look at what is going on.

So let’s get into details.

The backbone of the query is from the article mentioned above. I simply added the TOP statements at the bottom, and now I will describe what they mean and how to interpret them. The first part shows the currently running statements, their plans and some details on plan execution and optimization.

The second part extracts the top 10 statements by average duration (again including their plans and optimization level).
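Since the full script lives in the download at the end of the post, here is only a minimal sketch of this second part, built directly on sys.dm_exec_query_stats. The column choices are mine; the real script follows the structure of the referenced article:

```sql
-- Top 10 statements by average duration, with their cached plans
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_duration_microsec,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan                                    -- plan in XML format
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```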

The third and fourth parts focus on the top 50 queries by CPU time and their plans.

All four parts return the query plan in XML format. See an example of the output:

[screenshot: example output showing the statements with their XML query plans]

So, having this information, you can examine the plan, see how the optimizer handled the statement and look for CPU-intensive operations such as Hash Match, for example. By clicking on the query plan you get the XML opened in a new query window. You can save it as a .sqlplan file and analyze it further, or, if you want, you can query the result directly for operations such as Hash Match, Sort, etc.
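Querying for specific operators can also be done with XQuery against the showplan XML instead of eyeballing each plan. A sketch that pulls cached plans containing a Hash Match operator (the TOP 20 cutoff is arbitrary):

```sql
-- Cached plans that contain a Hash Match physical operator
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 20
       qs.execution_count,
       qs.total_worker_time,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('//RelOp[@PhysicalOp="Hash Match"]') = 1
ORDER BY qs.total_worker_time DESC;
```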

The good thing about this method of examination and troubleshooting is that you are working with information about what actually happened, as stored in SQL Server. Usually optimization and monitoring are performed in a controlled environment, which more or less does not reflect reality at 100%. Using such a method allows you to explain performance degradation and provide some good clues to the application team, or to spot some other issue.

 

You can download the script here.