Get drive data for ALL drives in T-SQL

There are a few methods you can use to get drive information using T-SQL. The problem is that all of these methods have limitations: they either provide free space but not drive capacity, or they only provide data for volumes that contain database files. Ideally Microsoft would provide an easy way to get both drive capacity and free space for all drives. In this article I'll demonstrate a way to do this using T-SQL.

Here are a few existing methods that you might consider before we continue:

- xp_fixeddrives: returns free space per drive, but not capacity.
- sys.dm_os_volume_stats: returns capacity and free space, but only for volumes that contain database files.

If these work for your needs, great! If you need capacity and free space for ALL drives you will need a different solution. The best option might be to query that data outside of SQL Server using PowerShell or your programming/scripting language of choice. If you need to do this in T-SQL though, the script below provides a way.

The script uses xp_cmdshell to run a PowerShell command to get data for ALL the volumes on your system. PowerShell formats this data as XML, which we capture in the @output table variable, one row per line of XML. We then convert this back to a single string, store it as XML in the variable @x, and shred the XML into a table format. This is a bit of a hack; sys.dm_os_volume_stats is a cleaner method but doesn't give you data for all drives. As a bonus you get more detailed data about those drives: file system, allocation unit size and more.
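The full script isn't reproduced here, but the sketch below shows the technique under a couple of assumptions: xp_cmdshell is enabled, and the Get-Volume cmdlet is available (Windows Server 2012 or later). The column names come from Get-Volume; the shredding layout follows the element structure that ConvertTo-Xml emits.

```sql
-- Sketch only: capture Get-Volume output as XML via xp_cmdshell and shred it.
-- Assumes xp_cmdshell is enabled and Get-Volume is available on the host.
DECLARE @output TABLE (id INT IDENTITY(1,1), line NVARCHAR(4000));

INSERT INTO @output (line)
EXEC xp_cmdshell 'powershell.exe -NoProfile -Command "Get-Volume | Select-Object DriveLetter, FileSystem, AllocationUnitSize, Size, SizeRemaining | ConvertTo-Xml -As String -NoTypeInformation"';

-- Reassemble the line-per-row output into one string, skipping the
-- <?xml ... encoding="utf-8"?> declaration (it breaks the cast to XML).
DECLARE @x XML;
SELECT @x = CAST((SELECT line + CHAR(10)
                  FROM @output
                  WHERE line IS NOT NULL
                    AND line NOT LIKE '<?xml%'
                  ORDER BY id
                  FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)') AS XML);

-- Shred the XML: one row per volume
SELECT o.n.value('(Property[@Name="DriveLetter"]/text())[1]', 'NVARCHAR(4)')    AS DriveLetter,
       o.n.value('(Property[@Name="FileSystem"]/text())[1]', 'NVARCHAR(32)')    AS FileSystem,
       o.n.value('(Property[@Name="AllocationUnitSize"]/text())[1]', 'BIGINT')  AS AllocationUnitSize,
       o.n.value('(Property[@Name="Size"]/text())[1]', 'BIGINT')                AS CapacityBytes,
       o.n.value('(Property[@Name="SizeRemaining"]/text())[1]', 'BIGINT')       AS FreeBytes
FROM @x.nodes('/Objects/Object') AS o(n);
```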

A similar technique could be used to capture the output of other PowerShell commands in T-SQL. It might not always be a good idea, but I think it is an interesting technique to demo.

How to handle SQL Agent Jobs with Always On Availability Groups?

Introduction

When you configure your SQL Server instances for high availability, you want failover to be as seamless as possible. If you use availability groups, mirroring or log shipping as part of your high availability strategy, you will need to take care of anything that lives outside the context of your databases. That might include logins, SQL Server Agent jobs and ensuring consistent server-level configuration across your availability nodes. This article will discuss how to handle SQL Server Agent jobs.

Not so seamless failover of agent jobs

If you have any SQL Agent jobs that run in the context of your user databases, these will need special handling to ensure a seamless failover. Ideally Microsoft would have added some improvements to the SQL Agent to help with this. Maybe given us agent jobs that can live within the context of a user database instead of msdb?

As it stands, you will need to keep your agent jobs in sync between the primary and secondaries, and figure out a mechanism to activate the jobs on the new primary following a failover and disable them on the old primary.

A Solution

There are a few ways to handle this, but my approach is to run all the jobs in the context of the master database. Each job first checks if the user database is accessible and, if so, executes the command using a three-part name. See the script below for an example. This works for availability groups, mirroring and log shipping.
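The full script isn't shown in this excerpt, but a minimal sketch of the pattern looks like the following. The database and procedure names are placeholders; DATABASEPROPERTYEX returns 'READ_WRITE' only on a writable (primary) copy, so secondaries and restoring databases skip the work.

```sql
-- Minimal sketch: job step runs in master and only does work when
-- YourUserDb is online and writable (i.e. this node is the primary).
-- YourUserDb and dbo.YourMaintenanceProc are placeholder names.
IF DATABASEPROPERTYEX(N'YourUserDb', 'Status') = 'ONLINE'
   AND DATABASEPROPERTYEX(N'YourUserDb', 'Updateability') = 'READ_WRITE'
BEGIN
    -- Three-part name so the job itself can live outside the user database
    EXEC YourUserDb.dbo.YourMaintenanceProc;
END
```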

With this approach you can create your jobs and have them active on the primary and the secondaries. You don't need to worry about enabling or disabling jobs on failover: the jobs are already running, but they only do any work if the database is primary. This makes the failover totally seamless. The downside is that the jobs will report success even when they didn't do any work, which could cause confusion.

The next problem is keeping the jobs in sync between your servers. You can script out the jobs using SSMS easily enough, but this can get tedious if you have a large number of jobs. I've created a PowerShell script, included below, to help with this. You can use it to report whether your jobs are out of sync and also to perform a synchronization. See the comments at the top of the script for usage examples.
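As a rough illustration of the comparison side only (this is not the author's script): given a linked server pointing at the other replica, you can diff job names and modification dates straight from msdb. The [SECONDARY] linked server name is a placeholder.

```sql
-- Sketch: report jobs that exist on only one server, or whose
-- modification dates differ. [SECONDARY] is a placeholder linked server.
SELECT COALESCE(p.name, s.name) AS job_name,
       p.date_modified          AS primary_modified,
       s.date_modified          AS secondary_modified
FROM msdb.dbo.sysjobs AS p
FULL OUTER JOIN [SECONDARY].msdb.dbo.sysjobs AS s
     ON p.name = s.name
WHERE p.job_id IS NULL
   OR s.job_id IS NULL
   OR p.date_modified <> s.date_modified;
```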

Set Max Server Memory

Set Max Server Memory at Startup

Max server memory is one of the SQL Server configuration options that you should change from the default value. Brent Ozar has some good advice on this here. TL;DR: you should leave some extra memory for the OS.

A good starting point is to leave the OS 10% of total memory (4GB minimum). For example, on a 64GB (65536 MB) server you would leave about 6554 MB, giving a max server memory of roughly 58982 MB. If you are running other processes on your SQL Server instances, like SSRS or SSAS, or if you are running multiple instances, you might need to leave a lot more.

You can easily set max server memory to your desired value in MB using sp_configure or SSMS, and simply forget about it:
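For example (the 28672 MB value is just illustrative; pick a number that fits your server):

```sql
-- Set max server memory; the value is in MB (28672 MB = 28 GB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```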

But what if you add or remove memory from your SQL instance? This is a setting that you need to remember to change. Gone are the days when you needed to physically open your servers and install new DIMMs. With the advent of cloud and hypervisors, adding memory can be as easy as a few mouse clicks or running a script.

The point of this article is to suggest doing this with a script at startup. If you scale up or down to a new instance type in the cloud, your max server memory can automatically be set to an appropriate value by a startup script. It's one less thing to think about.

Show me the code!

This script will apply the 10%/4GB rule. You could adjust the formula as required.
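The original script isn't reproduced in this excerpt; a minimal version applying the same rule might look like this (it reads total physical memory from sys.dm_os_sys_memory):

```sql
-- Sketch: reserve max(10% of physical memory, 4096 MB) for the OS and
-- give SQL Server the rest. Adjust the formula to suit your environment.
DECLARE @physMemMB INT, @reserveMB INT, @maxMemMB INT;

SELECT @physMemMB = total_physical_memory_kb / 1024
FROM sys.dm_os_sys_memory;

SET @reserveMB = CASE WHEN @physMemMB / 10 > 4096
                      THEN @physMemMB / 10
                      ELSE 4096 END;
SET @maxMemMB = @physMemMB - @reserveMB;

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', @maxMemMB;
RECONFIGURE;
```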

You just need to run this as a SQL Agent job at startup, which you can do using the script below:
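The job-creation script isn't included in this excerpt; a sketch using the standard msdb procedures would look like the following. The job, step and schedule names are placeholders, and the max-memory script from above goes in @command.

```sql
-- Sketch: create an agent job that runs when the SQL Server Agent
-- service starts (@freq_type = 64). Names below are placeholders.
USE msdb;
GO
EXEC dbo.sp_add_job      @job_name = N'Set Max Server Memory At Startup';
EXEC dbo.sp_add_jobstep  @job_name = N'Set Max Server Memory At Startup',
                         @step_name = N'Apply 10 percent / 4GB rule',
                         @subsystem = N'TSQL',
                         @command   = N'/* max server memory script from above goes here */',
                         @database_name = N'master';
EXEC dbo.sp_add_schedule @schedule_name = N'AgentStartup',
                         @freq_type = 64;  -- run at agent startup
EXEC dbo.sp_attach_schedule @job_name = N'Set Max Server Memory At Startup',
                            @schedule_name = N'AgentStartup';
EXEC dbo.sp_add_jobserver   @job_name = N'Set Max Server Memory At Startup';
GO
```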