Category: SQL

Gathering AG Information – sp_GetAGInformation

The release of SQL 2012 brought about SQL Server AlwaysOn Availability Groups (AGs) as a new way to manage HA for databases.

With AGs came a whole lot of new DMVs to provide information. There is also a nice dashboard which gives a view into the status of a particular AG:
AG Dashboard

This can be quite useful; however, it is missing a great deal of information that, as a DBA, I would find useful, like read routing and listener configurations. On top of that, the dashboard only provides information on one AG at a time. If you have more than one AG then you have to open up an entirely new dashboard for each.

This just wasn’t working out for me, and so I wrote a stored procedure (sp_GetAGInformation) to provide me with the configuration information for all the AGs running on a server.

When executed it provides:

  • Availability Group Name
  • Listener Name (if exists)
  • Primary Replica Name
  • Automatic Failover Partner (if exists)
  • Sync Secondary Replicas (if any)
  • Async Secondary Replicas (if any)
  • Read Routing Replicas (if any, in routing order)
  • List of Databases in Availability Group

Results of executing sp_GetAGInformation

As you can quickly see in the above example, the AGAdvWrks AG has a listener, an auto-failover partner, and two servers in the read routing order. It also contains two databases. AGTestAG doesn't have any sync secondaries or a listener, and contains only a single database.
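Under the covers the procedure is just stitching the AG DMVs together. As a rough illustration (a minimal sketch of the kind of joins involved, not the full procedure), the core information comes from something like this:

-- Sketch: one row per replica per AG, with listener and role information
SELECT ag.name AS AGName
     , agl.dns_name AS ListenerName
     , ar.replica_server_name AS ReplicaName
     , ars.role_desc AS ReplicaRole
     , ar.availability_mode_desc AS SyncMode
     , ar.failover_mode_desc AS FailoverMode
FROM sys.availability_groups ag
    INNER JOIN sys.availability_replicas ar
        ON ag.group_id = ar.group_id
    LEFT JOIN sys.availability_group_listeners agl
        ON ag.group_id = agl.group_id
    LEFT JOIN sys.dm_hadr_availability_replica_states ars
        ON ar.replica_id = ars.replica_id
ORDER BY ag.name, ars.role_desc;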

If you have several AGs running in your environment this can be a real time saver. It's also great for pulling the data back centrally and reporting against it.

For example, right now I have a PowerShell process that queries every server, pulls the data back to a central location, and reports on any changes in the configuration (if a server gets pulled out for some reason, or a database is added to or removed from an AG). This is particularly handy when you need to connect to a primary but aren't sure which server it is (given that neither SQLPS nor SSMS supports multisubnet failover connection settings).

One limitation is that the data can only be obtained from the primary in an AG, as certain pieces of the data only reside there, and the read routing configuration can be (and should be) set differently on each server.
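As an aside, if all you need is to work out which role a given server currently holds before running the procedure, the replica state DMVs can tell you; a minimal sketch:

-- List the AGs hosted on this instance and the role this server currently holds
SELECT ag.name AS AGName
     , ars.role_desc AS RoleOnThisServer
FROM sys.dm_hadr_availability_replica_states ars
    INNER JOIN sys.availability_groups ag
        ON ars.group_id = ag.group_id
WHERE ars.is_local = 1;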

Give sp_GetAGInformation a try and let me know what you think. Any ideas for improvements are warmly welcomed.

A Better Way To Get SQL File Space Information

I wish this was actually a better way, but really it’s an idea for a better way to get at the data.

Something that has come up at every place I've worked is the desire to monitor growth of data and log files for databases. If you aren't using some 3rd party monitoring software this is a pretty painful operation that forces us back to cursor-type operations, gathering information one database at a time (although this does not apply to transaction logs).

 

Gathering Log Space

Currently, to gather log space you would use DBCC SQLPERF(LOGSPACE), which will give you the size and used percentage for every log file on the server.

Dumping that information into a table allows you to manipulate the data to figure out KB/MB/GB usage (rather than just plain percent).

As an example (GatherLogSpace.sql):
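A minimal sketch of the approach (dump the DBCC output into a temp table, then compute the MB used from the size and percentage):

-- Capture DBCC SQLPERF(LOGSPACE) output and derive the space used in MB
CREATE TABLE #LogSpace
    (
      DatabaseName sysname
    , LogSizeMB FLOAT
    , LogSpaceUsedPct FLOAT
    , Status INT
    );

INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE);');

SELECT DatabaseName
     , LogSizeMB
     , LogSpaceUsedPct
     , CONVERT(DECIMAL(10,2), LogSizeMB * LogSpaceUsedPct / 100) AS LogSpaceUsedMB
FROM #LogSpace
ORDER BY LogSizeMB DESC;

DROP TABLE #LogSpace;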

This is a reasonable amount of work, but what if you want to get space used in data files? Well, that gets a whole lot more complicated.

 

Gathering Data File Space

In order to gather information on space used in data files you actually have to go to each and every database and run a query using sys.dm_db_file_space_usage.

As an example (GatheringDataFileSpace.sql):
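A minimal sketch using dynamic SQL, since sys.dm_db_file_space_usage only reports on the current database:

-- Build and run a collection query for every online database
CREATE TABLE #DataFileSpace
    (
      DatabaseName sysname
    , FileId INT
    , TotalMB DECIMAL(10,2)
    , UsedMB DECIMAL(10,2)
    , FreeMB DECIMAL(10,2)
    );

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'USE ' + QUOTENAME(name) + N';
INSERT INTO #DataFileSpace
SELECT DB_NAME()
     , file_id
     , total_page_count / 128.0
     , allocated_extent_page_count / 128.0
     , unallocated_extent_page_count / 128.0
FROM sys.dm_db_file_space_usage;
'
FROM sys.databases
WHERE state_desc = N'ONLINE';

EXEC sys.sp_executesql @sql;

SELECT DatabaseName, FileId, TotalMB, UsedMB, FreeMB
FROM #DataFileSpace
ORDER BY DatabaseName, FileId;

DROP TABLE #DataFileSpace;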

Strangely, the sys.dm_db_file_space_usage DMV only contains the data file information. The log file data can only be obtained using the first process. This means there is no single place you can go to get file usage information for an entire database.

 

What about PowerShell?

PowerShell could give you the data, but it is still a cursor-type operation: you have to work through each database, drilling down to the log information, then to each filegroup and the files inside each of those filegroups.

What we really need is a nice set-based query that we could run in T-SQL to give us all the information that is needed.

 

What I Would Like To See

Right now you could go and grab the size for each and every file for every database inside of SQL Server by querying sys.master_files:

Basic information from sys.master_files
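A minimal sketch of that query:

-- Size (converted from 8KB pages to MB) and growth settings for every file
SELECT DB_NAME(database_id) AS DatabaseName
     , name AS LogicalFileName
     , type_desc
     , size / 128.0 AS SizeMB
     , growth
     , is_percent_growth
FROM sys.master_files
ORDER BY DatabaseName, type_desc;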

There’s a whole lot more information in sys.master_files, including growth data, so you can know exactly how much each file will grow the next time it does.

Even though there's all this great data, there is nothing to give us space used information. It would probably take too much overhead to keep that information up to date, but there's no reason we shouldn't be able to easily get at the data when we want it.

Ideally we would see the existing DMV changed into a function, which would accept a database_id and file_id, and return the same data that it does now.

If that were the case it would be simple to run one query that would give us the size of each file, used space, free space, and how much the file would grow the next time an autogrowth hit.

This could quickly and easily give us all the data we want:

Mock up of easily obtained file space information
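To make the mock-up concrete, the query might look something like this (entirely hypothetical; no such function exists today, which is exactly the point of the Connect item):

-- HYPOTHETICAL: imagines sys.dm_db_file_space_usage as a function taking
-- (database_id, file_id), the change proposed above, not a real API
SELECT DB_NAME(mf.database_id) AS DatabaseName
     , mf.name AS LogicalFileName
     , mf.size / 128.0 AS SizeMB
     , fsu.allocated_extent_page_count / 128.0 AS UsedMB
     , fsu.unallocated_extent_page_count / 128.0 AS FreeMB
     , CASE WHEN mf.is_percent_growth = 1
            THEN (mf.size * mf.growth / 100) / 128.0
            ELSE mf.growth / 128.0
       END AS NextGrowthMB
FROM sys.master_files mf
    CROSS APPLY sys.dm_db_file_space_usage(mf.database_id, mf.file_id) fsu;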

In the hope of seeing this happen I've created a Microsoft Connect item – sys.dm_db_file_space_usage Should Be A Function.

If you manage databases, and it's something that you'd really like to have (who wouldn't?), then please go and give it an upvote. There's a chance it might make it into the product at some point, and that would make a DBA's life a little bit easier.

 

Traffic Flow With Read-Intent Routing

One of the big advantages to using SQL Server Availability Groups is the ability to automatically push read traffic over to a secondary server. This is particularly useful for larger queries that would take a few seconds to run and consume large amounts of resources. It’s not something recommended for short, fast queries, just because the additional latency of connecting to the secondary could slow down the overall response time for the query.

The Microsoft documentation on setting up Read-Only Routing in SQL AGs is pretty solid and explains how to get this up and running.

 

Firewall and traffic routing

In secure environments there is usually a firewall that resides between the front end web, application or mid-tier servers and the backend database server. This firewall would block all traffic to the backend except for specific ports to specific IP addresses. This is one of the defense in depth items that helps to keep your databases secure.

When using a firewall in conjunction with SQL Server Availability Groups (AGs) it is common to just open up the firewall to the AG Listener. That way there is a single IP open for all the database servers that reside in the AG and any machine that is not acting as the AG primary is not available through the firewall (reducing attack vectors again, a nice side effect).

Given this, you might well expect that when routing traffic off to a readable secondary in the AG, it would follow this flow:

Expected traffic flow

Here the client (either directly or through a web, app, or mid-tier server) performs an action that does a read query against the AG Listener. The expected traffic flow would be (in terms of the IP addresses we would see; the AG Listener would actually connect to the primary, in this case SQL1):

Client – AG Listener – Readable Secondary – AG Listener – Client
or
Client – SQLAGL01 – SQL2 – SQLAGL01 – Client

This way the primary server (in this case SQL1) would arbitrate all the traffic for the query that comes in. In fact, read routing does not function this way.

In order to perform the expected task of reducing load on the primary, the primary actually tells the client to redirect to the secondary server, and so the process goes:

Actual traffic flow

The correct communication is

Client – AG Listener – Secondary – AG Listener – Client – Secondary – Client
or
Client – SQLAGL01 – SQL2 – SQLAGL01 – Client – SQL2 – Client

When the client request comes in, SQL Server has to check that the readable secondary is available to accept the query; otherwise it will go to the next server in the routing list. This is why you should always have the primary as the last server in the routing list, just in case every other server is out of service.

This means the query will take a little longer to execute, as the arbitration and network changes take additional milliseconds to complete (which is why it is not ideal for small, fast selects).
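For reference, the routing configuration itself is set per replica; a minimal sketch using the servers from the diagrams (the AG name MyAG and the routing URL are placeholders for your own values):

-- Set the read-only routing URL on the secondary, then give the primary a
-- routing list with itself last as the fallback
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQL2'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQL2.mydomain.com:1433'));

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQL1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQL2', N'SQL1')));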

 

Where does the firewall come in?

Using a firewall and only opening up the IP of the Listener is the best way to handle security, but if you want to use readable secondary servers and read-intent routing, that's not going to work. Due to the way the traffic is routed, you need to open up the firewall to each individual server and port that could act as a secondary.

So in our above example the firewall would need to be opened to SQLAGL01, SQL1 & SQL2 in order to support client requests. If those rules aren't opened then your client traffic will be blocked and you'll get the dreaded "Named Pipes Provider: Error 40" error, which isn't much of a help.

 

Testing your read-intent connections

A really useful way of testing your read-intent connections is to use a quick PowerShell script from your front-end server (if running Windows) prior to putting it into rotation. Download Check-ReadRouting.PS1 and enter the AG Listener name or IP address and the name of a database in the AG. If things are working correctly it will return the names of the primary and the first server in your read-only routing list.

If you get a timeout then you have either not set the read-intent URL correctly for your secondary, or you are having firewall issues connecting, and so should investigate further.
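If PowerShell isn't an option, a quick manual test with sqlcmd works too, since its -K flag requests read-intent; a minimal sketch (the database name is a placeholder):

-- Run from the front-end server, connecting through the listener:
--   sqlcmd -S SQLAGL01 -d MyAGDatabase -K ReadOnly -Q "SELECT @@SERVERNAME;"
-- If routing is working, the query below returns the secondary (SQL2)
SELECT @@SERVERNAME AS ConnectedServer;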

 

Read-routing can be really powerful and useful; you just have to be careful of the gotchas in getting it working correctly.

 

Querying Change Tracking Tables Against a Secondary AG Replica

If you aren't familiar with Change Tracking I would recommend heading out and reading Kendra Little's Change Tracking master post, which contains a wealth of information.

I’ve been using CT for a while now and it does what it says on the box. The performance can be painful when querying tables that have changed a lot (the changetable function performs a huge amount of aggregation and seems to be optimized for 1000 rows of data). One of the things that I’ve always wanted to do is perform loads into an ODS from a DR site. 

I use Availability Groups to ensure that a near real-time copy of the data is kept in another data center in another part of the country. I've tried a couple of times to query the change information from one of the secondary replicas, but sadly it's not supported, and so I would get the error:

Msg 22117, Level 16, State 1, Line 1
For databases that are members of a secondary availability replica, change tracking is not supported. Run change tracking queries on the databases in the primary availability replica.

 

Yesterday I was messing around with database snapshots and was really happy to discover that it is possible to use the changetable function against a snapshot and not receive any errors. This will only work against readable secondary replicas (as the database needs to be online in order to be able to take the snapshot).

This is also the case with log shipped copies of databases. If the database is in standby then you can access the changetable function directly, or do so off a snapshot.
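A minimal sketch of the snapshot approach on a readable secondary (database, logical file, path, and table names are placeholders):

-- On the readable secondary: snapshot the replica database...
CREATE DATABASE MyDB_Snap
ON (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_Snap.ss')
AS SNAPSHOT OF MyDB;
GO

-- ...then query the change tracking data from the snapshot; no error 22117
USE MyDB_Snap;
SELECT ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION, ct.Id
FROM CHANGETABLE(CHANGES dbo.SomeTable, 0) AS ct;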

 

It doesn't seem like this is a big deal, but if you need to load data into an ODS or Data Warehouse server that's not located in the same place as your AG primary, then this is huge: you can asynchronously write data over the WAN and then do all your data loads locally to the ODS. This is far more efficient (and your network admin will like you a lot more) than pulling the data over in a large chunk nightly and saturating a part of the network.

Just another way that you can make your DR system work for you.

SHOWPLAN Permission Denied For A Sysadmin

I came across a curious issue the other day when trying to look at the actual execution plan of a stored procedure that was being developed. Strangely I couldn’t get an estimated plan either. All I got was the confusing error:

 

SHOWPLAN permission denied in database 'TestDB'

As a sysadmin on the system in question I was very confused by this and it took a little bit of investigation to find out that the problem was actually being caused by the execution context of the stored procedure. 

Here’s a quick repro:

 

First, create a new user and table, then load that table up with a few rows (just for demo purposes):


IF NOT EXISTS (SELECT name FROM sys.database_principals WHERE name = 'TestUser')
    CREATE USER [TestUser] WITHOUT LOGIN;
GO

IF OBJECT_ID('dbo.SomeData', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.SomeData
        (
          Id INT IDENTITY(1,1) NOT NULL
        , NothingNess CHAR(10) DEFAULT ''
        , CONSTRAINT PK_SomeData PRIMARY KEY (Id)
        );
END
GO

INSERT INTO dbo.SomeData DEFAULT VALUES;
GO 100

Now we’ll create a couple of procs to pull data from the table. The first will execute under the context of the calling user, and the other under the context of the user TestUser that was just created.

CREATE PROCEDURE dbo.GetSomeData @Id INT
AS
SELECT Id, NothingNess
FROM dbo.SomeData
WHERE Id = @Id;
GO

CREATE PROCEDURE dbo.GetSomeDataTestUser @Id INT
WITH EXECUTE AS 'TestUser'
AS
SELECT Id, NothingNess
FROM dbo.SomeData
WHERE Id = @Id;
GO

Now we can execute both of these procedures and get results just fine.

 

EXEC dbo.GetSomeData 1;
EXEC dbo.GetSomeDataTestUser 1;

However, if we attempt to run them while grabbing the actual execution plan, or attempt to grab an estimated plan, the second call (dbo.GetSomeDataTestUser) will fail with the SHOWPLAN permission denied error.

What happens is that when the stored procedure runs it does so under the context of TestUser, and that user does not have SHOWPLAN permissions in the database. This means that you cannot display an execution plan, nor can you gather TIME or IO statistics.

This didn't make much sense to me at first, given that I was a sysadmin, but after thinking about it for a while it is actually the correct way to handle things. All the permissions for executing the code are contained within TestUser. So if TestUser does not have the rights to do or access something, then the calling user is not able to get at that information either.

As a workaround, all that needs to happen is to grant the SHOWPLAN permission to TestUser and everything will be fine. For example:

GRANT SHOWPLAN TO TestUser;

It’s worth noting that this permission is database scoped, so don’t expect to execute it in one database and have it work everywhere.

Technet has a good write-up about all the things covered by SHOWPLAN; it is worth a read for more information.


TL;DR – If you get SHOWPLAN denied errors, check that the user context actually executing the query has the requisite permissions, and grant SHOWPLAN if needed.

Change Tracking Cleanup – Example

In my last post I wrote about a limitation of Change Tracking and in particular a limitation in the cleanup process for the syscommittab table.

I thought that a quick example of seeing this in action might be useful in allowing you to check for the issue yourself.

 

First things first, we set up a database, enable CT, and create and load up a table with rows.

 


CREATE DATABASE CTTest;
ALTER DATABASE CTTest SET RECOVERY SIMPLE;
ALTER DATABASE CTTest SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 5 MINUTES, AUTO_CLEANUP = ON);
USE CTTest;
GO
CREATE TABLE CTTable
    (
      C1 INT NOT NULL
    , C2 UNIQUEIDENTIFIER NOT NULL
    , C3 UNIQUEIDENTIFIER NOT NULL
    , CONSTRAINT PK_CTTable PRIMARY KEY CLUSTERED (C1)
    );
GO
ALTER TABLE CTTable ENABLE CHANGE_TRACKING;
GO
CREATE TABLE CTLoader
    (
      C1 INT NOT NULL
    , C2 UNIQUEIDENTIFIER NOT NULL
    , C3 UNIQUEIDENTIFIER NOT NULL
    , CONSTRAINT PK_CTLoader PRIMARY KEY CLUSTERED (C1)
    );
SET NOCOUNT ON;
DECLARE @i INT = 1;
WHILE @i < 2000000
BEGIN
    BEGIN TRAN;
    INSERT INTO CTTable
    SELECT @i, NEWID(), NEWID();
    COMMIT TRAN;
    SELECT @i += 1;
END

With the table in place we create an XEvent session to capture when the cleanup runs for syscommittab.

CREATE EVENT SESSION [ChangeTrackingCleanup] ON SERVER
ADD EVENT sqlserver.syscommittab_cleanup
    (
    ACTION (
          sqlserver.database_id
        , sqlserver.sql_text
        , sqlserver.session_id
        , sqlserver.username
        , sqlserver.client_hostname
        , sqlos.task_time
        , sqlserver.tsql_stack
        , sqlserver.database_name
        , package0.collect_system_time
        )
    WHERE (database_id = 6)
    )
ADD TARGET package0.event_file
    (SET filename = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ChangeTrackingCleanup.xel')
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
GO
ALTER EVENT SESSION [ChangeTrackingCleanup] ON SERVER STATE = START;
GO

 

With the session running we just need to wait a few minutes and pull in the XEvent data to see what's been going on with the cleanup (HT to Jon Kehayias' post for the XEM information).

DECLARE @path NVARCHAR(260), @mdpath NVARCHAR(260);

-- Get the log file name and substitute the * wildcard in
SELECT @path = LEFT(column_value, LEN(column_value) - CHARINDEX('.', REVERSE(column_value)))
             + '*'
             + RIGHT(column_value, CHARINDEX('.', REVERSE(column_value)) - 1)
FROM sys.dm_xe_sessions s
    JOIN sys.dm_xe_session_object_columns soc
        ON s.address = soc.event_session_address
WHERE s.name = 'ChangeTrackingCleanup'
    AND soc.object_name = 'event_file'
    AND soc.column_name = 'filename';

-- Get the metadata file name and substitute the * wildcard in
SELECT @mdpath = LEFT(column_value, LEN(column_value) - CHARINDEX('.', REVERSE(column_value)))
               + '*'
               + RIGHT(column_value, CHARINDEX('.', REVERSE(column_value)) - 1)
FROM sys.dm_xe_sessions s
    JOIN sys.dm_xe_session_object_columns soc
        ON s.address = soc.event_session_address
WHERE s.name = 'ChangeTrackingCleanup'
    AND soc.object_name = 'event_file'
    AND soc.column_name = 'metadatafile';

SELECT ROW_NUMBER() OVER (
           ORDER BY event_data.value('(event/@timestamp)[1]', 'datetimeoffset')) AS RowNum
     , n.value('(data[@name="database_id"]/value)[1]', 'int') AS DatabaseId
     , n.value('(data[@name="rows_deleted"]/value)[1]', 'bigint') AS CommitRowsDeleted
     , CONVERT(datetime2(0), DATEADD(hour, -7, n.value('(data[@name="cleaned_upto_date"]/value)[1]', 'datetimeoffset'))) AS CleanedUpToDate
     , event_data.value('(event/@name)[1]', 'nvarchar(200)') AS EventType
     , CONVERT(datetime2(0), DATEADD(hour, -7, event_data.value('(event/@timestamp)[1]', 'datetimeoffset'))) AS EventTime
FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(@path, @mdpath, NULL, NULL)
    WHERE object_name = 'syscommittab_cleanup'
    ) AS tab
CROSS APPLY event_data.nodes('event') AS q(n);


Running this shows that the cleanup process runs about every 62 seconds and only deletes 10,000 rows at a time. Given that, there's no way to keep syscommittab under control if you run more than 14.4 million transactions through change tracked tables on a daily basis.

Change Tracking Cleanup Limitation

I’ve been “fortunate” enough to be working a great deal with Change Tracking recently and have run into a limitation that it’s worth keeping an eye out for on your servers.

If you have a database, with Change Tracking enabled, and the total number of transactions exceeds 14.4 million for all of the tables that are tracked in that database then you are going to be in for some hurt.

The problem is that the process to clean up the CT data in the (internal) syscommittab table only runs once a minute, and only cleans up a maximum of 10,000 rows every time it runs.

10000 (rows) x 60 (minutes an hour) x 24 (hours a day) = 14,400,000

Given that it cannot clear any faster than that, the internal table will continue to grow over time, slowing down other cleanup processes as well as your CT queries. This is definitely an issue in SQL 2012; I have not yet checked other versions.

Here’s a quick screen grab where you can see the cleanup and the number of rows cleaned up:

Screen grab: syscommittab cleanup events and rows deleted

 

You can see if you are running into this limitation by checking the age of the earliest syscommittab row and seeing if it exceeds your retention time:

SELECT DATEDIFF(hour, commit_time, GETUTCDATE()) AS HoursSinceOldestCommit
FROM sys.dm_tran_commit_table
WHERE commit_ts = (SELECT MIN(min_valid_version)
                   FROM sys.change_tracking_tables);

 

You can also check the size and rowcount of the syscommittab table to gauge if it's something you should be concerned with:

SELECT t.name AS CTName
     , SUM(p.rows) AS ChangeTrackingRows
     , CONVERT(NUMERIC(10,2), (SUM(u.total_pages) * 8) / 1024.0) AS SizeMB
FROM sys.internal_tables t
    INNER JOIN sys.partitions p
        ON t.object_id = p.object_id
    INNER JOIN sys.allocation_units u
        ON (p.hobt_id = u.container_id OR p.partition_id = u.container_id)
    LEFT JOIN sys.change_tracking_tables c
        ON t.parent_object_id = c.object_id
WHERE t.name = 'syscommittab'
GROUP BY t.name;

 

I have a whole bunch more stuff coming on Change Tracking (including an undocumented workaround for this issue), but for now this is important to know.

Filtered Indexes on Computed Columns

I was trying to improve the performance of some code recently and decided that a filtered index could really help me out. Upon attempting to add the index I got an error stating that you cannot add filtered indexes to computed columns. This is the case even if the computed column is persisted.

That’s a shame as this would be a really useful thing to have. 

There is actually an open Connect item on this. It’s been open for 4 years now, so I’m not sure that there will ever be any traction on it, but feel free to upvote it yourself.

 

Code to reproduce the problem below:

CREATE TABLE #SomeData
    (
      RowID INT NOT NULL CONSTRAINT PKSomeData PRIMARY KEY
    , Quantity INT NOT NULL
    , Price MONEY NOT NULL
    , TotalCost AS (Quantity * Price) PERSISTED
    );

CREATE INDEX IDX_CostGreaterThan10 ON #SomeData (RowID) WHERE TotalCost > 10;

Msg 10609, Level 16, State 1, Line 1
Filtered index 'IDX_CostGreaterThan10' cannot be created on table '#SomeData' because the column 'TotalCost' in the filter expression is a computed column. Rewrite the filter expression so that it does not include this column.

 

SSMS – Index Outside The Bounds Of An Array

Hit a strange issue this morning. I was working with one of the network guys testing out connecting to SQL behind an F5 load balancer, something I’ve done multiple times previously.

I was able to connect using SQLCMD, Invoke-SQLCmd and using a UDL source, but for some reason, every time I tried to connect using SSMS I would get an error stating that an “Index was outside the bounds of an array”.

Advanced Information for Error

A little research showed that this error cropped up when trying to connect to SQL 2012 from an earlier version of SSMS. That wasn't the case here though; everything was SQL 2012, and I was able to connect to the server directly without any problems. It was only an error when going through the F5 VIP.

After a little work and research with the network admin we discovered that OneConnect was causing the issue: it was attempting to pool connections, and turning it off fixed the problem entirely.

Just something to keep an eye out for if you use F5s to handle load balancing or DR client redirection.

Formatting Number Output

I’ve been working with some large data load processes recently and have been dumping upwards of a billion records into tables. As these are not instantaneous loads I found myself wanting to keep an eye on how much data was loaded.

This is nice and easy to do using the sys.partitions system view. Here's an example from AdventureWorks:

SELECT t.name, p.rows
FROM sys.partitions p
    INNER JOIN sys.tables t
        ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)
ORDER BY p.rows DESC;


This is all well and good, but when you start getting to large values on the tables it becomes a little difficult to tell the difference between 31 million and 312 million. That's when I resort back to using commas. Unfortunately SQL Server does not make this easy, although it can be accomplished with a couple of converts.

SELECT t.name
     , SUBSTRING(CONVERT(VARCHAR, CONVERT(MONEY, p.rows), 1), 1,
           LEN(CONVERT(VARCHAR, CONVERT(MONEY, p.rows), 1)) - 3) AS FormattedRows
FROM sys.partitions p
    INNER JOIN sys.tables t
        ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)
ORDER BY p.rows DESC;


This is still a little arduous to type, so I created a function to do the work. Now it’s just a case of calling that function whenever I want to have the numbers presented to me in a much more readable format.

CREATE FUNCTION dbo.fnNumberComma (@iVal BIGINT)
RETURNS VARCHAR(30)
AS
BEGIN
    DECLARE @oVal VARCHAR(30);

    SELECT @oVal = SUBSTRING(CONVERT(VARCHAR(30), CONVERT(MONEY, @iVal), 1), 1,
                       LEN(CONVERT(VARCHAR(30), CONVERT(MONEY, @iVal), 1)) - 3);

    RETURN (@oVal);
END
GO



SELECT t.name, dbo.fnNumberComma(p.rows) AS [RowCount]
FROM sys.partitions p
    INNER JOIN sys.tables t
        ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)
ORDER BY p.rows DESC;
