Category: SQL

Refreshing Development Databases

Been a busy week at work and I’ve had to perform multiple dumps of production databases back to a development server. This is something that would normally be pretty quick and easy; however, a couple of things got in the way and forced some choices about how to perform the task.

Here are the relevant details that I need to use to decide how to move the backup files:

  1. The servers are in two different datacenters and the WAN link between them is not very fast
  2. The backups are stored on a CIFS share that is non-routable for anything other than machines on that VLAN (for security reasons)
  3. There is only a single machine in the datacenter that has the ability to reach through the firewall to the development server
  4. I had to get the data across the WAN as fast as possible, but did not want to have to babysit what would be a several-hour process (it should restart automatically in the case of any issues)
  5. All the hosts involved are running Windows 2008R2 SP1
  6. I did not want to install any extra software on any of the servers
  7. This is not an ongoing process and so I wanted something quick. No need to try and over-engineer the situation

 

How to get at the backup

My first thought was to use the improved TCP stack in Windows 2008R2, log on to the development server and pull the data from the CIFS share. Unfortunately this was out as the development machine does not have access to the subnet in order to pull the data.

Next was to perform a backup directly across the WAN to the development machine. This would not work due to firewall restrictions preventing communication between the machines (and it would also not be network optimized).

The only option left was to use the single machine in the datacenter (let’s call this the jump server), on the relevant VLAN, which had access through the firewall to push the backup to the development box.

 

How to optimize the copy and have it restartable

Opening a terminal server session to the jump server, opening Explorer windows to the source and destination, then dragging the file over would copy the file. There have been reported incidents, though, where the drag and drop method has caused memory problems (here’s an example from a forum thread: http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/3f8a80fd-914b-4fe7-8c93-b06787b03662/ ). It also does not address the concern about automatic restarts for the copy operation.

Opening a terminal server session to the jump server and running copy/xcopy would do the trick, and there are no memory issues that I’m aware of with this method. We do have the problem that the copy would not restart in the event of a failure, and neither tool does a great job of managing copies over a network. So this was out.

Opening a terminal server session to the jump server and using robocopy to copy the file made the most sense. Using the additional /r and /w switches you can have the copy process retry as many times as needed and have a defined interval between those retries. Robocopy also has a level of network optimization built in. As a bonus it also provides ongoing information as to the copy completion percentage and detailed statistics when complete. Robocopy is built in to Windows 2008R2 and so also meets the requirement of not wanting to install any more software.
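
For illustration, the command ends up looking something like this (the share, folder and file names here are made up), retrying up to 120 times with a 30 second wait between attempts:

robocopy \\backupshare\SQLBackups \\devserver\e$\Restores ProdDB_Full.bak /R:120 /W:30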

 

Final copy solution

In the end I found myself opening up a terminal server remote desktop session to the jump server which had both a route to the backup file and access through the firewall to the development server. From there I used Robocopy to perform the file copy over the WAN, providing the best performance as well as automatic restarts from any failures.

 

I’m curious, would you have handled this differently?

Enabling CLR On Your SQL Server Instance

Strange one today. A dev came over with a question: he was trying to enable CLR on his local SQL instance but was getting an error when he did. Curious, I thought, so I took a walk over to his desk to see what was going on.

He was executing a basic sp_configure script to enable CLR, nothing special, no frills.

EXEC sp_configure 'clr enabled', 1

RECONFIGURE

When he ran this he was getting an error:

Msg 259, Level 16, State 1, Line 1
Ad hoc updates to system catalogs are not allowed.

Most curious. I went back to my machine and wasn’t able to reproduce the problem. Rather than have him attempt to restart his instance I just had him reconfigure with override:

EXEC sp_configure 'clr enabled', 1

RECONFIGURE WITH OVERRIDE

This time it ran correctly and gave a good result.

All the “WITH OVERRIDE” does is force SQL to be reconfigured with the updated value (unless it would cause a fatal error). Something to remember if this ever crops up for you.
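
We never did pin down the root cause on his machine, but one likely suspect (an assumption on my part) is the deprecated ‘allow updates’ option having been set to 1, which is known to make a plain RECONFIGURE fail with Msg 259. It’s easy enough to check:

-- Check whether 'allow updates' has been flipped on (a common cause of Msg 259 on RECONFIGURE)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('allow updates', 'clr enabled');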

Replication–Beware Reuse of View Names When Filtering Articles

I came across a very strange issue with replication this weekend.

As part of a release, a new table was being created with extra columns, and a view was created with the old table name referencing the old column set (so as not to break legacy applications). The new table was replicated, and the dev team provided us with scripts to set up replication. During the attempt to perform the snapshot I ran into some problems and so dropped the publication.

All of a sudden users started to complain that they could no longer see the view and couldn’t query the data. I looked, and sure enough the view was not there. Very strange. I told the users that I’d get it created again once replication was done. I redid the publication and the users called me saying not to worry, the view was back.

What the heck?

After digging through I found the cause of the issue. The dev had created the replication scripts and included a filter to eliminate certain columns. When an article is filtered, replication wraps that filter into an article synchronization view. In this case the dev had manually set @view_name in sp_articleview to be the same as the view in the database. This caused SQL to drop the existing view and create its own. When the publication was dropped, so was the view.
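
If I were scripting this again I’d check for a name collision before settling on a @view_name. A minimal sketch, using the object names from the repro below:

-- Make sure the proposed @view_name doesn't already exist as an object in the database
SELECT name, type_desc, create_date, is_ms_shipped
FROM SQLMonitor.sys.objects
WHERE name = N'Bananas';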

 

Here are the steps to repro this behavior:

Firstly I’m creating a view against the local table SQLMonitor.dbo.SQLBuilds

CREATE VIEW Bananas

AS

    SELECT  Build ,

            [KB Article] ,

            URL ,

            Description ,

            MajorBuild ,

            MinorBuild

    FROM    SQLMonitor.dbo.SQLBuilds 

A quick select shows data:

[Screenshot: SELECT from the Bananas view returning all six columns]

 

Now let’s create a publication for replication with column filters against the SQLBuilds table. We’ll specify the @view_name to be Bananas. I’m also including an @filter_clause to eliminate rows (I’ll explain that in a bit).

-- Enabling the replication database

USE master

EXEC sp_replicationdboption @dbname = N'SQLMonitor', @optname = N'publish',

    @value = N'true'

GO

 

EXEC [SQLMonitor].sys.sp_addlogreader_agent @job_login = NULL,

    @job_password = NULL, @publisher_security_mode = 1

GO

EXEC [SQLMonitor].sys.sp_addqreader_agent @job_login = NULL,

    @job_password = NULL, @frompublisher = 1

GO

-- Adding the transactional publication

USE [SQLMonitor]

EXEC sp_addpublication @publication = N'Bananas Test',

    @description = N'Transactional publication of database ''SQLMonitor'' from Publisher ''local''.',

    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',

    @allow_pull = N'true', @allow_anonymous = N'false',

    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',

    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',

    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',

    @repl_freq = N'continuous', @status = N'active',

    @independent_agent = N'true', @immediate_sync = N'false',

    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',

    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,

    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',

    @enabled_for_het_sub = N'false'

GO

 

 

EXEC sp_addpublication_snapshot @publication = N'Bananas Test',

    @frequency_type = 1, @frequency_interval = 0,

    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,

    @frequency_subday = 0, @frequency_subday_interval = 0,

    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,

    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,

    @job_password = NULL, @publisher_security_mode = 1

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'sa'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'NT AUTHORITY\SYSTEM'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'NT SERVICE\SQLSERVERAGENT'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'NT SERVICE\MSSQLSERVER'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'distributor_admin'

GO

 

-- Adding the transactional articles

USE [SQLMonitor]

EXEC sp_addarticle @publication = N'Bananas Test', @article = N'SQLBuilds',

    @source_owner = N'dbo', @source_object = N'SQLBuilds', @type = N'logbased',

    @description = N'', @creation_script = N'', @pre_creation_cmd = N'drop',

    @schema_option = 0x000000000803509F,

    @identityrangemanagementoption = N'none',

    @destination_table = N'SQLBuilds', @destination_owner = N'dbo',

    @status = 24, @vertical_partition = N'true',

    @ins_cmd = N'CALL [sp_MSins_dboSQLBuilds]',

    @del_cmd = N'CALL [sp_MSdel_dboSQLBuilds]',

    @upd_cmd = N'SCALL [sp_MSupd_dboSQLBuilds]'

 

-- Adding the article's partition column(s)

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'Build', @operation = N'add', @force_invalidate_snapshot = 1,

    @force_reinit_subscription = 1

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'KB Article', @operation = N'add',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'MajorBuild', @operation = N'add',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'MinorBuild', @operation = N'add',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

 

-- Adding the article synchronization object

EXEC sp_articleview @publication = N'Bananas Test', @article = N'SQLBuilds',

    @view_name = N'Bananas', @filter_clause = N'[KB Article] >0',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

GO

 

This does its job, creates the publication with the article and everything. Now let’s query the view again…

[Screenshot: the same query against Bananas now returns only the filtered column set]

 

Several columns are now missing and the view matches what’s specified in the replication script. The old view has been dropped and a new system view has been created:

[Screenshot: the Bananas view now showing as a system-generated view]

 

 

Now dropping the publication also drops the view:

-- Dropping the transactional articles

USE [SQLMonitor]

EXEC sp_dropsubscription @publication = N'Bananas Test',

    @article = N'SQLBuilds', @subscriber = N'all', @destination_db = N'all'

GO

USE [SQLMonitor]

EXEC sp_droparticle @publication = N'Bananas Test', @article = N'SQLBuilds',

    @force_invalidate_snapshot = 1

GO

 

-- Dropping the transactional publication

USE [SQLMonitor]

EXEC sp_droppublication @publication = N'Bananas Test'

GO

 

[Screenshot: the Bananas view is gone once the publication is dropped]

 

This appears to be a bug and not the behavior I would expect.

If I recreate the view and then attempt to create the publication again, this time without the @filter_clause I get an error stating the view already exists:

USE [SQLMonitor]

EXEC sp_addpublication @publication = N'Bananas Test',

    @description = N'Transactional publication of database ''SQLMonitor'' from Publisher ''local''.',

    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',

    @allow_pull = N'true', @allow_anonymous = N'false',

    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',

    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',

    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',

    @repl_freq = N'continuous', @status = N'active',

    @independent_agent = N'true', @immediate_sync = N'false',

    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',

    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,

    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',

    @enabled_for_het_sub = N'false'

GO

 

 

EXEC sp_addpublication_snapshot @publication = N'Bananas Test',

    @frequency_type = 1, @frequency_interval = 0,

    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,

    @frequency_subday = 0, @frequency_subday_interval = 0,

    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,

    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,

    @job_password = NULL, @publisher_security_mode = 1

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'sa'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'NT AUTHORITY\SYSTEM'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'NT SERVICE\SQLSERVERAGENT'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'NT SERVICE\MSSQLSERVER'

GO

EXEC sp_grant_publication_access @publication = N'Bananas Test',

    @login = N'distributor_admin'

GO

 

-- Adding the transactional articles

USE [SQLMonitor]

EXEC sp_addarticle @publication = N'Bananas Test', @article = N'SQLBuilds',

    @source_owner = N'dbo', @source_object = N'SQLBuilds', @type = N'logbased',

    @description = N'', @creation_script = N'', @pre_creation_cmd = N'drop',

    @schema_option = 0x000000000803509F,

    @identityrangemanagementoption = N'none',

    @destination_table = N'SQLBuilds', @destination_owner = N'dbo',

    @status = 24, @vertical_partition = N'true',

    @ins_cmd = N'CALL [sp_MSins_dboSQLBuilds]',

    @del_cmd = N'CALL [sp_MSdel_dboSQLBuilds]',

    @upd_cmd = N'SCALL [sp_MSupd_dboSQLBuilds]'

 

-- Adding the article's partition column(s)

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'Build', @operation = N'add', @force_invalidate_snapshot = 1,

    @force_reinit_subscription = 1

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'KB Article', @operation = N'add',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'MajorBuild', @operation = N'add',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',

    @column = N'MinorBuild', @operation = N'add',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

 

-- Adding the article synchronization object

EXEC sp_articleview @publication = N'Bananas Test', @article = N'SQLBuilds',

    @view_name = N'Bananas', @filter_clause = N'',

    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

GO

 

(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
Msg 2714, Level 16, State 3: There is already an object named ‘Bananas’ in the database.
Msg 21745, Level 16, State 1, Procedure sp_MSrepl_articleview, Line 272
Cannot generate a filter view or procedure. Verify that the value specified for the @filter_clause parameter of sp_addarticle can be added to the WHERE clause of a SELECT statement to produce a valid query.
Msg 3931, Level 16, State 1, Procedure sp_MSrepl_articleview, Line 401
The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.

 

Something to be aware of if you are going to be playing with replication.

I’ve opened a Connect bug with Microsoft to try and get this fixed. Please go vote on it. And be careful with your replication.

Logging Logons (or is that logins?)

Some say logon, some say login. I tend to use them both (frequently within the same sentence). SQL Server really doesn’t help things by having you create a logon trigger that can do things with logins (see where I’m going here?).

Either way, we’re performing a server migration and needed to find out all the logins and clients that are connecting to the SQL instance. I also wanted to capture this data to a table to make it easy to report against (rather than turning on auditing of successful logins). The ideal solution for me was to use a logon trigger, and as the instance in question is running SQL 2008 this worked out great.

I created a login, a table of holding, a table of rollup holding, and granted the relevant permissions so that I could capture the data. Then I created the logon trigger itself, which captures the login name, host name, application name and datetime information and logs it to the table.

I added a procedure and scheduled job to roll up the data nightly so as to prevent the table from getting too large (which would be quite easy to do on a big server).

There are no primary keys and only a single clustered index. It’s all very light. If you are looking to capture logon/login data then this is a good starting point. Code below…

 

USE master 

go

--create the login which will be used to log the data

CREATE LOGIN LogonLogging WITH PASSWORD = 'somepassword123!@#' ;

GRANT VIEW SERVER STATE TO LogonLogging ;

 

 

--change this to the database of your choosing

USE DBAdmin

go

--create the user matching the login

CREATE USER LogonLogging FOR LOGIN LogonLogging ;

 

--create the table that will hold the data

CREATE TABLE LogonList

    (

      LoginName NVARCHAR(256) ,

      HostName NVARCHAR(256) ,

      ApplicationName NVARCHAR(256) ,

      LoginDate DATETIME2

    ) ;

CREATE CLUSTERED INDEX CI ON dbo.LogonList(LoginDate) ;

 

CREATE TABLE LogonListDailyRollup

    (

      LoginName NVARCHAR(256) ,

      HostName NVARCHAR(256) ,

      ApplicationName NVARCHAR(256) ,

      LoginDate DATE,

      LoginCount INT

    ) ;

    

    

--allow the login to insert data to the table    

GRANT INSERT ON dbo.LogonList TO LogonLogging ;    

 

--proc to rollup the data daily to keep space low and ease reporting

CREATE PROCEDURE RollupLogonData

 

AS

 

SET NOCOUNT ON

 

BEGIN TRY

    BEGIN TRAN

 

    INSERT  INTO dbo.LogonListDailyRollup

            ( LoginName ,

              HostName ,

              ApplicationName ,

              LoginDate ,

              LoginCount

            )

            SELECT  LoginName ,

                    HostName ,

                    ApplicationName ,

                    CONVERT(DATE, LoginDate) AS LoginDate ,

                    COUNT(*) AS LoginCount

            FROM    dbo.LogonList

            WHERE   LoginDate < CONVERT(DATE, GETDATE())

            GROUP BY LoginName ,

                    HostName ,

                    ApplicationName ,

                    CONVERT(DATE, LoginDate);

 

 

    DELETE  FROM dbo.LogonList

    WHERE   LoginDate < CONVERT(DATE, GETDATE());

 

    COMMIT TRAN

END TRY

BEGIN CATCH

    SELECT  ERROR_MESSAGE()

    IF @@TRANCOUNT > 0
        ROLLBACK TRAN

END CATCH

 

GO

 

 

 

--db context back to master for creating the login trigger

USE master 

go

 

CREATE TRIGGER LogLogons ON ALL SERVER

    WITH EXECUTE AS 'LogonLogging'

    FOR LOGON

AS

    BEGIN

        INSERT  INTO DBAdmin.dbo.LogonList

                ( LoginName ,

                  HostName ,

                  ApplicationName ,

                  LoginDate 

                )

                SELECT  ORIGINAL_LOGIN() ,

                        HOST_NAME() ,

                        APP_NAME() ,

                        GETDATE()

                WHERE   ORIGINAL_LOGIN() NOT LIKE N'NT SERVICE%'

    END ;

 

GO
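
Once the data has been collecting for a while, a simple query against the rollup table (something along these lines) gives the picture needed for the migration:

--Which logins, hosts and applications have been connecting, and how often
SELECT  LoginName ,
        HostName ,
        ApplicationName ,
        MIN(LoginDate) AS FirstSeen ,
        MAX(LoginDate) AS LastSeen ,
        SUM(LoginCount) AS TotalLogins
FROM    DBAdmin.dbo.LogonListDailyRollup
GROUP BY LoginName ,
        HostName ,
        ApplicationName
ORDER BY TotalLogins DESC ;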

Mass Updating Job Step Retry Attempts

Sometimes I find that the SQL Server team make curious choices as regards default operations. When setting up replication, by default the log reader and distribution agent job steps will retry 2,147,483,647 times before failing. Doing the math on the retry interval of 1 minute, that means the job wouldn’t actually hit a failed status for over 4000 years. While you can set up alerts around replication, a lot of folks don’t do that, and you could easily be in a situation where replication has been down for so long that you have to reinitialize the subscription and perform a snapshot to get things running again.

Having a great many distribution jobs running on my server I wanted to perform a mass update. I could have gone through and updated each job step by hand, but that would have taken me a couple of days. I needed a better way. Digging through the SQL Server Agent stored procedures I found that sp_update_jobstep would do the work for me. I just needed to pass in the relevant parameters and then restart the jobs so that the change took effect.

As such I ended up writing the following script. Just adjust the @retries variable to the value that you want to work with (in this instance it would be an hour’s worth of retries before the job failed). This script is limited to the Distribution job step but could easily be modified to manage the log reader also.

Running this script took two minutes and it updated 60 distribution agent jobs. I’m getting ready to kick it off on the rest of my servers.
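
If you want to see what you’re dealing with first, a quick look at the current settings is worthwhile. A simple example:

--Review current retry settings on the distribution agent job steps
SELECT  sj.name AS JobName ,
        sjs.step_id ,
        sjs.retry_attempts ,
        sjs.retry_interval
FROM    msdb.dbo.sysjobsteps sjs
        INNER JOIN msdb.dbo.sysjobs sj ON sjs.job_id = sj.job_id
WHERE   sjs.subsystem = N'Distribution'
ORDER BY sj.name ;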

SET NOCOUNT ON

      

DECLARE @retries INT = 60 --Set to the value that you want for retry attempts

 

DECLARE @JobIdUpd UNIQUEIDENTIFIER ,

    @JobStepUpd INT,

    @JobStatus INT,

    @JobName SYSNAME,

    @Message NVARCHAR(256)

 

 

--Create a temp table which will hold the results from xp_sqlagent_enum_jobs

--Note xp_sqlagent_enum_jobs is an undocumented stored procedure, it may not be there soon

CREATE TABLE #enumjobs

    (

      job_id UNIQUEIDENTIFIER NOT NULL ,

      last_run_date INT NOT NULL ,

      last_run_time INT NOT NULL ,

      next_run_date INT NOT NULL ,

      next_run_time INT NOT NULL ,

      next_run_schedule_id INT NOT NULL ,

      requested_to_run INT NOT NULL ,

      request_source INT NOT NULL ,

      request_source_id SYSNAME NULL ,

      running INT NOT NULL , 

      current_step INT NOT NULL ,

      current_retry_attempt INT NOT NULL ,

      job_state INT NOT NULL

    ) 

 

 

   

DECLARE UpdateJobsCrsr CURSOR

FOR

SELECT  sjs.job_id ,

        sjs.step_id ,

        sj.name

FROM    msdb.dbo.sysjobsteps sjs

        INNER JOIN msdb.dbo.sysjobs sj ON sjs.job_id = sj.job_id

WHERE   subsystem = 'Distribution'

        AND retry_attempts > @retries

 

OPEN UpdateJobsCrsr ;

 

FETCH NEXT FROM UpdateJobsCrsr

INTO @JobIdUpd, @JobStepUpd,@JobName

WHILE ( @@FETCH_STATUS <> -1 ) 

    BEGIN

   

--Update the job step so that the number of retry attempts is set to the @retries variable                

EXEC msdb.dbo.sp_update_jobstep @job_id = @jobidUpd, @step_id = @jobstepUpd, @retry_attempts = @retries  

SELECT @Message = N'Job ' + @JobName + ' updated.'

RAISERROR (@Message, 0, 1) WITH nowait 

 

--We have to restart the job in order for this change to take effect

--Check to see if the job is running, if so stop and restart it, otherwise leave it alone

--Note, you'll need to be a sysadmin for this to work (or change up some stuff)

    TRUNCATE TABLE #enumjobs;

    INSERT INTO #enumjobs

                EXEC master.dbo.xp_sqlagent_enum_jobs 1, 'sa', @JobIdUpd;

                

SELECT @JobStatus = job_state FROM #enumjobs

 

IF @JobStatus <> 4 --This is the job idle state

    BEGIN

--Stop the job

        EXEC msdb.dbo.sp_stop_job @job_id = @JobIdUpd

--Wait a couple of seconds for the job to stop

        WAITFOR DELAY '00:00:03'        

--Start the job up again

        EXEC msdb.dbo.sp_start_job @job_id = @JobIdUpd    

    END

ELSE 

    BEGIN

        SELECT @Message = N'Job ' + @JobName + ' was not running, please start manually if required.'              

        RAISERROR (@Message, 0, 1) WITH NOWAIT

    END

                

        FETCH NEXT FROM UpdateJobsCrsr

            INTO @JobIdUpd, @JobStepUpd, @JobName

    END

 

 

CLOSE UpdateJobsCrsr ;

GO

DEALLOCATE UpdateJobsCrsr ;

GO

 

--Cleanup

DROP TABLE #enumjobs   

xp_logininfo–The Bad

In my last post I wrote about the great things that xp_logininfo can do for you, like tell you if a user account has access and if so the permission path for that access. It also allows you to find out the membership of groups that have logins.

There’s a problem with xp_logininfo however. If you attempt to query a group that itself contains a group, you won’t get any information about the members of that nested group. It’s a serious limitation of xp_logininfo.

Take the following example:

[Screenshot: the awesomesauce\DBAs Active Directory group containing three users and a nested group]

Here we have three users and a group.

If we query this we expect to see members of all the groups, however as xp_logininfo is not recursive we only get the top level group membership.

xp_logininfo 'awesomesauce\DBAs', 'members'

[Screenshot: the results showing only the top-level members of the group; the nested group’s members are not listed]

 

I see this as an issue, so I’ve created Connect item 718737 to get this fixed. Please go vote and help make SQL Server better.
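
In the meantime the only workaround I have is a manual one: capture the first level of membership into a table, then call xp_logininfo again for anything that comes back as a group. A rough sketch (the column names follow the proc’s output):

CREATE TABLE #members
    (
      [account name] SYSNAME NULL ,
      [type] VARCHAR(10) NULL ,
      [privilege] VARCHAR(10) NULL ,
      [mapped login name] SYSNAME NULL ,
      [permission path] SYSNAME NULL
    ) ;

INSERT  INTO #members
        EXEC xp_logininfo 'awesomesauce\DBAs', 'members' ;

--anything with a type of 'group' needs its own xp_logininfo call
SELECT  [account name]
FROM    #members
WHERE   [type] = 'group' ;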

xp_logininfo–Your View Into Active Directory

It’s always good to know what users have access to your SQL instances but when it comes to Active Directory users and groups it can quickly become complicated and you might not know who has access at any given time.

Requests can come in for you to provide access to a user, but how can you identify if that user might already have some level of access?

That’s where xp_logininfo comes to the rescue!

This extended stored procedure will check to see if your user does actually have access and give you the relevant information associated. Just call the proc with the name of the user account:

EXEC xp_logininfo 'awesomesauce\nic'

[Screenshot: xp_logininfo output showing the account type, privilege level and permission path]

 

As you can see it told me that the account I passed in is a user, it has admin level privileges on the server and that the access for the account is provided via the awesomesauce\DBAs group.

Wouldn’t it be great to see what other accounts also have access via that group? Well xp_logininfo can do that for you too. By passing the group name along with the extra parameter of members you can get a list of the group’s members:

EXEC xp_logininfo 'awesomesauce\DBAs', 'members'

[Screenshot: the list of members of the awesomesauce\DBAs group]

 

This is a nice quick and easy way to see who has access on your systems via group membership.

Reinitializing Replication For A Single Subscriber

Interesting thing happened today, but never mind that. We had a server go down recently in our staging environment. It wasn’t any big deal; we had to have vendor support come out and replace the motherboard, and it was stage, after all. The thing was, the server ended up being down for a total of four days.

 

The server in question is a SQL 2008 subscriber to several published tables on the staging OLTP SQL 2008 instance (using push subscriptions). Sadly, not thinking of this ahead of time, we neglected to adjust the maximum distribution retention period for transactional replication (which sits at 72 hours). Suffice it to say, when the server came back online and we restarted the distribution agents we got the dreaded “subscription marked as inactive” error, requiring us to reinitialize all of the subscriptions to that server.

 

“There goes my afternoon” thought I. Not being a fan of the replication monitor in 2008 I dreaded having to work with it to go through and reinitialize each and every publication for that subscriber, especially given that there are around 80 different articles. There must be a better way I thought…and lo, there is!

 

Books online talks about sp_reinitsubscription and passing parameters for the publication, article, subscriber, destination database, for schema change, publisher, ignore distributor failover and invalidate snapshot.

 

What BOL is not clear about is that you can pass in just a subscriber; that’s it, nothing else.

 

EXEC sp_reinitsubscription @subscriber = N'<sql instance>';

 

Running this reinitialized all of the publications within the current database for that one subscriber. Very quick, very easy. From there I just had to kickstart the snapshot agents and we were back in business (sp_start_job in this case ftw).
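
If you’re not sure which jobs those are, the replication job categories make them easy to find. A quick sketch (the sp_start_job call is commented out; plug in the job names returned):

--Find the snapshot agent jobs on the publisher
SELECT  sj.name
FROM    msdb.dbo.sysjobs sj
        INNER JOIN msdb.dbo.syscategories sc ON sj.category_id = sc.category_id
WHERE   sc.name = N'REPL-Snapshot' ;

--Then start each one that needs it
--EXEC msdb.dbo.sp_start_job @job_name = N'<snapshot agent job name>' ;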

Grabbing Table Row Counts Into SQL Using PowerShell

Sometimes Twitter gives you the impetus to get something done that you’ve been meaning to accomplish but just haven’t quite gotten around to. Yesterday was one of those days.

Ed Watson (blog|twitter)  asked the following question on #SQLHelp

[Screenshot: Ed Watson’s #SQLHelp tweet asking about capturing table row counts]

 

This is something I’ve been meaning to do for a while so that I can get good ongoing trending information around table row counts.

There are several ways to accomplish this but PowerShell was the clear winner for me. I wanted the script to accomplish a few things:

  • Capture the SQL instance name, database name, table name (including schema) and row count for all tables in a database
  • To grab the information for all the databases on a server (including system databases)
  • To write the data to a SQL table
  • Provide the option of dumping the data to the screen
  • Allow me to execute it quickly and easily against multiple servers without providing interactive input

 

The script I’ve written does all this. The only bugs I’ve found are that the model and tempdb table information is not accessible, but this isn’t really an issue as model doesn’t get any use and tempdb is far too frequently updated to be of any use.

 

The Scripts

Firstly we need to create a couple of SQL tables to hold the results. These should be created on the SQL instance and database of your choosing. The script itself should be updated to reference the locations of these tables (ultimately it made more sense to do it this way rather than pass the information in as a parameter).

 

CREATE TABLE dbo.SQLTableRowCounts

    (

      SQLInstance NVARCHAR(128) ,

      DBName NVARCHAR(128) ,

      TableName NVARCHAR(256) ,

      TableRowCount BIGINT, 

      LoadDate DATE DEFAULT CONVERT(DATE, GETDATE())

    );

    

CREATE TABLE dbo.SQLTableRowCountsArchive

    (

      SQLInstance NVARCHAR(128) ,

      DBName NVARCHAR(128) ,

      TableName NVARCHAR(256) ,

      TableRowCount BIGINT,

      LoadDate DATE DEFAULT CONVERT(DATE, GETDATE())

    );    

 

There are no indexes on the tables currently. Depending upon your usage patterns a couple of different indexes could apply. I’ll let you decide what works best for you.
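
As one example (an assumption about usage, not a recommendation), if most of your reporting is growth over time per table then something like this on the archive table would do the job:

--One possible index for trending table growth over time
CREATE CLUSTERED INDEX CIX_SQLTableRowCountsArchive
    ON dbo.SQLTableRowCountsArchive (SQLInstance, DBName, TableName, LoadDate);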

 

The PowerShell script is quite well documented and includes execution examples (you’ll want to run this on a machine where the SQL PS providers have been installed).

<#

.SYNOPSIS

   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

.DESCRIPTION

   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

   Requires bulk admin permissions on the SQL Server that you are loading the data to.

   

   Accepts pipelined inputs so that you can pass an array for working with multiple SQL Instances

.PARAMETER <$SQLInstance>

   This is the SQL Instance that you'll be working with

.PARAMETER <$verbose>

    Initially set to false. When set to true will also output the data to the console

.EXAMPLE

   Basic example: Get-TableRowCounts MySQLInstance

   Advanced: Get-Content C:\Temp\SQLInstances.txt | % { Get-TableRowCounts $_ }

   

#>

 

param

(

[string]$SQLInstance = "$(Read-Host 'SQL Server Instance [e.g. SQLPROD]')",

[switch]$verbose = $false

)

 

#Load the SQL snapins

$CheckSQLSnapins = Get-PSSnapin | Where-Object {$_.Name -match "SqlServerProviderSnapin100"}

if (!$CheckSQLSnapins)

{

    Add-PSSnapin SqlServerProviderSnapin100 

    Write-Host "SqlServerProviderSnapin100 Loaded"

}

 

#Set the following to the SQL instance and database that you want to load the data to

#You will need to have bulk insert permissions to be able to perform this action

$StorageSQLServer = "localhost"

$StorageDatabase = "SQLMonitor"

 

$BCPconnectionString = "Data Source=$StorageSQLServer;Integrated Security=true;Initial Catalog=$StorageDatabase;"

 

#Clean out the current data table if the data is not from today

Invoke-Sqlcmd -ServerInstance $StorageSQLServer -Database $StorageDatabase -Query "DELETE FROM dbo.SQLTableRowCounts WHERE LoadDate != CONVERT(DATE, GETDATE());"

 

 

 

function TableList ($SQLInstance, $DBName)

{

 

    #Handle default vs named instances

    if ($SQLInstance.Contains("\"))

    {        $location = "SQLSERVER:\SQL\$SQLInstance\Databases\$DBName\Tables" }

    else

    {        $location = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases\$DBName\Tables" }

 

    <#

    Create data table to hold the results

    The datatable will live for the life of the function call to reduce memory overhead

    pulling from multiple servers

    #>

    $dataTable = New-Object system.Data.DataTable "Results"

    #Specify the column names for the data table

    $col1 = New-Object system.Data.DataColumn SQLInstance, ([string])

    $col2 = New-Object system.Data.DataColumn DatabaseName, ([string])

    $col3 = New-Object system.Data.DataColumn FullTableName,([string])

    $col4 = New-Object system.Data.DataColumn RowCounts,([int64])

 

    #Add the columns to the data table

    $dataTable.Columns.Add($col1)

    $dataTable.Columns.Add($col2)

    $dataTable.Columns.Add($col3)

    $dataTable.Columns.Add($col4)

 

    #Get our table list

    $TableList = dir $location -Force | select Owner, Name, RowCount

 

    #Iterate through the tables, load the rowcounts into the datatable

    foreach ($TableInfo in $TableList)

    {

        $TableOwner = $TableInfo.Owner 

        $TableName = $TableInfo.Name

        $TableRowCount = $TableInfo.RowCount

 

        $TableFullName = "$TableOwner.$TableName"

 

        #Provided there are rows in the table

        #Load the table/rowcounts into the datatable

        #This prevents errors due to nulls in model & tempdb

        if ($TableRowCount)

        {

            $row = $dataTable.NewRow()

            $row.SQLInstance = $SQLInstance

            $row.DatabaseName = $DBName

            $row.FullTableName = $TableFullName

            $row.RowCounts = $TableRowCount

            $dataTable.Rows.Add($row)

        }

 

    }

 

    if ($verbose) { Write-Output $dataTable }

 

    #Bulk load the data into SQL from the data table

    $bulkCopyRoles = new-object ("Data.SqlClient.SqlBulkCopy") $BCPconnectionString

    #Loads to the current table

    $bulkCopyRoles.DestinationTableName = "SQLTableRowCounts"

    $bulkCopyRoles.WriteToServer($dataTable)

    #Loads to the archive table

    $bulkCopyRoles.DestinationTableName = "SQLTableRowCountsArchive"

    $bulkCopyRoles.WriteToServer($dataTable)

 

    #Clean up after ourselves so we release memory used to hold the data table

    Remove-Variable dataTable

}

 

 

function DBList ($SQLInstance)

{

 

    #Handle default vs named instances

    if ($SQLInstance.Contains("\"))

    {        $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\Databases" }

    else

    {        $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases" }

 

    $dblist = dir $BaseLocation -Force | select Name

    foreach ($db in $dblist)

    {

        $DBName = $db.Name

        TableList $SQLInstance $DBName 

    }

}

 

DBList $SQLInstance 

 

 

This script could easily be set up to execute daily to load the data into SQL so that you can perform quick and easy trends on table growth.
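
As a rough example of the kind of trending you can do once a few days of data are in the archive table (kept SQL 2008 friendly, so no LAG):

--Day over day change per table from the archive
SELECT  cur.SQLInstance ,
        cur.DBName ,
        cur.TableName ,
        cur.LoadDate ,
        cur.TableRowCount ,
        cur.TableRowCount - prev.TableRowCount AS RowsAddedSincePreviousLoad
FROM    dbo.SQLTableRowCountsArchive cur
        LEFT JOIN dbo.SQLTableRowCountsArchive prev
            ON prev.SQLInstance = cur.SQLInstance
               AND prev.DBName = cur.DBName
               AND prev.TableName = cur.TableName
               AND prev.LoadDate = DATEADD(DAY, -1, cur.LoadDate)
ORDER BY cur.SQLInstance ,
        cur.DBName ,
        cur.TableName ,
        cur.LoadDate ;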

 

Give it a try, let me know what you think.

Stop Bad Database Design

Every year that goes by I sit in hope that I won’t see bad database design.

Every year I am disappointed.

 

As an example here is a table create statement that I saw the other day (table and column names have been changed to protect the innocent)

CREATE TABLE [dbo].BestestTableEVAR(

 Col1 [int] IDENTITY(1,1) NOT NULL,

 Col2 [uniqueidentifier] NULL,

 Col3 [uniqueidentifier] NOT NULL,

 Col4 [smallint] NULL,

 Col5 [smallint] NOT NULL,

 Col6 [bit] NOT NULL,

 Col7 [xml] NULL,

 Col8 [xml] NULL,

 ColA [xml] NULL,

 ColB [xml] NULL,

 ColC [datetime2](2) NULL,

 ColD [datetime2](2) NULL,

 COlE [datetime2](2) NULL,

 ColF [datetime2](2) NULL,

 CONSTRAINT [PK_BestestTableEVAR] PRIMARY KEY CLUSTERED 

(

 Col3 ASC

)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 90) ON [PRIMARY]

) ON [PRIMARY]

 

So what’s wrong with this?

The clustered primary key on this table is a GUID. Is that bad? That’s an unequivocal YES! Read all that Kimberly Tripp (blog|twitter) has to say about GUIDs in database design.

What makes this all the more crazy is that the table has an identity column. That’s a natural clustering key ready and waiting to be used and yet for some reason it’s not.

This table is going to fragment like crazy, it won’t scale, and performance will be hideous. Additionally, thanks to the XML columns this table can’t even be rebuilt online, meaning there’s no way to help the fragmentation or performance without taking the table offline, during which it can’t handle any transactions. That’s a real problem for a table on an OLTP system.

 

I would go back and change some things. Let’s say you wanted to keep the table structure the same, that’s fine, but let’s be smart about the keys and indexes.

It would make sense to change the identity column to be clustered (I would also make this the primary key) and then, to ensure uniqueness on Col3, which is the current primary key, a unique index is warranted.

CREATE TABLE [dbo].BestestTableEVAR(

 Col1 [int] IDENTITY(1,1) NOT NULL,

 Col2 [uniqueidentifier] NULL,

 Col3 [uniqueidentifier] NOT NULL,

 Col4 [smallint] NULL,

 Col5 [smallint] NOT NULL,

 Col6 [bit] NOT NULL,

 Col7 [xml] NULL,

 Col8 [xml] NULL,

 ColA [xml] NULL,

 ColB [xml] NULL,

 ColC [datetime2](2) NULL,

 ColD [datetime2](2) NULL,

 COlE [datetime2](2) NULL,

 ColF [datetime2](2) NULL,

 CONSTRAINT [PK_BestestTableEVAR] PRIMARY KEY CLUSTERED 

(

 Col1 ASC

)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 90) ON [PRIMARY]

) ON [PRIMARY];

 

CREATE UNIQUE NONCLUSTERED INDEX UI_BestestTableEVAR_Col3 ON BestestTableEVAR (Col3);

Sure, we still won’t be able to rebuild the indexes online, but we won’t have the same crazy levels of fragmentation that we would have had before.

 

I know I’ll be seeing a lot of bad design this year and I know that I’ll be forced to push that bad design into production. That doesn’t stop me from trying to change things, however. Help your devs out, let them know when their design is a problem. Who knows, maybe you’ll change things for the better.
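
If you want to go hunting for this pattern on systems you already support, here’s a rough sketch (run it per database) that lists clustered indexes whose leading key column is a GUID:

--Find clustered indexes with a uniqueidentifier as the leading key column
SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName ,
        OBJECT_NAME(i.object_id) AS TableName ,
        i.name AS IndexName ,
        c.name AS LeadingKeyColumn
FROM    sys.indexes i
        INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id
                                           AND i.index_id = ic.index_id
        INNER JOIN sys.columns c ON ic.object_id = c.object_id
                                    AND ic.column_id = c.column_id
        INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE   i.type = 1 --clustered
        AND ic.key_ordinal = 1
        AND t.name = 'uniqueidentifier'
        AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1 ;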