
SQL 2012 It’s The Small Things–Default Schemas For Groups

If you’re reading this then I’m sure you know all about SQL 2012, which just went RTM and will be available for general release on April 1st. I’m sure you’ve also heard all about some of the big new features such as AlwaysOn with readable secondaries, ColumnStore indexes, and FileTables. Missed in all the big announcements are some of the smaller things that make a huge difference in the day-to-day running of SQL Server. I’m going to put together a few posts outlining some of the smaller, but to me equally significant, changes in the new version.

 

Back in the day there was no facility to provide a default schema to a Windows group; you would actually get an error were you to try:

USE master;
CREATE LOGIN [sirsql\DBAs] FROM WINDOWS WITH DEFAULT_DATABASE = tempdb;

USE tempdb;
CREATE USER [sirsql\DBAs];

ALTER USER [sirsql\DBAs] WITH DEFAULT_SCHEMA = dbo;

 

[Screenshot: error stating that a default schema cannot be set for a Windows group]

 

This script now works in SQL 2012 and defines the default schema:

[Screenshot: the same script completing successfully on SQL 2012]

 

If we check out the properties of the user we can see that the default schema has been set to dbo:

[Screenshot: user properties showing the default schema set to dbo]
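You can also skip the UI and confirm it with a quick catalog query (a minimal check against sys.database_principals, using the same group as above):

USE tempdb;
SELECT  name ,
        default_schema_name
FROM    sys.database_principals
WHERE   name = N'sirsql\DBAs';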

 

 

This is a very welcome addition. Prior to SQL 2012 you would have to create a separate login and user for each member of a Windows group and then set the default schema for each; that’s a lot of administrative hassle eliminated.

The Modulo Operator Is Your Friend

Here’s another one of those sneaky little tricks that can be used to make your life easier.

I found myself needing to perform an insert of 240 million records from one table to another in the same database today. Rather than try to do this all in one transaction I wanted to keep the transaction log nice and small and so decided to do this in small batches (the database is in simple recovery).

Each batch would insert 50,000 records, with each batch taking only a few seconds. Rather than sit there and execute this constantly I decided to throw the insert into a while loop with a begin/commit transaction and a checkpoint. I also wanted to know how many loops had been processed, but didn’t want to overwhelm myself by printing a message for every single loop.

In the end I decided that knowing every 10 loops was more than enough information and so I put the modulo operator to good use.

 

In case you aren’t familiar with modulo (%), it simply returns the remainder when one number is divided by another. For example:

SELECT 25 % 3

returns 1, since 25 divided by 3 is 8 with a remainder of 1.

 

In this case, as I wanted to know when the loop counter was divisible by 10, I just checked for when the counter modulo 10 equaled zero (meaning there was no remainder).

I threw that into my code and was able to quickly see how far along my processing was.

Here’s a quick example that you can just run to see what I mean.

DECLARE @msg NVARCHAR(255) ,
    @loopno INT = 0

WHILE @loopno < 1000
    BEGIN
        IF ( @loopno % 10 ) = 0
            BEGIN
                SELECT  @msg = 'Processing loop number '
                        + CONVERT(NVARCHAR, @loopno)
                RAISERROR (@msg, 0, 1) WITH NOWAIT
            END
        SET @loopno += 1
    END
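For context, here’s roughly how that fits around the actual batched insert (a sketch only; dbo.SourceTable, dbo.TargetTable and the Id key are hypothetical stand-ins for the real tables):

DECLARE @msg NVARCHAR(255) ,
    @loopno INT = 0 ,
    @rowsmoved INT = 1

WHILE @rowsmoved > 0
    BEGIN
        BEGIN TRAN

        --hypothetical tables and key; move the next 50,000 unprocessed rows
        INSERT  INTO dbo.TargetTable ( Id, Payload )
                SELECT TOP ( 50000 )
                        s.Id ,
                        s.Payload
                FROM    dbo.SourceTable s
                WHERE   NOT EXISTS ( SELECT 1
                                     FROM   dbo.TargetTable t
                                     WHERE  t.Id = s.Id )
        SET @rowsmoved = @@ROWCOUNT

        COMMIT TRAN
        CHECKPOINT --allow log reuse in simple recovery

        IF ( @loopno % 10 ) = 0
            BEGIN
                SELECT  @msg = 'Processing loop number '
                        + CONVERT(NVARCHAR, @loopno)
                RAISERROR (@msg, 0, 1) WITH NOWAIT
            END
        SET @loopno += 1
    END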

 

As a DBA I really don’t think of using modulo often but it’s actually very useful.

Refreshing Development Databases

Been a busy week at work and I’ve had to perform multiple dumps of production databases back to a development server. This is something that would normally be pretty quick and easy; however, a couple of things got in the way and forced some choices about how to perform the task.

Here are the relevant details that I need to use to decide how to move the backup files:

  1. The servers are in two different datacenters and the WAN link between them is not very fast
  2. The backups are stored on a CIFS share that is non-routable for anything other than machines on that VLAN (for security reasons)
  3. There is only a single machine in the datacenter that has the ability to reach through the firewall to the development server
  4. I had to get the data across the WAN as fast as possible, however I did not want to have to babysit what would be a several hour process (it should restart automatically in the case of any issues)
  5. All the hosts involved are running Windows 2008R2 SP1
  6. I did not want to install any extra software on any of the servers
  7. This is not an ongoing process and so I wanted something quick. No need to try and over-engineer the situation.

 

How to get at the backup

My first thought was to use the improved TCP stack in Windows 2008R2, log on to the development server and pull the data from the CIFS share. Unfortunately this was out, as the development machine has no access to the subnet hosting the share.

Next was to perform a backup directly across the WAN to the development machine. This would not work due to firewall restrictions preventing communication between the machines (and it would also not be network optimized).

The only option left was to use the single machine in the datacenter (let’s call this the jump server), on the relevant VLAN, which had access through the firewall to push the backup to the development box.

 

How to optimize the copy and have it restartable

Opening a terminal server session to the jump server, opening Explorer windows to the source and destination, then dragging the file over would copy the file, but there have been reported incidents where the drag and drop method caused memory problems (here’s an example from a forum thread: http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/3f8a80fd-914b-4fe7-8c93-b06787b03662/ ), and it does nothing to address the requirement for automatic restarts of the copy operation.

Opening a terminal server session to the jump server and running copy/xcopy would do the trick. There are no memory issues that I’m aware of with this method, but the copy would not restart in the event of a problem, and copy/xcopy don’t do the best job of managing copies over a network. So this was out.

Opening a terminal server session to the jump server and using robocopy to copy the file made the most sense. Using the /r and /w switches you can have the copy process retry as many times as needed, with a defined interval between those retries. Robocopy also has a level of network optimization built in. As a bonus it provides ongoing information on the copy completion percentage and detailed statistics when complete. Robocopy is built in to Windows 2008R2 and so also meets the requirement of not installing any more software.
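To illustrate (the server and share names here are made up), the command ends up looking something like this: /Z turns on restartable mode, /R sets the number of retries, and /W the seconds to wait between them:

robocopy \\backupshare\sqlbackups \\devserver\restores ProdDB_Full.bak /Z /R:120 /W:30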

 

Final copy solution

In the end I found myself opening a remote desktop session to the jump server, which had both a route to the backup file and access through the firewall to the development server. From there I used robocopy to perform the file copy over the WAN, giving the best performance as well as automatic restarts after any failures.

 

I’m curious, would you have handled this differently?

Enabling CLR On Your SQL Server Instance

Strange one today. A dev came over with a question: he was trying to enable CLR on his local SQL instance but was getting an error when doing so. Curious, I took a walk over to his desk to see what was going on.

He was executing a basic sp_configure script to enable CLR, nothing special, no frills.

EXEC sp_configure 'clr enabled', 1
RECONFIGURE

When he ran this he was getting an error:

Msg 259, Level 16, State 1, Line 1
Ad hoc updates to system catalogs are not allowed.

Most curious. I went back to my machine and wasn’t able to reproduce the problem. Rather than have him attempt to restart his instance I just had him reconfigure with override:

EXEC sp_configure 'clr enabled', 1
RECONFIGURE WITH OVERRIDE

This time it ran correctly and gave a good result.

All the “WITH OVERRIDE” does is force SQL Server to be reconfigured with the updated value (unless that value would cause a fatal error). Something to remember if this ever crops up for you.
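If you want to double-check that the change actually took, sys.configurations shows both the configured and running values:

SELECT  name ,
        value ,
        value_in_use
FROM    sys.configurations
WHERE   name = N'clr enabled';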

Replication–Beware Reuse of View Names When Filtering Articles

I came across a very strange issue with replication this weekend.

As part of a release, a new table was created with extra columns, and a view was created with the old table name referencing the old column set (so as not to break legacy applications). The new table was replicated. The dev team provided us with scripts to set up replication. During the attempt to perform the snapshot I ran into some problems and so dropped the publication.

All of a sudden users started to complain that they could no longer see the view and couldn’t query the data. I looked, and sure enough the view was not there. Very strange. I told the users that I’d get it created again once replication was done. I redid the publication and the users called me saying not to worry, the view was back.

What the heck?

After digging through I found the cause of the issue. The dev had created the replication scripts and included a filter to eliminate certain columns. When you filter an article, replication wraps it in a synchronization view. In this case the dev had manually set @view_name in sp_articleview to the same name as the existing view in the database. This caused SQL Server to drop the existing view and create its own. When the publication was dropped, so was the view.

 

Here are the steps to repro this behavior:

Firstly I’m creating a view against the local table SQLMonitor.dbo.SQLBuilds:

CREATE VIEW Bananas
AS
    SELECT  Build ,
            [KB Article] ,
            URL ,
            Description ,
            MajorBuild ,
            MinorBuild
    FROM    SQLMonitor.dbo.SQLBuilds

A quick select shows data:

[Screenshot: the Bananas view returning all six columns]

 

Now let’s create a publication for replication with column filters against the SQLBuilds table. We’ll specify the @view_name to be Bananas. I’m also including an @filter_clause to eliminate rows (I’ll explain that in a bit).

-- Enabling the replication database
USE master
EXEC sp_replicationdboption @dbname = N'SQLMonitor', @optname = N'publish',
    @value = N'true'
GO

EXEC [SQLMonitor].sys.sp_addlogreader_agent @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
GO
EXEC [SQLMonitor].sys.sp_addqreader_agent @job_login = NULL,
    @job_password = NULL, @frompublisher = 1
GO

-- Adding the transactional publication
USE [SQLMonitor]
EXEC sp_addpublication @publication = N'Bananas Test',
    @description = N'Transactional publication of database ''SQLMonitor'' from Publisher ''local''.',
    @sync_method = N'concurrent', @retention = 0, @allow_push = N'true',
    @allow_pull = N'true', @allow_anonymous = N'false',
    @enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true',
    @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous',
    @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
    @repl_freq = N'continuous', @status = N'active',
    @independent_agent = N'true', @immediate_sync = N'false',
    @allow_sync_tran = N'false', @autogen_sync_procs = N'false',
    @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 1,
    @allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false',
    @enabled_for_het_sub = N'false'
GO

EXEC sp_addpublication_snapshot @publication = N'Bananas Test',
    @frequency_type = 1, @frequency_interval = 0,
    @frequency_relative_interval = 0, @frequency_recurrence_factor = 0,
    @frequency_subday = 0, @frequency_subday_interval = 0,
    @active_start_time_of_day = 0, @active_end_time_of_day = 235959,
    @active_start_date = 0, @active_end_date = 0, @job_login = NULL,
    @job_password = NULL, @publisher_security_mode = 1
EXEC sp_grant_publication_access @publication = N'Bananas Test',
    @login = N'sa'
GO
EXEC sp_grant_publication_access @publication = N'Bananas Test',
    @login = N'NT AUTHORITY\SYSTEM'
GO
EXEC sp_grant_publication_access @publication = N'Bananas Test',
    @login = N'NT SERVICE\SQLSERVERAGENT'
GO
EXEC sp_grant_publication_access @publication = N'Bananas Test',
    @login = N'NT SERVICE\MSSQLSERVER'
GO
EXEC sp_grant_publication_access @publication = N'Bananas Test',
    @login = N'distributor_admin'
GO

-- Adding the transactional articles
USE [SQLMonitor]
EXEC sp_addarticle @publication = N'Bananas Test', @article = N'SQLBuilds',
    @source_owner = N'dbo', @source_object = N'SQLBuilds', @type = N'logbased',
    @description = N'', @creation_script = N'', @pre_creation_cmd = N'drop',
    @schema_option = 0x000000000803509F,
    @identityrangemanagementoption = N'none',
    @destination_table = N'SQLBuilds', @destination_owner = N'dbo',
    @status = 24, @vertical_partition = N'true',
    @ins_cmd = N'CALL [sp_MSins_dboSQLBuilds]',
    @del_cmd = N'CALL [sp_MSdel_dboSQLBuilds]',
    @upd_cmd = N'SCALL [sp_MSupd_dboSQLBuilds]'

-- Adding the article's partition column(s)
EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',
    @column = N'Build', @operation = N'add', @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1
EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',
    @column = N'KB Article', @operation = N'add',
    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1
EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',
    @column = N'MajorBuild', @operation = N'add',
    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1
EXEC sp_articlecolumn @publication = N'Bananas Test', @article = N'SQLBuilds',
    @column = N'MinorBuild', @operation = N'add',
    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

-- Adding the article synchronization object
EXEC sp_articleview @publication = N'Bananas Test', @article = N'SQLBuilds',
    @view_name = N'Bananas', @filter_clause = N'[KB Article] > 0',
    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1
GO

 

This does its job and creates the publication with the article and everything. Now let’s query the view again…

[Screenshot: the Bananas view now returning only the filtered column set]

 

Several columns are now missing and the view matches what’s specified in the replication script. The old view has been dropped and a new system view has been created:

[Screenshot: Object Explorer showing Bananas recreated as a system (replication) view]
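You can also spot the substitution from the catalog; objects created by replication are flagged as Microsoft-shipped (a quick check using the view name from this example):

SELECT  name ,
        create_date ,
        is_ms_shipped
FROM    sys.views
WHERE   name = N'Bananas';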

 

 

Now dropping the publication also drops the view:

-- Dropping the transactional articles
USE [SQLMonitor]
EXEC sp_dropsubscription @publication = N'Bananas Test',
    @article = N'SQLBuilds', @subscriber = N'all', @destination_db = N'all'
GO
USE [SQLMonitor]
EXEC sp_droparticle @publication = N'Bananas Test', @article = N'SQLBuilds',
    @force_invalidate_snapshot = 1
GO

-- Dropping the transactional publication
USE [SQLMonitor]
EXEC sp_droppublication @publication = N'Bananas Test'
GO

 

[Screenshot: the Bananas view is gone]

 

This appears to be a bug and not the behavior I would expect.

If I recreate the view and then attempt to create the publication again, this time without the @filter_clause I get an error stating the view already exists:

The script is the same as the one above, right up until the final sp_articleview call, which this time passes an empty filter clause:

-- Adding the article synchronization object
EXEC sp_articleview @publication = N'Bananas Test', @article = N'SQLBuilds',
    @view_name = N'Bananas', @filter_clause = N'',
    @force_invalidate_snapshot = 1, @force_reinit_subscription = 1
GO

 

(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
Msg 2714, Level 16, State 3: There is already an object named 'Bananas' in the database.
Msg 21745, Level 16, State 1, Procedure sp_MSrepl_articleview, Line 272
Cannot generate a filter view or procedure. Verify that the value specified for the @filter_clause parameter of sp_addarticle can be added to the WHERE clause of a SELECT statement to produce a valid query.
Msg 3931, Level 16, State 1, Procedure sp_MSrepl_articleview, Line 401
The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.

 

Something to be aware of if you are going to be playing with replication.

I’ve opened a Connect bug with Microsoft to try and get this fixed. Please go vote on it. And be careful with your replication.

Logging Logons (or is that logins?)

Some say logon, some say login. I tend to use them both (frequently within the same sentence). SQL Server really doesn’t help things by having you create a logon trigger that can do things with logins (see where I’m going here?).

Either way, we’re performing a server migration and needed to find out all the logins and clients connecting to the SQL instance. I also wanted to capture this data to a table to make it easy to report against (rather than turning on successful login auditing and mining the error log). The ideal solution for me was a logon trigger, and as the instance in question is running SQL 2008 this worked out great.

I created a login, a table of holding, a table of rollup holding, and granted the relevant permissions so that I could capture the data. Then I created the logon trigger itself, which captures the login name, host name, application name and datetime and logs them to the table.

I added a procedure and scheduled job to rollup the data nightly so as to prevent the table from getting too large (which would be quite easy to do on a big server).

There are no primary keys and only a single clustered index. It’s all very light. If you are looking to capture logon/login data then this is a good starting point. Code below…

 

USE master
GO
--create the login which will be used to log the data
CREATE LOGIN LogonLogging WITH PASSWORD = 'somepassword123!@#' ;
GRANT VIEW SERVER STATE TO LogonLogging ;

--change this to the database of your choosing
USE DBAdmin
GO
--create the user matching the login
CREATE USER LogonLogging FOR LOGIN LogonLogging ;

--create the table that will hold the data
CREATE TABLE LogonList
    (
      LoginName NVARCHAR(256) ,
      HostName NVARCHAR(256) ,
      ApplicationName NVARCHAR(256) ,
      LoginDate DATETIME2
    ) ;
CREATE CLUSTERED INDEX CI ON dbo.LogonList(LoginDate)

CREATE TABLE LogonListDailyRollup
    (
      LoginName NVARCHAR(256) ,
      HostName NVARCHAR(256) ,
      ApplicationName NVARCHAR(256) ,
      LoginDate DATE ,
      LoginCount INT
    ) ;

--allow the login to insert data to the table
GRANT INSERT ON dbo.LogonList TO LogonLogging ;

--proc to rollup the data daily to keep space low and ease reporting
CREATE PROCEDURE RollupLogonData
AS
SET NOCOUNT ON

BEGIN TRY
    BEGIN TRAN

    INSERT  INTO dbo.LogonListDailyRollup
            ( LoginName ,
              HostName ,
              ApplicationName ,
              LoginDate ,
              LoginCount
            )
            SELECT  LoginName ,
                    HostName ,
                    ApplicationName ,
                    CONVERT(DATE, LoginDate) AS LoginDate ,
                    COUNT(*) AS LoginCount
            FROM    dbo.LogonList
            WHERE   LoginDate < CONVERT(DATE, GETDATE())
            GROUP BY LoginName ,
                    HostName ,
                    ApplicationName ,
                    CONVERT(DATE, LoginDate);

    DELETE  FROM dbo.LogonList
    WHERE   LoginDate < CONVERT(DATE, GETDATE());

    --commit inside the TRY block so a failed batch never reaches the COMMIT
    COMMIT TRAN
END TRY
BEGIN CATCH
    SELECT  ERROR_MESSAGE()
    ROLLBACK TRAN
END CATCH
GO

--db context back to master for creating the logon trigger
USE master
GO

CREATE TRIGGER LogLogons ON ALL SERVER
    WITH EXECUTE AS 'LogonLogging'
    FOR LOGON
AS
    BEGIN
        INSERT  INTO DBAdmin.dbo.LogonList
                ( LoginName ,
                  HostName ,
                  ApplicationName ,
                  LoginDate
                )
                SELECT  ORIGINAL_LOGIN() ,
                        HOST_NAME() ,
                        APP_NAME() ,
                        GETDATE()
                WHERE   ORIGINAL_LOGIN() NOT LIKE N'NT SERVICE%'
    END ;
GO
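Once data starts flowing, reporting against the rollup table is a simple aggregate. For example, a hypothetical query for the busiest login/host/application combinations over the last 30 days:

SELECT  LoginName ,
        HostName ,
        ApplicationName ,
        SUM(LoginCount) AS TotalLogins
FROM    DBAdmin.dbo.LogonListDailyRollup
WHERE   LoginDate >= DATEADD(DAY, -30, CONVERT(DATE, GETDATE()))
GROUP BY LoginName ,
        HostName ,
        ApplicationName
ORDER BY TotalLogins DESC ;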

Mass Updating Job Step Retry Attempts

Sometimes I find that the SQL Server team makes curious choices when it comes to defaults. When setting up replication, by default the log reader and distribution agent job steps will retry 2,147,483,647 times before failing. Doing the math with the retry interval of 1 minute, the job wouldn’t actually hit a failed status for over 4,000 years. While you can set up alerts around replication, a lot of folks don’t, and you could easily be in a situation where replication has been down for so long that you have to reinitialize the subscription and perform a snapshot to get things running again.

Having a great many distribution jobs running on my server I wanted to perform a mass update. I could have gone through and updated each job step by hand, but that would have taken me a couple of days. I needed a better way. Digging through the SQL Server Agent stored procedures I found that sp_update_jobstep would do the work for me. I just needed to pass in the relevant parameters and then restart the jobs so that the change took effect.

As such I ended up writing the following script. Just adjust the @retries variable to the value that you want (in this instance 60, an hour’s worth of retries before the job fails). The script is limited to the Distribution job step but could easily be modified to manage the log reader as well.

Running this script took two minutes and it updated 60 distribution agent jobs. I’m getting ready to kick it off on the rest of my servers.

SET NOCOUNT ON

DECLARE @retries INT = 60 --Set to the value that you want for retry attempts

DECLARE @JobIdUpd UNIQUEIDENTIFIER ,
    @JobStepUpd INT ,
    @JobStatus INT ,
    @JobName SYSNAME ,
    @Message NVARCHAR(256)

--Create a temp table which will hold the results from xp_sqlagent_enum_jobs
--Note xp_sqlagent_enum_jobs is an undocumented stored procedure, it may not be there soon
CREATE TABLE #enumjobs
    (
      job_id UNIQUEIDENTIFIER NOT NULL ,
      last_run_date INT NOT NULL ,
      last_run_time INT NOT NULL ,
      next_run_date INT NOT NULL ,
      next_run_time INT NOT NULL ,
      next_run_schedule_id INT NOT NULL ,
      requested_to_run INT NOT NULL ,
      request_source INT NOT NULL ,
      request_source_id SYSNAME NULL ,
      running INT NOT NULL ,
      current_step INT NOT NULL ,
      current_retry_attempt INT NOT NULL ,
      job_state INT NOT NULL
    )

DECLARE UpdateJobsCrsr CURSOR
FOR
SELECT  sjs.job_id ,
        sjs.step_id ,
        sj.name
FROM    msdb.dbo.sysjobsteps sjs
        INNER JOIN msdb.dbo.sysjobs sj ON sjs.job_id = sj.job_id
WHERE   subsystem = 'Distribution'
        AND retry_attempts > @retries

OPEN UpdateJobsCrsr ;

FETCH NEXT FROM UpdateJobsCrsr
INTO @JobIdUpd, @JobStepUpd, @JobName
WHILE ( @@FETCH_STATUS = 0 )
    BEGIN
        --Update the job step so that the number of retry attempts is set to the @retries variable
        EXEC msdb.dbo.sp_update_jobstep @job_id = @JobIdUpd,
            @step_id = @JobStepUpd, @retry_attempts = @retries
        SELECT @Message = N'Job ' + @JobName + ' updated.'
        RAISERROR (@Message, 0, 1) WITH NOWAIT

        --We have to restart the job in order for this change to take effect
        --Check to see if the job is running; if so stop and restart it, otherwise leave it alone
        --Note, you'll need to be a sysadmin for this to work (or change up some stuff)
        TRUNCATE TABLE #enumjobs ;
        INSERT INTO #enumjobs
                EXEC master.dbo.xp_sqlagent_enum_jobs 1, 'sa', @JobIdUpd ;

        SELECT @JobStatus = job_state FROM #enumjobs

        IF @JobStatus <> 4 --This is the job idle state
            BEGIN
                --Stop the job
                EXEC msdb.dbo.sp_stop_job @job_id = @JobIdUpd
                --Wait a couple of seconds for the job to stop
                WAITFOR DELAY '00:00:03'
                --Start the job up again
                EXEC msdb.dbo.sp_start_job @job_id = @JobIdUpd
            END
        ELSE
            BEGIN
                SELECT @Message = N'Job ' + @JobName + ' was not running, please start manually if required.'
                RAISERROR (@Message, 0, 1) WITH NOWAIT
            END

        FETCH NEXT FROM UpdateJobsCrsr
        INTO @JobIdUpd, @JobStepUpd, @JobName
    END

CLOSE UpdateJobsCrsr ;
DEALLOCATE UpdateJobsCrsr ;
GO

--Cleanup
DROP TABLE #enumjobs

xp_logininfo–The Bad

In my last post I wrote about the great things that xp_logininfo can do for you, like telling you whether a user account has access and, if so, the permission path for that access. It also allows you to find out the membership of groups that have logins.

There’s a problem with xp_logininfo, however. If you attempt to query a group that contains a nested group, you won’t get any information about the members of that second group. It’s a serious limitation.

Take the following example:

[Screenshot: Active Directory group awesomesauce\DBAs containing three users and a nested group]

Here we have three users and a group.

If we query this we expect to see members of all the groups; however, as xp_logininfo is not recursive, we only get the top-level group membership.

xp_logininfo 'awesomesauce\DBAs', 'members'

[Screenshot: results showing only the top-level group members]
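Until that’s addressed, one workaround is to drill in manually: capture the output, then call xp_logininfo again for any member that comes back with a type of group. A rough sketch (the temp table columns mirror the result set xp_logininfo returns for 'members'):

CREATE TABLE #members
    (
      [account name] SYSNAME NULL ,
      [type] NVARCHAR(20) NULL ,
      [privilege] NVARCHAR(20) NULL ,
      [mapped login name] SYSNAME NULL ,
      [permission path] SYSNAME NULL
    ) ;

INSERT INTO #members
        EXEC xp_logininfo 'awesomesauce\DBAs', 'members' ;

--call xp_logininfo again for each nested group that came back
DECLARE @group SYSNAME ;
DECLARE groups CURSOR FOR
    SELECT [account name] FROM #members WHERE [type] = 'group' ;
OPEN groups ;
FETCH NEXT FROM groups INTO @group ;
WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC xp_logininfo @group, 'members' ;
        FETCH NEXT FROM groups INTO @group ;
    END
CLOSE groups ;
DEALLOCATE groups ;

DROP TABLE #members ;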

 

I see this as an issue, so I’ve created Connect item 718737 to get it fixed. Please go vote and help make SQL Server better.

xp_logininfo–Your View Into Active Directory

It’s always good to know which users have access to your SQL instances, but when it comes to Active Directory users and groups things can quickly become complicated, and you might not know who has access at any given time.

Requests can come in for you to provide access to a user, but how can you identify if that user might already have some level of access?

That’s where xp_logininfo comes to the rescue!

This extended stored procedure will check whether your user actually has access and give you the relevant associated information. Just call the proc with the name of the user account:

EXEC xp_logininfo 'awesomesauce\nic'

[Screenshot: xp_logininfo output for awesomesauce\nic]

 

As you can see, it told me that the account I passed in is a user, that it has admin level privileges on the server, and that its access is provided via the awesomesauce\DBAs group.

Wouldn’t it be great to see what other accounts also have access via that group? Well, xp_logininfo can do that for you too. By passing the group name along with the extra parameter 'members' you can get a list of the group’s members:

EXEC xp_logininfo 'awesomesauce\DBAs', 'members'

[Screenshot: the members of awesomesauce\DBAs]

 

This is a nice quick and easy way to see who has access on your systems via group membership.

Reinitializing Replication For A Single Subscriber

Interesting thing happened today, but never mind that. We had a server go down recently in our staging environment; no big deal, we just had to have vendor support come out and replace the motherboard. The thing was, the server ended up being down for a total of four days.

 

The server in question is a SQL 2008 subscriber to several published tables on the staging OLTP SQL 2008 instance (using push subscriptions). Sadly, not thinking of this ahead of time, we neglected to adjust the maximum distribution retention period for transactional replication (which sits at 72 hours). Suffice to say, when the server came back online and we restarted the distribution agents, we got the dreaded “subscription marked as inactive” error, requiring us to reinitialize all of the subscriptions to that server.

 

“There goes my afternoon” thought I. Not being a fan of the replication monitor in 2008 I dreaded having to work with it to go through and reinitialize each and every publication for that subscriber, especially given that there are around 80 different articles. There must be a better way I thought…and lo, there is!

 

Books online talks about sp_reinitsubscription and passing parameters for the publication, article, subscriber, destination database, for schema change, publisher, ignore distributor failover and invalidate snapshot.

 

What BOL is not clear about is that you can pass in just the subscriber; that’s it, nothing else.

 

EXEC sp_reinitsubscription @subscriber = '<sql instance>';

 

Running this reinitialized all of the publications within the current database for that one subscriber. Very quick, very easy. From there I just had to kickstart the snapshot agents and we were back in business (sp_start_job in this case ftw).
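For the record, kicking off the snapshot agents en masse is also scriptable; snapshot agent jobs live in the REPL-Snapshot category in msdb. A sketch, which starts every snapshot agent job on the instance:

DECLARE @job SYSNAME ;
DECLARE SnapshotJobs CURSOR FOR
    SELECT  sj.name
    FROM    msdb.dbo.sysjobs sj
            INNER JOIN msdb.dbo.syscategories sc ON sj.category_id = sc.category_id
    WHERE   sc.name = N'REPL-Snapshot' ;
OPEN SnapshotJobs ;
FETCH NEXT FROM SnapshotJobs INTO @job ;
WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC msdb.dbo.sp_start_job @job_name = @job ;
        FETCH NEXT FROM SnapshotJobs INTO @job ;
    END
CLOSE SnapshotJobs ;
DEALLOCATE SnapshotJobs ;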