Passing SQL Data To Command Line Processes Within SQL Jobs

Wow, that title is a bit of a mouthful, so let me give you a scenario to help you understand the business problem I ran into today.

We have a production database running on SQL 2008 which contains a CLR procedure that accepts a reportID value, queries some data and writes out to the filesystem on a remote share. This procedure is called by multiple daily and weekly jobs to perform extracts for business partners. Yes, I know this is ugly. The database has TRUSTWORTHY on, which is a big security risk and we wanted to mitigate that risk with the minimum amount of work required.

Here’s an example of one of the jobs that makes the call in to that proc:

DECLARE @ReportID INT;

 

SELECT  @ReportID = ReportID

FROM    dbo.ReportList

WHERE   BusinessPartner = 'Customer1'

        AND Frequency = 'Daily';

 

EXEC Data2File @ReportID;

 

The first step to changing this was to get the CLR code out of source control and rebuild it as an executable file. This took the developer about 60 minutes. Now I had to figure out how we were going to call the executable with the appropriate ReportID.

The obvious way to call this would be to create a cmdline job step for D:\ExportExe\Data2File.exe (the name and location of the new executable). This would be great except that it doesn’t contain the ReportID. The smart move here would be to just pass the ReportID along on the cmdline, except that we don’t know what that value is for any particular report, as reports get deleted and added fairly frequently; we need to actually pass the results of the query along. The cmdline alone really wasn’t going to help here.

As is frequently the case, PowerShell came to the rescue.

All I had to do was create a PowerShell job step, run the query, pass the result into a variable and then call the executable with the variable. It sounds really simple, and it only took a few minutes to get right. In the end I wound up with the following PowerShell script that runs from within a PowerShell job step:

$ErrorActionPreference  = "Stop"

 

$Query = @"

SELECT  ReportID

FROM    dbo.ReportList

WHERE   BusinessPartner = 'Customer1'

        AND Frequency = 'Daily';

"@

 

$ResultSet = invoke-sqlcmd -ServerInstance MySQLServer -Database MyDatabase -Query $Query -QueryTimeout 30

[int]$RptID = $ResultSet.ReportID

Write-Output "Calling Data2File.exe with ReportID: $RptID"

 

& "D:ExportExeData2File.exe" $RptId

 

In this script I’m building the query, calling it with invoke-sqlcmd and then passing the output ReportID to the external executable.
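One addition worth considering (it isn’t in the original job step) is a check of the executable’s exit code so that the Agent job step fails when the extract does; a minimal sketch, assuming Data2File.exe returns a non-zero exit code on failure:

& "D:\ExportExe\Data2File.exe" $RptID
if ($LASTEXITCODE -ne 0)
{
    # Throwing here makes the PowerShell job step (and therefore the job) report failure
    throw "Data2File.exe exited with code $LASTEXITCODE for ReportID $RptID"
}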

While this is still pretty ugly and not the way that this sort of thing should be done (SSIS anyone?), it does work and, more importantly, it allows me to turn off the TRUSTWORTHY setting on that database and improve the security of my SQL Servers.

Changing SQL Agent Job Subsystem Queue Lengths

In the event you are running a lot of jobs on your SQL Server, or you happen to have a great number of jobs that kick off close to each other and use the same subsystem (e.g. PowerShell), you might receive a warning in the SQL Agent Error Log stating that the job step “is being queued for the PowerShell subsystem”.

This queuing is deliberate. Microsoft put safeguards around the various subsystems that the Agent calls in order to help prevent any particular subsystem from taking over all the worker threads (after all, you wouldn’t want your 25 PowerShell job steps preventing your queries from running because they’d taken all the worker threads, would you?).

This is one of the reasons that I like to use dedicated servers just to handle all jobs/ETL processes. I like to keep my local jobs to just maintenance processes (backup/reindex/checkdb) wherever possible. In doing this I need to be sure that I am able to increase the max worker threads associated with a subsystem so that jobs don’t get delayed. As such I keep an eye on the SQL Agent Error Log and server resources to check that I don’t hit thread starvation issues (and on a dedicated job server that’s pretty unusual).
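As an aside, the Agent error log itself can be read from PowerShell via SMO, which makes that kind of check easy to script; a rough sketch (not from the original post), assuming the SQL Server provider/snap-ins are loaded, a hypothetical job server called MyJobServer running a default instance, and that the Agent log rows expose a Text column like the database error log does:

# Read the SQL Agent error log and look for subsystem queueing warnings
$Agent = (Get-Item SQLSERVER:\SQL\MyJobServer\DEFAULT).JobServer
$Agent.ReadErrorLog() | Where-Object { $_.Text -match "queued" } | Out-GridView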

Today I saw a bunch of warnings in the log to do with PowerShell job steps being queued. This wasn’t a surprise as I had added a whole bunch of new jobs and had not adjusted things from the default values. First I went through and grabbed the default information for each of the subsystems so that I could easily revert if the need arose:

SELECT [subsystem_id]

      ,[subsystem]

      ,[max_worker_threads]

  FROM [msdb].[dbo].[syssubsystems]

[Screenshot: results of the syssubsystems query showing the default max_worker_threads values, with only 2 for the PowerShell subsystem]

 

As you can see here there are only 2 worker threads set for PowerShell. I needed to bump this up to 20 so that it could handle the concurrent job load. I have plenty of memory available on the server, so I’m not concerned with the overhead associated with loading PowerShell either. That overhead is always something you need to keep in mind (oh, and you have to be a sysadmin to be able to update this table).

UPDATE msdb.dbo.syssubsystems

SET max_worker_threads = 20

WHERE subsystem = N'PowerShell'  

 

Checking the value again yields the updated information:

SELECT [subsystem_id]

      ,[subsystem]

      ,[max_worker_threads]

  FROM [msdb].[dbo].[syssubsystems]

[Screenshot: the same syssubsystems query now showing max_worker_threads = 20 for the PowerShell subsystem]

 

Now it’s a simple case of restarting the SQL Server Agent service for the change to take effect.
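If you prefer to do that restart from PowerShell (from an elevated prompt on the server), something like the following works; it assumes a default instance, where the Agent service is named SQLSERVERAGENT (a named instance uses SQLAgent$InstanceName):

# Restart the SQL Server Agent service so the new max_worker_threads value takes effect
Restart-Service -Name 'SQLSERVERAGENT'

# For a named instance the service name includes the instance, e.g.:
# Restart-Service -Name 'SQLAgent$MyInstance'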

Note: Applying Service Packs or Cumulative Updates could potentially reset this value, so make sure you document the change and confirm it after any upgrade (see, for example, http://support.microsoft.com/kb/972759).

Configuring DTC Security In A Cluster With PowerShell

The other day Allan Hirt (blog|twitter) wrote a fantastic post around “How to properly configure DTC for Clustered Instances of SQL Server with Windows Server 2008 R2”. He included a PowerShell script to handle all the funky setup stuff. The only downside was that after that you had to manually go through and configure Network Access and security for the clustered DTC using the GUI.

“There has to be a better way,” I thought. I recalled how security back in 2003 could be set in the registry, so I went digging. Eventually I was able to find the requisite keys under the Distributed Transaction Coordinator resource. From there it was just a case of tying things back so that those keys could be easily found from the SQL resource name and then updating them so that they matched what Allan demonstrated on his page.

I’ve tested it on a couple of my clusters and it’s worked exactly as designed. You just need to change the ServiceName variable so that it matches your clustered service name and then decide what authentication method you want to use (it defaults to the most secure level: Mutual Authentication Required).

Get the code below or download ConfigureMSDTC.ps1

<#

.SYNOPSIS

   Configures MSDTC on a cluster

.DESCRIPTION

   This configures a basic networked config for MSDTC for clustered SQL Server instances.

   Uncomment the transaction manager security setting and enter the correct ServiceName that can be found in FC manager

.PARAMETER <paramName>

   NONE

.EXAMPLE

   NONE

#>

 

$ServiceName = "SQL Server (SOX2)"

 

#--What MSDTC Transaction Manager Security setting is requested? (uncomment one)

$TranManSec = "Mutual" #Mutual Authentication Required

#$TranManSec = "Incoming" #Incoming Called Authentication Required

#$TranManSec = "None" #No Authentication Required

 

 

 

 

#Grab a list of cluster groups

$GroupPath = "HKLM:ClusterGroups"

$GroupList = dir $GroupPath

 

#Iterate through the groups to find the one matching the service name

foreach ($Group in $GroupList)

{

    $GroupChildPath = $Group.PSPath

    if ((Get-ItemProperty -path $GroupChildPath -name Name).Name -eq $ServiceName)

    {

        #we got a match! Now grab a list of the groups in this service

        $ReferencedResources = (Get-ItemProperty -path $GroupChildPath -name Contains).Contains

        foreach ($Resource in $ReferencedResources)

        {

            #Query each of the resources for their type and work with MSDTC

            $ResourcePath = "HKLM:ClusterResources$Resource"

            if ((Get-ItemProperty -path $ResourcePath).Type -eq "Distributed Transaction Coordinator")

            {

                #We found MSDTC resource for that service group, let's configure it

                $SecurityPath = "$ResourcePathMSDTCPRIVATEMSDTCSecurity"

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccess" -value 1

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessClients" -value 0

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessTransactions" -value 0

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessInbound" -value 1

                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessOutbound" -value 1

                Set-ItemProperty -path $SecurityPath -name "LuTransactions" -value 1

 

                #Now configure the authentication method for MSDTC (defaulting to Mutual Auth as it's most secure)

                $SecurityPath = "$ResourcePathMSDTCPRIVATEMSDTC"

                if ($TranManSec -eq "None")

                {

                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 1

                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0

                }

 

                elseif ($TranManSec -eq "Incoming")

                {

                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 1

                }

 

                else 

                {

                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0

                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 1

                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0

                } 

 

            }

        }

    }

}
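If you want to double-check the result afterwards (this read-back isn’t part of the script above), here’s a quick sketch that walks the same registry keys and prints the security values for any clustered DTC resource it finds:

# Read back the MSDTC security settings for every DTC resource on the cluster
dir HKLM:\Cluster\Resources | ForEach-Object {
    if ((Get-ItemProperty -Path $_.PSPath).Type -eq "Distributed Transaction Coordinator")
    {
        Get-ItemProperty -Path "$($_.PSPath)\MSDTCPRIVATE\MSDTC\Security" |
            Select-Object NetworkDtcAccess, NetworkDtcAccessInbound, NetworkDtcAccessOutbound, LuTransactions
    }
}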

 

Querying SQL Error Logs With PowerShell

I’ve noticed it’s a real pain trying to get data out of the SQL Error Logs. You either don’t cycle your files very often and end up with stupid amounts of data in there to sift through, or you cycle them frequently and then end up having to figure out which of the files relates to the data you want. You could use T-SQL to import all the files and grab stuff; alternatively, you can use PowerShell to get the information.

I wanted to take this process a step further, so I created a PowerShell script that will accept servername, start and end date/time parameters and return to you, the kind reader, all the entries in the logs between those times (and just between those times). This is great for when someone asks you to give them information from the log for 3-7 days ago. One quick execution of the script and the data is sitting there in PowerShell grid format, ready to go. Even better, using the grid you can filter the information further and quickly copy/paste it to Excel or some other program.

I’ve pasted the code down below, but it might be easier to download Get-SQLErrorLogs.ps1

This requires the PowerShell SQL snap-ins that come with SQL Server 2008/R2, but it can query versions 2000 and up.

Next steps are to add filtering and options to output to file. I’m sure I’ll get to that eventually.
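In the meantime, if you need file output, one option (not in the script below) is to swap the Out-GridView call at the end of Main for Export-Csv; a minimal sketch, with a made-up output path:

# Instead of piping to Out-GridView (ogv), write the collected rows to a CSV file
$Results | Sort-Object LogDate | Export-Csv -Path "C:\Temp\SQLErrorLogEntries.csv" -NoTypeInformation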

 

<#

.SYNOPSIS

   Retrieves SQL Server error log entries

.DESCRIPTION

   Pulls error log details from SQL Server and outputs them to the client

.PARAMETER <paramName>

   -SQLInstance (required) The SQL Instance to pull the logs from

   -MaxDate (required) The most recent log record to return

   -MinDate (required) the oldest log record to return

.EXAMPLE

   ./Get-SQLErrorLogs -SQLInstance MyInstance -MinDate 2012-05-10 -MaxDate "2012-05-15 09:00"

#>

 

param

(

[string]$SQLInstance = $(Read-Host -prompt "SQL Instance [e.g. MyClusterInstance]"),

[string] $MinDate = $(Read-Host -prompt "Lowest log date entry (yyyy-mm-dd hh:mm)"),

[string] $MaxDate = $(Read-Host -prompt "Highest log date entry (yyyy-mm-dd hh:mm"),

[int] $LogNumber

)

 

cls

 

$CheckDateMin = [datetime]::Parse($MinDate)

$CheckDateMax = [datetime]::Parse($MaxDate)

 

 

Function Main {

    #Load the SQL snapins

    Load-SQLSnapins

 

    #Create a datatable to hold the data

    $Results = New-Object system.Data.DataTable "Results"

    #Setup columns

    $LogDate = New-Object system.Data.DataColumn LogDate,([DateTime])

    $ProcessInfo = New-Object system.Data.DataColumn ProcessInfo,([string])

    $Text = New-Object system.Data.DataColumn Text,([string])

    #Add columns to datatable

    $Results.columns.add($LogDate)

    $Results.columns.add($ProcessInfo)

    $Results.columns.add($Text)

 

    #Another to hold logs read

    $ReadLogs = New-Object system.Data.DataTable "ReadLogs"

    $ReadLogNum = New-Object system.Data.DataColumn LogNumber,([int])

    $ReadLogs.columns.add($ReadLogNum)

 

 

    #Shred the SQL Instance to get the right data

    $SQLName = Shred-SQLName $SQLInstance

    $InstanceName = Shred-SQLInstance $SQLInstance

 

 

    #Grabs SQL Logs by Date

    Get-SQLLogByDate $SQLName $InstanceName $SQLInstance $CheckDateMin $CheckDateMax

 

    $results | Sort-Object LogDate | ogv

 

}

 

 

 

Function Shred-SQLInstance ($SQLInstance) {

 

    $SQLInstance = $SQLInstance.ToUpper()

 

    if ($SQLInstance -ilike "*\*")

    {

        $string = $SQLInstance.Split("\")

        $Instance = $string[1]

        Return $Instance

    }

    else

    {

        $Instance = "DEFAULT"

        Return $Instance

    }

}

 

Function Shred-SQLName ($SQLInstance) {

    $SQLInstance = $SQLInstance.ToUpper()

    if ($SQLInstance -ilike "*\*")

    {

        $string = $SQLInstance.Split("\")

        $Name = $string[0]

        Return $Name

    }

    else

    {

        $Name = $SQLInstance

        Return $Name

    }

}

 

Function Get-SQLLogByDate ($SQLName, $InstanceName, $SQLInstance,$CheckDateMin ,$CheckDateMax) {

 

    IF ($InstanceName -eq "DEFAULT") {$Logs = DIR SQLSERVER:SQL$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

    ELSE {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

 

    $LogCount = ($Logs.EnumErrorLogs() | Measure-Object).Count

    Write-Output "--------------------------------------------------------------------------------------------------------------"

    Write-Output "Checking error logs on server $SQLInstance for logs created between $CheckDateMin & $CheckDateMax"

    Write-Output "--------------------------------------------------------------------------------------------------------------"

    Write-Output "Checking through $LogCount logs, please wait."

    Write-Output "-----------------------------------------------"

 

    $LogList = $Logs.EnumErrorLogs()

    #We'll need to do something hokey here. The date SQL reports for an error log is when the log was closed (cycled),

    #not when it was created. This means we'll have to get the min log and then also read the one before it.

    ForEach ($ErrorLog in $LogList) {

 

        IF ($ErrorLog.CreateDate -ge $CheckDateMin -and $ErrorLog.CreateDate -le $CheckDateMax)

        {

 

            $CurrentLog = $ErrorLog.CreateDate

            $CurrentLogName = $ErrorLog.Name

            Write-Output "Currently reading error log $CurrentLogName dated $CurrentLog..."

            #write the log number to the $ReadLogs datatable

            $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $CurrentLogName; $ReadLogs.Rows.Add($NewLogRow);

            $Logs.ReadErrorLog(($ErrorLog.Name)) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

            % {$NewRow = $Results.NewRow(); 

                $NewRow.LogDate = $_.LogDate; 

                $NewRow.ProcessInfo = $_.ProcessInfo;

                $NewRow.Text = $_.Text;

                $Results.Rows.Add($NewRow);

            }

        }

 

 

 

    }

    #Now we've read all the logs read the one before the newest so we get that extra data 

    #unless the most current log (0) is included in the results already

    #Write-Output $ReadLogs | ft -AutoSize

    $LowestLog = ($ReadLogs | Measure-Object LogNumber -Minimum).Minimum

    IF ($LowestLog -gt 0)

    {

        $LogToRead = $LowestLog - 1

        Write-Output "Currently reading error log $LogToRead..."

        #write the log number to the $ReadLogs datatable

        $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $LogToRead; $ReadLogs.Rows.Add($NewLogRow);

        $Logs.ReadErrorLog($LogToRead) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

        % {$NewRow = $Results.NewRow(); 

            $NewRow.LogDate = $_.LogDate; 

            $NewRow.ProcessInfo = $_.ProcessInfo;

            $NewRow.Text = $_.Text;

            $Results.Rows.Add($NewRow);

        }

    }

    #on the off chance that the dates reside in the current log only pull that one in

    #but only if no other logs have been loaded (edge case)

    IF (!$LowestLog) 

    {

        Write-Output "Currently reading error log 0..."

        $Logs.ReadErrorLog(0) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

        % {$NewRow = $Results.NewRow(); 

            $NewRow.LogDate = $_.LogDate; 

            $NewRow.ProcessInfo = $_.ProcessInfo;

            $NewRow.Text = $_.Text;

            $Results.Rows.Add($NewRow);

        }

    }

}

 

Function Load-SQLSnapins 

{    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerCmdletSnapin100"} 

    if (!$CheckSQLSnapins) 

    {

        Add-PSSnapin SqlServerCmdletSnapin100 

        Write-Host "SqlServerCmdletSnapin100 Loaded"

    }

    else { Write-Host "SqlServerCmdletSnapin100 Already Loaded" }

 

    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerProviderSnapin100"} 

    if (!$CheckSQLSnapins) 

    {

        Add-PSSnapin SqlServerProviderSnapin100 

        Write-Host "SqlServerProviderSnapin100 Loaded"

    }

    else { Write-Host "SqlServerProviderSnapin100 Already Loaded" }

}

 

main

Speedy Log Shipping Setup

As a follow-up to last week's post on quickly restoring multiple databases to default locations, I've got another script which might help you with quick deployments.

Last week I covered restoring the databases; I also needed to get log shipping set up for multiple databases. If you’ve ever had to configure log shipping you know there’s a lot of clicking, typing, and entering of information. It’s a bit of a pain for a single database; when you have to set it up for multiple databases it’s a long, slow, time-consuming nightmare.

I’d had enough of dealing with this so I put together a script that will do the complete configuration of the primary and secondary servers with a single execution. The key to doing this is using SQLCMD mode from within SSMS.

Never used SQLCMD mode before? I’m not surprised; it’s not that well known. To enter SQLCMD mode you either choose the option from the Query menu or add the button to one of your toolbars.

[Screenshot: the SQLCMD Mode option/button in SSMS]

This allows you to do some special things, like parameter substitution and, key to what I’m doing, change connections as a part of the script.

Try the following, replacing PrimaryServer and SecondaryServer with something relevant to your environment.

[Screenshot: a short SQLCMD-mode script that uses :CONNECT to connect to the primary and secondary servers in turn and runs SELECT @@SERVERNAME against each]

As you can see from the script it connected to each server in turn and grabbed the @@SERVERNAME value. That’s pretty special.

My log shipping script does very much the same thing. You just provide the primary and secondary servers, the file locations for the transaction log backups and the name of the database. The script takes care of the rest.

It will perform a very basic check to ensure that the local backup directory exists and error if not. The script also does not encompass a log shipping monitor server and has a default of 15 minutes for the backup/copy/restore processes. All of that could be quickly adjusted and made additional parameters.

View the script below or download SetupLogShipping.sql

Of course this doesn’t answer the question of when or why you would use log shipping, that’s for another post.

:ON ERROR exit

 

:setvar PrimaryServer "ServerA"

:setvar PrimaryDatabase "MyDB"

:setvar PrimaryBackupDirectory "E:\Backup\DBTrxLogs"

:setvar PrimaryBackupShare "\\ServerA\trxlogs$"

:setvar SecondaryServer "ServerB"

:setvar SecondaryDatabase "MyDB"

:setvar SecondaryBackupDirectory "\\ServerB\Trxlogs$\TrxForRestore"

 

SET NOCOUNT ON

/* Begin: Script to be run at Primary */

 

 

:CONNECT $(PrimaryServer)

 

BEGIN

USE msdb;

 

/* Check to see if the backup folder exists */

RAISERROR('Checking to see if $(PrimaryBackupDirectory) exists...',0,1) WITH NOWAIT

DECLARE @CheckDir NVARCHAR(2000)

DECLARE @ErrorMsg NVARCHAR(200)

DECLARE @DirInfo TABLE

    (

      FileExists BIT ,

      FileIsADirectory BIT ,

      ParentDirectoryExists BIT

    )

                                

SET @CheckDir = '$(PrimaryBackupDirectory)'

 

INSERT  INTO @DirInfo

        ( FileExists ,

          FileIsADirectory ,

          ParentDirectoryExists

        )

        EXECUTE [master].dbo.xp_fileexist @CheckDir

 

 

 

IF NOT EXISTS ( SELECT  *

                FROM    @DirInfo

                WHERE   FileExists = 0

                        AND FileIsADirectory = 1

                        AND ParentDirectoryExists = 1 ) 

    BEGIN

        SET @ErrorMsg = 'The primary backup directory $(PrimaryBackupDirectory) does not exist...quitting' 

        RAISERROR(@ErrorMsg,16,1) WITH NOWAIT

        RETURN

    END

RAISERROR('$(PrimaryBackupDirectory) exists, continuing...',0,1) WITH NOWAIT  

 

DECLARE @LS_BackupJobId    AS uniqueidentifier 

DECLARE @LS_PrimaryId    AS uniqueidentifier 

DECLARE @SP_Add_RetCode    As int 

 

RAISERROR('Enabling log shipping for $(PrimaryDatabase) on $(PrimaryServer)...',0,1) WITH NOWAIT

 

EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 

        @database = N'$(PrimaryDatabase)' 

        ,@backup_directory = N'$(PrimaryBackupDirectory)' 

        ,@backup_share = N'$(PrimaryBackupShare)' 

        ,@backup_job_name = N'LSBackup_$(PrimaryDatabase)' 

        ,@backup_retention_period = 4320

        ,@backup_compression = 1

        ,@backup_threshold = 60 

        ,@threshold_alert_enabled = 1

        ,@history_retention_period = 5760 

        ,@backup_job_id = @LS_BackupJobId OUTPUT 

        ,@primary_id = @LS_PrimaryId OUTPUT 

        ,@overwrite = 1 

 

 

IF (@@ERROR = 0 AND @SP_Add_RetCode = 0) 

BEGIN 

 

DECLARE @LS_BackUpScheduleUID    As uniqueidentifier 

DECLARE @LS_BackUpScheduleID    AS int 

 

 

EXEC msdb.dbo.sp_add_schedule 

        @schedule_name =N'LSBackupSchedule_$(PrimaryServer)' 

        ,@enabled = 1 

        ,@freq_type = 4 

        ,@freq_interval = 1 

        ,@freq_subday_type = 4 

        ,@freq_subday_interval = 15 

        ,@freq_recurrence_factor = 0 

        ,@active_start_date = 20120607 

        ,@active_end_date = 99991231 

        ,@active_start_time = 0 

        ,@active_end_time = 235900 

        ,@schedule_uid = @LS_BackUpScheduleUID OUTPUT 

        ,@schedule_id = @LS_BackUpScheduleID OUTPUT 

 

EXEC msdb.dbo.sp_attach_schedule 

        @job_id = @LS_BackupJobId 

        ,@schedule_id = @LS_BackUpScheduleID  

 

EXEC msdb.dbo.sp_update_job 

        @job_id = @LS_BackupJobId 

        ,@enabled = 1 

 

 

END 

 

 

EXEC master.dbo.sp_add_log_shipping_alert_job 

 

EXEC master.dbo.sp_add_log_shipping_primary_secondary 

        @primary_database = N'$(PrimaryDatabase)' 

        ,@secondary_server = N'$(SecondaryServer)' 

        ,@secondary_database = N'$(SecondaryDatabase)' 

        ,@overwrite = 1 

 

 

RAISERROR('Log shipping enabled for $(PrimaryDatabase) on $(PrimaryServer)...',0,1) WITH NOWAIT

END

GO

/* ENDS THE PRIMARY SCRIPT */

 

 

 

/* Script to be run on secondary */

:CONNECT $(SecondaryServer)

USE msdb;

 

DECLARE @LS_Secondary__CopyJobId    AS uniqueidentifier 

DECLARE @LS_Secondary__RestoreJobId    AS uniqueidentifier 

DECLARE @LS_Secondary__SecondaryId    AS uniqueidentifier 

DECLARE @LS_Add_RetCode    As int 

 

RAISERROR('Adding log shipping secondary to $(SecondaryServer) with database name $(SecondaryDatabase)...',0,1) WITH NOWAIT

 

EXEC @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary 

        @primary_server = N'$(PrimaryServer)' 

        ,@primary_database = N'$(PrimaryDatabase)' 

        ,@backup_source_directory = N'$(PrimaryBackupShare)' 

        ,@backup_destination_directory = N'$(SecondaryBackupDirectory)' 

        ,@copy_job_name = N'LSCopy_$(PrimaryServer)_$(PrimaryDatabase)' 

        ,@restore_job_name = N'LSRestore_$(PrimaryServer)_$(PrimaryDatabase)' 

        ,@file_retention_period = 4320 

        ,@overwrite = 1 

        ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT 

        ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT 

        ,@secondary_id = @LS_Secondary__SecondaryId OUTPUT 

 

IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 

BEGIN 

 

DECLARE @LS_SecondaryCopyJobScheduleUID    As uniqueidentifier 

DECLARE @LS_SecondaryCopyJobScheduleID    AS int 

 

RAISERROR('Creating log shipping secondary jobs...',0,1) WITH NOWAIT

 

EXEC msdb.dbo.sp_add_schedule 

        @schedule_name =N'DefaultCopyJobSchedule' 

        ,@enabled = 1 

        ,@freq_type = 4 

        ,@freq_interval = 1 

        ,@freq_subday_type = 4 

        ,@freq_subday_interval = 15 

        ,@freq_recurrence_factor = 0 

        ,@active_start_date = 20120607 

        ,@active_end_date = 99991231 

        ,@active_start_time = 0 

        ,@active_end_time = 235900 

        ,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT 

        ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT 

 

EXEC msdb.dbo.sp_attach_schedule 

        @job_id = @LS_Secondary__CopyJobId 

        ,@schedule_id = @LS_SecondaryCopyJobScheduleID  

 

DECLARE @LS_SecondaryRestoreJobScheduleUID    As uniqueidentifier 

DECLARE @LS_SecondaryRestoreJobScheduleID    AS int 

 

 

EXEC msdb.dbo.sp_add_schedule 

        @schedule_name =N'DefaultRestoreJobSchedule' 

        ,@enabled = 1 

        ,@freq_type = 4 

        ,@freq_interval = 1 

        ,@freq_subday_type = 4 

        ,@freq_subday_interval = 15 

        ,@freq_recurrence_factor = 0 

        ,@active_start_date = 20120607 

        ,@active_end_date = 99991231 

        ,@active_start_time = 0 

        ,@active_end_time = 235900 

        ,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT 

        ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT 

 

EXEC msdb.dbo.sp_attach_schedule 

        @job_id = @LS_Secondary__RestoreJobId 

        ,@schedule_id = @LS_SecondaryRestoreJobScheduleID  

 

 

END 

 

 

DECLARE @LS_Add_RetCode2    As int 

 

 

IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 

BEGIN 

 

EXEC @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database 

        @secondary_database = N'$(SecondaryDatabase)' 

        ,@primary_server = N'$(PrimaryServer)' 

        ,@primary_database = N'$(PrimaryDatabase)' 

        ,@restore_delay = 0 

        ,@restore_mode = 0 

        ,@disconnect_users    = 0 

        ,@restore_threshold = 45   

        ,@threshold_alert_enabled = 1 

        ,@history_retention_period    = 5760 

        ,@overwrite = 1 

 

END 

 

 

IF (@@error = 0 AND @LS_Add_RetCode = 0) 

BEGIN 

 

EXEC msdb.dbo.sp_update_job 

        @job_id = @LS_Secondary__CopyJobId 

        ,@enabled = 1 

 

EXEC msdb.dbo.sp_update_job 

        @job_id = @LS_Secondary__RestoreJobId 

        ,@enabled = 1 

 

END 

 

RAISERROR('Setup of log shipping from database $(PrimaryDatabase) on server $(PrimaryServer) to database $(SecondaryDatabase) on $(SecondaryServer) complete!',0,1) WITH NOWAIT
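Since the whole point is avoiding repetitive setup, one way to drive this for many databases (not part of the script above) is to call sqlcmd.exe in a loop from PowerShell and pass the values in with -v; this assumes sqlcmd.exe is on the path, that the script path shown is just an example, and that you remove the :setvar lines for any values you pass on the command line (the rest can stay in the script as defaults):

# Hypothetical list of databases to configure for log shipping; adjust servers and paths to suit
$Databases = "DB1", "DB2", "DB3"

foreach ($db in $Databases)
{
    sqlcmd -S "ServerA" -E -i "C:\Scripts\SetupLogShipping.sql" `
        -v PrimaryServer="ServerA" PrimaryDatabase="$db" SecondaryServer="ServerB" SecondaryDatabase="$db"
}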

 

 

 

Generic Database Restores

The other day I found myself in the position of needing to restore multiple databases to a server. In this case all the log files would be on one drive and all the data files on another. Rather than sit there and write restore scripts for ~20 databases I figured it would be quicker to create a quick wrapper that I could just pass the filename into and then let nature take its course.

This is very basic, no restore performance tuning enhancements for example, however it does just the job that I need it to.

If I was smart I would have made this a stored procedure and then iterated through the list of backups in the directory using PowerShell, executing the proc for each one. Maybe next time.
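For what it’s worth, here’s a minimal sketch of that approach, assuming the restore logic were wrapped in a hypothetical dbo.RestoreFromBackupFile procedure that takes the backup file name:

# Loop through the backup files in the folder and call the (hypothetical) restore proc for each one
$BackupPath = "D:\SQLBackups"

Get-ChildItem -Path $BackupPath -Filter "*.bak" | ForEach-Object {
    $sql = "EXEC dbo.RestoreFromBackupFile @File = N'$($_.Name)';"
    # 65535 is the maximum allowed query timeout - restores can take a while
    Invoke-Sqlcmd -ServerInstance "MySQLServer" -Database "master" -Query $sql -QueryTimeout 65535
}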

Anyhow, view the script below or download Database Restores Generic.sql

 

SET NOCOUNT ON

 

DECLARE @File NVARCHAR(1000) = N'MyDatabase.bak'

DECLARE @Path NVARCHAR(1000) = N'D:\SQLBackups\'

DECLARE @MDFPath NVARCHAR(1000) = 'E:\MSSQL\Data\'

DECLARE @LDFPath NVARCHAR(1000) = 'L:\MSSQL\Logs\'

 

DECLARE @FullLoc NVARCHAR(2000)= @Path + @File

 

DECLARE @DatabaseName NVARCHAR(128)

DECLARE @RestoreMDF NVARCHAR(2000)

DECLARE @RestoreLDF NVARCHAR(2000)

DECLARE @RestoreCommandFull NVARCHAR(4000)

DECLARE @sqlexec NVARCHAR(4000) 

 

DECLARE @MDFID INT

DECLARE @LDFID INT

DECLARE    @MDFName NVARCHAR(128)

DECLARE @LDFName NVARCHAR(128)

 

 

DECLARE @RestoreFiles TABLE

    (

      ID INT IDENTITY(1,1),

      LogicalName NVARCHAR(128) , PhysicalName NVARCHAR(260) , [Type] CHAR(1) , FileGroupName NVARCHAR(128) , [size] NUMERIC(20, 0) ,

      MAXSIZE NUMERIC(20, 0) , FileID BIGINT , CreateLSN NUMERIC(25, 0) , DropLSN NUMERIC(25, 0) , UniqueID UNIQUEIDENTIFIER ,

      ReadOnlyLSN NUMERIC(25, 0) , ReadWriteLSN NUMERIC(25, 0) , BackupSizeInBytes BIGINT , SourceBlockSize INT , FileGroupID INT ,

      LogGroupGUID UNIQUEIDENTIFIER , DifferentialBaseLSN NUMERIC(25, 0) , DifferentialBaseGUID UNIQUEIDENTIFIER ,

      IsReadOnly BIT , IsPresent BIT , TDEThumbprint VARBINARY(32)

    )

 

DECLARE @RestoreHeader TABLE

    (

        BackupName NVARCHAR(128), BackupDescription NVARCHAR(255), BackupType SMALLINT, ExpirationDate DATETIME,

        Compressed CHAR(1), POSITION SMALLINT, DeviceType TINYINT, UserName NVARCHAR(128), ServerName NVARCHAR(128),

        DatabaseName NVARCHAR(128), DatabaseVersion INT, DatabaseCreationDate DATETIME, BackupSize NUMERIC(20,0),

        FirstLSN NUMERIC(25,0), LastLSN NUMERIC(25,0), CheckpointLSN NUMERIC(25,0), DatabaseBackupLSN NUMERIC(25,0),

        BackupStartDate DATETIME, BackupFinishDate DATETIME, SortORder SMALLINT, [CodePage] SMALLINT, UnicodeLocaleId INT,

        UnicodeComparisonStyle INT, CompatabilityLevel TINYINT, SoftwareVendorId INT, SoftwareVersionMajor INT,

        SoftwareVersionMinor INT, SoftwareVersionBuild INT, MachineName NVARCHAR(128), Flags INT, BindingID UNIQUEIDENTIFIER,

        RecoveryForkID UNIQUEIDENTIFIER, [Collation] NVARCHAR(128), FamilyGUID UNIQUEIDENTIFIER, HasBulkLoggedData BIT,

        IsSnapshot BIT, IsReadOnly BIT, IsSingleUser BIT, HasBackupChecksums BIT, IsDamaged BIT, BeginsLogChain BIT,

        HasIncompleteMetaData BIT, IsForceOffline BIT, IsCopyOnly BIT, FirstRecoveryForkID UNIQUEIDENTIFIER,

        ForkPointLSN NUMERIC(25,0), RecoveryModel NVARCHAR(60), DifferentialBaseLSN NUMERIC(25,0), 

        DifferentialBaseGUID UNIQUEIDENTIFIER, BackupTypeDescription NVARCHAR(60), BackupSetGUID UNIQUEIDENTIFIER,

        CompressedBackupSize BIGINT

        )

 

SELECT @sqlexec = 'RESTORE FILELISTONLY FROM DISK = ''' + @FullLoc + ''''

INSERT INTO @RestoreFiles EXEC (@sqlexec)

 

 

SELECT @sqlexec = 'RESTORE HEADERONLY FROM DISK = ''' + @FullLoc + ''''

INSERT INTO @RestoreHeader EXEC (@sqlexec)

 

SELECT @DatabaseName = DatabaseName FROM @RestoreHeader

 

 

SELECT @MDFID = MIN(ID) FROM @RestoreFiles WHERE [Type] != 'L'

WHILE @MDFID IS NOT NULL

BEGIN

 

 

    IF @MDFID = 1

        BEGIN

            SELECT @RestoreMDF = 'WITH MOVE ' + CHAR(39) + LogicalName + CHAR(39) + ' TO ' + CHAR(39) + @MDFPath + REVERSE(LEFT(REVERSE(PhysicalName), CHARINDEX('\', REVERSE(PhysicalName))-1)) + CHAR(39) + CHAR(13) FROM @RestoreFiles WHERE ID = @MDFID

            END

    ELSE

        BEGIN

            SELECT @RestoreMDF = @RestoreMDF + ', MOVE ' + CHAR(39) + LogicalName + CHAR(39) + ' TO ' + CHAR(39) + @MDFPath + REVERSE(LEFT(REVERSE(PhysicalName), CHARINDEX('\', REVERSE(PhysicalName))-1)) + CHAR(39) FROM @RestoreFiles WHERE ID = @MDFID

        END

 

SELECT @MDFID = MIN(ID) FROM @RestoreFiles WHERE ID > @MDFID AND [Type] != 'L' 

END

 

 

SELECT @LDFID = MIN(ID) FROM @RestoreFiles WHERE [Type] = 'L'

WHILE @LDFID IS NOT NULL

BEGIN

    SELECT @RestoreLDF = ISNULL(@RestoreLDF,'') + ', MOVE ' + CHAR(39) + LogicalName + CHAR(39) + ' TO ' + CHAR(39) + @LDFPath + REVERSE(LEFT(REVERSE(PhysicalName), CHARINDEX('\', REVERSE(PhysicalName))-1)) + CHAR(39) FROM @RestoreFiles WHERE ID = @LDFID

 

SELECT @LDFID = MIN(ID) FROM @RestoreFiles WHERE ID > @LDFID AND [Type] = 'L' 

END

 

 

 

SELECT @RestoreCommandFull = 'RESTORE DATABASE ' + QUOTENAME(@DatabaseName) + ' ' + CHAR(13) + 'FROM DISK = ''' + @FullLoc + '''' + CHAR(13)

SELECT @RestoreCommandFull = @RestoreCommandFull + @RestoreMDF + @RestoreLDF + CHAR(13) + ', NORECOVERY, STATS=20;'

--PRINT @RestoreCommandFull

 

EXEC (@RestoreCommandFull)

 

Changing The Default Email Address In Office 365

Not SQL related today but a good example of PowerShell in action.

I recently migrated my personal email over to Office 365 from Microsoft. The migration itself went very smoothly, but I ran into an issue where the primary email address on the account was showing as an @sirsql.onmicrosoft.com address instead of @sirsql.net. This seriously bugged me and I couldn’t find a way to change it in the Office 365 admin interface.

After some digging I found that you could only make a change such as this using PowerShell, and that you couldn’t do it using the account you want to change (makes sense).

So I created a new admin account just for the purpose and initiated a session in Office 365 (entering the admin users credentials when prompted)

$Livecred = get-credential

$Session = new-pssession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $Livecred -Authentication Basic -AllowRedirection

Import-PSSession $Session

Now I had to find the account, easy enough:

get-mailbox | where-object {$_.Identity -eq "<my account>"} | fl

This gave me all the details, including the PrimarySmtpAddress

[Screenshot: Get-Mailbox output listing the mailbox properties, including PrimarySmtpAddress]

As you can see from the membership properties you can get and set this information, so that’s what I went to do:

set-mailbox <my account> -PrimarySmtpAddress <my account>@sirsql.net

And got an error

A positional parameter cannot be found that accepts argument ‘-PrimarySmtpAddress’.
    + CategoryInfo          : InvalidArgument: (:) [Set-Mailbox], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Set-Mailbox

 

Well this made no sense. No matter what I tried I could not do anything other than a get on the PrimarySmtpAddress. Back to searching, and I discovered that Microsoft has not provided the capability to run a set on that particular member object in Office 365 (although it’s available with Exchange). Way to go there, Microsoft.

After some more digging I was able to get the SMTP address changed by setting the WindowsEmailAddress to the correct value

set-mailbox <my account> -WindowsEmailAddress <my account>@sirsql.net

 

One other note: when using PowerShell to manage O365, where it says to use the Identity, use the Alias for the account instead or you might end up with unintended consequences.

Vendor Support–The Good And Bad

When you go out and buy yourself new hardware or software you have the option of purchasing maintenance agreements at the same time. For software this generally provides the ability to constantly upgrade to the latest and greatest product. For hardware this tends to provide onsite support for when things go wrong and an SLA around that support arriving and the hardware being fixed.

I’ve been dealing with hardware and software vendors for years, so I thought I’d share a couple of stories that really depict excellent service and the stuff that you never want to deal with as a customer.

 

The bad

When I started at one of my previous positions I walked into a Dell shop. If you aren’t familiar with that term, it means that all hardware was purchased through Dell, which gave us steeper discounts on the hardware we bought.

I was put in charge of the Windows team pretty early at this company and we started to go through a hardware refresh. I sat down with the team and started asking questions about how things were with Dell. To a person they liked the hardware and what it delivered however they hated the service. There were common problems with SLAs not being met, the wrong replacement parts being delivered and phone support being unable to provide decent assistance.

I brought these issues to the Dell account rep and explained that we were looking at a fairly significant budget spend the next year on hardware (>$1m) and that I needed to see better results from the support team over there if I was going to spend any of that money with them.

Over the next 6 months I thoroughly documented every engagement with their support staff. These support engagements included:

  • Server down – customer impact
  • Hardware problem, replacement part needed – non-customer impacting
  • General troubleshooting assistance required – non-customer impacting

I’m sad to say that Dell was only able to meet the 4 x 7 x 365 agreement we had for hardware support in 10% of the cases that we opened. Techs would show up late (or not at all), parts would be incorrect even when the tech was onsite in time (techs did not bring the parts; they would be delivered separately), and we would have trouble getting anything above a level 2 tech person on the phone, whose troubleshooting ability seemed to be limited to “have you tried turning it off and back on again”.

This was several years ago and Dell might have significantly improved their support since then, however when I left the company we did not have a single Dell server in any of the three datacenters I had built out.

 

The good

Software has bugs. We all know that and have experienced problems with vendor applications, but what happens when you run into a significant issue and how does the vendor respond?

Recently SQLSentry released a new version of their Performance Advisor for SQL Server tool, which is for monitoring and tuning SQL Server. I performed an upgrade to the new version and resumed monitoring; I didn’t run into any issues or problems.

A couple of days later I got a call from our Windows folks stating they had an alarm on high memory utilization on the monitoring server. I logged in to take a look and was shocked to see the SQLSentry monitoring service had consumed over 5GB of memory. I bounced the service and it reset itself. Over the next couple of days memory usage increased again, causing me to restart the service.

At this point I engaged the support folks, in particular Jason Hall (blog|twitter). We started triaging the issue.

We started up perfmon and captured a few counters to file to try and localize the memory leak. This allowed us to discover the leak was in unmanaged code, making the trouble a lot tougher to track down.

The next step was to install the Windows Debugging Tools from Microsoft. With these deployed and a set of symbols downloaded we used UMDH to capture the before and after log heap allocations for the monitoring service. One comparison log later and we were able to track the issue down to a leak in Microsoft’s managed wrapper for the VDS (Virtual Disk Service) subsystem which is used by SQLSentry to monitor mount points.

I’m running several multi-node, multi-instance SQL Server Failover Clustered Instances and make extensive use of mount points (current count is 136 monitored mount points).

To test and confirm that VDS was the actual issue, one of the SQLSentry development team threw together a very small 50-line application that I could hit a couple of buttons on and watch memory usage. It took a bit of a tired mouse finger, but I was able to verify quickly that VDS was indeed the problem.

Now, fully understanding the problem at hand, the SQLSentry team quickly built their own COM wrapper to handle mount point monitoring and provided me with a new build of the product. I went through a standard deployment and started the services back up again. A week later the service is still running at around 500MB.

 

Throughout the process of problem triage, issue identification and resolution it was a very engaged support process with an appropriate level of urgency for each of the steps. Everything was handled to completion and I have been very happy with the support I received. That’s why my maintenance for this product will be renewed next year. I know that the money spent is worth it.

 

TL;DR

In the past I have spent a lot of money on very high levels of support from Dell and received nothing but poor service. As a result they lost several million dollars of business.

On the flip side I spent a small amount of money on maintenance with SQLSentry and received excellent support and levels of engagement which will help retain me as a long term customer.

 

I’d be interested to hear about your experiences with these vendors, or any others.

SQL Clusters And The Full-Text Search Service

I got an email from a friend this morning asking if I had a clustered resource for the Full-Text Search (FTS) service (actually called the SQL Full-Text Filter Daemon Launcher service). I pondered this for a few seconds and didn’t recall seeing one, so I logged on to one of my 2008 R2 test clusters and took a look. Nope, no FTS resource exists.

This made me a little concerned. If there’s no resource and the service fails or crashes, would the cluster fail over? To test this I logged on to one of the nodes and stopped the service.
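For reference, this can be done from an elevated PowerShell prompt on the node; a minimal sketch, assuming a default instance where the service is named MSSQLFDLauncher (named instances append the instance name, e.g. MSSQLFDLauncher$INSTANCENAME):

# Stop the SQL Full-text Filter Daemon Launcher service on this node and check its state
Stop-Service -Name 'MSSQLFDLauncher'
Get-Service -Name 'MSSQLFDLauncher*'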

Nothing happened.

The service stopped but there was no cluster failover or anything. As the FTS service is critical I thought this would break FTS. To test this out I created a new table, threw in some values and created a new Full-Text Index.

SET NOCOUNT ON;

 

/* Create the table */

CREATE TABLE test

    (

      c1 INT IDENTITY(1, 1) ,

      c2 NVARCHAR(MAX) DEFAULT 'some text'

    );

/* Add the PK */

ALTER TABLE test ADD CONSTRAINT Test_PK PRIMARY KEY CLUSTERED (c1);

 

/* Insert some records so we have something to see */

INSERT  INTO test

        DEFAULT VALUES

GO 500

 

 

/* Create the FT catalog and index */

CREATE FULLTEXT CATALOG [testft] AS DEFAULT;

 

CREATE FULLTEXT INDEX ON dbo.test (c2)

KEY INDEX Test_PK;

 

/* Wait a few seconds for population to take place and then query the FTI */

SELECT TOP 100

        *

FROM    test

WHERE   CONTAINS ( c2, 'text' )

 

 

I was expecting this to not work as I’d killed off the service.

Surprisingly I got results. How had that happened?

I went back to the server and found that the FTS service had started back up again. I killed it, ran the select statement again and still got results.

I read the description of the FTS service which states that it will “launch the full-text filter daemon process”. I went in search of that and killed off the fdhost.exe process and once more ran the query. This time I got an error which appears completely unrelated but is caused by the fdhost.exe process not running.

Msg 30053, Level 16, State 102, Line 1
Word breaking timed out for the full-text query string. This can happen if the wordbreaker took a long time to process the full-text query string, or if a large number of queries are running on the server. Try running the query again under a lighter load.

Now FTS was truly dead but I wanted to see if it could be resurrected without any intervention from me.

By default Full-Text indexes are created with automatic change tracking, so I just added a few more rows to the table to see what would happen.

Change tracking kicked in and SQL Server kicked off the FTS service, which in turn launched the fdhost.exe process. With fdhost.exe running I was then able to run the Full-Text query again and return results.

 

I have to say I was initially freaked out by there not being any kind of resource to manage Full-Text Search but it turns out that you really don’t need one. SQL handles it all behind the scenes for you. Excellent work Microsoft!

 

Reading SQL Server Error Logs Using PowerShell

I was messing around with PowerShell the other day and using it to read SQL Server error logs. It’s actually a pretty trivial thing.

From SQLPS:

$Logs = DIR SQLSERVER:\SQL\LOCALHOST

$Logs.ReadErrorLog()

Quick and easy. Then I tried to read from a clustered instance of SQL Server and ran into an issue

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE

$Logs.ReadErrorLog()

[Screenshot: the error returned when calling ReadErrorLog() against the clustered instance]

This didn’t make a lot of sense; after all, the method was there before. I decided to go and check to see if the method existed:

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE

$Logs | gm

[Screenshot: Get-Member output showing that the ReadErrorLog() method does exist]

Yup, the method exists just fine.

 

I couldn’t figure out what was going on. I enlisted the help of Sean McCown (blog|twitter) who posted a short training video on how to read the error logs on named instances of SQL Server.