
Querying SQL Error Logs With PowerShell

I’ve noticed it’s a real pain trying to get data out of the SQL Server error logs. Either you don’t cycle your files very often and end up with stupid amounts of data in there to sift through, or you cycle them frequently and then have to figure out which of the files holds the data you want. You could use T-SQL to import all the files and grab what you need, or you can use PowerShell to get the information.
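If you go the T-SQL route, xp_readerrorlog is the usual starting point; a quick sketch (the log number and search string here are just illustrative):

-- Log 1 (0 is the current log), type 1 (SQL Server log), filtered to rows containing 'Error'
EXEC master.dbo.xp_readerrorlog 1, 1, N'Error';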

I wanted to take this process a step further, so I created a PowerShell script that accepts server name, start, and end date/time parameters and returns to you, the kind reader, all the entries in the logs between those times (and just between those times). This is great for when someone asks you to give them information from the log for 3-7 days ago. One quick execution of the script and the data is sitting there in a PowerShell grid view, ready to go. Even better, using the grid you can filter the information further and quickly copy/paste it to Excel or some other program.

I’ve pasted the code below, but it might be easier to download Get-SQLErrorLogs.ps1.

This requires the PowerShell SQL snap-ins that come with SQL Server 2008/R2, but it can query versions 2000 and up.

Next steps are to add filtering and options to output to file. I’m sure I’ll get to that eventually.

 

<#

.SYNOPSIS

   Retrieves SQL Server error log entries

.DESCRIPTION

   Pulls error log details from SQL Server and outputs them to the client

.PARAMETER <paramName>

   -SQLInstance (required) The SQL Instance to pull the logs from

   -MaxDate (required) The most recent log record to return

   -MinDate (required) the oldest log record to return

.EXAMPLE

   ./Get-SQLErrorLogs -SQLInstance MyInstance -MinDate 2012-05-10 -MaxDate "2012-05-15 09:00"

#>

 

param

(

[string]$SQLInstance = $(Read-Host -prompt "SQL Instance [e.g. MyClusterInstance]"),

[string] $MinDate = $(Read-Host -prompt "Lowest log date entry (yyyy-mm-dd hh:mm)"),

[string] $MaxDate = $(Read-Host -prompt "Highest log date entry (yyyy-mm-dd hh:mm)"),

[int] $LogNumber

)

 

cls

 

$CheckDateMin = [datetime]::Parse($MinDate)

$CheckDateMax = [datetime]::Parse($MaxDate)

 

 

Function Main {

    #Load the SQL snapins

    Load-SQLSnapins

 

    #Create a datatable to hold the data

    $Results = New-Object system.Data.DataTable "Results"

    #Setup columns

    $LogDate = New-Object system.Data.DataColumn LogDate,([DateTime])

    $ProcessInfo = New-Object system.Data.DataColumn ProcessInfo,([string])

    $Text = New-Object system.Data.DataColumn Text,([string])

    #Add columns to datatable

    $Results.columns.add($LogDate)

    $Results.columns.add($ProcessInfo)

    $Results.columns.add($Text)

 

    #Another to hold logs read

    $ReadLogs = New-Object system.Data.DataTable "ReadLogs"

    $ReadLogNum = New-Object system.Data.DataColumn LogNumber,([int])

    $ReadLogs.columns.add($ReadLogNum)

 

 

    #Shred the SQL Instance to get the right data

    $SQLName = Shred-SQLName $SQLInstance

    $InstanceName = Shred-SQLInstance $SQLInstance

 

 

    #Grabs SQL Logs by Date

    Get-SQLLogByDate $SQLName $InstanceName $SQLInstance $CheckDateMin $CheckDateMax

 

    #Display the sorted results in a grid view
    $Results | Sort-Object LogDate | Out-GridView

 

}

 

 

 

Function Shred-SQLInstance ($SQLInstance) {

 

    $SQLInstance = $SQLInstance.ToUpper()

 

    if ($SQLInstance -ilike "*\*") 

    {

        $string = $SQLInstance.Split("\")

        $Instance = $string[1]

        Return $Instance

    }

    else

    {

        $Instance = "DEFAULT"

        Return $Instance

    }

}

 

Function Shred-SQLName ($SQLInstance) {

    $SQLInstance = $SQLInstance.ToUpper()

    if ($SQLInstance -ilike "*\*") 

    {

        $string = $SQLInstance.Split("\")

        $Name = $string[0]

        Return $Name

    }

    else

    {

        $Name = $SQLInstance

        Return $Name

    }

}

 

Function Get-SQLLogByDate ($SQLName, $InstanceName, $SQLInstance,$CheckDateMin ,$CheckDateMax) {

 

    IF ($InstanceName -eq "DEFAULT") {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

    ELSE {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

 

    $LogCount = ($Logs.EnumErrorLogs() | Measure-Object).Count

    Write-Output "--------------------------------------------------------------------------------------------------------------"

    Write-Output "Checking error logs on server $SQLInstance for logs created between $CheckDateMin & $CheckDateMax"

    Write-Output "--------------------------------------------------------------------------------------------------------------"

    Write-Output "Checking through $LogCount logs, please wait."

    Write-Output "-----------------------------------------------"

 

    $LogList = $Logs.EnumErrorLogs()

    #We'll need to do something hokey here. The CreateDate on a SQL error log is not when the log was created, rather when it was closed out

    #(not what you'd expect). This means we'll have to get the min log and then also read the one before that

    ForEach ($ErrorLog in $LogList) {

 

        IF ($ErrorLog.CreateDate -ge $CheckDateMin -and $ErrorLog.CreateDate -le $CheckDateMax)

        {

 

            $CurrentLog = $ErrorLog.CreateDate

            $CurrentLogName = $ErrorLog.Name

            Write-Output "Currently reading error log $CurrentLogName dated $CurrentLog..."

            #write the log number to the $ReadLogs datatable

            $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $CurrentLogName; $ReadLogs.Rows.Add($NewLogRow);

            $Logs.ReadErrorLog(($ErrorLog.Name)) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

            % {$NewRow = $Results.NewRow(); 

                $NewRow.LogDate = $_.LogDate; 

                $NewRow.ProcessInfo = $_.ProcessInfo;

                $NewRow.Text = $_.Text;

                $Results.Rows.Add($NewRow);

            }

        }

 

 

 

    }

    #Now we've read all the logs read the one before the newest so we get that extra data 

    #unless the most current log (0) is included in the results already

    #Write-Output $ReadLogs | ft -AutoSize

    $LowestLog = ($ReadLogs | Measure-Object LogNumber -Minimum).Minimum

    IF ($LowestLog -gt 0)

    {

        $LogToRead = $LowestLog - 1

        Write-Output "Currently reading error log $LogToRead..."

        #write the log number to the $ReadLogs datatable

        $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $LogToRead; $ReadLogs.Rows.Add($NewLogRow);

        $Logs.ReadErrorLog($LogToRead) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

        % {$NewRow = $Results.NewRow(); 

            $NewRow.LogDate = $_.LogDate; 

            $NewRow.ProcessInfo = $_.ProcessInfo;

            $NewRow.Text = $_.Text;

            $Results.Rows.Add($NewRow);

        }

    }

    #on the off chance that the dates reside in the current log only pull that one in

    #but only if no other logs have been loaded (edge case)

    IF ($null -eq $LowestLog) 

    {

        Write-Output "Currently reading error log 0..."

        $Logs.ReadErrorLog(0) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |

        % {$NewRow = $Results.NewRow(); 

            $NewRow.LogDate = $_.LogDate; 

            $NewRow.ProcessInfo = $_.ProcessInfo;

            $NewRow.Text = $_.Text;

            $Results.Rows.Add($NewRow);

        }

    }

}

 

Function Load-SQLSnapins
{
    $CheckSQLSnapins = Get-PSSnapin | Where-Object {$_.Name -match "SqlServerCmdletSnapin100"} 

    if (!$CheckSQLSnapins) 

    {

        Add-PSSnapin SqlServerCmdletSnapin100 

        Write-Host "SqlServerCmdletSnapin100 Loaded"

    }

    else { Write-Host "SqlServerCmdletSnapin100 Already Loaded" }

 

    $CheckSQLSnapins = Get-PSSnapin | Where-Object {$_.Name -match "SqlServerProviderSnapin100"} 

    if (!$CheckSQLSnapins) 

    {

        Add-PSSnapin SqlServerProviderSnapin100 

        Write-Host "SqlServerProviderSnapin100 Loaded"

    }

    else { Write-Host "SqlServerProviderSnapin100 Already Loaded" }

}

 

main

Speedy Log Shipping Setup

As a follow-up to last week’s post on quickly restoring multiple databases to default locations, I’ve another script which might help you with quick deployments.

Last week I covered restoring the databases; I also needed to get log shipping set up for multiple databases. If you’ve ever had to configure log shipping you know that there’s a lot of clicking, typing, and entering of information. It’s a bit of a pain for a single database; when you have to set it up for multiple databases it’s a long, slow, time-consuming nightmare.

I’d had enough of dealing with this, so I put together a script that will do the complete configuration of the primary and secondary servers with a single execution. The key to doing this is using SQLCMD mode from within SSMS.

Never used SQLCMD mode before? I’m not surprised; it’s not that well known. To enter SQLCMD mode you either choose the option from the Query menu or add the button to one of your toolbars.

[Screenshot: the SQLCMD Mode option in SSMS]

This allows you to do some special things, like parameter substitution and, key to what I’m doing, change connections as a part of the script.

Try the following, replacing PrimaryServer and SecondaryServer with something relevant to your environment.

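Something like this (a minimal sketch; the server names are placeholders):

:CONNECT PrimaryServer
SELECT @@SERVERNAME;
GO

:CONNECT SecondaryServer
SELECT @@SERVERNAME;
GO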

As you can see from the script it connected to each server in turn and grabbed the @@SERVERNAME value. That’s pretty special.

My log shipping script does very much the same thing. You just provide the primary and secondary servers, the file locations for the transaction log backups and the name of the database. The script takes care of the rest.

It will perform a very basic check to ensure that the local backup directory exists and error if not. The script also does not encompass a log shipping monitor server and has a default of 15 minutes for the backup/copy/restore processes. All of that could be quickly adjusted and made additional parameters.

View the script below or download SetupLogShipping.sql

Of course this doesn’t answer the question of when or why you would use log shipping, that’s for another post.

:ON ERROR exit

 

:setvar PrimaryServer "ServerA"

:setvar PrimaryDatabase "MyDB"

:setvar PrimaryBackupDirectory "E:\Backup\DB\TrxLogs"

:setvar PrimaryBackupShare "\\ServerA\trxlogs$"

:setvar SecondaryServer "ServerB"

:setvar SecondaryDatabase "MyDB"

:setvar SecondaryBackupDirectory "\\ServerB\Trxlogs$\TrxForRestore"

 

SET NOCOUNT ON

/* Begin: Script to be run at Primary */

 

 

:CONNECT $(PrimaryServer)

 

BEGIN

USE msdb;

 

/* Check to see if the backup folder exists */

RAISERROR('Checking to see if $(PrimaryBackupDirectory) exists...',0,1) WITH NOWAIT

DECLARE @CheckDir NVARCHAR(2000)

DECLARE @ErrorMsg NVARCHAR(200)

DECLARE @DirInfo TABLE

    (

      FileExists BIT ,

      FileIsADirectory BIT ,

      ParentDirectoryExists BIT

    )

                                

SET @CheckDir = '$(PrimaryBackupDirectory)'

 

INSERT  INTO @DirInfo

        ( FileExists ,

          FileIsADirectory ,

          ParentDirectoryExists

        )

        EXECUTE [master].dbo.xp_fileexist @CheckDir

 

 

 

IF NOT EXISTS ( SELECT  *

                FROM    @DirInfo

                WHERE   FileExists = 0

                        AND FileIsADirectory = 1

                        AND ParentDirectoryExists = 1 ) 

    BEGIN

        SET @ErrorMsg = 'The primary backup directory $(PrimaryBackupDirectory) does not exist...quitting' 

        RAISERROR(@ErrorMsg,16,1) WITH NOWAIT

        RETURN

    END

RAISERROR('$(PrimaryBackupDirectory) exists, continuing...',0,1) WITH NOWAIT  

 

DECLARE @LS_BackupJobId    AS uniqueidentifier 

DECLARE @LS_PrimaryId    AS uniqueidentifier 

DECLARE @SP_Add_RetCode    As int 

 

RAISERROR('Enabling log shipping for $(PrimaryDatabase) on $(PrimaryServer)...',0,1) WITH NOWAIT

 

EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 

        @database = N'$(PrimaryDatabase)' 

        ,@backup_directory = N'$(PrimaryBackupDirectory)' 

        ,@backup_share = N'$(PrimaryBackupShare)' 

        ,@backup_job_name = N'LSBackup_$(PrimaryDatabase)' 

        ,@backup_retention_period = 4320

        ,@backup_compression = 1

        ,@backup_threshold = 60 

        ,@threshold_alert_enabled = 1

        ,@history_retention_period = 5760 

        ,@backup_job_id = @LS_BackupJobId OUTPUT 

        ,@primary_id = @LS_PrimaryId OUTPUT 

        ,@overwrite = 1 

 

 

IF (@@ERROR = 0 AND @SP_Add_RetCode = 0) 

BEGIN 

 

DECLARE @LS_BackUpScheduleUID    As uniqueidentifier 

DECLARE @LS_BackUpScheduleID    AS int 

 

 

EXEC msdb.dbo.sp_add_schedule 

        @schedule_name =N'LSBackupSchedule_$(PrimaryServer)' 

        ,@enabled = 1 

        ,@freq_type = 4 

        ,@freq_interval = 1 

        ,@freq_subday_type = 4 

        ,@freq_subday_interval = 15 

        ,@freq_recurrence_factor = 0 

        ,@active_start_date = 20120607 

        ,@active_end_date = 99991231 

        ,@active_start_time = 0 

        ,@active_end_time = 235900 

        ,@schedule_uid = @LS_BackUpScheduleUID OUTPUT 

        ,@schedule_id = @LS_BackUpScheduleID OUTPUT 

 

EXEC msdb.dbo.sp_attach_schedule 

        @job_id = @LS_BackupJobId 

        ,@schedule_id = @LS_BackUpScheduleID  

 

EXEC msdb.dbo.sp_update_job 

        @job_id = @LS_BackupJobId 

        ,@enabled = 1 

 

 

END 

 

 

EXEC master.dbo.sp_add_log_shipping_alert_job 

 

EXEC master.dbo.sp_add_log_shipping_primary_secondary 

        @primary_database = N'$(PrimaryDatabase)' 

        ,@secondary_server = N'$(SecondaryServer)' 

        ,@secondary_database = N'$(SecondaryDatabase)' 

        ,@overwrite = 1 

 

 

RAISERROR('Log shipping enabled for $(PrimaryDatabase) on $(PrimaryServer)...',0,1) WITH NOWAIT

END

GO

/* ENDS THE PRIMARY SCRIPT */

 

 

 

/* Script to be run on secondary */

:CONNECT $(SecondaryServer)

USE msdb;

 

DECLARE @LS_Secondary__CopyJobId    AS uniqueidentifier 

DECLARE @LS_Secondary__RestoreJobId    AS uniqueidentifier 

DECLARE @LS_Secondary__SecondaryId    AS uniqueidentifier 

DECLARE @LS_Add_RetCode    As int 

 

RAISERROR('Adding log shipping secondary to $(SecondaryServer) with database name $(SecondaryDatabase)...',0,1) WITH NOWAIT

 

EXEC @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary 

        @primary_server = N'$(PrimaryServer)' 

        ,@primary_database = N'$(PrimaryDatabase)' 

        ,@backup_source_directory = N'$(PrimaryBackupShare)' 

        ,@backup_destination_directory = N'$(SecondaryBackupDirectory)' 

        ,@copy_job_name = N'LSCopy_$(PrimaryServer)_$(PrimaryDatabase)' 

        ,@restore_job_name = N'LSRestore_$(PrimaryServer)_$(PrimaryDatabase)' 

        ,@file_retention_period = 4320 

        ,@overwrite = 1 

        ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT 

        ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT 

        ,@secondary_id = @LS_Secondary__SecondaryId OUTPUT 

 

IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 

BEGIN 

 

DECLARE @LS_SecondaryCopyJobScheduleUID    As uniqueidentifier 

DECLARE @LS_SecondaryCopyJobScheduleID    AS int 

 

RAISERROR('Creating log shipping secondary jobs...',0,1) WITH NOWAIT

 

EXEC msdb.dbo.sp_add_schedule 

        @schedule_name =N'DefaultCopyJobSchedule' 

        ,@enabled = 1 

        ,@freq_type = 4 

        ,@freq_interval = 1 

        ,@freq_subday_type = 4 

        ,@freq_subday_interval = 15 

        ,@freq_recurrence_factor = 0 

        ,@active_start_date = 20120607 

        ,@active_end_date = 99991231 

        ,@active_start_time = 0 

        ,@active_end_time = 235900 

        ,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT 

        ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT 

 

EXEC msdb.dbo.sp_attach_schedule 

        @job_id = @LS_Secondary__CopyJobId 

        ,@schedule_id = @LS_SecondaryCopyJobScheduleID  

 

DECLARE @LS_SecondaryRestoreJobScheduleUID    As uniqueidentifier 

DECLARE @LS_SecondaryRestoreJobScheduleID    AS int 

 

 

EXEC msdb.dbo.sp_add_schedule 

        @schedule_name =N'DefaultRestoreJobSchedule' 

        ,@enabled = 1 

        ,@freq_type = 4 

        ,@freq_interval = 1 

        ,@freq_subday_type = 4 

        ,@freq_subday_interval = 15 

        ,@freq_recurrence_factor = 0 

        ,@active_start_date = 20120607 

        ,@active_end_date = 99991231 

        ,@active_start_time = 0 

        ,@active_end_time = 235900 

        ,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT 

        ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT 

 

EXEC msdb.dbo.sp_attach_schedule 

        @job_id = @LS_Secondary__RestoreJobId 

        ,@schedule_id = @LS_SecondaryRestoreJobScheduleID  

 

 

END 

 

 

DECLARE @LS_Add_RetCode2    As int 

 

 

IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 

BEGIN 

 

EXEC @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database 

        @secondary_database = N'$(SecondaryDatabase)' 

        ,@primary_server = N'$(PrimaryServer)' 

        ,@primary_database = N'$(PrimaryDatabase)' 

        ,@restore_delay = 0 

        ,@restore_mode = 0 

        ,@disconnect_users    = 0 

        ,@restore_threshold = 45   

        ,@threshold_alert_enabled = 1 

        ,@history_retention_period    = 5760 

        ,@overwrite = 1 

 

END 

 

 

IF (@@error = 0 AND @LS_Add_RetCode = 0) 

BEGIN 

 

EXEC msdb.dbo.sp_update_job 

        @job_id = @LS_Secondary__CopyJobId 

        ,@enabled = 1 

 

EXEC msdb.dbo.sp_update_job 

        @job_id = @LS_Secondary__RestoreJobId 

        ,@enabled = 1 

 

END 

 

RAISERROR('Setup of log shipping from database $(PrimaryDatabase) on server $(PrimaryServer) to database $(SecondaryDatabase) on $(SecondaryServer) complete!',0,1) WITH NOWAIT

 

 

 

Generic Database Restores

The other day I found myself in the position of needing to restore multiple databases to a server. In this case all the log files would be on one drive and all the data files on another. Rather than sit there and write restore scripts for ~20 databases, I figured it would be quicker to create a quick wrapper that I could just pass the file name into and then let nature take its course.

This is very basic (no restore performance tuning enhancements, for example), however it does just the job that I need it to.

If I was smart I would have made this a stored procedure and then iterated through the list of backups in the directory using PowerShell, executing the proc for each one. Maybe next time.
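For what it’s worth, that loop would look something like this; a sketch only, assuming the restore logic below were wrapped in a hypothetical proc dbo.RestoreDatabaseFromFile that takes the backup file name:

# Hypothetical: dbo.RestoreDatabaseFromFile wraps the restore script below
# and takes the backup file name as its only parameter
Get-ChildItem -Path "D:\SQLBackups" -Filter *.bak | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance "MyServer" -Database master `
        -Query "EXEC dbo.RestoreDatabaseFromFile @File = N'$($_.Name)'"
}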

Anyhow, view the script below or download Database Restores Generic.sql

 

SET NOCOUNT ON

 

DECLARE @File NVARCHAR(1000) = N'MyDatabase.bak'

DECLARE @Path NVARCHAR(1000) = N'D:\SQLBackups\'

DECLARE @MDFPath NVARCHAR(1000) = 'E:\MSSQL\Data\'

DECLARE @LDFPath NVARCHAR(1000) = 'L:\MSSQL\Logs\'

 

DECLARE @FullLoc NVARCHAR(2000)= @Path + @File

 

DECLARE @DatabaseName NVARCHAR(128)

DECLARE @RestoreMDF NVARCHAR(2000)

DECLARE @RestoreLDF NVARCHAR(2000)

DECLARE @RestoreCommandFull NVARCHAR(4000)

DECLARE @sqlexec NVARCHAR(4000) 

 

DECLARE @MDFID INT

DECLARE @LDFID INT

DECLARE    @MDFName NVARCHAR(128)

DECLARE @LDFName NVARCHAR(128)

 

 

DECLARE @RestoreFiles TABLE

    (

      ID INT IDENTITY(1,1),

      LogicalName NVARCHAR(128) , PhysicalName NVARCHAR(260) , [Type] CHAR(1) , FileGroupName NVARCHAR(128) , [size] NUMERIC(20, 0) ,

      MAXSIZE NUMERIC(20, 0) , FileID BIGINT , CreateLSN NUMERIC(25, 0) , DropLSN NUMERIC(25, 0) , UniqueID UNIQUEIDENTIFIER ,

      ReadOnlyLSN NUMERIC(25, 0) , ReadWriteLSN NUMERIC(25, 0) , BackupSizeInBytes BIGINT , SourceBlockSize INT , FileGroupID INT ,

      LogGroupGUID UNIQUEIDENTIFIER , DifferentialBaseLSN NUMERIC(25, 0) , DifferentialBaseGUID UNIQUEIDENTIFIER ,

      IsReadOnly BIT , IsPresent BIT , TDEThumbprint VARBINARY(32)

    )

 

DECLARE @RestoreHeader TABLE

    (

        BackupName NVARCHAR(128), BackupDescription NVARCHAR(255), BackupType SMALLINT, ExpirationDate DATETIME,

        Compressed CHAR(1), POSITION SMALLINT, DeviceType TINYINT, UserName NVARCHAR(128), ServerName NVARCHAR(128),

        DatabaseName NVARCHAR(128), DatabaseVersion INT, DatabaseCreationDate DATETIME, BackupSize NUMERIC(20,0),

        FirstLSN NUMERIC(25,0), LastLSN NUMERIC(25,0), CheckpointLSN NUMERIC(25,0), DatabaseBackupLSN NUMERIC(25,0),

        BackupStartDate DATETIME, BackupFinishDate DATETIME, SortORder SMALLINT, [CodePage] SMALLINT, UnicodeLocaleId INT,

        UnicodeComparisonStyle INT, CompatabilityLevel TINYINT, SoftwareVendorId INT, SoftwareVersionMajor INT,

        SoftwareVersionMinor INT, SoftwareVersionBuild INT, MachineName NVARCHAR(128), Flags INT, BindingID UNIQUEIDENTIFIER,

        RecoveryForkID UNIQUEIDENTIFIER, [Collation] NVARCHAR(128), FamilyGUID UNIQUEIDENTIFIER, HasBulkLoggedData BIT,

        IsSnapshot BIT, IsReadOnly BIT, IsSingleUser BIT, HasBackupChecksums BIT, IsDamaged BIT, BeginsLogChain BIT,

        HasIncompleteMetaData BIT, IsForceOffline BIT, IsCopyOnly BIT, FirstRecoveryForkID UNIQUEIDENTIFIER,

        ForkPointLSN NUMERIC(25,0), RecoveryModel NVARCHAR(60), DifferentialBaseLSN NUMERIC(25,0), 

        DifferentialBaseGUID UNIQUEIDENTIFIER, BackupTypeDescription NVARCHAR(60), BackupSetGUID UNIQUEIDENTIFIER,

        CompressedBackupSize BIGINT

        )

 

SELECT @sqlexec = 'RESTORE FILELISTONLY FROM DISK = ''' + @FullLoc + ''''

INSERT INTO @RestoreFiles EXEC (@sqlexec)

 

 

SELECT @sqlexec = 'RESTORE HEADERONLY FROM DISK = ''' + @FullLoc + ''''

INSERT INTO @RestoreHeader EXEC (@sqlexec)

 

SELECT @DatabaseName = DatabaseName FROM @RestoreHeader

 

 

SELECT @MDFID = MIN(ID) FROM @RestoreFiles WHERE [Type] != 'L'

WHILE @MDFID IS NOT NULL

BEGIN

 

 

    IF @MDFID = 1

        BEGIN

            SELECT @RestoreMDF = 'WITH MOVE ' + CHAR(39) + LogicalName + CHAR(39) + ' TO ' + CHAR(39) + @MDFPath + REVERSE(LEFT(REVERSE(PhysicalName), CHARINDEX('\', REVERSE(PhysicalName)) - 1)) + CHAR(39) + CHAR(13) FROM @RestoreFiles WHERE ID = @MDFID

        END

    ELSE

        BEGIN

            SELECT @RestoreMDF = @RestoreMDF + ', MOVE ' + CHAR(39) + LogicalName + CHAR(39) + ' TO ' + CHAR(39) + @MDFPath + REVERSE(LEFT(REVERSE(PhysicalName), CHARINDEX('\', REVERSE(PhysicalName)) - 1)) + CHAR(39) FROM @RestoreFiles WHERE ID = @MDFID

        END

 

SELECT @MDFID = MIN(ID) FROM @RestoreFiles WHERE ID > @MDFID AND [Type] != 'L' 

END

 

 

SELECT @LDFID = MIN(ID) FROM @RestoreFiles WHERE [Type] = 'L'

WHILE @LDFID IS NOT NULL

BEGIN

    SELECT @RestoreLDF = ISNULL(@RestoreLDF,'') + ', MOVE ' + CHAR(39) + LogicalName + CHAR(39) + ' TO ' + CHAR(39) + @LDFPath + REVERSE(LEFT(REVERSE(PhysicalName), CHARINDEX('\', REVERSE(PhysicalName)) - 1)) + CHAR(39) FROM @RestoreFiles WHERE ID = @LDFID

 

SELECT @LDFID = MIN(ID) FROM @RestoreFiles WHERE ID > @LDFID AND [Type] = 'L' 

END

 

 

 

SELECT @RestoreCommandFull = 'RESTORE DATABASE ' + QUOTENAME(@DatabaseName) + ' ' + CHAR(13) + 'FROM DISK = ''' + @FullLoc + '''' + CHAR(13)

SELECT @RestoreCommandFull = @RestoreCommandFull + @RestoreMDF + @RestoreLDF + CHAR(13) + ', NORECOVERY, STATS=20;'

--PRINT @RestoreCommandFull

 

EXEC (@RestoreCommandFull)

 

SQL Clusters And The Full-Text Search Service

I got an email from a friend this morning asking if I had a clustered resource for the Full-Text Search (FTS) service (actually called the SQL Full-Text Filter Daemon Launcher service). I pondered this for a few seconds and didn’t recall seeing one, so I logged on to one of my 2008 R2 test clusters and took a look. Nope, no FTS resource exists.

This made me a little concerned. If there’s no resource and the service fails or crashes, would the cluster fail over? To test this I logged on to one of the nodes and stopped the service.

Nothing happened.

The service stopped but there was no cluster failover or anything. As the FTS service is critical I thought this would break FTS. To test this out I created a new table, threw in some values and created a new Full-Text Index.

SET NOCOUNT ON;

 

/* Create the table */

CREATE TABLE test

    (

      c1 INT IDENTITY(1, 1) ,

      c2 NVARCHAR(MAX) DEFAULT 'some text'

    );

/* Add the PK */

ALTER TABLE test ADD CONSTRAINT Test_PK PRIMARY KEY CLUSTERED (c1);

 

/* Insert some records so we have something to see */

INSERT  INTO test

        DEFAULT VALUES

GO 500

 

 

/* Create the FT catalog and index */

CREATE FULLTEXT CATALOG [testft] AS DEFAULT;

 

CREATE FULLTEXT INDEX ON dbo.test (c2)

KEY INDEX Test_PK;

 

/* Wait a few seconds for population to take place and then query the FTI */

SELECT TOP 100

        *

FROM    test

WHERE   CONTAINS ( c2, 'text' )

 

 

I was expecting this to not work as I’d killed off the service.

Surprisingly I got results. How had that happened?

I went back to the server and found that the FTS service had started back up again. I killed it, ran the select statement again and still got results.

I read the description of the FTS service which states that it will “launch the full-text filter daemon process”. I went in search of that and killed off the fdhost.exe process and once more ran the query. This time I got an error which appears completely unrelated but is caused by the fdhost.exe process not running.

Msg 30053, Level 16, State 102, Line 1
Word breaking timed out for the full-text query string. This can happen if the wordbreaker took a long time to process the full-text query string, or if a large number of queries are running on the server. Try running the query again under a lighter load.

Now FTS was truly dead but I wanted to see if it could be resurrected without any intervention from me.

By default Full-Text indexes are created with automatic change tracking, so I just added a few more rows to the table to see what would happen.
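Nothing fancy, just more of the same (a quick sketch; the row count is arbitrary):

INSERT INTO test
        DEFAULT VALUES
GO 50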

Change tracking kicked in and SQL Server kicked off the FTS service, which in turn launched the fdhost.exe process. With fdhost.exe running I was then able to run the Full-Text query again and return results.

 

I have to say I was initially freaked out by there not being any kind of resource to manage Full-Text Search but it turns out that you really don’t need one. SQL handles it all behind the scenes for you. Excellent work Microsoft!

 

Reading SQL Server Error Logs Using PowerShell

I was messing around with PowerShell the other day and using it to read SQL Server error logs. It’s actually a pretty trivial thing.

From SQLPS:

$Logs = DIR SQLSERVER:\SQL\LOCALHOST

$Logs.ReadErrorLog()

Quick and easy. Then I tried to read from a clustered instance of SQL Server and ran into an issue

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE

$Logs.ReadErrorLog()

[Screenshot: error thrown when calling ReadErrorLog() against the clustered instance]

This didn’t make a lot of sense; after all, the method was there before. I decided to check whether the method existed:

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE

$Logs | gm

[Screenshot: Get-Member output listing the ReadErrorLog method]

Yup, the method exists just fine.

 

I couldn’t figure out what was going on. I enlisted the help of Sean McCown (blog|twitter) who posted a short training video on how to read the error logs on named instances of SQL Server.
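I won’t rehash the video here, but one gotcha worth sketching: if that DIR returns more than one instance, $Logs is an array, and calling the method on the array fails even though Get-Member (which unrolls the collection) still shows it. Looping over the instances works around that:

$Logs = DIR SQLSERVER:\SQL\CLUSTEREDINSTANCE
foreach ($Instance in $Logs) { $Instance.ReadErrorLog() }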

Who Owns Your Databases And Jobs?

Quick question, who owns your databases? And how about your jobs? Don’t know? Go find out…now find out without using the GUI.

Querying sys.databases will give you the owner_sid. The trouble is that you have to relate that owner_sid back to a user. The same goes for sysjobs; that SID needs to relate to someone.

You could try joining against sys.server_principals, however if a user doesn’t exist or is part of a domain group they won’t show up and you’ll just get a NULL back. Try the following example:

SELECT  d.NAME ,

        sp.NAME

FROM    sys.databases d

        LEFT JOIN sys.server_principals sp ON d.owner_sid = sp.sid;

        

SELECT  s.NAME ,

        sp.NAME

FROM    msdb.dbo.sysjobs s

        LEFT OUTER JOIN sys.server_principals sp ON s.owner_sid = sp.sid ;     

 

Now what? How can you quickly and easily get the owner? A little-known system function comes to the rescue.

SUSER_SNAME (server_user_sid) will return the login name associated with a SID. Using this you can very quickly grab the information about who owns what without having to play with joins or get incomplete results:

SELECT  name ,

        SUSER_SNAME(owner_sid)

FROM    sys.databases;

 

SELECT  name ,

        SUSER_SNAME(owner_sid)

FROM    msdb.dbo.sysjobs;

 

There are some great little hidden gems lying around in SQL Server like this. You just need to find them.

SQL2012 It’s The Small Things Pt4–User Defined Server Roles

Server roles have been around since before I started working with SQL Server; however, these roles have always been strictly defined, with no ability to alter them or create new ones. SQL Server 2012 finally changes that and gives you the opportunity to create server-level roles that let you grant greater permissions to logins without constantly dealing with the minutiae of managing them on a login-by-login basis.

Let’s say for example you wanted to grant several logins the ability to view any definition. Normally this would require you to perform those grants at a very granular level. Now you can just create a server role and drop logins into that role:

USE [master]

GO

 

CREATE SERVER ROLE [SrvViewDefs]

GO

 

use [master]

GO

 

GRANT VIEW ANY DEFINITION TO [SrvViewDefs]

GO

 

ALTER SERVER ROLE [SrvViewDefs] ADD MEMBER [sirsql\Joe];

ALTER SERVER ROLE [SrvViewDefs] ADD MEMBER [sirsql\Biggles];

 

That’s some quick and easy management.

Here are some examples of other permissions that can be quickly and easily provisioned to multiple logins via user-defined server roles:

  • Alter any linked server
  • Alter any login
  • Alter any server audit
  • Alter any server role
  • Alter resources
  • Alter server state
  • Alter settings
  • View any database
  • Shutdown SQL Server

It goes beyond basic SQL Server permissions; you can also allow server roles to impersonate logins or manage Availability Groups:

USE [master]

GO

 

CREATE SERVER ROLE [SrvImpersonate]

GO

 

use [master]

GO

 

GRANT IMPERSONATE ON LOGIN::[SIRSQL\nic] TO [SrvImpersonate]

GO

 

ALTER SERVER ROLE [SrvImpersonate] ADD MEMBER [sirsql\Biggles];

 

Anything that allows me to manage SQL Server more efficiently makes me happy. User defined server roles certainly make me happy.

SQL2012 It’s The Small Things Pt3–End Of The Month Function

If you work with financial data or have to do regular reporting then I’m sure you currently use some kind of funky function/case statement to figure out what the last day of the month is, and something even more funky to find the last day of next month or the month after.
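For reference, the pre-2012 approach typically looked something like this (one common variant):

-- Last day of the current month, the old way
SELECT DATEADD(DAY, -1, DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) + 1, 0));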

SQL Server 2012 fixes that for you with the introduction of the EOMONTH function.

It’s very simple to use:

SELECT EOMONTH('2012/03/16')

[Result: 2012-03-31]

 

Even better, the EOMONTH function can accept an additional parameter where you can specify a number of months to add or remove to get relevant results:

SELECT EOMONTH('2012/03/16', -1)

[Result: 2012-02-29]

 

It even handled the leap year correctly.

Simplicity itself. A great addition.

SQL2012 It’s The Small Things Pt2–Unified Security Experience

Back when SQL Server 2005 came out there was a big change to the way that logins were handled: no more stored procedures to create a SQL login or to allow a Windows user or group to connect to SQL. It also brought the ability for SQL logins to have the same kind of policy restrictions as you have in Active Directory.
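That 2005-era syntax, for reference (the login name and password here are illustrative):

CREATE LOGIN [AppUser] WITH PASSWORD = N'Str0ng!Passw0rd',
    CHECK_POLICY = ON, CHECK_EXPIRATION = ON;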

Despite these huge changes, for some reason Microsoft did not implement the same kind of changes when it came to roles. You still had to use stored procedures to grant role membership, which led to an experience where you had to manage the syntax differently:

USE Master;

CREATE LOGIN [sirsql\DBAs] from windows WITH DEFAULT_DATABASE = TempDB;

 

USE tempdb;

CREATE USER [sirsql\DBAs];

 

CREATE ROLE RoleTest;

exec sp_addrolemember 'RoleTest', 'sirsql\DBAs';

exec sp_droprolemember 'RoleTest', 'sirsql\DBAs';

 

With SQL 2012 this old stored-procedure way of managing things has finally gone by the wayside, and we now manage roles in a similar fashion to logins:

USE Master;

CREATE LOGIN [sirsql\DBAs] from windows WITH DEFAULT_DATABASE = TempDB;

 

USE tempdb;

CREATE USER [sirsql\DBAs];

 

CREATE ROLE RoleTest;

ALTER ROLE RoleTest ADD MEMBER [sirsql\DBAs];

ALTER ROLE RoleTest DROP MEMBER [sirsql\DBAs];

 

This doesn’t just work with database roles. Also gone is the sp_addsrvrolemember syntax (which for some reason had you specify the login first and then the role, the opposite of sp_addrolemember):

ALTER SERVER ROLE DiskAdmin ADD MEMBER [sirsql\DBAs];

ALTER SERVER ROLE DiskAdmin DROP MEMBER [sirsql\DBAs];

 

Managing roles this way makes a lot more sense than trying to do things via stored procedures. In an effort to ensure that old code still works, Microsoft has left the old stored procedures in, but deprecated them, so don’t expect them to work in the next release. Anyway, wouldn’t you rather retool your code so that your security experience is more unified?

SQL2012 It’s The Small Things–Default Schemas For Groups

If you’re reading this then I’m sure you know all about SQL 2012, which just went RTM and will be available for general release on April 1st. I’m sure you’ve also heard all about some of the big new changes such as AlwaysOn with readable secondaries, ColumnStore indexes, and FileTables. Missed in all the big announcements are some of the smaller things that make a huge difference in the day-to-day running of SQL Server. I’m going to put together a few posts outlining some of the smaller, but to me equally significant, changes in the new version.

 

Back in the day there was no facility to provide a default schema to a Windows group; you would actually get an error were you to try:

USE Master;

CREATE LOGIN [sirsql\DBAs] from windows WITH DEFAULT_DATABASE = TempDB;

 

USE tempdb;

CREATE USER [sirsql\DBAs];

 

ALTER USER [sirsql\DBAs] with DEFAULT_SCHEMA = dbo;

 

[Screenshot: error stating that the DEFAULT_SCHEMA clause cannot be used with a Windows group]

 

This script now works in SQL 2012 and defines the default schema:

[Screenshot: the same ALTER USER statement completing successfully on SQL 2012]

 

If we check out the properties of the user we can see that the default schema has been set to dbo:

[Screenshot: user properties showing the default schema set to dbo]
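If you’d rather not trust the GUI, a quick query confirms it (a sketch using the same user as above):

SELECT name, default_schema_name
FROM sys.database_principals
WHERE name = N'sirsql\DBAs';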

 

 

This is a very welcome addition. Prior to SQL 2012 you would have to create a separate login and user for each member of a Windows group and then set the default schema for each; that’s a lot of administrative hassle eliminated.