Tag: SQL

Do You Trust Your Application Admins?

I was sitting at my desk, happily minding my own business, when an alert came through that a database backup had failed. OK, backups fail; I just figured one of the transaction log backups had hiccupped (we’d been having some problems the last few days).

When I looked at the failure, I saw it was a backup trying to write to the C: drive on the server.

I NEVER back up to C:. It can easily fill the drive and take down the system.

A bigger indicator that something was up: all of our backups are written across a 10Gb network to a centralized location for ease of tape backup. This meant that someone, not a DBA, had the access to run a SQL backup.

I trawled through the permissions on the server and nobody had that level of access, so I couldn’t figure out who had done this or how.

So What Happened?

Looking through the SQL logs I saw multiple failed attempts by a contractor to log in to SQL, and then, about 5 minutes later, the backup error came through. Interesting stuff, so I walked over to the contractor and asked what was going on.

After he was unable to log in he went to the application admin, who helped him out with access…using the application service account.

One of the third-party applications from some unnamed vendor has a database on that server. Thanks to the nature of its well designed code, the database owner has to be the same as the service account of the application, and the application admin knows this password (not my doing).

After logging this contractor in as the application service account, the app admin walked away and left him to his own devices. As a result the contractor was dbo on a database which manages security for the entire company. We should consider ourselves lucky that all this guy did was attempt to perform a backup.

Preventative Actions

In order to try to prevent this kind of thing in the future I am looking at implementing a logon trigger for the service account that checks the host and application connecting, and denies access to anything not on a specifically approved list. There is also a conversation going on about disabling interactive logons for the service account via group policy at the domain level.
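To give an idea of the shape of that trigger, here’s a minimal sketch. Every name in it is a placeholder, HOST_NAME() and APP_NAME() are client-supplied values that can be spoofed (so this is a speed bump, not a guarantee), and you should test carefully in a sandbox first since a faulty logon trigger can lock everybody out:

CREATE TRIGGER trg_RestrictServiceAccount
ON ALL SERVER
FOR LOGON
AS
BEGIN
    -- Placeholder names: adjust the login, hosts and application for your environment
    IF ORIGINAL_LOGIN() = N'DOMAIN\AppServiceAccount'
        AND ( HOST_NAME() NOT IN (N'AppServer01', N'AppServer02') -- approved hosts
              OR APP_NAME() NOT LIKE N'ApprovedApp%' )            -- approved application
    BEGIN
        ROLLBACK; -- refuse the connection
    END
END;
GO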


It is a Matter of Trust

While the application admin is obviously at serious fault here, it raises the question: how well do you trust your admin team?

Domain admins will be able to access your SQL Servers (get over it, there is no way you can keep them out; if they really want in there are numerous ways for them to do so).

Anyone with a password could share that with someone else and allow them to access your servers.

Ultimately you have to trust those that you work with to do the right thing. It’s always sad when those people let you down.

Extended Properties Are Your Friend

It’s nice to have friends, so why aren’t you a friend of extended properties? They can make your life so much easier by helping to document your databases and objects.

Take a basic table create statement:

CREATE TABLE [dbo].[EatSomeBoogers](
    [ID] [int] NOT NULL,
    [BgType] [varchar](20) NOT NULL,
    [Size] [smallint] NULL,
    [Viscosity] [tinyint] NULL
) ON [PRIMARY]
GO

Pretty simple, and we can infer a lot of information about the table. But take the Size column: are we measuring in inches, feet, yards?

We can add extended properties that will help to provide that sort of guidance for someone to reference later on.

EXEC sys.sp_addextendedproperty @name = N'Details',
    @value = N'Size is measured in mm', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers', @level2type = N'COLUMN',
    @level2name = N'Size'
GO

We can also add properties at the table level:

EXEC sys.sp_addextendedproperty @name = N'Purpose',
    @value = N'Holds information about all the gold digging',
    @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers'
GO

And at the database level:

EXEC [MyDB].sys.sp_addextendedproperty @name = N'Usage',
    @value = N'Will handle all information as relates to digging'
GO

You can even add extended properties to other objects, like stored procedures:

EXEC sys.sp_addextendedproperty @name = N'ProcUsage',
    @value = N'Gets booger sizes and types', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'PROCEDURE',
    @level1name = N'GrabBoogers'
GO

What’s great is that you can then quickly and easily query the extended properties for your objects:

SELECT  OBJECT_NAME(ep.major_id) AS ObjectName ,
        CASE WHEN c.name IS NOT NULL THEN 'COLUMN'
             ELSE o.type_desc
        END AS ExtendedPropertyType ,
        c.name AS ColumnName ,
        ep.name AS ExtendedPropertyName ,
        ep.value AS ExtendedPropertyValue
FROM    sys.extended_properties ep
        LEFT JOIN sys.columns c ON ep.major_id = c.object_id
                                   AND ep.minor_id = c.column_id
        LEFT JOIN sys.objects o ON ep.major_id = o.object_id
ORDER BY ObjectName, ColumnName


This gives you a really quick and easy way to document your objects. I highly recommend that during your next development project you make life easy for yourself and add extended properties to your objects (and trust me, your DBA will thank you).
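Should a value need revising later, there are companion procedures, sys.sp_updateextendedproperty and sys.sp_dropextendedproperty, that take the same addressing parameters. For example, against the same column as above:

EXEC sys.sp_updateextendedproperty @name = N'Details',
    @value = N'Size is measured in cm', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers', @level2type = N'COLUMN',
    @level2name = N'Size'
GO

EXEC sys.sp_dropextendedproperty @name = N'Details', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers', @level2type = N'COLUMN',
    @level2name = N'Size'
GO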

How Do You Provide Passwords To Your Users?

Passwords are a necessary evil and there are times when you have to hand out a password for a SQL login (because the POS application doesn’t support Windows Authentication). Traditionally I’ve done this by sending one email to the user with the login and a separate one with the password, figuring that internal security controls would be good enough to prevent anyone from accessing both emails. Recently it came to light that all emails were being siphoned off, and that an information security team had access to everything that traversed our Exchange servers. Now, I’m not saying that I don’t trust these guys, but there’s no way in hell I would ever let them get one of my passwords.

I needed to come up with a better solution for getting passwords to users, one with a pretty good level of security around it. Yes, I know the password can easily be set to force a change at the next login; however, this does not work in a lot of cases where the login will be used by an application and the person doing the configuration doesn’t have the knowledge or tools to go in and change the password themselves.
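For reference, forcing the change looks like the following (the login name and password here are made up, and MUST_CHANGE requires CHECK_EXPIRATION and CHECK_POLICY to be ON):

-- Hypothetical login; the user must set a new password at their next login
ALTER LOGIN [ReportUser] WITH PASSWORD = N'T3mporary!Passw0rd' MUST_CHANGE,
    CHECK_EXPIRATION = ON, CHECK_POLICY = ON
GO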

I decided that I wanted a two-factor-authentication-style method that would limit the availability of a password and provide the information once, and once only, so that it would never be stored for a long period of time.

First I created a table to hold the password and a unique identifier, and nothing else. I deliberately didn’t store a login name along with this data; that way, even if someone got access to the password they wouldn’t know which login it was for.

CREATE TABLE [dbo].[PwInfo]
    (
      [AuthenticationID] [uniqueidentifier] NULL ,
      [NewPwd] [varchar](128) NULL
    )
ON  [PRIMARY]
GO

ALTER TABLE [dbo].[PwInfo] ADD DEFAULT (NEWID()) FOR [AuthenticationID]
GO

Now I needed a quick and easy way to retrieve a password once it had been entered, so I wrote a procedure that accepts the AuthenticationID, returns the password and then deletes the entry.

CREATE PROCEDURE [dbo].[GetPwdByAuthenticationID]
    @AuthenticationID UNIQUEIDENTIFIER
AS
    SET NOCOUNT ON

    DECLARE @NewPwd VARCHAR(128)

    SELECT  @NewPwd = NewPwd
    FROM    dbo.PwInfo
    WHERE   AuthenticationID = @AuthenticationID

    DELETE  FROM dbo.PwInfo
    WHERE   AuthenticationID = @AuthenticationID

    SELECT  @NewPwd
GO

Finally I added a proc which accepts a password, adds it to the table and then returns the information needed to retrieve it later.

CREATE PROCEDURE [dbo].[AddNewPassword] @NewPwd VARCHAR(128)
AS
    SET NOCOUNT ON

    DECLARE @AuthIDTbl TABLE
        (
          AuthenticationID UNIQUEIDENTIFIER
        )

    INSERT  INTO dbo.PwInfo
            ( NewPwd )
    OUTPUT  INSERTED.AuthenticationID
            INTO @AuthIDTbl
    VALUES  ( @NewPwd )

    DECLARE @AuthenticationID VARCHAR(128)
    DECLARE @Msg VARCHAR(4000)

    SELECT  @AuthenticationID = AuthenticationID
    FROM    @AuthIDTbl

    SELECT  @Msg = 'Password added. Add the user''s AD account to the report folder (http://ReportingServices/Reports/Pages/Folder.aspx?ItemPath=%2fDBA+Reports%2fUser+Password+Information) and remove it once they have pulled the data.

Send the following to the user:

For your security purposes please visit the following URL in your browser http://ReportingServices/Reports/Pages/Report.aspx?ItemPath=%2fDBA+Reports%2fUser+Password+Information%2fGet+Password and enter the authentication ID of '
            + @AuthenticationID
            + '

This is a one time use authentication token, if you need the password information again you will need to contact the production DBA team.
'

    PRINT @Msg
GO

If you read through this code you’ll see that it outputs a message that provides a couple of links to Reporting Services. This is where the extra authentication comes in.

Within Reporting Services I created a report which calls the GetPwdByAuthenticationID proc and returns just the password (nothing more). This report lives in its own subfolder, which is only accessible by the DBA team.

Here’s how it works:

A user requests a password from us; we pull that password from our secure repository (highly encrypted) and use dbo.AddNewPassword to add it to the table. We get back a message which we use to email the requestor; it contains the URL and the AuthenticationID they need to enter into SSRS to get the information out. We then go to SSRS and grant the user Browser permissions on the folder, allowing them to run the report and get back the password. Once they have retrieved the password we remove the user from the folder, closing down access once more.
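In terms of the procs above, the DBA side of the exchange boils down to two calls (the password and the GUID here are invented):

-- DBA: stage the password; the PRINT output contains the URL and AuthenticationID
EXEC dbo.AddNewPassword @NewPwd = 'S0me$ecretValue'
GO

-- What the SSRS report effectively runs when the user supplies their AuthenticationID;
-- it returns the password once and deletes the row
EXEC dbo.GetPwdByAuthenticationID @AuthenticationID = 'D95A621D-8A34-4B3C-9F28-1C44D9E0A111'
GO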

This provides several layers of security:

  • The user must be logged in to AD with their own account to be able to access the report
  • The user must have the AuthenticationID provided by the DBA team to get the password
  • The password has a one-time access restriction, meaning the process cannot be repeated
  • The login is never given along with the password, and the two are never stored together by the DBA team except in a highly encrypted password vault inaccessible to anyone but the DBAs


I feel this is a much better solution than sending passwords via email, and considering it only took an hour to put together I figure it was a very worthwhile use of my time. Sure, there are improvements that could be made around automating the access grants and user notifications, but as a quick win I think this does a good job.


I’m interested to know what solutions you other folks might be using for those times when you need to provide password information to users. Please comment below.

Comparing Indexes Using PowerShell

When you have data replicated out to multiple machines, or are working on migrations, it can be a challenge to ensure that the indexes are in sync, and out-of-sync indexes quickly lead to query performance issues. I found the need to check and sync up indexes myself the other day, and I didn’t have any third-party software installed to help me do the compare. Yet again PowerShell came to the rescue: I ended up putting together a script that just kind of grew and grew, so this morning I decided to stop and put it out for anyone to use.

The script accepts a few parameters:

  • Server1 – The source server that you’ll use as your source of index goodness
  • Server2 – The destination that you want to compare (this can be the same server)
  • Database1 – The source database that contains the tables and indexes
  • Database2 – (optional) The destination database to check; if this isn’t passed it defaults to the same value as Database1
  • OutputScriptLocation – The path and filename for the output SQL script file which you can run to create the indexes
  • Verbose – A switch which will dump lots of information to the screen, which can be useful as it will also tell you if there are missing tables on the destination


I decided not to do the actual index creation within the script, as it’s good to be able to check the output and be sure it’s what you want. I did, however, format the output script to be as easy as possible to run: just open it in SSMS, switch to SQLCMD mode, and execute. The script will connect to your destination server and the right database and then create the indexes, printing a message as each index is created (very useful for those times when you are creating a lot of indexes and some of them are large).
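For illustration, the generated file ends up looking something like this (the instance, database, table and index names are invented):

:ON ERROR exit
:setvar SQLInstance "RemoteServer\Instance1"
:setvar Database "MyDB"
:CONNECT $(SQLInstance)

USE $(Database);

CREATE NONCLUSTERED INDEX [IX_Orders_CustomerID] ON [dbo].[Orders] ([CustomerID] ASC);
RAISERROR('Index dbo.Orders.IX_Orders_CustomerID created...',0,1) WITH NOWAIT;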


I’ve tested it on a few servers and it’s worked out great for me; let me know if you run into any problems running it or encounter strange bugs.

Download Compare-Indexes.ps1 or get the code below.

<#
.SYNOPSIS
   Script can be used to compare indexes on tables on different servers/databases.
.DESCRIPTION
   When executed the script will check the passed-in databases on different (or the same) servers and output a SQLCMD script which
   can be executed to add the missing indexes.
.PARAMETER <paramName>
   -Server1 The source SQL Instance that is used as the master
   -Server2 The destination SQL Instance that will be the target for additional indexes
   -Database1 The source database
   -Database2 The destination database (leave blank if the source and destination databases are the same)
   -OutputScriptLocation Path and filename for the .sql file that will be created for any indexes that need to be added
   -Verbose Output some logging information
.EXAMPLE
   ./Compare-Indexes -Server1 localhost -Server2 remotehost\instance1 -Database1 MyDB -OutputScriptLocation C:\Temp\NewIndexes.sql -Verbose
#>

param
(
[string]$Server1 = "$(Read-Host 'SQL Instance 1 [e.g. (local)]')",
[string]$Server2 = "$(Read-Host 'SQL Instance 2 [e.g. (local)]')",
[string]$Database1 = "$(Read-Host 'Database Name')",
[string]$Database2,
[string]$OutputScriptLocation = "$(Read-Host 'Output File Name')",
[switch]$Verbose
)

if (!$Database2) { $Database2 = $Database1 }

$MissingIndexCount = 0

try
{
    #Initialize the output file
    $FileInitiation = @"
:ON ERROR exit
:setvar SQLInstance `"$Server2`"
:setvar Database `"$Database2`"
:CONNECT `$(SQLInstance)

USE `$(Database);

"@

    $FileInitiation | Out-File $OutputScriptLocation
    $File = "True"
}
catch { Write-Host "Output folder does not exist...aborting script."
    $File = "False"
}

if ($File -eq "True")
{
    if ($Server1 -ilike "*\*") { $BaseServer1 = "SQLSERVER:\SQL\$Server1\Databases\$Database1\Tables" }
    else { $BaseServer1 = "SQLSERVER:\SQL\$Server1\default\Databases\$Database1\Tables" }

    if ($Server2 -ilike "*\*") { $BaseServer2 = "SQLSERVER:\SQL\$Server2\Databases\$Database2\Tables" }
    else { $BaseServer2 = "SQLSERVER:\SQL\$Server2\default\Databases\$Database2\Tables" }

    Write-Output "Please wait...comparing $Server1.$Database1 to $Server2.$Database2"

    $Server1List = dir $BaseServer1 | select DisplayName
    $Server2List = dir $BaseServer2 | select DisplayName

    #For information purposes show a list of tables missing on server2 if verbose enabled
    if ($Verbose)
    {
        $MissingTables = Compare-Object $Server1List $Server2List -Property DisplayName | Where-Object {$_.SideIndicator -eq '<=' } | Select DisplayName
        foreach ($MsTbl in $MissingTables)
        {
            $MissingTable = $MsTbl.DisplayName
            Write-Output "Missing table - $MissingTable"
        }
    }

    #The compare uses -IncludeEqual as we only want to compare indexes on tables that exist on both servers
    $TableList = Compare-Object $Server1List $Server2List -Property DisplayName -IncludeEqual | Where-Object {$_.SideIndicator -eq '==' } | Select DisplayName
    if ($TableList)
    {
        foreach ($Table in $TableList)
        {
            $MissingIndex = "" #Initialize the variable so we can null it
            Clear-Variable MissingIndex #Null the variable as we'll be checking for a value in here later
            $TableToCheck = $Table.DisplayName
            $Table1Index = dir $BaseServer1\$TableToCheck\Indexes | Select Name
            $Table2Index = dir $BaseServer2\$TableToCheck\Indexes | Select Name
            if ($Verbose) {Write-Host "Checking $TableToCheck"}
            $MissingIndex = Compare-Object $Table1Index $Table2Index -Property Name -IncludeEqual | Where-Object {$_.SideIndicator -eq '<='} | Select Name
            if ($MissingIndex)
            {
                $MissingIndexCount += 1
                $Index = $MissingIndex.Name
                Write-Output "Missing Index - $Index on $TableToCheck"
                dir $BaseServer1\$TableToCheck\Indexes | where-object {$_.Name -eq "$Index"} |
                % {$_.Script() | out-file $OutputScriptLocation -append; "; `r`nRAISERROR(`'Index $TableToCheck.$Index created...`',0,1) WITH NOWAIT; `r`n`r`n " |
                    out-file $OutputScriptLocation -Append;}
            }
        }
        if ($MissingIndexCount -gt 0) {Write-Output "File $OutputScriptLocation created. Open this file in SSMS and execute in SQLCMD mode"}
        else {Write-Output "No missing indexes found!"}
    }
    else { Write-Output "No matching tables found."}
}

Changing SQL Agent Job Subsystem Queue Lengths

In the event you are running a lot of jobs on your SQL Server, or you happen to have a great number of jobs that kick off close to each other and use the same subsystem (e.g. PowerShell), you might receive a warning in the SQL Agent Error Log stating that the job step “is being queued for the PowerShell subsystem”.

This queuing is deliberate. Microsoft put safeguards around the various subsystems that the Agent calls in order to help prevent any particular subsystem from taking over all the worker threads (after all, you wouldn’t want your 25 PowerShell job steps preventing your queries from running because they’d taken all the worker threads, would you?).

This is one of the reasons I like to use dedicated servers just to handle all jobs/ETL processes, and to keep my local jobs to just maintenance processes (backup/reindex/checkdb) wherever possible. In doing this I need to be sure that I can increase the max worker threads associated with a subsystem so that jobs don’t get delayed. As such I keep an eye on the SQL Agent Error Log and server resources to check that I don’t hit thread starvation issues (on a dedicated job server that’s pretty unusual).
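As an aside, if you’d rather scan for those queueing warnings with T-SQL than eyeball the log viewer, the undocumented xp_readerrorlog can read the Agent log; a sketch (being undocumented, its parameters can vary between versions):

-- Log 0 = current log; second parameter 2 = SQL Agent log (1 = SQL Server);
-- third parameter filters to entries containing the string
EXEC master.dbo.xp_readerrorlog 0, 2, N'queued'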

Today I saw a bunch of warnings in the log to do with PowerShell job steps being queued. This wasn’t a surprise as I had added a whole bunch of new jobs and had not adjusted things from the default values. First I went through and grabbed the default information for each of the subsystems so that I could easily revert if the need arose:

SELECT [subsystem_id]
      ,[subsystem]
      ,[max_worker_threads]
  FROM [msdb].[dbo].[syssubsystems]

[Query results: the PowerShell subsystem shows max_worker_threads = 2]


As you can see, there are only 2 worker threads set for the PowerShell subsystem. I needed to bump this up to 20 so that it could handle the concurrent job load. I have plenty of memory available on the server, so I’m not concerned about the overhead associated with loading PowerShell, but that’s always something to keep in mind (oh, and you have to be a sysadmin to be able to update this table).

UPDATE msdb.dbo.syssubsystems
SET max_worker_threads = 20
WHERE subsystem = N'PowerShell'

Checking the value again yields the updated information:

SELECT [subsystem_id]
      ,[subsystem]
      ,[max_worker_threads]
  FROM [msdb].[dbo].[syssubsystems]

[Query results: the PowerShell subsystem now shows max_worker_threads = 20]


Now it’s a simple case of restarting the SQL Server Agent service for the change to take effect.

Note: applying Service Packs or Cumulative Updates could potentially reset this value, so make sure you document the change and confirm it after any upgrade (see for example http://support.microsoft.com/kb/972759).

Configuring DTC Security In A Cluster With PowerShell

The other day Allan Hirt (blog|twitter) wrote a fantastic post, “How to properly configure DTC for Clustered Instances of SQL Server with Windows Server 2008 R2”. He included a PowerShell script to handle all the funky setup stuff. The only downside was that afterwards you had to manually go through and configure Network Access and security for the clustered DTC using the GUI.

“There has to be a better way,” I thought. I recalled how this security could be set in the registry back in Windows 2003, so I went digging and eventually found the requisite keys under the Distributed Transaction Coordinator resource. From there it was just a case of tying things back so that those keys could easily be found from the SQL resource name, and then updating them to match what Allan demonstrated in his post.

I’ve tested it on a couple of my clusters and it worked exactly as designed. You just need to change the ServiceName variable so that it matches your clustered service name, and then decide which authentication method you want to use (it defaults to the most secure level: Mutual Authentication Required).

Get the code below or download ConfigureMSDTC.ps1

<#
.SYNOPSIS
   Configures MSDTC on a cluster
.DESCRIPTION
   This configures a basic networked config for MSDTC for clustered SQL Server instances.
   Uncomment the transaction manager security setting you want and enter the correct ServiceName, which can be found in Failover Cluster Manager
.PARAMETER <paramName>
   NONE
.EXAMPLE
   NONE
#>

$ServiceName = "SQL Server (SOX2)"

#--What MSDTC Transaction Manager Security setting is requested? (uncomment one)
$TranManSec = "Mutual" #Mutual Authentication Required
#$TranManSec = "Incoming" #Incoming Caller Authentication Required
#$TranManSec = "None" #No Authentication Required

#Grab a list of cluster groups
$GroupPath = "HKLM:\Cluster\Groups"
$GroupList = dir $GroupPath

#Iterate through the groups to find the one matching the service name
foreach ($Group in $GroupList)
{
    $GroupChildPath = $Group.PSPath
    if ((Get-ItemProperty -path $GroupChildPath -name Name).Name -eq $ServiceName)
    {
        #we got a match! Now grab a list of the resources in this service
        $ReferencedResources = (Get-ItemProperty -path $GroupChildPath -name Contains).Contains
        foreach ($Resource in $ReferencedResources)
        {
            #Query each of the resources for their type and work with MSDTC
            $ResourcePath = "HKLM:\Cluster\Resources\$Resource"
            if ((Get-ItemProperty -path $ResourcePath).Type -eq "Distributed Transaction Coordinator")
            {
                #We found the MSDTC resource for that service group, let's configure it
                $SecurityPath = "$ResourcePath\MSDTCPRIVATE\MSDTC\Security"
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccess" -value 1
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessClients" -value 0
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessTransactions" -value 0
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessInbound" -value 1
                Set-ItemProperty -path $SecurityPath -name "NetworkDtcAccessOutbound" -value 1
                Set-ItemProperty -path $SecurityPath -name "LuTransactions" -value 1

                #Now configure the authentication method for MSDTC (defaulting to Mutual Auth as it's most secure)
                $MSDTCPath = "$ResourcePath\MSDTCPRIVATE\MSDTC"
                if ($TranManSec -eq "None")
                {
                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 1
                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0
                }
                elseif ($TranManSec -eq "Incoming")
                {
                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 1
                }
                else
                {
                    Set-ItemProperty -path $MSDTCPath -name "TurnOffRpcSecurity" -value 0
                    Set-ItemProperty -path $MSDTCPath -name "AllowOnlySecureRpcCalls" -value 1
                    Set-ItemProperty -path $MSDTCPath -name "FallbackToUnsecureRPCIfNecessary" -value 0
                }
            }
        }
    }
}

Querying SQL Error Logs With PowerShell

It’s a real pain trying to get data out of the SQL Error Logs, I’ve noticed. Either you don’t cycle your files very often and end up with stupid amounts of data in there to sift through, or you cycle them frequently and then have to figure out which of the files relates to the data you want. You could use T-SQL to import all the files and grab what you need; alternatively, you can use PowerShell.
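For the T-SQL route, sp_readerrorlog reads one log at a time and can filter on a search string; the archive number and search term below are just for illustration:

-- Archive #1 of the SQL Server error log (second parameter 1 = SQL Server, 2 = SQL Agent),
-- returning only rows that contain 'Backup'
EXEC master.dbo.sp_readerrorlog 1, 1, N'Backup'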

I wanted to take the PowerShell approach a step further, so I created a script that accepts server name, start, and end date/time parameters and returns all the entries in the logs between those times (and just between those times). This is great for when someone asks you for information from the logs for 3-7 days ago. One quick execution of the script and the data is sitting there in PowerShell grid format, ready to go. Even better, using the grid you can filter the information further and quickly copy/paste it to Excel or some other program.

I’ve pasted the code down below, but it might be easier to download Get-SQLErrorLogs.ps1

This requires the PowerShell SQL snap-ins that ship with SQL Server 2008/2008 R2, but it can query versions 2000 and up.

Next steps are to add filtering and options to output to file. I’m sure I’ll get to that eventually.


<#
.SYNOPSIS
   Retrieves SQL Server error log entries
.DESCRIPTION
   Pulls error log details from SQL Server and outputs them to the client
.PARAMETER <paramName>
   -SQLInstance (required) The SQL Instance to pull the logs from
   -MaxDate (required) The most recent log record to return
   -MinDate (required) The oldest log record to return
.EXAMPLE
   ./Get-SQLErrorLogs -SQLInstance MyInstance -MinDate 2012-05-10 -MaxDate "2012-05-15 09:00"
#>

param
(
[string]$SQLInstance = $(Read-Host -prompt "SQL Instance [e.g. MyClusterInstance]"),
[string] $MinDate = $(Read-Host -prompt "Lowest log date entry (yyyy-mm-dd hh:mm)"),
[string] $MaxDate = $(Read-Host -prompt "Highest log date entry (yyyy-mm-dd hh:mm)"),
[int] $LogNumber
)

cls

$CheckDateMin = [datetime]::Parse($MinDate)
$CheckDateMax = [datetime]::Parse($MaxDate)

Function Main {
    #Load the SQL snapins
    Load-SQLSnapins

    #Create a datatable to hold the data
    $Results = New-Object system.Data.DataTable "Results"
    #Setup columns
    $LogDate = New-Object system.Data.DataColumn LogDate,([DateTime])
    $ProcessInfo = New-Object system.Data.DataColumn ProcessInfo,([string])
    $Text = New-Object system.Data.DataColumn Text,([string])
    #Add columns to datatable
    $Results.columns.add($LogDate)
    $Results.columns.add($ProcessInfo)
    $Results.columns.add($Text)

    #Another to hold logs read
    $ReadLogs = New-Object system.Data.DataTable "ReadLogs"
    $ReadLogNum = New-Object system.Data.DataColumn LogNumber,([int])
    $ReadLogs.columns.add($ReadLogNum)

    #Shred the SQL Instance to get the right data
    $SQLName = Shred-SQLName $SQLInstance
    $InstanceName = Shred-SQLInstance $SQLInstance

    #Grabs SQL Logs by Date
    Get-SQLLogByDate $SQLName $InstanceName $SQLInstance $CheckDateMin $CheckDateMax

    $results | Sort-Object LogDate | ogv
}

Function Shred-SQLInstance ($SQLInstance) {

    $SQLInstance = $SQLInstance.ToUpper()

    if ($SQLInstance -ilike "*\*")
    {
        $string = $SQLInstance.Split("\")
        $Instance = $string[1]
        Return $Instance
    }
    else
    {
        $Instance = "DEFAULT"
        Return $Instance
    }
}

Function Shred-SQLName ($SQLInstance) {
    $SQLInstance = $SQLInstance.ToUpper()
    if ($SQLInstance -ilike "*\*")
    {
        $string = $SQLInstance.Split("\")
        $Name = $string[0]
        Return $Name
    }
    else
    {
        $Name = $SQLInstance
        Return $Name
    }
}

Function Get-SQLLogByDate ($SQLName, $InstanceName, $SQLInstance, $CheckDateMin, $CheckDateMax) {

    IF ($InstanceName -eq "DEFAULT") {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }
    ELSE {$Logs = DIR SQLSERVER:\SQL\$SQLName | Where-Object {$_.Name -eq $SQLInstance}; }

    $LogCount = ($Logs.EnumErrorLogs() | Measure-Object).Count
    Write-Output "--------------------------------------------------------------------------------------------------------------"
    Write-Output "Checking error logs on server $SQLInstance for logs created between $CheckDateMin & $CheckDateMax"
    Write-Output "--------------------------------------------------------------------------------------------------------------"
    Write-Output "Checking through $LogCount logs, please wait."
    Write-Output "-----------------------------------------------"

    $LogList = $Logs.EnumErrorLogs()
    #we'll need to do something hokey here. The SQL error log CreateDate is not when the log was created, rather when it was closed out.
    #This means we'll have to get the min log and then read the one before that
    ForEach ($ErrorLog in $LogList) {

        IF ($ErrorLog.CreateDate -ge $CheckDateMin -and $ErrorLog.CreateDate -le $CheckDateMax)
        {
            $CurrentLog = $ErrorLog.CreateDate
            $CurrentLogName = $ErrorLog.Name
            Write-Output "Currently reading error log $CurrentLogName dated $CurrentLog..."
            #write the log number to the $ReadLogs datatable
            $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $CurrentLogName; $ReadLogs.Rows.Add($NewLogRow);
            $Logs.ReadErrorLog(($ErrorLog.Name)) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |
            % {$NewRow = $Results.NewRow();
                $NewRow.LogDate = $_.LogDate;
                $NewRow.ProcessInfo = $_.ProcessInfo;
                $NewRow.Text = $_.Text;
                $Results.Rows.Add($NewRow);
            }
        }
    }
    #Now we've read all the matching logs, read the one newer than the newest we read so we get that extra data
    #unless the most current log (0) is included in the results already
    #Write-Output $ReadLogs | ft -AutoSize
    $LowestLog = ($ReadLogs | Measure-Object LogNumber -Minimum).Minimum
    IF ($LowestLog -gt 0)
    {
        $LogToRead = $LowestLog - 1
        Write-Output "Currently reading error log $LogToRead..."
        #write the log number to the $ReadLogs datatable
        $NewLogRow = $ReadLogs.NewRow(); $NewLogRow.LogNumber = $LogToRead; $ReadLogs.Rows.Add($NewLogRow);
        $Logs.ReadErrorLog($LogToRead) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |
        % {$NewRow = $Results.NewRow();
            $NewRow.LogDate = $_.LogDate;
            $NewRow.ProcessInfo = $_.ProcessInfo;
            $NewRow.Text = $_.Text;
            $Results.Rows.Add($NewRow);
        }
    }
    #on the off chance that the dates reside in the current log only, pull that one in
    #but only if no other logs have been loaded (edge case)
    IF (!$LowestLog)
    {
        Write-Output "Currently reading error log 0..."
        $Logs.ReadErrorLog(0) | Where-Object {$_.LogDate -ge $CheckDateMin -AND $_.LogDate -le $CheckDateMax} |
        % {$NewRow = $Results.NewRow();
            $NewRow.LogDate = $_.LogDate;
            $NewRow.ProcessInfo = $_.ProcessInfo;
            $NewRow.Text = $_.Text;
            $Results.Rows.Add($NewRow);
        }
    }
}

Function Load-SQLSnapins
{
    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerCmdletSnapin100"}
    if (!$CheckSQLSnapins)
    {
        Add-PSSnapin SqlServerCmdletSnapin100
        Write-Host "SqlServerCmdletSnapin100 Loaded"
    }
    else { Write-Host "SqlServerCmdletSnapin100 Already Loaded" }

    $CheckSQLSnapins = get-pssnapin | where {$_.Name -match "SqlServerProviderSnapin100"}
    if (!$CheckSQLSnapins)
    {
        Add-PSSnapin SqlServerProviderSnapin100
        Write-Host "SqlServerProviderSnapin100 Loaded"
    }
    else { Write-Host "SqlServerProviderSnapin100 Already Loaded" }
}

main