
Disabling SSIS Logging To Event Log

Do you have a lot of SSIS packages executing in your environment?

Are you suffering from your Application Event Log filling up with Event IDs 12288 and 12289, letting you know that SSIS packages have started and finished?

Is it difficult for you to find relevant events in your Event Log because of this?

Turn those pointless notifications off.

Open up the registry (careful now, you can do damage in here) and navigate down to:

HKEY_LOCAL_MACHINE -> SOFTWARE -> Microsoft -> Microsoft SQL Server -> 100 -> SSIS

You should see a value called LogPackageExecutionToEventLog. Change it from 1 to 0 and enjoy a brand new lease of life in your Application Event Log.
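If you would rather not poke around in regedit by hand, the same change can be scripted from SQL Server with the undocumented xp_regread/xp_regwrite extended procedures. Treat this as a sketch only: the procedures are unsupported, the path below assumes a SQL 2008 (100) install, and the same "careful now" warning applies.

-- check the current value first (undocumented procedure, unsupported by Microsoft)
EXEC master..xp_regread N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS',
    N'LogPackageExecutionToEventLog';

-- turn the start/finish events off (set back to 1 to re-enable)
EXEC master..xp_regwrite N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS',
    N'LogPackageExecutionToEventLog',
    N'REG_DWORD',
    0;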

CLR Errors In Replication

Did you know that .NET components are used in replication? I sure didn’t until I tried to add a new publication and perform a snapshot.

I had no trouble creating the publication or adding the subscribers, but when I attempted to run the snapshot I got the following error:

Source: Replication

Exception Type: Microsoft.SqlServer.Replication.ReplicationAgentSqlException

Exception Message: Failed to initialize the Common Language Runtime (CLR) v2.0.50727 with HRESULT 0x800

 

Not a whole lot to go on there. There happens to be a DMV which will provide you with the status of CLR in SQL.

select * from sys.dm_clr_properties

I was expecting to see

name         value
---------    -----------------------------------------------
directory    C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727
version      v2.0.50727
state        CLR is initialized

 

but instead I saw

name         value
---------    -----------------------------------------------
directory
version
state        CLR initialization permanently failed

 

Time to open a support case with Microsoft.

After running through some troubleshooting steps with the MS engineers, including verifying the .NET install was good (see http://blogs.msdn.com/b/astebner/archive/2008/10/13/8999004.aspx), the decision was ultimately made to restart SQL to see if this resolved the problem.

After the restart everything was running smoothly and there were no issues performing snapshots. Sadly the restart eliminated any chance of getting a root cause for the problem. I’ve got a filtered dump set up to try and capture something in the event that it happens again. I kind of hope that it does, just so we can find out exactly what the cause was.

Here are a couple of interesting notes that I found from this:

  • .NET Framework 3.5 is just a wrapper for .NET 2.0
  • You can log all .NET events to disk (see http://blogs.msdn.com/b/junfeng/archive/2004/02/14/72912.aspx)
  • If SQL Server encounters a .NET/CLR initialization error it fails and sets a flag internally. That flag will not be reset unless the SQL Instance is restarted, so all CLR functionality will continue to fail until then (a quick status check is sketched after this list)
  • You want to have your SQL Instance configured to write dumps to a drive with more free disk space than you have memory on your server. By default dumps will be written to the log directory. You can change this by opening “Error and Usage Settings” from the start menu, going to options and changing the error reporting location. Note, you have to restart SQL for this to take effect as this information is read in at startup
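If you want to keep an eye out for a recurrence, a quick check along these lines shows both the runtime state and whether CLR is even enabled at the instance level. The first query is the DMV used above; the second is just the standard sys.configurations view:

-- state should come back as 'CLR is initialized' on a healthy instance
SELECT name, value
FROM sys.dm_clr_properties;

-- confirm whether CLR is enabled at the instance level
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'clr enabled';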

 

Changing Logshipping Locations

I was fortunate enough to be able to drop a 1.6TB database last week which cleared up a whole bunch of disk for me on one of my production environments.

I decided to use the freed up space to consolidate my backup and transaction log dumps onto a single drive. Moving the backups was a simple change to a single script (I really need to post that backup script one day).

Changing the location of the transaction log dumps, which are used in logshipping, looked to be a pain in the rear task (as most gui based operations are). As such it was time to move beyond the gui, and in the end it proved to be something quick and simple.

 

The first thing I did was disable all the transaction log backup jobs and wait for any running jobs to complete. Then I copied the existing log dumps (with their directory structure) to the new location.

Once all the folders and files were in place I was able to change the dumps with the following script

 

DECLARE @dbname NVARCHAR(256)
      , @bkpdir NVARCHAR(1000)
      , @bkpshare NVARCHAR(1000)

SET @dbname = N'AdventureWorks'
SET @bkpdir = N'P:\MSSQL\Backup\Trx\' + @dbname
SET @bkpshare = N'\\PrimaryServer\P$\MSSQL\Backup\Trx\' + @dbname

EXEC master.dbo.sp_change_log_shipping_primary_database
   @database = @dbname
 , @backup_directory = @bkpdir
 , @backup_share = @bkpshare
 , @backup_compression = 1

As I’m logshipping multiple databases and have a subdirectory for the logs of each db, it was quicker to just set the database variable once and have the script take care of the rest.

This script just changes the locations on the primary server. In order for the files to be picked up by the secondary servers you need to change the share location there as well. As such I ran the following on each of the secondary servers.

 

DECLARE @dbname NVARCHAR(256)
      , @bkpshare NVARCHAR(1000)

SET @dbname = N'AdventureWorks'
SET @bkpshare = N'\\PrimaryServer\P$\MSSQL\Backup\Trx\' + @dbname

EXEC master.dbo.sp_change_log_shipping_secondary_primary
   @primary_server = 'PrimaryServer\Instance1'
 , @primary_database = @dbname
 , @backup_source_directory = @bkpshare

Once this was done I executed the backup job for each db once to confirm that the log dumps were written to the correct location. With this working fine I performed a copy and restore of the logs on the secondary servers, then opened the log shipping report and confirmed that the copy and restore times were in order. Everything looked good and I had all of my log shipping cutover to a new drive in a few minutes.
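If you want to sanity check what the procedures actually wrote before kicking the jobs back off, the log shipping configuration lives in a handful of msdb tables and is easy to eyeball. A quick sketch; run the first query on the primary and the second on each secondary:

-- primary: confirm the new backup directory and share
SELECT primary_database, backup_directory, backup_share
FROM msdb.dbo.log_shipping_primary_databases;

-- secondary: confirm the copy job is now pointed at the new share
SELECT primary_server, primary_database, backup_source_directory, backup_destination_directory
FROM msdb.dbo.log_shipping_secondary;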

You have to love moving beyond the gui.

T-SQL Tuesday #008 – Gettin My Learnin On

This month’s T-SQL Tuesday is being hosted by Robert Davis (blog|twitter) and he wants to take a look at learning.

I’m finding this quite appropriate right now given a recent blog post by Buck Woody (blog|blog|blog|twitter) based around creating a “SQL Server Professional Development Plan” (go give that a read by the way).

In his article Buck talks about foundation learning and educating yourself about a possible career and its opportunities. This took me back, back, back a little further and then back a tad more than that, to when I was in school. As a 13 year old kid I was a bit of a smart arse (some things never change). I had gotten to grips with a ZX81 and the ZX Spectrum and could see that computers were the way forward. I had reached the point in my schooling where we had to choose our options for the next couple of years of study. For me it was a simple choice. I was going to take computers for two years, then go to college and take Computer Science. Sweet!

 

I put in my options and was very excited the next week when I got back my class schedule. I opened my schedule and to my absolute horror I was stuck with taking geography instead. Upon asking the reason for this I was advised that the school felt I would be wasted in the computer class and was much better suited to geography. I still don’t know why they called it geography as it all seemed to be about geology.

The bitter taste of geography did not sit well; it put a complete dampener on education for me and I never did wind up going to college.

 

Many years later I found myself interviewing for a job at a company for overnight work (never fun) which consisted of changing backup tapes and watching some monitors for alarms. If an alarm came in I was to call the oncall person and then they would fix it.

That was 10 years ago (oh and Jorge, Arcserve sucked back then too).

I did the changing of the tapes and I called people when things broke. Here’s the problem with that…it’s BORING!

 

After a couple of weeks I started engaging with the oncall folks on the issues. “Let me help,” I told them. “I’ll help with the bad things. I want your drama.” My reasoning was that if they helped me understand what was broken and how it could be fixed then I wouldn’t need to call anyone the next time. This would allow the team to have an improved quality of life and eliminate the Lady Gaga oncall person, you know the type….

You called, I can’t hear a thing.
I have got no service
In the club, you see, see
Wha-Wha-What did you say?
Oh, you’re breaking up on me
Sorry, I cannot hear you,
I’m kinda busy.

 

Helping these folks out got me a solid grounding in Windows NT 4.0 and 2000.

Now that everyone knew I could do more than stare at flashing lights they asked me to help with the rollouts of the web stuff, so I got started with using SiteServ.

One day the boss came to me and said that I’d been doing some good work, that we had this big upgrade from SQL 7.0 to SQL 2000 that was going to take place and would I like to be involved. My first response was “I’d love to, sounds awesome”. My second was “so what’s SQL?”.

I worked with that team for a few years, they taught me a great deal.

One of my fondest memories of those early days of SQL for me was being made fun of for using Enterprise Manager. I was told in no uncertain terms that if I ever wanted to amount to anything as a DBA that I had to learn T-SQL and I had to use it for the powers of good (ie for admin purposes, not for development).

That has held me in good stead over the years and it’s something I still force myself to do today. As much as possible I will use T-SQL to perform a task. Now I’m taking it a step further and using PowerShell to do things that previously I would never have considered using a scripting language for. As an example, today I took a script I already had, made a very quick change, and in 5 minutes had something running that gave me the size of every C drive on all of my servers.

In a roundabout way this gets me to my point. Learning. I learn by doing. I have books aplenty, but nothing sinks into my brain until I actually do it.

 

The old saying is that practice makes perfect. We learn by doing. So go out, get your hands dirty, buy yourself a license for SQL 2008 Developer Edition; it’s only $50. Once you have it, install it somewhere. Corrupt some databases, recover from the corruption. Create some databases, write some queries, have some fun with it. Once you start playing you’ll start learning.

 

Go out, get yer learnin on!

What Indexes Are Not Being Used And How Big Are They?

Indexes…they are your friends. They order your data nice and neatly, they can dramatically improve your query performance and can have cool names like IX_UseMeToRunFasterLikeATurboButton.

Indexes…they are your enemy. They slow down your inserts, updates and deletes, and they take up valuable disk space. 

So having indexes that are used is a great thing. You’ll see query performance improve by an order of magnitude and free up some of your server resources. However, if not used, an index just sits there and takes up space, kind of like the Toronto Maple Leafs Stanley Cup cabinet. Let’s not forget those unused indexes are still being kept up to date and slowing down your DML statements.

 

A couple of months ago I was invited to add a large number of indexes to a database in production that is used for reporting. Being a good DBA I asked, quite politely, why they were needed. “They are so the queries run faster” was the response. Much banter flowed back and forth between teams and in the end all of the indexes were applied (this included 20 indexes on a 534 column table, yes, I know). 

I felt these indexes were not going to be used, but as in a lot of cases it comes down to a please just do it situation. So I did.

Three months on and I had a couple of hours to take a look and see how these indexes were doing. A great post by Robert Davis (blog|twitter) entitled “How Do I … Get a List of Tables With Data Modifications in a Specific Timeframe?” got me started with the idea of using sys.dm_db_index_usage_stats to check for this.

When a DML statement gets executed the user_updates column in this DMV gets incremented by one. Likewise, when a query is executed and an index is used, be it for a lookup, scan or seek, the corresponding user_lookups, user_scans or user_seeks value gets incremented.

Armed with this information I was quickly and easily able to identify indexes in the database that had not been used since the last Instance restart (joining against sysindexes provides the index name)

 

SELECT
   object_name(us.object_id) as objname
  , si.[name] as IndexName
FROM
  sys.dm_db_index_usage_stats us
    INNER JOIN sysindexes si
      ON us.[object_id] = si.id
      AND us.index_id = si.indid
WHERE
  us.database_id = db_id()
  AND us.user_lookups = 0
  AND us.user_scans = 0
  AND us.user_seeks = 0

 

 

This gave me a nice list of unused indexes that I could contemplate removing. However I really felt that I needed to put some numbers behind those indexes. Managers love numbers. By pulling the list of indexes into a table and then using sys.dm_db_index_physical_stats I was able to get the size of each index. Adding the user_updates information from sys.dm_db_index_usage_stats provided me with a long list of indexes along with how many times they have been updated since SQL was last started and the size of each index.

 

--CHECKS TO SEE IF INDEXES ARE BEING USED IN A DATABASE
--IF UNUSED THEN GETS THE SIZE OF THE INDEX ALONG WITH UPDATES SINCE LAST SQL START

if object_id('tempdb..#indexinfo') is not null
drop table #indexinfo

--Load up a list of unused indexes for the current database
SELECT
  ROW_NUMBER() OVER (ORDER BY object_name(us.object_id), us.index_id) as ROWNUM
  , object_name(us.object_id) as objname
  , us.[object_id]
  , us.index_id
  , si.[name] as IndexName
  , us.user_updates
  , 0 as IndexSizeKB
INTO
  #indexinfo
FROM
  sys.dm_db_index_usage_stats us
    INNER JOIN sysindexes si
      ON us.[object_id] = si.id
      AND us.index_id = si.indid
WHERE
  us.database_id = db_id()
  AND us.user_lookups = 0
  AND us.user_scans = 0
  AND us.user_seeks = 0

--Going to use a while loop and get the physical info for each index for sizing purposes
DECLARE @object_id bigint
      , @index_id int
      , @rownum int
      , @index_size int
      
SELECT @rownum = min(ROWNUM) from #indexinfo where ROWNUM IS NOT NULL
WHILE @rownum IS NOT NULL
BEGIN

    SELECT @object_id = object_id from #indexinfo WHERE ROWNUM = @rownum
    SELECT @index_id = index_id from #indexinfo WHERE ROWNUM = @rownum

    SELECT @index_size = convert(int, (avg_record_size_in_bytes * record_count) / 1024)
    FROM sys.dm_db_index_physical_stats(db_id(), @object_id, @index_id, null, 'detailed')
    WHERE index_level = 0

    UPDATE #indexinfo
    SET IndexSizeKB = @index_size
    WHERE ROWNUM = @rownum

    SELECT @rownum = min(ROWNUM) from #indexinfo where ROWNUM > @rownum and ROWNUM IS NOT NULL
END

--Give us the results
SELECT objname as TableName, IndexName, user_updates, IndexSizeKB FROM #indexinfo
SELECT sum(user_updates) as Updates, sum(IndexSizeKB) / 1024 as TotalUnusedIndexesMB
FROM #indexinfo

I ended up finding 245 indexes that were not being used. Those indexes had been updated over 15 million times in the last two weeks and they totalled up to over 12GB of utilized disk.

Some nice numbers to add to my request to drop those indexes.

Moving SQL Datafiles

The Problem

A Sharepoint database (LOCAL_CONTENT_DB) running on a clustered SQL 2005 EE Instance (on Windows 2003 EE) had experienced growth from 10GB to 95GB in a few weeks and there was no more disk for it to grow any further.

I needed to add some disk to the cluster, get it in the cluster so that SQL could see it and then move data around so that there was extra capacity for this database and others.

 

Existing Disk Layout

  • F: – SQL Data 200GB
  • G: – SQL Logs 100GB
  • H: – SQL Backups 200GB
  • I: – SQL System & TempDB 75GB

 

Proposed Disk Layout

  • F: – SQL Data 200GB
  • G: – SQL Logs 100GB
  • H: – SQL Data2 200GB (changed usage)
  • I: – SQL System & TempDB 75GB
  • J: – SQL Backups – 500GB (new)

Deploying The Changes

Attaching the disk

Fortunately we had some capacity on the SAN and a friendly SAN engineer who quickly carved up a new LUN for me and presented it to the clustered nodes. I logged on to each node, opened up the Disk Management console (diskmgmt.msc) and rescanned the disks (Action -> Rescan Disks). Once the scan was complete I scrolled down through the list and verified I could see the newly presented LUN (and made a note of the disk number)

 

Aligning the disk & creating a partition

With the disk number in hand, on one of the nodes, I opened up a Command Prompt window (I so want to call this a DOS window) and opened diskpart.

Why not just format the disk, you might ask? Well, there’s the disk offset issue in Windows 2003 (thankfully eliminated in Windows 2008) to be aware of, which would cause extra writes and impact performance (read all about this at http://support.microsoft.com/kb/929491).

With diskpart open I partitioned the disk and assigned a drive letter (note disk 15 was the newly attached LUN)

select disk 15
create partition primary align = 64 (this is based upon recommended value from the storage provider)
assign letter = J

This done I exited out of diskpart and went back to Disk Management. A refresh and this now showed J good to go. Right click and a quick format later and the disk was ready to be added to the cluster.

 

Clustering the disk

Before working with the disk we need to get it in the cluster. Rather than just add it to the SQL cluster group I wanted to be sure that there were no issues with reading or writing to the disk from either host. To accomplish this I opened cluster administrator and created a new cluster group called temp.

In the cluster group I added a new disk resource for drive J, allowed both nodes to own the resource and brought the disk online.

To check I was able to read and write to the disk I just created a new file and saved it. Then failed over the disk to the other node, opened it there, appended to the file and saved it again. There were no issues so I knew we were good to go with getting this disk attached to SQL. To get the disk ready I set up my basic folder structure ready for the SQL files.

 

Presenting the disk to SQL

Now that I knew the disk was fully accessible and writable from both hosts it was time to actually present it to the SQL Instance. This, unfortunately, required a short period of downtime as the disk needed to be added as a dependency for SQL Server startup.

To do this I performed the following steps

  • opened the Sharepoint cluster group in cluster admin and took offline the SQL Server Agent and SQL Server resources
  • went to the temp cluster group and took “drive J” offline
  • right clicked on “Drive J” and selected “Change Group” from the context menu
  • selected the Sharepoint cluster group as the new group and confirmed
  • went to the Sharepoint cluster group and brought “Drive J” online
  • right clicked on the SQL Server resource, went to properties, dependencies, modified and added “Drive J” as a dependent resource
  • brought the SQL Server & SQL Server Agent resources online
  • checked that SQL was able to access and write to the new disk by performing a simple quick backup of the master database, then deleted the backup immediately afterwards (a sketch of that sanity check follows this list)
  • performed full backups of all the databases on the instance to the new drive
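For reference, the sanity-check backup was nothing more elaborate than the following. The folder name here is just an example, any location on the new J drive will do:

-- quick test that the instance can write to the newly clustered drive
-- (the folder name is an assumption; use whatever structure you created on J:)
BACKUP DATABASE master
TO DISK = N'J:\MSSQL\Backup\master_write_test.bak'
WITH INIT;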

 

 

Moving the database

With the disk in place, and full backups taken, it was time to move the database. In SQL 2000 this would have been done using sp_detach_db & sp_attach_db, but in SQL 2005 and later there is a much more elegant solution whereby you can set the database offline, copy the relevant file and then use an ALTER DATABASE … MODIFY FILE statement to reference the new location before bringing the database back online again. Here are the steps I followed to do this:

  • Set the database offline 
    • ALTER DATABASE LOCAL_CONTENT_DB SET OFFLINE;
  • Copy file F:\MSSQL\Data\Local_Content_DB.mdf to H:\MSSQL\Data\Local_Content_DB.mdf. Please note that this was a copy, not a move. Moving the file increases the risk of something happening to that file and you not being able to recover it.
  • Once the copy was complete change the reference for the mdf 
    • ALTER DATABASE LOCAL_CONTENT_DB
      MODIFY FILE
      (
      NAME = Local_Content_DB_Data
      , FILENAME = 'H:\MSSQL\Data\Local_Content_DB.mdf'
      );
  • Bring the database back online
    •  ALTER DATABASE LOCAL_CONTENT_DB SET ONLINE;

Final steps

With the database back online I ran a couple of queries against it, just to confirm it was completely accessible, and then, because I am a careful chap, I also executed a DBCC CHECKDB (which also came back clean). Finally I had the application admin check that everything looked good (which it was).
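For anyone following along, a couple of simple checks like these will confirm the new physical location and give you the consistency check (sys.master_files is standard metadata; the CHECKDB is the one mentioned above):

-- confirm the data file is being read from its new home
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'LOCAL_CONTENT_DB');

-- belt and braces
DBCC CHECKDB (N'LOCAL_CONTENT_DB') WITH NO_INFOMSGS;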

Once I knew that everything was good I changed the backup location to point to the new disk and deleted the old mdf from the F drive, tada, done.

 

 

SQL Saturday #43

Coming up rapidly is SQL Saturday #43 (twitter). This is the second SQL Saturday in Redmond; unfortunately I missed the first as it nicely coincided with knee surgery, however I am fit and ready to go this time.

I’m not speaking, but I still wanted to do my part and so volunteered to help out in any way come the day. As a consequence I will be getting an early start and will be out at the event at 6:30am to help put out signs. Being up so early means I should be fully awake when the first session comes around. 

Cameron Mergel (blog | twitter) posted about the sessions he plans to attend on Saturday. It was such a good idea I decided to do the same, so here’s what I’m going to be hitting up come the weekend:

9am – Exploring SQL Server System Information with DMVs – Greg Larsen

This session will explore different DMVs and how they can be used to obtain information about your SQL Server environment. There will be a number of demos. In these demos different DMVs will be used to provide different information about how your SQL Server instance is running. Peeking under the covers is now only a DMV away.

 This was probably the easiest choice of the day for me. You can never know enough about DMVs. Feeling comfortable with transactions and SSIS put those to the back of my list and the chances of me using Report Builder are slim to none.

 

10:30am – Accelerating SQL with Solid State – Sumeet Bansal

Solid-state technologies are changing the way that MS SQL users run their databases. DBAs are achieving significant TCO savings through performance enhancements, reliability improvements, and reduced energy and real estate costs using solid-state technologies. Solid state is changing the way datacenters look and how MS SQL Servers perform. This session will address: 1) An overview of solid state technology options for SQL Server 2) Why solid state makes a difference 3) A comparison of the different types of NAND Flash-based products and the advantages and disadvantages of each: a. Disk-based SSDs b. Onboard/Embedded PCI Express devices c. Host-based PCI Express devices 4) Customer proof: Wine.com case study 5) Raffle for a Steve Wozniak

Tough choice here as Dean Richards’ session “Tuna Helper for DBAs and Developers” looked really compelling. In the end I felt this might suit me better due to an upcoming datacenter migration and complete SQL Server environment build out that I am going to be working on over the next year. It’s a chance to take a look at SSDs and maybe get some ideas for the upcoming architecture decisions I will need to make.

 

12:30pm – SQL Server and PowerShell – Donabel Santos

This session will cover how you can get started with PowerShell, and how you can use PowerShell with SQL Server. Powershell, a scripting language that leverages Microsoft .NET Framework, can help you manage and automate SQL Server. Demos will include common PowerShell tasks and commands, and querying/scripting/automating SQL Server using SMO and applicable .NET Libraries.

Another tough choice. Passing up the chance to attend a Kalen Delaney session on compression feels a little crazy. In my defense I was lucky enough to attend a full day pre-summit seminar at PASS 2008 where Kalen spent hours going through the internals, including compression. If you’ve read any of my blog you can see I am a little keen on PowerShell, and I’ll take up pretty much any chance to increase my knowledge on that front.

 

2pm – Using SQL Trace/Profiler Effectively – Robert Davis

The session will cover the difference between SQL Trace and SQL Profiler, and why we recommend using SQL Trace in a production environment. Topics covered will be how and when to use SQL Profiler vs SQL Trace, how to create a trace template and convert it to a SQL script, and how to start a SQL Trace running in response to a performance event. Demos will include creating a trace template and converting it to a script and an example of how SQL Profiler can bring your whole server to a crawl.

 It was this or Clifford Dribble presenting “Utility Control Point in SQL 2008 R2”. Ultimately it came down to what I use. I can’t see myself working with R2 in the near future and I felt that Robert’s session would help provide me with a way to build a presentation of a similar topic to internal teams.

 

3:30pm – SQL Server for the Oracle DBA – Buck Woody

Come hear Buck Woody, Microsoft’s “Real World DBA” give a marketing-free introduction to SQL Server for the Oracle professional. No experience in SQL Server is necessary – we’ll cover the basics of SQL Server Architecture using Oracle concepts as a guide. If you’re an Oracle professional and you want to add SQL Server to your “knowledge arsenal”, come hear this overview. You’ll also get a list of resources that will enable you to research further.

 

People give rave reviews of Buck Woody presentations. I just couldn’t pass this one up despite really wanting to go see Chuck Lathrope on “Replication Performance Tuning and Troubleshooting”. I can also take back the information from this session and talk to the “O side massive” folks at work.

That’s it. There’s no mention of an afterparty yet, maybe something will come up. I’m also really hoping that someone will have the England/USA game streaming through lunch and that Donabel dedicates a little screen real estate for the second half.

It should be a really good day; I’m absolutely looking forward to it and to meeting some new folks.

 

Estimating Data Compression

It’s T-SQL Tuesday #007 time and we have Jorge Segarra (blog | twitter) the one and only SQLChicken running things.

I’m hoping he’s going to have so many posts to go through that he won’t notice that I am going to cheat a little.

When I first got the chance to play with SQL 2008 my attention was instantly captured by the opportunity to perform data compression. With an environment growing out of control and a SAN having reached its physical limits, I really needed something to give me a little breathing room. Data compression provided me that option and I worked towards seeing exactly how much space it could save me. A consequence of this work was that last year I wrote an article for SQLServerCentral.com about Data Compression in SQL Server 2008 in which I gave an overview of compression and how it works. As part of the article I also provided a script which could be used to estimate space savings on every table in a database (extending sp_estimate_data_compression_savings, which works on an individual table only).
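If you only care about one table you don’t need the script at all; the built-in procedure does the job on its own. As a reminder of the single-table form (the AdventureWorks table name here is purely an example):

-- estimate PAGE compression savings for a single table
EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'Production',
    @object_name      = N'TransactionHistory',
    @index_id         = NULL,   -- all indexes
    @partition_number = NULL,   -- all partitions
    @data_compression = N'PAGE';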

One thing missing from that article and the attached proc was a final rollup script that would take the data you’d collected and provide it to you in a somewhat more useful format. So in the interests of being helpful here one is…

 

SELECT
      Database_Name
    , OBJECT_NAME
    , sum(current_size_kb)/1024 as existing_server_size_MB
    , sum(estimated_size_page_kb)/1024 as estimated_server_size_page_compressed_MB
    , sum(estimated_page_savings_kb)/1024 as estimated_server_size_page_savings_MB
FROM
    estimated_data_compression_savings
WHERE
    database_name in ('AdventureWorks2008', 'AdventureWorksDW2008', 'teststatsdb')
GROUP BY
      Database_Name
    , object_name
WITH ROLLUP
HAVING sum(current_size_kb)/1024 > 0
ORDER BY
      Database_Name
    , estimated_server_size_page_savings_MB desc

 

Note: the WITH ROLLUP allows us to quickly get a summary of the total space savings we could get at the server and database level without having to perform other subqueries

So there you go, a quick and easy view of where you might be able to save yourself some disk space on your SQL Servers. Remember though, it’s only a guide and you need to thoroughly test performance before implementing any kind of compression.

 

 

SQL Job Notifications

Today on Twitter TJay Belt (blog|twitter) was asking about job notifications, specifically related to being able to add some kind of logic to that notification.

I’m using a trigger on the sysjobhistory table in the msdb database to handle this. Below is a version with very basic logic, but it’s something easily expanded upon.

 

This process is based upon the script posted by Kenneth Singer back in 2007 on SQL Server Central. You can find that at http://www.sqlservercentral.com/scripts/Miscellaneous/30177/ 

 

USE [msdb]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE trigger [dbo].[trg_jobstepfailures]
on [dbo].[sysjobhistory]
for insert
as
declare @strcmd varchar(800)
      , @strRecipient varchar(500)
      , @strMsg varchar(2000)
      , @strServer varchar(255)
      , @strTo varchar(255)
      , @subject varchar(500)

if exists (select * from inserted where run_status = 0 and step_name NOT IN ('(job outcome)', 'do not notify'))
begin

    --message body
    select @strMsg = '<FONT FACE="color: #A0522D; font-family: Verdana; font-size: 12; font-weight:bold">' + @@servername + '-Job: ' + sysjobs.name + '. Step= ' +
           inserted.step_name + '</font><br><br><FONT FACE="color: #A0522D; font-family: Verdana; font-size: 12">Message ' + inserted.message + '</font>'
    from inserted
        join sysjobs on inserted.job_id = sysjobs.job_id
    where inserted.run_status = 0

    --subject text
    select @subject = 'Job ' + sysjobs.name + ' Failed on Job Server ' + @@servername
    from inserted
        join sysjobs on inserted.job_id = sysjobs.job_id
    where inserted.run_status = 0

    --recipients text
    IF @subject like '%<USE NAME OF FAILED JOB HERE>%'
        begin
            SET @strRecipient = '<RECIPIENT LIST>'
        end
    ELSE
        begin
            SET @strRecipient = '<ALTERNATE RECIPIENT LIST>'
        end

    exec msdb.dbo.sp_send_dbmail @profile_name = null --uses the default profile
       , @recipients = @strRecipient
       , @subject = @subject
       , @body = @strMsg
       , @body_format = 'HTML' --default is TEXT

end
GO

SQL Server 2000 DTS Logging

On twitter today TJay Belt (blog|twitter) asked if anyone had concerns with enabling logging on DTS packages. This brought to mind some of the logging fun that I have had with SQL 2000 and DTS over the years.

With DTS you have the option of logging to a flat file and also to SQL Server directly.

When choosing the SQL Server option and entering your server name you’ll find that logs are written to msdb.dbo.sysdtspackagelog and individual steps within those packages go to msdb.dbo.sysdtssteplog.

 

 Viewing DTS Package Logs

Looking at the SQL Server stored log is as simple as right clicking on the DTS package itself and selecting “view log” from the list. I have noticed in the past that this can actually cause significant blocking issues within the msdb database. This process never seemed very intelligent as it appeared to scan all the logs for all packages and then filter out what it didn’t want.

Frequently you are just looking for failures. For this I have a small script that returns packages with steps that failed, along with the failure information for each step. You can limit it to whatever time frame you want, as well as limit it to certain packages.

 

select
   PL.Name
   , PL.starttime
   , PL.endtime
   , SL.stepname
   , SL.errordescription
from
   msdb.dbo.sysdtspackagelog PL (nolock)
      inner join msdb.dbo.sysdtssteplog SL (nolock)
        on PL.lineagefull = SL.lineagefull
where
   --name = '<job name>' and
   stepexecresult <> 0
   --and starttime > '2009-06-08 00:00:00' and starttime < '2009-06-09'
order by
   PL.starttime desc

Keep Your Logs Clean

Something else to keep in mind is that your DTS logs will not clean themselves. It’s worthwhile every week or two to clean up old data. Both sysdtssteplog and sysdtspackagelog have start time and end time columns making any deletes a quick and easy process.
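A simple purge along these lines does the trick; it keys the step log off the same lineagefull column used in the query above and keeps two weeks of history (adjust the retention to suit):

-- remove DTS log data older than 14 days, steps first so the two tables stay in sync
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -14, GETDATE())

DELETE SL
FROM msdb.dbo.sysdtssteplog SL
     INNER JOIN msdb.dbo.sysdtspackagelog PL
        ON SL.lineagefull = PL.lineagefull
WHERE PL.starttime < @cutoff

DELETE FROM msdb.dbo.sysdtspackagelog
WHERE starttime < @cutoff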