Tag: Jobs

Passing SQL Data To Command Line Processes Within SQL Jobs

Wow, that title is a bit of a mouthful. Let me give you a scenario to help you understand the business problem I ran into today.

We have a production database running on SQL 2008 which contains a CLR procedure that accepts a reportID value, queries some data and writes out to the filesystem on a remote share. This procedure is called by multiple daily and weekly jobs to perform extracts for business partners. Yes, I know this is ugly. The database has TRUSTWORTHY on, which is a big security risk and we wanted to mitigate that risk with the minimum amount of work required.

Here’s an example of one of the jobs that makes the call in to that proc:



DECLARE @ReportID INT;

SELECT  @ReportID = ReportID
FROM    dbo.ReportList
WHERE   BusinessPartner = 'Customer1'
        AND Frequency = 'Daily';

EXEC Data2File @ReportID;


The first step to changing this was to get the CLR code out of source control and rebuild it as an executable file. This took the developer about 60 minutes. Now I had to figure out how we were going to call the executable with the appropriate ReportID.

The obvious way to call this would be to create a cmdline job step for D:\ExportExe\Data2File.exe (the name and location of the new executable). This would be great, except that it doesn’t contain the ReportID. The smart move here would be to just pass along the ReportID in the cmdline, except that we don’t know what that is for any particular report, as reports get deleted and added fairly frequently; we need to actually pass along the results of the query. The cmdline really wasn’t going to help here.

As is frequently the case, PowerShell came to the rescue.

All I had to do was create a PowerShell job step, run the query, pass the result into a variable, and then call the executable with the variable. It sounds really simple, and it only took a few minutes to get right. In the end I wound up with the following PowerShell script that runs from within a PowerShell job step:

$ErrorActionPreference = "Stop"

$Query = @"
SELECT  ReportID
FROM    dbo.ReportList
WHERE   BusinessPartner = 'Customer1'
        AND Frequency = 'Daily';
"@

$ResultSet = Invoke-Sqlcmd -ServerInstance MySQLServer -Database MyDatabase -Query $Query -QueryTimeout 30

[int]$RptID = $ResultSet.ReportID

Write-Output "Calling Data2File.exe with ReportID: $RptID"

& "D:\ExportExe\Data2File.exe" $RptID


In this script I’m building the query, calling it with invoke-sqlcmd and then passing the output ReportID to the external executable.
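One refinement worth considering (not in the original script, just cheap insurance): if the report row has been deleted, Invoke-Sqlcmd returns nothing and the executable would be called with an empty argument. A minimal guard might look like this:

```powershell
# Abort the job step if the query returned no rows
# (Invoke-Sqlcmd returns $null when there are no results)
if ($null -eq $ResultSet) {
    throw "No daily ReportID found for Customer1 - not calling Data2File.exe"
}
```

With $ErrorActionPreference set to "Stop", the throw fails the job step, so the Agent flags the failure rather than silently calling the executable with no argument.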

While this is still pretty ugly and not the way that this sort of thing should be done (SSIS, anyone?), it does work, and more importantly it allows me to turn off the TRUSTWORTHY setting on that database and improve the security of my SQL Servers.

Changing SQL Agent Job Subsystem Queue Lengths

In the event that you are running a lot of jobs on your SQL Server, or you happen to have a great number of jobs that kick off close to each other and use the same subsystem (e.g. PowerShell), you might receive a warning in the SQL Agent Error Log stating that the job step “is being queued for the PowerShell subsystem”.

This queuing is deliberate. Microsoft put safeguards around the various subsystems that the Agent calls in order to help prevent any particular subsystem from taking over all the worker threads (after all, you wouldn’t want your 25 PowerShell job steps preventing your queries from running because they’d taken all the worker threads, would you?).

This is one of the reasons that I like to use dedicated servers just to handle all jobs/ETL processes. I like to keep my local jobs to just maintenance processes (backup/reindex/checkdb) wherever possible. In doing this I need to be sure that I am able to increase the max worker threads associated with a subsystem so that jobs don’t get delayed. As such I keep an eye on the SQL Agent Error Log and server resources to check that I don’t hit thread starvation issues (and on a dedicated job server that’s pretty unusual).

Today I saw a bunch of warnings in the log to do with PowerShell job steps being queued. This wasn’t a surprise as I had added a whole bunch of new jobs and had not adjusted things from the default values. First I went through and grabbed the default information for each of the subsystems so that I could easily revert if the need arose:

SELECT [subsystem_id]
      ,[subsystem]
      ,[max_worker_threads]
  FROM [msdb].[dbo].[syssubsystems]



As you can see here, there are only 2 worker threads set for PowerShell. I needed to bump this up to 20 so that it could handle the concurrent job load. I have plenty of memory available on the server, so I’m not concerned with the overhead associated with the loading of PowerShell either. That is always something you need to keep in mind (oh, and you have to be a sysadmin to be able to update this table).

UPDATE msdb.dbo.syssubsystems

SET max_worker_threads = 20

WHERE subsystem = N'PowerShell'  


Checking the value again yields the updated information:

SELECT [subsystem_id]
      ,[subsystem]
      ,[max_worker_threads]
  FROM [msdb].[dbo].[syssubsystems]



Now it’s a simple case of restarting the SQL Server Agent service for the change to take effect.
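If you’d rather script that restart than use SSMS or the Services console, something like the following works. Note that 'SQLSERVERAGENT' is the service name for a default instance only; a named instance uses 'SQLAgent$InstanceName'.

```powershell
# Restart the SQL Server Agent service so the new max_worker_threads value is picked up
# 'SQLSERVERAGENT' assumes a default instance - named instances use 'SQLAgent$<InstanceName>'
Restart-Service -Name 'SQLSERVERAGENT' -Force
```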

Note: Applying Service Packs or Cumulative Updates could potentially reset this value, so make sure you document the change and confirm it after any upgrade (for example, http://support.microsoft.com/kb/972759).

Mass Updating Job Step Retry Attempts

Sometimes I find that the SQL Server team makes curious choices regarding default operations. When setting up replication, by default the log reader and distribution agent job steps will retry 2,147,483,647 times before failing. Doing the math with a retry interval of 1 minute, that means the job wouldn’t actually hit a failed status for over 4,000 years. While you can set up alerts around replication, a lot of folks don’t, and you could easily be in a situation where replication has been down for so long that you have to reinitialize the subscription and perform a snapshot to get things running again.
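To put a number on that claim, the arithmetic is simple (a quick sanity check, nothing more):

```powershell
# 2,147,483,647 retries, one per minute, converted to years
$retryLimit = 2147483647
$years = $retryLimit / 60 / 24 / 365   # minutes -> hours -> days -> years
Write-Output ("Time to reach a failed status: {0:N0} years" -f $years)
```

That works out to a little over 4,000 years before the job would ever report a failure on its own.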

Having a great many distribution jobs running on my server, I wanted to perform a mass update. I could have gone through and updated each job step individually, but that would have taken me a couple of days. I needed a better way. Digging through the SQL Server Agent stored procedures, I found that sp_update_jobstep would do the work for me. I just needed to pass in the relevant parameters and then restart the jobs so that the change took effect.

As such I ended up writing the following script. Just adjust the @retries variable to the value that you want to work with (in this instance it allows an hour’s worth of retries before the job fails). This script is limited to the Distribution job step, but it could easily be modified to manage the log reader as well.

Running this script took two minutes and it updated 60 distribution agent jobs. I’m getting ready to kick it off on the rest of my servers.



DECLARE @retries INT = 60; --Set to the value that you want for retry attempts

DECLARE @JobIdUpd UNIQUEIDENTIFIER,
    @JobStepUpd INT,
    @JobStatus INT,
    @JobName SYSNAME,
    @Message NVARCHAR(256);

--Create a temp table which will hold the results from xp_sqlagent_enum_jobs
--Note xp_sqlagent_enum_jobs is an undocumented stored procedure, it may not be there soon
CREATE TABLE #enumjobs
    (
      job_id UNIQUEIDENTIFIER NOT NULL ,
      last_run_date INT NOT NULL ,
      last_run_time INT NOT NULL ,
      next_run_date INT NOT NULL ,
      next_run_time INT NOT NULL ,
      next_run_schedule_id INT NOT NULL ,
      requested_to_run INT NOT NULL ,
      request_source INT NOT NULL ,
      request_source_id SYSNAME NULL ,
      running INT NOT NULL ,
      current_step INT NOT NULL ,
      current_retry_attempt INT NOT NULL ,
      job_state INT NOT NULL
    );

--Cursor through every Distribution job step with more retry attempts than we want
DECLARE UpdateJobsCrsr CURSOR FOR
SELECT  sjs.job_id ,
        sjs.step_id ,
        sj.name
FROM    msdb.dbo.sysjobsteps sjs
        INNER JOIN msdb.dbo.sysjobs sj ON sjs.job_id = sj.job_id
WHERE   sjs.subsystem = 'Distribution'
        AND sjs.retry_attempts > 60;

OPEN UpdateJobsCrsr;

FETCH NEXT FROM UpdateJobsCrsr
    INTO @JobIdUpd, @JobStepUpd, @JobName;

WHILE @@FETCH_STATUS = 0
    BEGIN
        --Update the job step so that the number of retry attempts is set to the @retries variable
        EXEC msdb.dbo.sp_update_jobstep @job_id = @JobIdUpd, @step_id = @JobStepUpd, @retry_attempts = @retries;

        SELECT @Message = N'Job ' + @JobName + ' updated.';
        RAISERROR (@Message, 0, 1) WITH NOWAIT;

        --We have to restart the job in order for this change to take effect
        --Check to see if the job is running, if so stop and restart it, otherwise leave it alone
        --Note, you'll need to be a sysadmin for this to work (or change up some stuff)
        TRUNCATE TABLE #enumjobs;
        INSERT INTO #enumjobs
        EXEC master.dbo.xp_sqlagent_enum_jobs 1, 'sa', @JobIdUpd;

        SELECT @JobStatus = job_state FROM #enumjobs;

        IF @JobStatus <> 4 --This is the job idle state
            BEGIN
                --Stop the job
                EXEC msdb.dbo.sp_stop_job @job_id = @JobIdUpd;
                --Wait a couple of seconds for the job to stop
                WAITFOR DELAY '00:00:03';
                --Start the job up again
                EXEC msdb.dbo.sp_start_job @job_id = @JobIdUpd;
            END
        ELSE
            BEGIN
                SELECT @Message = N'Job ' + @JobName + ' was not running, please start manually if required.';
                RAISERROR (@Message, 0, 1) WITH NOWAIT;
            END

        FETCH NEXT FROM UpdateJobsCrsr
            INTO @JobIdUpd, @JobStepUpd, @JobName;
    END

CLOSE UpdateJobsCrsr;
DEALLOCATE UpdateJobsCrsr;

DROP TABLE #enumjobs;