
Rolling Upgrades With Availability Groups – A Warning

One of the great options provided by Availability Groups, in SQL Server 2012 Enterprise Edition and newer, is the ability to perform rolling upgrades to new Service Packs or Cumulative Updates.

The basic idea is that you apply the update to one of the AG secondary servers and then fail SQL over to that server, which upgrades the user databases to the level of the update. The big advantage is that it minimizes the outage required to get the SP/CU applied, so you are down for a few seconds instead of 40 minutes.
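A minimal sketch of the manual failover step, assuming an Availability Group named [AG1] and a synchronized secondary (the names are placeholders):

-- Run on the patched secondary replica; once it completes, that replica is the new primary.
ALTER AVAILABILITY GROUP [AG1] FAILOVER;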

 

This works really well for your regular user databases; however, there is a problem when applying a CU or SP to a secondary server where the Integration Services catalog database (typically called SSISDB) is a member of an Availability Group. If you attempt to apply the CU/SP it can fail, leaving SSISDB in an offline state.

In order to apply the CU/SP you first have to remove SSISDB from the Availability Group and recover it on each server you want to patch. Once you have finished patching all of the servers you can add SSISDB back to the AG. For that period of time you will be at risk, so patch a couple of the machines and get the AG working for those again as soon as possible.
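The remove and recover steps are short; here is a rough sketch, assuming an Availability Group named [AG1] (adjust the names for your environment):

-- On the current primary replica: remove SSISDB from the Availability Group.
ALTER AVAILABILITY GROUP [AG1] REMOVE DATABASE [SSISDB];

-- On each secondary replica the local copy is left in a restoring state,
-- so bring it online before patching that server.
RESTORE DATABASE [SSISDB] WITH RECOVERY;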

Interestingly, this does not apply to every CU/SP release. Some do not make changes to SSISDB, and in those cases none of this is required. The only way to find out is by patching, so be sure to run the update through your test environments first.

You Can’t Meet Your RPO/RTO With AlwaysOn

That title may have caught your attention. AlwaysOn is the future of HA/DR for SQL Server, and has been since the release of SQL Server 2012.

AlwaysOn is actually a marketing term which covers Failover Cluster Instances (FCIs) and Availability Groups (AGs). Allan Hirt (@sqlha | blog) is a strong proponent of ensuring that people understand what this actually means. So much so that he has even ranted about it a little.

I’ve used FCIs for years, going back to the active/passive clustering days of old, and I’ve used Availability Groups in the last few months. They are both great, and both have limitations: FCIs with their shared storage and AGs with some network and quorum oddities. 

Both of them will do a fine job for you if you have the time, patience, and in the case of AGs, money to get them up and running. They still will not allow you to meet your RPO/RTO though. 

Critical to your business and your users is your uptime, and that’s where the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) come into play. They reflect the amount of time it will take to get your services back up and running, as well as the level of data loss that you are willing to accept.

 

 

Where FCI/AG win

The thing FCI and AG do best is ensure that transactions are kept as up to date as possible. With an FCI you move an entire instance over to another node, and everything committed goes with it. With AGs the log records are shipped to the secondaries and applied in either a synchronous or an asynchronous fashion. The asynchronous setting is designed to get transactions there as soon as possible and is great for longer distances, or where low commit times are ultra-critical. Both of these solutions protect you from two kinds of failure: a hardware issue or a software issue.

What does that mean? If your server goes down, then you can failover and lose next to nothing and be back up and running quickly. If Windows goes out to lunch on one of the machines then you can failover and keep ticking along.

So where do they fall down? 

 

 

What FCI/AG cannot do

Let’s say there’s a code release and an update accidentally gets run against a table with no WHERE clause. All of a sudden you have a table in a 500GB database which contains 3 million rows, and all of the information in it is wrong. Your users cannot use the application, your help desk is getting call after call, and you are stuck.

Your only option here is to restore your backup and roll forward your transaction logs to the point right before the update happened. You’ve tested this and know that it will take 120 minutes to get back to that point. Now you have a two hour outage, users are screaming, and the CIO is at your desk wondering how this happened and demanding that you get the database back up sooner.

FCIs and AGs are not going to help you in this situation. That update is already committed and so failing over the instance won’t help. The transaction logs were hardened immediately on your synchronous partner and applied within 5 seconds on your asynchronous target.

So how has AlwaysOn helped you in this situation? It hasn’t. And while you can sit there cussing out Microsoft for pushing a solution with this massive failing, that’s not going to solve your problem. That’s why you need something more than AlwaysOn.

 

 

You can pry Log Shipping from my cold dead hands

 

 “Log Shipping?” I hear you ask, “but that’s so old.”

It sure is. It’s old, it’s clunky, and it is perfect for the scenario I just mentioned.  

You can configure log shipping to delay restoring transaction log backups on the secondary servers. Let’s say you delay the restores by an hour. The accidental mass update is performed and you realize that you are in trouble. You quickly apply the logs on the secondary to the point in time just before the update, bring the database online and repoint your clients. You are back up again in 5 minutes. Sure, you have an outage, but that outage lasts a fraction of the time. Your help desk is not inundated with calls, and your users aren’t left out in the cold for hours.
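The recovery itself is just a point-in-time log restore on the secondary; a minimal sketch, where the file name and STOPAT time are made up for illustration:

-- On the log shipping secondary (with the restore job disabled first):
-- stop just before the bad update and bring the database online.
RESTORE LOG [MyDatabase]
    FROM DISK = N'\\LogShipShare\MyDatabase\MyDatabase_20140101_1200.trn'
    WITH STOPAT = N'2014-01-01T11:55:00', RECOVERY;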

There’s nothing to say that you have to delay applying those logs for an hour. It could be 2 hours, or even 24. It really all depends on how you want to handle things. 

Sure, you have to fail over manually, and you don’t get automatic page-level restores from one of the synchronous AG secondaries, but you have a level of data resiliency that AlwaysOn does not provide.

 

So while AlwaysOn technologies are great, and you should absolutely use them to enhance HA/DR in your business, you have to be aware of their limitations and use other parts of SQL Server to ensure that you can keep your business running.

Developing To Specifications

I’m a DBA. As a class of people, DBAs have a tendency to rant a little about developers, and I would certainly be someone that you would find in that category. The trouble is that most of the time I don’t think it is the developers’ fault; it is just a case of shooting the messenger.

As an example, let’s look at a new database release that was being requested.

The biggest issue was that the code was provided so late in the cycle that the issues found (and there were a great many of them) could not be fixed. Why could they not be fixed? Because an arbitrary release date had been given to the customer, and that date could not slip for any reason whatsoever. Artificial deadlines: one of the worst things that devs and DBAs have to deal with.

The developers agreed to get the code fixes added to their backlog and to get it into a sprint for fixing in the next month. So after much discussion and a firm commitment we decided to move ahead with the release.

My next question to the dev team was “how are you going to get the data out? You have a few procs here for loading data into tables, but nothing for being able to consume that data afterwards.”

The response was a little stunning:

The only requirement to get done by is to have the data written to a database. After this deploy, we are going to create a way to get the data out

Outstanding. Way to develop to requirements.

In this instance I cannot really place blame on the dev team; they are just doing what is asked of them. I think the bigger problem sits with the folks who are gathering the business requirements and translating those, along with timelines, up to the business.

I think that it might be time for DBAs to stop pointing fingers at devs and start holding accountable those who are causing these problems, namely the Business Analysts and Project Managers.

Who’s with me on this?

Automated Backup Tuning

Tuning your backups is a wonderful thing. You could easily reduce the time it takes to back up your databases by 50% or more just by changing a few settings, and yet so few people do it. I asked myself why and came up with two answers.

  1. People do not know how to tune their backups.
  2. It takes too long to run through the tuning process.

How to tune your backups

I’m not going to go over methods for tuning your backups in this post. There are people who have done a far better job of explaining the adjustments you can make, and who have created videos to help you understand and perform the process yourself.

My biggest concern was the level of effort required to test all the possible permutations of file counts, maxtransfersize and buffercount values. After all, who has time to figure all of that out and then capture the data to look at the relative performance characteristics of each one?

I decided that the best way to do this was to create a nice little test harness which would run through all those tests without manual intervention, and then figure out a way to get the output from all of those backups into some kind of meaningful graph for that instant visual on performance differences.

[Image: No manual backup tuning for this DBA]

The backup testing script

Step one in the automated tuning is a SQL script I created which accepts a few input variables:

  • @DatabaseName – name of the database you want to use for testing
  • @MaxBackupFiles – maximum number of files you want to write to at a time
  • @BackupFolder – destination folder for the backups
  • @WithCompression – whether or not to use SQL Server backup compression
  • @CreateJob – whether or not to create a SQL Agent job to run the tests
  • @JobLogFileName – file path and name for the job’s log file
  • @OutputToScreen – whether or not to output the backup commands to the screen

When executed, the script creates a SQL Agent job which you can then run at a time of your choosing. The job will run through a series of backups for a database (at minimum 40) and capture the information in the job log file.
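Each test iteration boils down to a backup command along these lines; the database name, paths and the particular MAXTRANSFERSIZE and BUFFERCOUNT values shown here are purely illustrative, not recommendations:

-- One permutation from the test matrix: two backup files,
-- a 2MB MAXTRANSFERSIZE and 50 buffers.
BACKUP DATABASE [MyTestDB]
    TO  DISK = N'\\BackupServer\SQLBackups\MyTestDB_1.bak',
        DISK = N'\\BackupServer\SQLBackups\MyTestDB_2.bak'
    WITH MAXTRANSFERSIZE = 2097152,
         BUFFERCOUNT = 50,
         COMPRESSION,
         STATS = 10;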

Be warned: performing this many backup tests, with so many different settings, can and will impact your server’s performance at some point. Be aware of this, and do not cause issues for your users when running the tests; they will not like you for doing so.

Visualizing the results

Once the backup testing job is complete a job log file is left behind. This log file contains all of the information for each of the backups performed including the:

  • number of files backed up to
  • maxtransfersize setting used
  • buffercount setting used
  • speed of the backup in MB/sec
  • time the backup took to complete in seconds

Getting that information into a usable form is done with a PowerShell script. A blog post by Jonathan Kehayias described his method for parsing SQLIO output into Excel charts. Based on that script, this updated version parses the backup log file, imports the information into Excel and creates a couple of charts.

These charts show the speed (Figure 1) and backup time (Figure 2) for the default backup options, the absolute fastest backup to NUL, and all of the other options that were tested.

Figure 1. Backup speed results for a 100GB database backed up across a 10Gb network to a CIFS share

Figure 2. Backup time results for a 100GB database backed up across a 10Gb network to a CIFS share

The graphs provide a nice visual representation of the backup performance which helps to quickly identify that there are improvements that can be made.

The default SQL backup runs at just over 200MB/sec and takes about 235 seconds to complete, but we know that we can perform that same backup to a NUL device in about 75 seconds at a rate of over 1200MB/sec.
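For reference, a throwaway backup to the NUL device looks roughly like this; it discards the data as it is written, so it only measures how fast SQL Server can read the database, and COPY_ONLY keeps it from disturbing the differential base:

BACKUP DATABASE [MyTestDB]
    TO DISK = N'NUL'
    WITH COPY_ONLY, STATS = 10;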

Just looking through the rest of the data it is clear that a lot of the values used make a negligible difference to the performance of the backups, so we can make an informed decision on what the best file, maxtransfersize and buffercount settings should be.

Hopefully this is something that you can find useful. Please give it a try and send me feedback on your testing efforts.

Downloads: Backup Test Harness.sql | Parse Backup Perf Tests.ps1

Is Technical Writing Important?

Many colleges and universities now offer certificates in technical writing. As a DBA, the idea of technical writing is important to me (and after all, I do have a blog to maintain). Technical writing has quite a lot of scope though. It can encompass:

  • Writing instruction manuals
  • Building reports
  • Creating graphs and charts
  • Web layout
  • Font choices
  • Working with various tools
  • Analyzing data
  • Building proposals
  • Plus lots more

I think that a lot of technical writing skill comes along as a natural progression of working in the IT business (at least for most people), but what about those folks that are new out of college? Do you feel that new graduates should have a feel for technical writing? Would a certificate in technical writing be beneficial in helping graduates get jobs? Would someone with a certificate be more likely to get an interview with your company?

These are all important questions and I am looking for you, the good reader, to assist me in gathering some data.

Please, spend a couple of minutes and fill out the survey below or the version over at SurveyMonkey. I will share the results once the survey is complete.


Fun With Recruiters

I love it when I get one of those special emails from a recruitment agency claiming they have the perfect position. I got one of those emails last week, and I thought I would share it (as well as my response).

 

Title: Front End Web Development Lead
Position Type: Direct Placement
Location: Bothell, WA, United States
Description:

Duration: 0-6 month(s)
Job Description:
Front-End Web Development Lead – Bothell, WA
Every day over 19,000 Amdocs employees, serving customers in more than 60 countries, collaborate to help our customers realize their vision. We have a 30-year track record of ensuring service providers’ success by embracing their most complex, mission-critical challenges. 100% of Fortune’s Global 500 quad-play providers rely on Amdocs to help them run their businesses better.
Amdocs is a “can do” company that leads the industry, is fully accountable and most importantly, always delivers. This is our DNA. Our success has been sparked and sustained by hiring exceptional people. If this sounds like you – if you have the drive, focus and passion to succeed in a fast-paced, delivery-focused, global environment – then Amdocs would like to talk with you. Amdocs: Embrace Challenge, Experience Success.
– Please Note: All applicants must be currently authorized to work in the United States without employer sponsorship now or in the future.
Role Overview:
We are looking for a Front-End Web Development Lead to be a team lead directing a multi-shore group of developers tasked with providing issue resolution support for a very large-scale web retail store. Some of the responsibilities and duties include, but are not limited to:
Interface with defect assurance team to accept inbound production issues for resolution
Direct and coordinate work of offshore development team to ensure accurate and timely resolution of front-end production issues
Interface with customer development, business, and other teams as needed to provide good service, promote team visibility and positive perception
As team grows, evaluate potential additional team candidates and support Amdocs executive management by providing expert advice as required to grow our presence with the customer and provide continuous improvement
Provide analytical support to identify, develop, and drive strategic improvement initiatives involving functionality improvements, innovation solutions, and development and implementation methodologies
Serve as trusted advisor to management and client
Work day-to-day with key client management, development fulfillment partner, QA testing organization, providing expert support to each as needed and appropriate
Support development of improved governance of production defect management, including definitions of severity, criteria for prioritization, and defect management lifecycle processes.
Requirements:
5+ years front-end web development experience
5+ years hands on experience with the following key technologies: JSP Integration, HTML / HTML 5, AJAX, CSS, JavaScript, JSON, XML, JQuery
Strong leadership skills
Preferences:
Large scale /enterprise web retail experience
Integration with ATG Commerce
Integration with Adobe CQ
Experience with other industry standard integration technologies (e.g. WebLogic)
Technical leadership experiences in relevant technologies
Telecom experience
All Amdocs roles require strong verbal and written communications skills, position-appropriate mentoring/leadership abilities, ability to quickly master new systems and/or processes, capacity to stay organized while managing competing priorities, and a deep customer service orientation, both internally and externally.

 

I’m a database guy; I’ve never been a developer, let alone a dev lead, so I replied…

 

As a solutions provider I would expect you to have some great analytics. This leads me to ask what part of my skill set or background leads you, or anyone at your company, to believe that I would be a good fit for, or would even consider, the opportunity that you list below.

 

If I ever get a response I’ll be sure to post it.

Do You Trust Your Application Admins?

I was sitting at my desk, happily minding my own business, when an alert came through that a database backup had failed. Ok, backups fail; I just figured one of the transaction log backups had hiccupped (we’ve been having some problems the last few days).

When I looked at the failure it was a backup trying to write to the C drive on the server.

I NEVER backup to C. It can easily fill the drive and take down the system.

A bigger indicator that something was up was that all of our backups are done across a 10Gb network to a centralized location for ease of tape backup. This indicated that someone, not a DBA, had the access to run a SQL backup.

I trawled through the permissions on the server and nobody had that level of access, so I couldn’t figure out who had done this or how.

 

So What Happened?

Looking through the SQL logs I saw multiple failed attempts by a contractor to log in to SQL, and then, about 5 minutes later, the backup error came through. Interesting stuff, so I walked over to the contractor and asked what was going on.

After he was unable to login he went to the application admin who helped him out with access…using the application service account.

One of the third party applications, from a certain unnamed vendor, has a database on that server. Due to the nature of the well designed code, the database owner has to be the same as the service account of the application. The application admin knows this password (not my doing).

After logging this contractor in as the application service account, the app admin walked away and left him to his own devices. As a result this contractor was dbo on a database which manages security for the entire company. We should consider ourselves lucky that all this guy did was attempt to perform a backup.

 

Preventative Actions

In order to try to prevent this kind of thing in the future I am looking at implementing a logon trigger for the service account which checks the host and application connecting, and denies access to anything not on a specifically approved list. There is also a conversation going on about disabling interactive logons for the service account using a group policy at the domain level.
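A rough sketch of what such a trigger might look like; the login, host and application names are placeholders, and anything like this needs careful testing on a non-production instance first, since a buggy logon trigger can lock everyone out:

-- Deny the service account unless it connects from an approved host and application.
CREATE TRIGGER [LimitServiceAccountLogon]
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = N'DOMAIN\AppServiceAccount'
        AND ( HOST_NAME() NOT IN ( N'APPSERVER01' )
              OR APP_NAME() NOT IN ( N'ExpectedAppName' ) )
    BEGIN
        ROLLBACK; -- kills the connection attempt
    END
END;
GO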

 

It is a Matter of Trust

While the application admin is obviously at serious fault here, it leads to the question of how well you trust your admin team.

Domain admins will be able to access your SQL Servers (get over it; there is no way you can keep them out, and if they really want in there are numerous ways for them to do so).

Anyone with a password could share that with someone else and allow them to access your servers.

Ultimately you have to trust those that you work with to do the right thing. It’s always sad when those people let you down.

Extended Properties Are Your Friend

It’s nice to have friends, so why aren’t you a friend of extended properties? They can make your life so much easier by helping to document your databases and objects.

Take a basic table create statement:

CREATE TABLE [dbo].[EatSomeBoogers](
    [ID] [int] NOT NULL,
    [BgType] [varchar](20) NOT NULL,
    [Size] [smallint] NULL,
    [Viscosity] [tinyint] NULL
) ON [PRIMARY]
GO

 

Pretty simple, and we can infer a lot of information about the table. But take the Size column: are we measuring in inches, feet, or yards?

We can add extended properties that will help to provide that sort of guidance for someone to reference later on.

EXEC sys.sp_addextendedproperty @name = N'Details',
    @value = N'Size is measured in mm', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers', @level2type = N'COLUMN',
    @level2name = N'Size'
GO

We can also add properties at the table level:

EXEC sys.sp_addextendedproperty @name = N'Purpose',
    @value = N'Holds information about all the gold digging',
    @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
    @level1name = N'EatSomeBoogers'
GO

And at the database level:

EXEC [MyDB].sys.sp_addextendedproperty @name = N'Usage',
    @value = N'Will handle all information as relates to digging'
GO

 

You can even add extended properties to other objects, like stored procedures:

EXEC sys.sp_addextendedproperty @name = N'ProcUsage',
    @value = N'Gets booger sizes and types', @level0type = N'SCHEMA',
    @level0name = N'dbo', @level1type = N'PROCEDURE',
    @level1name = N'GrabBoogers'
GO

 

What’s great is that you can then quickly and easily query the extended properties for your objects:

SELECT  OBJECT_NAME(ep.major_id) AS ObjectName ,
        CASE WHEN c.name IS NOT NULL THEN 'COLUMN'
             ELSE o.type_desc
        END AS ExtendedPropertyType ,
        c.name AS ColumnName ,
        ep.name AS ExtendedPropertyName ,
        ep.value AS ExtendedPropertyValue
FROM    sys.extended_properties ep
        LEFT JOIN sys.columns c ON ep.major_id = c.object_id
                                   AND ep.minor_id = c.column_id
        LEFT JOIN sys.objects o ON ep.major_id = o.object_id
ORDER BY ObjectName, ColumnName


This gives you a really quick and easy way to document your objects. I highly recommend that during your next development project you make life easy on yourself and add extended properties to your objects (and trust me, your DBA will thank you).

How Do You Provide Passwords To Your Users?

Passwords are a necessary evil and there are times when you have to hand out a password for a SQL login (because the POS application doesn’t support Windows Authentication). Traditionally I’ve done this by sending an email to the user with the login and a separate one with the password, figuring that internal security controls would be good enough to prevent anyone from accessing both emails. Recently it came to light that all emails were being siphoned off and an information security team had access to all email that traversed our Exchange servers. Now I’m not saying that I don’t trust these guys, but there’s no way in hell I would ever let them get one of my passwords.

I needed to come up with a better solution for getting passwords to users, one with a pretty good level of security around it. Yes, I know that the password can easily be set to force a change at the next login; however, that does not work in a lot of cases, where the password will be used by an application and the person doing the configuration doesn’t have the knowledge or tools to go in and change it themselves.
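For completeness, forcing a change at the next login is a one-liner; a sketch, assuming a SQL login (the login name and password here are made up):

-- MUST_CHANGE requires CHECK_EXPIRATION (and therefore CHECK_POLICY) to be on.
ALTER LOGIN [AppUser]
    WITH PASSWORD = N'T3mp0rary!Passw0rd' MUST_CHANGE,
         CHECK_POLICY = ON,
         CHECK_EXPIRATION = ON;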

I decided that I wanted a two-factor-authentication style method that would limit access to the password and provide the information to the user once and once only, so that it would never be stored for a long period of time.

First I created a table to hold the password and a uniqueidentifier, and nothing else. I didn’t want to store a login name along with this data, purely for security purposes: even if someone got access to the password they wouldn’t know which login it was for.

CREATE TABLE [dbo].[PwInfo]
    (
      [AuthenticationID] [uniqueidentifier] NULL ,
      [NewPwd] [varchar](128) NULL
    )
ON  [PRIMARY]
GO

ALTER TABLE [dbo].[PwInfo] ADD  DEFAULT (NEWID()) FOR [AuthenticationID]
GO

 

Now I needed a quick and easy way to get a password once entered. I wrote a procedure that accepts the AuthenticationID, returns the password and then deletes the entry.

CREATE PROCEDURE [dbo].[GetPwdByAuthenticationID]
    @AuthenticationID UNIQUEIDENTIFIER
AS
    SET NOCOUNT ON

    DECLARE @NewPwd VARCHAR(128)

    SELECT  @NewPwd = NewPwd
    FROM    dbo.PwInfo
    WHERE   AuthenticationID = @AuthenticationID

    DELETE  FROM dbo.PwInfo
    WHERE   AuthenticationID = @AuthenticationID

    SELECT  @NewPwd
GO

 

Finally I added a proc which would accept a password, add it to the table and then return some information to later allow the password to be retrieved.

CREATE PROCEDURE [dbo].[AddNewPassword] @NewPwd VARCHAR(128)
AS
    SET NOCOUNT ON

    DECLARE @AuthIDTbl TABLE
        (
          AuthenticationID UNIQUEIDENTIFIER
        )

    INSERT  INTO dbo.PwInfo
            ( NewPwd )
    OUTPUT  INSERTED.AuthenticationID
            INTO @AuthIDTbl
    VALUES  ( @NewPwd )

    DECLARE @AuthenticationID VARCHAR(128)
    DECLARE @Msg VARCHAR(4000)

    SELECT  @AuthenticationID = AuthenticationID
    FROM    @AuthIDTbl

    SELECT  @Msg = 'Password added. Add the user''s AD account to the report folder (http://ReportingServices/Reports/Pages/Folder.aspx?ItemPath=%2fDBA+Reports%2fUser+Password+Information) and remove it once they have pulled the data.

Send the following to the user:

For your security purposes please visit the following URL in your browser http://ReportingServices/Reports/Pages/Report.aspx?ItemPath=%2fDBA+Reports%2fUser+Password+Information%2fGet+Password and enter the authentication ID of '
            + @AuthenticationID + '

This is a one time use authentication token, if you need the password information again you will need to contact the production DBA team.'

    PRINT @Msg
GO

 

If you read through this code you’ll see that it outputs a message that provides a couple of links to Reporting Services. This is where the extra authentication comes in.

Within Reporting Services I created a report which calls the GetPwdByAuthenticationID proc and returns just the password (nothing more). This report lives in its own subfolder which is only accessible by the DBA team.

 

Here’s how it works:

A user requests a password from us, and we pull that password from our secure repository (highly encrypted) and use dbo.AddNewPassword to add it to the table. We get back a message which we then use to email the requestor; this contains the URL and the AuthenticationID that they need to enter into SSRS to get the information out. We then go to SSRS and grant the user Browser permissions on the folder, allowing them to run the report and get back the password. Once they have retrieved the password we remove the user from the folder, closing down access once more.
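To make that concrete, here is a hypothetical end-to-end run from the DBA side; the password and AuthenticationID values are made up:

-- Stage the new password and capture the PRINT message that gets emailed to the requestor.
EXEC dbo.AddNewPassword @NewPwd = 'Str0ng&R4ndom!Value';

-- The SSRS report then runs this with the AuthenticationID from the email;
-- the password is returned once and the row is deleted.
EXEC dbo.GetPwdByAuthenticationID
    @AuthenticationID = 'D3A1B2C4-0000-0000-0000-000000000001';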

This provides several layers of security:

  • The user must be logged in to AD with their own account to be able to access the report
  • The user must have the AuthenticationID provided by the DBA team to get the password
  • The password has a one-time access restriction, meaning the process cannot be repeated
  • The login is never given along with the password, and the two are never stored together by the DBA team except in a highly encrypted password vault inaccessible to anyone but the DBAs

 

I feel this is a much better solution than sending passwords via email, and considering it only took an hour to put together I figure it was a very worthwhile use of my time. Sure, there are improvements that could be made around automating the access grants and the notifications to users, but as a quick win I think this does a good job.

 

I’m interested to know what solutions you other folks might be using for those times when you need to provide password information to users. Please comment below.

Passing SQL Data To Command Line Processes Within SQL Jobs

Wow, that title is a bit of a mouthful; let me give you a scenario to help you understand the business problem I ran into today.

We have a production database running on SQL 2008 which contains a CLR procedure that accepts a ReportID value, queries some data and writes it out to the filesystem on a remote share. This procedure is called by multiple daily and weekly jobs to perform extracts for business partners. Yes, I know this is ugly. The database has TRUSTWORTHY ON, which is a big security risk, and we wanted to mitigate that risk with the minimum amount of work required.

Here’s an example of one of the jobs that makes the call in to that proc:

DECLARE @ReportID INT;

SELECT  @ReportID = ReportID
FROM    dbo.ReportList
WHERE   BusinessPartner = 'Customer1'
        AND Frequency = 'Daily';

EXEC Data2File @ReportID;

 

The first step to changing this was to get the CLR code out of source control and rebuild it as an executable file. This took the developer about 60 minutes. Now I had to figure out how we were going to call the executable with the appropriate ReportID.

The obvious way to call this would be to create a cmdline job step for D:\ExportExe\Data2File.exe (the name and location of the new executable). This would be great, except that it doesn’t contain the ReportID. The smart move here would be to just pass the ReportID along on the cmdline, except that we don’t know what it is for any particular report, as reports get deleted and added fairly frequently; we need to actually pass along the results of the query. The cmdline really wasn’t going to help here.

As is frequently the case, PowerShell came to the rescue.

All I had to do was create a PowerShell job step, run the query, pass the result into a variable and then call the executable with that variable. It sounds really simple, and it only took a few minutes to get right. In the end I wound up with the following PowerShell script, which runs from within a PowerShell job step:

$ErrorActionPreference  = "Stop"

 

$Query = @"

SELECT  ReportID

FROM    dbo.ReportList

WHERE   BusinessPartner = 'Customer1'

        AND Frequency = 'Daily';

"@

 

$ResultSet = invoke-sqlcmd -ServerInstance MySQLServer -Database MyDatabase -Query $Query -QueryTimeout 30

[int]$RptID = $ResultSet.ReportID

Write-Output "Calling Data2File.exe with ReportID: $RptID"

 

& "D:ExportExeData2File.exe" $RptId

 

In this script I’m building the query, calling it with invoke-sqlcmd and then passing the output ReportID to the external executable.

While this is still pretty ugly, and not the way that this sort of thing should be done (SSIS anyone?), it does work, and more importantly it allows me to turn off the TRUSTWORTHY setting on that database and improve the security of my SQL Servers.