Tag: PowerShell

Reading SQL Server Error Logs Using PowerShell

I was messing around with PowerShell the other day and using it to read SQL Server error logs. It’s actually a pretty trivial thing.
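In its simplest form it looks something like this (a sketch using SMO; assumes the SMO assemblies are installed and the server name is a placeholder for your own instance):

```powershell
# Read the current error log from a local default instance via SMO
# "localhost" is a placeholder - point it at your own instance
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$Logs = $srv.ReadErrorLog()
$Logs | Select-Object LogDate, Text -First 10
```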




Quick and easy. Then I tried to read from a clustered instance of SQL Server and ran into an issue.




This didn’t make a lot of sense; after all, the method was there before. I decided to check whether the method existed:


$Logs | gm


Yup, the method exists just fine.


I couldn’t figure out what was going on, so I enlisted the help of Sean McCown (blog|twitter), who posted a short training video on how to read the error logs on named instances of SQL Server.
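The short version: going through the SQLSERVER: provider with the full instance path works for named instances. A sketch, with placeholder server and instance names:

```powershell
# Reading the error log through the SQLSERVER: provider on a named instance
# MYSERVER and MYINSTANCE are placeholders for your own server and instance
$Logs = (Get-Item SQLSERVER:\SQL\MYSERVER\MYINSTANCE).ReadErrorLog()
$Logs | Select-Object LogDate, Text -First 10
```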

Grabbing Table Row Counts Into SQL Using PowerShell

Sometimes Twitter gives you the impetus to get something done that you’ve been meaning to accomplish but just haven’t quite gotten around to. Yesterday was one of those days.

Ed Watson (blog|twitter) asked the following question on #SQLHelp:



This is something I’ve been meaning to do for a while so that I can get good ongoing trending information around table row counts.

There are several ways to accomplish this but PowerShell was the clear winner for me. I wanted the script to accomplish a few things:

  • Capture the SQL instance name, database name, table name (including schema) and row count for all tables in a database
  • To grab the information for all the databases on a server (including system databases)
  • To write the data to a SQL table
  • Provide the option of dumping the data to the screen
  • Allow me to execute it quickly and easily against multiple servers without providing interactive input


The script I’ve written does all this. The only limitation I’ve found is that table information for model and tempdb is not accessible, but this isn’t really an issue: model doesn’t get any use and tempdb changes far too frequently for its counts to be meaningful.


The Scripts

Firstly we need to create a couple of SQL tables to hold the results. These should be created on the SQL instance and database of your choosing. The script itself should be updated to reference the locations of these tables (ultimately it made more sense to do it this way rather than pass the information in as a parameter).


CREATE TABLE dbo.SQLTableRowCounts
    (
      SQLInstance NVARCHAR(128) ,
      DBName NVARCHAR(128) ,
      TableName NVARCHAR(256) ,
      TableRowCount BIGINT ,
      LoadDate DATE NOT NULL DEFAULT CONVERT(DATE, GETDATE()) -- the daily cleanup below filters on this column
    );




CREATE TABLE dbo.SQLTableRowCountsArchive
    (
      SQLInstance NVARCHAR(128) ,
      DBName NVARCHAR(128) ,
      TableName NVARCHAR(256) ,
      TableRowCount BIGINT ,
      LoadDate DATE NOT NULL DEFAULT CONVERT(DATE, GETDATE()) -- mirrors SQLTableRowCounts
    );




There are no indexes on the tables currently. Depending upon your usage patterns a couple of different indexes could apply. I’ll let you decide what works best for you.


The PowerShell script is quite well documented and includes execution examples (you’ll want to run this on a machine where the SQL PS providers have been installed).



<#
.SYNOPSIS
   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

.DESCRIPTION
   Script takes in a list of SQL Instances and iterates through the databases to retrieve a row count for each table. This is then loaded into a SQL database.

   Within the script the $StorageSQLServer and $StorageDatabase should be updated to reflect the relevant locations to load the data.

   Requires bulk admin permissions on the SQL Server that you are loading the data to.

   Accepts pipelined inputs so that you can pass an array for working with multiple SQL Instances.

.PARAMETER SQLInstance
   This is the SQL Instance that you'll be working with

.PARAMETER verbose
    Initially set to false. When set to true will also output the data to the console

.EXAMPLE
   Basic example: Get-TableRowCounts MySQLInstance

   Advanced: Get-Content C:\Temp\SQLInstances.txt | % { Get-TableRowCounts $_ }
#>






param
(
    [string]$SQLInstance = "$(Read-Host 'SQL Server Instance [e.g. SQLPROD]')",

    [switch]$verbose = $false
)



#Load the SQL snapins
$CheckSQLSnapins = Get-PSSnapin | Where-Object {$_.Name -match "SqlServerProviderSnapin100"}
if (!$CheckSQLSnapins)
{
    Add-PSSnapin SqlServerProviderSnapin100
    Write-Host "SqlServerProviderSnapin100 Loaded"
}


#Set the following to the SQL instance and database that you want to load the data to

#You will need to have bulk insert permissions to be able to perform this action

$StorageSQLServer = "localhost"

$StorageDatabase = "SQLMonitor"


$BCPconnectionString = "Data Source=$StorageSQLServer;Integrated Security=true;Initial Catalog=$StorageDatabase;"


#Clean out the current data table if the data is not from today

Invoke-Sqlcmd -ServerInstance $StorageSQLServer -Database $StorageDatabase -Query "DELETE FROM dbo.SQLTableRowCounts WHERE LoadDate != CONVERT(DATE, GETDATE());"




function TableList ($SQLInstance, $DBName)
{
    #Handle default vs named instances
    if ($SQLInstance.Contains("\"))
    {        $location = "SQLSERVER:\SQL\$SQLInstance\Databases\$DBName\Tables" }
    else
    {        $location = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases\$DBName\Tables" }

    <#
    Create data table to hold the results
    The datatable will live for the life of the function call to reduce memory overhead
    pulling from multiple servers
    #>
    $dataTable = New-Object system.Data.DataTable "Results"

    #Specify the column names for the data table
    $col1 = New-Object system.Data.DataColumn SQLInstance,([string])
    $col2 = New-Object system.Data.DataColumn DatabaseName,([string])
    $col3 = New-Object system.Data.DataColumn FullTableName,([string])
    $col4 = New-Object system.Data.DataColumn RowCounts,([int64])

    #Add the columns to the data table
    $dataTable.Columns.Add($col1)
    $dataTable.Columns.Add($col2)
    $dataTable.Columns.Add($col3)
    $dataTable.Columns.Add($col4)

    #Get our table list
    $TableList = dir $location -Force | select Owner, Name, RowCount

    #Iterate through the tables, load the rowcounts into the datatable
    foreach ($TableInfo in $TableList)
    {
        $TableOwner = $TableInfo.Owner
        $TableName = $TableInfo.Name
        $TableRowCount = $TableInfo.RowCount

        $TableFullName = "$TableOwner.$TableName"

        #Provided there are rows in the table
        #Load the table/rowcounts into the datatable
        #This prevents errors due to nulls in model & tempdb
        if ($TableRowCount)
        {
            $row = $dataTable.NewRow()
            $row.SQLInstance = $SQLInstance
            $row.DatabaseName = $DBName
            $row.FullTableName = $TableFullName
            $row.RowCounts = $TableRowCount
            $dataTable.Rows.Add($row)
        }
    }

    if ($verbose) { Write-Output $dataTable }

    #Bulk load the data into SQL from the data table
    $bulkCopyRoles = New-Object ("Data.SqlClient.SqlBulkCopy") $BCPconnectionString

    #Loads to the current table
    $bulkCopyRoles.DestinationTableName = "SQLTableRowCounts"
    $bulkCopyRoles.WriteToServer($dataTable)

    #Loads to the archive table
    $bulkCopyRoles.DestinationTableName = "SQLTableRowCountsArchive"
    $bulkCopyRoles.WriteToServer($dataTable)

    #Clean up after ourselves so we release memory used to hold the data table
    Remove-Variable dataTable
}




function DBList ($SQLInstance)
{
    #Handle default vs named instances
    if ($SQLInstance.Contains("\"))
    {        $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\Databases" }
    else
    {        $BaseLocation = "SQLSERVER:\SQL\$SQLInstance\DEFAULT\Databases" }

    $dblist = dir $BaseLocation -Force | select Name
    foreach ($db in $dblist)
    {
        $DBName = $db.Name
        TableList $SQLInstance $DBName
    }
}

DBList $SQLInstance



This script could easily be set up to execute daily to load the data into SQL so that you can perform quick and easy trending on table growth.
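For example, a Windows scheduled task along these lines would do it (the task name, schedule, script path and instance name are all made up for illustration):

```powershell
# Register a daily 2am run of the row count collection (hypothetical path/instance)
schtasks /create /tn "SQLTableRowCounts" /sc DAILY /st 02:00 `
    /tr "powershell.exe -NoProfile -File C:\Scripts\Get-TableRowCounts.ps1 MySQLInstance"
```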


Give it a try, let me know what you think.

Adding Users To Local Groups With PowerShell

I ran into a situation last week where I needed to add a user to the Administrators group on a whole bunch of remote servers. Having had serious problems with cubital tunnel I try to avoid mouse clicks as much as possible, and anyway this is one of those situations that PowerShell is great for.

After a little trial and error I put together a script. The script doesn’t accept any pipelining; rather it will prompt you for the values (old school).

When asked, enter the name of the local group (you don’t always want to add people to Administrators after all), then the AD user or group name (include the domain, for example PROD\SomeAccount), and finally a comma separated list of servers to add the account to. Yes, I could have done this with a text file or something like that, but this is meant as a quick tool and it’s easier to just type a list of servers rather than create a file every single time you run it.

Once those three things have been entered the script will go out and check to see if the AD user/group already exists in that server group. If it exists you’ll get a warning that it’s already there; otherwise it will add the user/group to the local server group. There’s also a quick check to see if the server is available; if not you’ll get an error for that particular machine and the script will continue to work on the others.

Of course the account you execute this script as will need the relevant permissions on the remote servers to add the user/group.




<#
.SYNOPSIS
   Adds a domain user or group to a local Windows group on a local or remote server

.DESCRIPTION
   This script can be used in the event you need to add a domain user or group to a local Windows server group.

   Its real strength comes in being able to enter a comma delimited list of servers so that you can add the same domain user/group to multiple machines quickly and easily.

   The user executing the process will require the rights on the remote machines to be able to add the accounts.

   Contains basic error handling to check if the server is reachable, if the group exists and if the user is already a member of the group.

.PARAMETER <paramName>
   No parameters, script file will ask for input
#>



$usrName = (Read-Host "Enter the domain account or group that you wish to add to the local server group (eg PROD\MyAccount)")

$grpName = (Read-Host "Enter the server group name that the user should be added to (eg Administrators)")

$srvList = (read-host "Enter a comma delimited list of servers (eg SrvA, SrvB)").split(",")

Write-Host ""

$Exists = 0 #Initialize the Exists variable which is used to see whether or not users should be added to groups


foreach ($srvName in $srvList)
{
    $srvName = $srvName.Trim().ToUpper() #Gets rid of spaces

    try
    {
        if ([ADSI]::Exists("WinNT://$srvName/$grpName,group")) #Check to see if the group exists
        {
            #Set the comparison string for the account to add
            $usrName = $usrName -replace "\\", "/"
            $chkOutput = "WinNT://$usrName"
            #Write-Output "Checking for $chkOutput"

            $group = [ADSI]("WinNT://$srvName/$grpName,group")
            $group.Members() |
            % {
                $AdPath = $_.GetType().InvokeMember("Adspath", 'GetProperty', $null, $_, $null)
                #Write-Output $AdPath
                if ($AdPath -ilike $chkOutput)
                {
                    Write-Warning "User $usrName already a member of the $grpName group on $srvName"
                    $Exists = 1 #This way we won't try to add an account twice
                }
            }

            if ($Exists -eq 0)
            {
                Write-Output "Account $usrName does not exist in the local $grpName group on $srvName. Adding now..."
                $group.add("WinNT://$usrName,user") #Add the account to the local server group
            }

            $Exists = 0 #Reset the Exists variable ready for the next server
        }
        else
        {
            Write-Warning "The $grpName group does not exist on server $srvName."
        }
    }
    catch { Write-Error "Server $srvName was unreachable." }
}



Quickly Build Transaction Log Restore Commands With PowerShell

I’m in the middle of a server migration and so I’m setting up log shipping to get the databases consistent between the two servers prior to cutting over.

The backup copy over the WAN was taking a long time and so I had a lot of transaction log files that I needed to apply. I could have let log shipping take care of performing the log restores, but I found that a secondary process had created log backups also, just with different names. This lack of consistency was causing me a problem so I needed a quick way to build a restore command of all the files in order.

This was a quick win for PowerShell. I was able to quickly grab a list of the files in date order and pipe those to a string to build the restore commands, which I could then copy into SSMS to perform the restores.

$filelist = dir \\LogShippingPrimary\p$\backup | Where-Object {$_.Extension -eq '.trn'} | Sort-Object {$_.CreationTime} | select Name


foreach ($file in $filelist)
{
    $restorefile = $file.Name

    $text = "restore log MyDatabase from disk = 'P:\ForRestore\$restorefile' with norecovery;"
    Write-Output $text
}

If you wanted to take this a step further you could use Invoke-Sqlcmd to execute the $text string and perform the restore (don’t even contemplate doing that if you aren’t running at least SQL 2008 R2). I always prefer to do that within SSMS though.
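If you did want to go that route, swapping the Write-Output inside the loop for something like the following would run each restore as it is built (a sketch; the instance name is a placeholder):

```powershell
# Execute the restore command directly instead of printing it (use with care)
# "MyServer" is a placeholder for the instance being restored to
Invoke-Sqlcmd -ServerInstance "MyServer" -Query $text -QueryTimeout 0
```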

While I’ve used this script for a migration task it also works really well if you need to perform a point in time restore for your database from a full backup and some transaction log backups. Can you imagine how long it would take you to build out the restore string to recover a database which had transaction log backups every 15 minutes for the last 20 hours? That’s 80 log files. Want to do that manually? This is a much nicer way to go.

Using PowerShell To Restrict MSDTC Ports

Ever tried to create a linked server that uses MSDTC only to find yourself blocked by your company firewall? Ever tried to run a WMI query against a server just to find yourself blocked? Ever had the firewall team tell you that they aren’t going to open up ports 49152-65535 so that you can use RPC?

Let’s be fair, your network team shouldn’t have to open up all those ports because RPC responds somewhere within a large dynamic range.

The Microsoft KB article “How to configure RPC dynamic port allocation to work with firewalls” will tell you how to edit your registry to restrict that port range and make your network admin a little happier.

Working with the registry is not fun at the best of times, and when you are setting up a bunch of machines it takes time. Sure, you could create a .reg file and run that on each machine, but this is 2011 and we have PowerShell now.


The following script checks for, and if necessary adds, the required registry keys to restrict that port range. In the example below Windows is being limited to ports 5000-5200.





<#
.SYNOPSIS
   Restricts the RPC ports to be used on Windows from 5000-5200

.DESCRIPTION
   Execute to add registry entries on the local machine to restrict the RPC ports from 5000-5200. Requires a reboot once executed.

.PARAMETER <paramName>
   No parameters
#>



if (Test-Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue) { "Registry Key Exists" }

else { md 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' }


if (Get-ItemProperty -Name "Ports" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue ) { "Ports value exists" }

else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'Ports' -Value '5000-5200' -PropertyType 'MultiString' }


if (Get-ItemProperty -Name "UseInternetPorts" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue ) { "UseInternetPorts value exists" }

else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'UseInternetPorts' -Value 'Y' -PropertyType 'String' }


if (Get-ItemProperty -Name "PortsInternetAvailable" -Path 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -ErrorAction SilentlyContinue ) { "PortsInternetAvailable value exists" }

else { New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Rpc\Internet' -Name 'PortsInternetAvailable' -Value 'Y' -PropertyType 'String' }

Speaking At PASS Summit 2011

I’ll be speaking at the PASS Summit 2011 coming up in just over a month.

“Wait…what?” I hear you say, “you said in your blog post https://sirsql.net/blog/2011/6/20/not-presenting-at-pass-summit-2011.html that you were not going to be presenting”.


Well a couple of weeks ago PASS opened up the opportunity for folks to submit lightning sessions (5 minute talks on a subject). I submitted 4 of those and one of them was selected:


PowerShell: It’s your new BFF

PowerShell wants to be your bestie. Please accept PowerShell’s friend request and put it in your circle. Find out why you should.


Mine is just one of 24 lightning sessions that have been selected; check out the others at http://www.sqlpass.org/summit/2011/SummitContent/LightningTalks2011.aspx


Hope you’ll find the time to swing by at least one of the lightning sessions. They are a lot of fun.


Creating A Clustered SQL Install ini File Using PowerShell–Part 3

In parts 1 and 2 the cluster was made ready for the SQL install and the Active Directory objects were created. In the final part of the series we will finally get to creating the ini file and installing SQL.

As an FYI this script is designed to install the following components along with the Database Engine:

  • SQL Server Replication
  • Full-Text Search
  • Business Intelligence Development Studio
  • Client Tools Connectivity
  • Integration Services
  • Client Tools Backwards Compatibility
  • Client Tools SDK
  • SQL Server Books Online
  • Management Tools – Basic
  • Management Tools – Complete
  • SQL Client Connectivity SDK
  • Microsoft Sync Framework

The ini can be quickly modified to remove some of these components if they are not required (I’m looking to add selections for these items in a future version).


Copy the script to the server

Download the CreateClusteredSQLInstallFile.ps1 and copy it over to the server. From there it can either be opened directly in PowerShell ISE or called from within the PowerShell shell.

If calling the script from the shell you might get an error:


If so you’ll need to reopen the shell as an administrator and change the execution policy to RemoteSigned

set-executionpolicy Remotesigned


This time it should run without any errors.


Working through the script

The first thing to come up is an informational box letting you know that you’ll be creating ini files:


Next, depending upon whether you started the script through the shell or the ISE, you’ll see different things. In the ISE you’ll see a pop-up window asking if you want to “Install a new clustered instance or add a node to an existing cluster?”, in case you want to generate an ini file for a cluster you already have in place. By default this will create files for a new clustered install of SQL Server:

In the shell you will just get a prompt:


A new window will appear asking where you want to put the ini files when they are created. I’ve chosen to put them in C:\SQL Install ini (this is the same whether you use the shell or the ISE)


Next you’ll be prompted to enter the SQL virtual name (going forward I’ll be showing the prompts through the ISE, keep an eye on the shell if that’s what you are using). This is the disabled computer account we created in Active Directory:


The next prompt will ask for an instance name, here I’m going with PROD (which will give us the SQL instance of GROO\PROD):


Follow this up by entering the IP address:


The next two prompts are for the SQL Service and SQL Agent accounts to be used:


Now choose a root folder for the SQL system databases (master/model/msdb). A subdirectory called SQLSystem will automatically be created, so I am using the root of G:


A similar window will pop up and ask for the locations of TempDB, the data and log files, and backups. This is just another case of choosing the appropriate locations. Note that under the location you choose, the following subfolders will be created and used:

  • TempDB – <root>\MSSQLData
  • SQL Data Files – <root>\MSSQLData
  • SQL Log Files – <root>\MSSQLLogs
  • Backups – <root>\MSSQLBackup



The final step is a prompt asking which failover clustered disks should be used. These will be listed under Available Storage in the Failover Cluster Manager. Only the disks that will actually be used should be added. This is a comma separated list of those disks (the screenshot shows the disks next to the prompt for easy identification):


A completion pop-up appears indicating what files were created and how to use them:


The filenames are generated based upon the user creating the script and the time that the script started executing (this helps to prevent accidental duplicates).
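In other words, the names come out looking something like this (the exact format is a guess for illustration; the real script may differ):

```powershell
# Illustrative only: build a filename from the executing user plus a run timestamp
$stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
$iniName = "$($env:USERNAME)_$($stamp)_SQLInstall.ini"
$iniName
```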


If you’d used the shell to execute the script you’d expect to see the following:


I’ve got files, so now what?

Now that we have generated the ini files it’s time to put them to use. The first thing I like to do is rename them to something more appropriate (like the SQL virtual name and their purpose), like so:



I also like to store these files away in a repository; this way it’s a very quick and easy operation to add a node to a cluster or find out what installation settings were used.


Can we do the SQL install already?

We sure can, it’s go time! But first, take a quick look at the ini file that was created:

;File created by: AWESOMESAUCE\Nic
;File creation date: 2011-07-17 16:47
;Script to install new SQL clustered instance
;SQLSERVER2008 Configuration File
INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server"
INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server"
INSTANCEDIR="C:\Program Files\Microsoft SQL Server"



That’s all the information needed to perform the install.

Now open up an elevated command prompt and go to the location of the install media. Call setup.exe and pass along the location and name of the SQLInstall.ini file along with the SQL and Agent passwords, then hit enter:
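The call takes roughly this shape (the ini path and passwords are placeholders):

```
setup.exe /ConfigurationFile="C:\SQL Install ini\SQLInstall.ini" /SQLSVCPASSWORD="<SQLServicePwd>" /AGTSVCPASSWORD="<AgentPwd>"
```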


Now go and make yourself a beverage of your choice; the install will take a little while. After an indeterminate amount of time you should see the following nondescript message in the command prompt window:


Not exactly a whole bunch of bells and whistles there, but “Setup result: 0” tells us that the install was successful (even though it’s somewhat cryptic).

We can test this by cranking up SSMS and connecting:


Add another node

A single node cluster, while nice, isn’t particularly highly available, so we need to have this SQL instance deployed on the other node(s). This is a quick and painless task. Just copy the SQL Install ini folder over to the other node(s), log in and run setup.exe, this time calling the AddNode.ini file:


Again, go for a walk around the block, talk to your colleagues and eventually come back to:



Rinse and repeat for as many nodes as you need.



Set up your cluster, ensure Active Directory is good to go and use CreateClusteredSQLInstallFile.ps1 to quickly build the ini files to install SQL Server on your cluster. It’s quick and easy: it allowed me to deploy 4 SQL Server instances to 5 nodes in under 4 1/2 hours.

Give it a try, let me know what you think. I’ll be looking to enhance it in the future.


Read Part 1

Read Part 2

Creating A Clustered SQL Install ini File Using PowerShell–Part 2

In Part 1 I showed how the cluster was readied for the SQL install. In this short part 2 I’ll go over the Active Directory configuration components so that everything is ready for us to start performing the install.


Create the computer account

The first thing is to create a new Computer object in Active Directory. This will be the virtual name used in the cluster.


Once added we need to go back in and edit the properties of the Computer (you’ll need to use Advanced Features for this bit, look under the view menu item).

Go to the security tab of the Computer properties and hit the Add button. In the Select Users, Computers, Service Accounts or Groups window hit Object Types and ensure that Computers is checked. The cluster account will be used to manage the Computer resource (in this case CLUSTER1), so we add CLUSTER1$.


And then give CLUSTER1$ Full Control over the computer object.


Finally the computer object needs to be disabled (if it’s not disabled your cluster install will not work correctly and it will fail to come online).



Create the service accounts

In the interest of security I’ll be using one account for the SQL Engine and another for the SQL Agent. Quickly create a couple of accounts with nice secure passwords (that of course you will keep somewhere secure, like in a KeePass file).


I like to give the accounts good descriptive names. It makes them easier to find later (actually the best way is to have a separate OU for the SQL accounts and for the SQL computer objects as it makes it a lot easier to manage and create Group Policies for them).


The boring stuff is done

The setup items are complete, we’re now ready to create the ini file….in part 3

Creating A Clustered SQL Install ini File Using PowerShell–Part 1

I’ve written a few posts recently around clustering. Most of these have been inspired by building out 4 new clusters consisting of 16 nodes and 14 new SQL instances. Each of the servers in the clusters is identical and each one of the SQL installs needed to be exactly the same.

Ensuring that all the installs are the same can be challenging when using the GUI, especially when you are installing a lot of instances. In this case:

  • Cluster A – 5 nodes x 4 SQL instances = 20 installs
  • Cluster B  – 4 nodes x 3 SQL instances = 12 installs
  • Cluster C – 4 nodes x 4 SQL instances = 16 installs
  • Cluster D – 3 nodes x 3 SQL instances = 9 installs
  • Total installs: 57

Given how slow it is to use the SQL installer GUI I figured it would take me about 2 1/2 lifetimes to perform the installs I needed.


A better and more consistent way to install SQL

So what would be a better way to perform these installs? It makes sense to perform automated installs using ini files that hold all the configuration information for the instance.

There’s a good MSDN article on “How to: Install SQL Server 2008 R2 Using a Configuration File”.

Using the SQL Server instance prep tool is again a major undertaking through the GUI. I figured there had to be a better and faster way to build an ini file to perform the installs I needed. From previous cluster installs I had a couple of configuration.ini files lying around, and it’s a pretty easy task to manipulate those to reference new clusters, disks and IP addresses. Thing is, that’s not particularly quick and it’s also very prone to error.

So what would be a better way?


PowerShell to the rescue

Yet again, PowerShell is the savior in this situation.

I put together a script that is quick and lightweight, asks the relevant questions and provides you with the ini files that you need to install a new clustered SQL Server instance and the ini files that you need to perform the installs on the additional cluster nodes.


First things first

Before using the script we need to ensure the cluster is setup and ready to go. Here I’ll be using my home machine to install a new clustered SQL Server instance.

I’ve created a new cluster called CLUSTER1 consisting of two nodes, CLUSTERNODEA and CLUSTERNODEB, with a private and a public network (named as such).


The cluster has node and disk majority. Storage wise (as this is a demo) I’ve only got a couple of disks: SQL Data Disk to hold my SQL system, data and log files, plus SQL Backup Disk to hold the backups. This is obviously not how I would set up a normal system; this is just iSCSI disk running on my laptop, don’t judge me too harshly.



In part 2 I’ll go over the Active Directory configuration so that everything is in place ready for the install to happen.

Grabbing The Newest File From Subdirectories Using PowerShell

Every once in a while I come up with a need for something a little out of the ordinary, in this instance I was moving backups from one machine to another. Robocopy is the obvious tool of choice to do this. Trouble was that the backups were not all within a single directory, rather they were in multiple subdirectories. Robocopy will of course handle this with the /S switch. What Robocopy can’t handle is the fact that I want only the most recent file from each one of those subdirectories, not all of them (in this case I just wanted to move the most recent differential backup from one location to another).

I figured I could sit down and query msdb for this information and dump that out. I mean it works, it’s functional and does exactly what I would need it to do. Where’s the fun in that though, really? Add to that it would only handle SQL backups; what if I had a need to do the same thing later on for other types of files? The T-SQL query wouldn’t work for me then.


PowerShell to the rescue

Seriously, I think that I’m going to get a cape for PowerShell that it can wear around the place as it’s that damn super (although I am not sure that I want to see its underwear outside its pants).

In this example I’m going to be working from C:\Temp on my local machine.

Within C:\Temp there are two folders and a subfolder:

Each of these folders contains a couple of files:


To grab the most recent file from a particular folder is a quick one liner:

dir c:\temp\subfolder2 | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1


That is the basis for the rest of the script. Essentially everything else just handles recursing through the subdirectories to grab this information:


$Path = 'C:\Temp' #Root path to look for files

$DestinationPath = '\\Remote\D$' #Remote destination for file copy


#Grab a recursive list of all subfolders

$SubFolders = dir $Path -Recurse | Where-Object {$_.PSIsContainer} | ForEach-Object -Process {$_.FullName}


#Iterate through the list of subfolders and grab the newest file in each

ForEach ($Folder in $SubFolders)
{
    $FullFileName = dir $Folder | Where-Object {!$_.PSIsContainer} | Sort-Object {$_.LastWriteTime} -Descending | Select-Object -First 1

    #For every file grab its location and output the robocopy command ready for use

    ForEach ($File in $FullFileName)
    {
        $FilePath = $File.DirectoryName

        $FileName = $File.Name

        Write-Output "robocopy $FilePath $DestinationPath $FileName /R:6 /W:30 /Z"
    }
}


Running this gives the results:


Quick and easy; does just what it says on the box. Feel free to download CopyNewestFileFromSubDir.ps1 and give it a try. Let me know what enhancements you put around it (here’s a quick one: to limit the type of files evaluated, change !$_.PSIsContainer to $_.Extension -eq ".bak").