The Curious Case of the Vanishing AG IP

Before you read any further, be sure to go and read Kendra Little’s fantastic post on How to Survive Opening A Microsoft Support Ticket for SQL Server or Azure SQL. There are details I won’t go into in this post, but I just wanted to note that the issues Kendra raises are not ones that only she experiences.

The Basics

Availability Groups (AGs) are a great way to handle business continuity. When they work, they work great. When they don’t work…well, the documentation and tooling are rather lacking when it comes to helping you get through it.

One of the nice things about AGs is that you can have servers on different subnets. This is useful if you want AGs to span multiple data centers, want to perform cage migrations, or, in the case of Azure VMs, want an AG that can fail over automatically and isn’t at the mercy of an Azure load balancer timeout.

Failing over an AG to a different subnet works well, but it does require configuring your Windows Server Failover Cluster (WSFC) resource so that all of the IPs associated with the AG listener are registered (a setting known as RegisterAllProvidersIp, which is now the default). If they aren’t and you fail over to a different subnet, you’re then at the mercy of your DNS TTL for the amount of time it will take clients to connect to SQL on the new subnet (or you connect to each machine and flush its DNS cache).
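If you want to confirm how an existing listener is configured, the setting is a cluster parameter on the listener’s network name resource and is easy to inspect from PowerShell. A minimal sketch, assuming the FailoverClusters module is available and using a placeholder resource name:

# Check whether RegisterAllProvidersIP is enabled on the listener's network name resource
Get-ClusterResource -Name "MyAG_MyAgListener" | Get-ClusterParameter -Name RegisterAllProvidersIP

# Turn it on if it isn't (the network name resource needs a restart for this to take effect)
Get-ClusterResource -Name "MyAG_MyAgListener" | Set-ClusterParameter -Name RegisterAllProvidersIP -Value 1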

With RegisterAllProvidersIp enabled, every IP address associated with the AG listener is registered in DNS; if the client connection string also includes MultiSubnetFailover=True (and the client library supports it), the client will try all of the registered addresses and connect to the one that responds.
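For illustration, a connection from the client side looks roughly like this. This is a sketch using System.Data.SqlClient from PowerShell; the listener and database names are placeholders:

# Connection string pointing at the AG listener with MultiSubnetFailover enabled, so the
# client attempts all registered listener IPs and uses whichever one answers
$connectionString = "Server=tcp:MyAgListener,1433;Database=MyDatabase;Integrated Security=SSPI;MultiSubnetFailover=True"
$conn = New-Object System.Data.SqlClient.SqlConnection $connectionString
$conn.Open()
$conn.Close()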

Adding a new IP to a listener for an AG is done using an ALTER AVAILABILITY GROUP MODIFY LISTENER command. This will normally update DNS appropriately, but there are manual steps you can take to make sure it is done. Why an extra step? Because if the IP is not registered and you fail over to the new subnet, you end up with the TTL problem I mentioned earlier. A way around this is to create a new A record in DNS that points to the new IP. That way, when you fail over, that IP is already in DNS and you don’t run into issues. You can create the A record either through the DNS console or through PowerShell.
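As a rough sketch of the two steps (the AG, listener, zone, server, and IP values here are all placeholders), the listener change is T-SQL against the current primary, and the A record can be created with the DnsServer module:

# Add the new subnet's IP to the AG listener (run against the current primary replica)
Invoke-Sqlcmd -ServerInstance "CurrentPrimary" -Query "ALTER AVAILABILITY GROUP [MyAG] MODIFY LISTENER 'MyAgListener' (ADD IP (('10.2.0.50', '255.255.255.0')));"

# Manually create the matching A record so the new IP is already in DNS before any failover
Add-DnsServerResourceRecordA -ComputerName "MyDnsServer" -ZoneName "corp.example.com" -Name "MyAgListener" -IPv4Address "10.2.0.50"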

The Issue

A requirement came in to add a new server, in a new subnet, to an existing AG. A simple process, and one that’s been done dozens of times in the past with a well-worn SOP. In this instance, the new server was added to the WSFC, logins were added, databases were restored, the instance was added to the AG, and the listener was modified to include the new IP.

After getting the DNS team to manually add the new A record, an nslookup confirmed the IP appeared in the DNS record.
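For the record, the verification itself is trivial; nslookup works, as does something like this sketch (names are placeholders):

# Confirm all of the listener's IPs, including the new one, are present in DNS
Resolve-DnsName -Name "MyAgListener.corp.example.com" -Type A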

Great stuff! Ready to move into service then.

Except, prior to sending traffic to the new instance, nslookup was run again and somehow the new IP had vanished from the A record. The DNS team stated they hadn’t removed it. Logs showed that none of the DBAs had executed anything to remove the IP, and yet it had vanished.

The DNS team added it back once more. Everyone validated it.

The next day it was gone once more.

The DNS admin did some looking, and it showed that the AG listener’s computer account had deleted it. Odd. So, I went digging through the SQL Server logs. Nothing there. Then I dumped the cluster logs and went digging through those.

There were some entries in the logs indicating that the WSFC was checking that the listener’s IPs were valid, but the check only included the IPs that already existed, not the newly added one. Looking back, this process ran every 24 hours.

Unfamiliar with this behavior, I figured it was time to open a ticket.

The Investigation

After explaining the issue multiple times, collecting many sets of logs, and answering the same question a large number of times, we received a couple of “things to try” that included turning off RegisterAllProvidersIp (which would have caused an outage on a failover to a box on a different subnet) and removing DNS permissions from the AG listener (which would mean we couldn’t add new IPs using T-SQL or PowerShell).

After several false starts over weeks, recreating the A records over and over again (I truly feel bad for the DNS admin, who kept his PowerShell script to hand and hit enter once a day), we got to someone who moved beyond reading some random web pages and gave us the first piece of useful information.

The Fix

The short version is that after adding an IP to an AG listener, you have to restart the AG’s network name resource for it to actually pick up the change. You can do this either by failing the AG over to any other replica or by taking the network name resource offline and back online using the Failover Cluster Manager GUI or PowerShell.
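A minimal sketch of the offline/online option using the FailoverClusters module (the resource name is a placeholder, and connections through the listener are briefly interrupted, so do this in a maintenance window):

# Take the listener's network name resource offline and bring it back online
# so that it re-reads its static IP configuration
Stop-ClusterResource -Name "MyAG_MyAgListener"
Start-ClusterResource -Name "MyAG_MyAgListener"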

When a new IP is added to the AG listener, it’s added to the static configuration, but that configuration is not read in until the network name is restarted. In this case, we hadn’t performed a failover or restarted that resource, so the WSFC used the cached configuration to validate the IPs against DNS. When it noticed that DNS had an extra IP that wasn’t in the cached configuration, it removed it.

After adding the IP once more, and performing an after-hours restart of the network name, the new A record survived the next DNS check by the WSFC. After leaving it a couple more days to be sure it wasn’t going to vanish again, the new server was brought fully into service.

I asked for a link to documentation on this facet of AGs and WSFCs. Apparently there is none. So, this is just a warning note for those of you maybe adding IPs in extra subnets – restart your resources to ensure your change is picked up.

Using STRING_AGG to Return Schema Information

A few weeks ago, someone came to me wanting a dump of the tables in a database, along with all of their columns. They were working on some data governance, and the first step was to know roughly what existed in the database.

After a chat, we decided that they needed two lists: one of columns along with all the tables where each column is used, and one of tables along with the columns used in each table. To help import the data into their tooling, they needed each list as a comma-separated string.

After spending many years using various forms of FOR XML to do this sort of thing, I decided it was time to make my life easier and use STRING_AGG instead.

STRING_AGG can take a set of data and use the provided delimiter to create a list. As a bonus, you can also use the WITHIN GROUP clause to order the items within the STRING_AGG result.

Here’s a quick example of how I used STRING_AGG to return the table and column data requested by the governance folks.

/* Return a list of columns in the database and, for each column, a comma-delimited list of the tables in which that column appears */
SELECT
    c.name AS ColumnName
    , STRING_AGG(CONCAT(QUOTENAME(s.name), '.', QUOTENAME(t.name)), ', ')
        WITHIN GROUP (ORDER BY CONCAT(QUOTENAME(s.name), '.', QUOTENAME(t.name))) AS TableListForColumn
    , COUNT(*) AS TableUsageCountForColumn
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY c.name
ORDER BY TableUsageCountForColumn DESC
    , ColumnName ASC;

/* Return a list of tables in the database and, for each table, a comma-delimited list of the columns in that table */
SELECT
    CONCAT(QUOTENAME(s.name), '.', QUOTENAME(t.name)) AS TableName
    , STRING_AGG(c.name, ', ')
        WITHIN GROUP (ORDER BY c.name) AS ColumnList
    , COUNT(*) AS CountColumnsInTable
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY CONCAT(QUOTENAME(s.name), '.', QUOTENAME(t.name))
ORDER BY CountColumnsInTable DESC
    , TableName ASC;

The Changing Community Landscape?

Just dropping in the thoughts I posted in a Twitter thread earlier today –

Random thoughts on the changing landscape around the SQL Server community and training events (more a train of thought than well constructed reasoning with data, so don’t expect some kind of wonderful insight here). Should the demise of PASS happen it will leave a significant gap in the “independent” training space (I say independent here despite times when folks on the board are working for vendors – I genuinely feel that they have overall integrity when it comes to fairness and note that I am excluding C&C from this).

Continue reading “The Changing Community Landscape?”

The Deception of the Log Reuse Wait Type

One day you’re happily working away and get interrupted by an alert that says your transaction logs are getting full, or maybe growing. Ever the vigilant person, you log on, crack open SSMS or ADS, and take a look at sys.databases to check out the log_reuse_wait_desc.
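That check is a one-line query against sys.databases; a quick sketch, shown here with Invoke-Sqlcmd although the same query works fine in SSMS or ADS (server and database names are placeholders):

# See why the transaction log cannot be truncated
Invoke-Sqlcmd -ServerInstance "MyPrimary" -Query "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDatabase';"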

AVAILABILITY_REPLICA, the DMV proudly advises. Availability Groups – the lifesaver of your organization and the bane of your nightmares.

You crack open a SQL script and run it against the primary to see which replica is running behind. Strangely, all the replicas appear to be up to date. The send queue is hovering around 60. The redo queue is not greater than a thousand on any of the asynchronous replicas.
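That sort of script boils down to something like the sketch below, run against the primary (the server name is a placeholder); the queue sizes are reported in KB:

# Check the log send and redo queue sizes for each replica and database
Invoke-Sqlcmd -ServerInstance "MyPrimary" -Query "
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.log_send_queue_size,
       drs.redo_queue_size
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id;"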

Unsure why this is happening, you use the tried and trusted method of restarting the HADR endpoint to see if that kicks things into motion. Nothing is resolved. The log is getting fuller and fuller, and you really don’t want that thing to grow.

What do you do next?

You may go around to the secondary replicas and remove them from the AG, one at a time, to see if that fixes the problem. Alternatively, you go ahead and check the running sessions on each of the replicas to see if there’s something going on you haven’t considered.

Your query discovers that the AG replica in your DR site is still taking a backup from overnight. It should have completed hours ago, but for some reason it is still running. Wondering if it is related, you kill the backup.
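A sketch of that kind of check against the DR replica (the server name and the session id in the KILL are illustrative; use whatever the first query actually returns):

# Look for long-running backups on the replica
Invoke-Sqlcmd -ServerInstance "MyDrReplica" -Query "
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%';"

# Once you have confirmed the offending session, kill it (87 is an example session id)
Invoke-Sqlcmd -ServerInstance "MyDrReplica" -Query "KILL 87;"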

With the backup dead, suddenly the transaction log on your primary clears and you can get back to your interrupted nap, er, hard work.

This is an example of a choice in messaging within SQL Server that makes it more difficult to find a solution.

In this case, the problem of the transaction log not truncating would ideally have been marked as ACTIVE_BACKUP_OR_RESTORE; after all, it was a backup that was blocking log truncation. But the folks who wrote the product felt AVAILABILITY_REPLICA was more appropriate, because the backup was running on a replica other than the primary, and that replica was the root of the truncation issue.

So, be aware, messaging may not always accurately reflect problems that are occurring.

The Curious Case of the Restore Slowdown

For quite some time I had been seeing a situation where database restores slowed down after 50-60% of the restore was complete (you can output restore progress using WITH STATS = <some value 1-100>, or use DMVs or sp_whoisactive). This was not really a big deal with small databases; it would just take an extra couple of minutes to complete. But when I needed to restore a 16 TB database it became a real problem.
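For reference, the DMV route for watching progress is a simple query; a sketch (the server name is a placeholder):

# Watch restore (and backup) progress without relying on WITH STATS output
Invoke-Sqlcmd -ServerInstance "MyRestoreServer" -Query "
SELECT session_id, command, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command IN ('RESTORE DATABASE', 'BACKUP DATABASE');"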

The restore would kick off at a decent rate, giving me around 2 GB/sec (restoring over a WAN from multiple files to a server with 144 cores, 512 GB of RAM, and multiple data files on NVMe storage). This was acceptable. Then, a little over halfway through, restore performance dropped to 200 MB/sec and then 100 MB/sec (viewable through perfmon).

Restore Performance Drop

This issue I could reproduce time and again. Something was up with the product.

While investigating, I kicked off another restore and watched the process. Things started swimmingly, reading from the 4 backup files on the NAS and processing quickly. Then the restore seemed to finish reading 3 of those files and slowed down dramatically reading the fourth. At the same time, I could see a single core on the database server pegged at 100% usage.

There wasn’t anything I could do to resolve this issue (short of using a ridiculous number of backup files in an attempt to make the low perf period as short as possible).

Given that this had the potential for a real RTO/RPO problem it was time to call in the big guns over at Microsoft support.

We ran through a few more scenarios, including taking stack dumps during the period when the restore was slow (DBCC STACKDUMP <- NEVER EVER DO THIS ON A PROD SYSTEM).

Working through the dumps, the MS support engineer found a bug from last year whereby disks formatted with a 4K sector size could experience slow restore performance.

I checked my servers to confirm that we were using 4K sectors using the built-in FSUTIL command, in this case FSUTIL FSINFO NTFSINFO <drive>. This provides a wealth of useful information about the volume, and it showed that we were indeed using a 4K sector size.

4K sector size
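If you only care about the sector size lines rather than the full output, something like this works from an elevated PowerShell prompt (substitute the volume that holds your data files):

# Pull just the sector size details out of fsutil's output
fsutil fsinfo ntfsinfo D: | Select-String "Bytes Per"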

Fortunately this issue was fixed in October last year with

  • SQL Server 2014 Service Pack 3
  • SQL Server 2014 Service Pack 2 – Cumulative Update 12
  • SQL Server 2016 Service Pack 2 – Cumulative Update 1
  • SQL Server 2016 Service Pack 1 – Cumulative Update 9
  • SQL Server 2017 – Cumulative Update 7

The sad thing for me was that the particular instance I was restoring to was SQL Server 2012, for which no patch was released (as that version is out of mainstream support). But I was able to confirm that the slowdown did not happen when restoring to an appropriately patched, fully supported version of SQL Server (time to press management harder to let us upgrade).

So if you are seeing problems with restores getting slow and are on SQL Server 2014/2016/2017, get yourself patched up to date. If you are stuck on an older version, either redo the low-level formatting on your storage (not recommended), take uncompressed backups (mildly recommended), or work to get yourself onto a fully supported version of the product.

The Argument Against Attending The PASS Summit

Back in 2016 I wrote a post called The Missing Piece of SQL Conferences, which provided the perspective of a few DBAs at various points in their careers and what the Summit could offer to help them move forward. The upshot is that I believed there was a serious gap for senior-level DBAs (this could be true of other folks working in the SQL space, but I focused on what I know best). The post came to mind this week when I was talking with my boss.

“Are you planning on attending Summit this year?” he asked. Instinctively I went to respond in the affirmative, but stopped myself and thought about it for a few seconds. I contemplated the things that I had learned over the last few years of attendance, the things that I could potentially learn this year. This provided me with a few questions that I needed to address as someone with > 15 years as a DBA working with SQL Server.

Continue reading “The Argument Against Attending The PASS Summit”

High Availability–It’s Not All About The Hardware/Software

As I write this I’m sitting in my house working with the last hour of juice left in my laptop. We had a major snowstorm here in the Seattle area (snow + Seattle = bad news), which was followed by a big ice storm and some heavy winds, all of which have combined to knock out power to over 200,000 homes. I’ve been without power for over 75 hours now, and it’s given me a lot of time to think about multiple facets of High Availability (HA) and Disaster Recovery (DR).

HA/DR are critical items for all companies. HA in various degrees, but most certainly DR and the ability to provide some kind of business continuity model. Different companies, large and small, have different requirements around HA/DR. Some could easily remain down for a couple of days with no lingering effects, others cannot sustain downtime on their most critical systems of more than a couple of minutes. Based upon those requirements (and any budget constraints) a company could implement HA/DR using Failover Clustering, Mirroring, Log Shipping, SAN Replication, Peer-to-Peer Replication, VMotion or some other technology.

Where HA/DR often falls down

This is all well and good, but what about the people you need in order to get things up and running, the folks you need to provide ongoing support in the event that you have to go to your DR site, or to support that critical system where a mirror has failed?

In the last 80 hours I have been called by the NOC at work stating that they have an issue. I don’t have power at home, so my internet access is out. I don’t have sufficient power in my phone to tether it (don’t tell the wireless company I can do that), and it would take 4 hours sitting in the car to get enough charge to allow me to do that. I do have a mobile broadband USB device; however, I’m not getting coverage in my house right now from that carrier, and I have been unable to get out of my road. In other words, in the whole HA/DR scheme of things, I’m a single point of failure. Luckily I have a colleague with power who I was able to point the NOC to in order to get the issue resolved.

My colleague and I live less than 5 miles apart; we are just luckily on different circuits and substations for the power company. If neither of us had been able to dial in, the problem would not have been resolved.

If you truly want to implement HA/DR you need to go further than just looking at your hardware infrastructure or the way your software is set up. It means more than your DNS configuration and your Active Directory setup. There’s more to it than setting a Failover Partner in your application connection string. It’s about the people that you need to support these things!

Artificial limitations

I’ve been a DBA for about 12 years now. I’ve not been hands-on with a server since doing datacenter implementation work about 5 years ago. Other than in very small shops, it’s rare that a DBA touches a server; most don’t even have access to the datacenter in which their database servers reside. There has been a huge move towards colocation datacenters (CoLos) over the last few years, meaning that more often than not the datacenter where your servers reside is probably not even in the same physical building that you are. There is also the current move to the cloud, which is gaining some traction. Given all of this, I ask you this:

Why are DBAs so often tied to a physical location for work?

This is a serious question. There are so few telecommuting options for DBAs out there. You can look on the job boards and see virtually none. There are a lot of relocation opportunities if you want to move to another state, but that doesn’t make a lot of sense. If you are going to geographically disperse your hardware due to HA/DR needs, why would you not do the same for the teams that have to support it?

So many companies point-blank refuse to even entertain the idea of telecommuters from another state (heck, the vast majority won’t even contemplate allowing IT support staff to work from home a couple of days a week).

The same companies that happily outsource all development processes to an unknown entity in India or China refuse to entertain the idea of allowing Windows/Unix Admins or DBAs to telecommute.

I feel that it is somewhat ironic that the same people that expect you to come to the office and be tied to a physical location also have you be on call so that you can provide remote support when something goes wrong.

Moving beyond the office

Forcing IT support staff to come to a physical location 5 days a week makes little to no sense to me. I can understand the desire to have folks in the same physical location, but as your organization grows it really needs to find a way to get past that mentality. There are various options, such as LiveMeeting or GoToMeeting, that can provide a telepresence in meetings for people who work remotely. Instant messaging can be used to keep people constantly in touch. Services such as Yammer can be used to quickly and easily provide collaborative thought outside of meetings.

Looking at my current situation, if my colleague and I had both been without power we would have had no way to provide support. Our servers sit in a CoLo that we don’t have access to, so even if we could have gotten there, we would have had no hands-on way to work with the servers.

Now imagine we had a third person on the team who lived and worked in another state (or that the second person was living in another state and telecommuted in). Even with power down for me (or the entire state), we would still have someone available to provide support (unless by some extremely unusual circumstance they were also down). With a remote DR site they could have carried out whatever steps were necessary to bring it online and check that things were working OK.

Let’s look at this from another angle. I’m in the Pacific Time Zone and usually get to work 7:00-7:30am and leave 4:30-5:00pm. If we had someone working in the Eastern Time Zone and they worked the same hours, we would have coverage (not counting emergencies handled by being on call) from 4am PST to 4:30pm PST. That’s an extra 3 hours of standard DBA coverage. Imagine how much more work could get done, and how overnight problems could be identified and resolved earlier.

We live in a global society and the company that I work for does business internationally. Would it not make a lot more sense to provide a wider margin of coverage? Think about it: it would also improve the quality of life for anyone who has to be on call. For me the worst time to get called out is about 4am, as I’ll never get back to sleep. If there was someone already in the office on the east coast, I wouldn’t need to get called out. The same goes for them with evening plans. They could go out to dinner or a movie knowing that they wouldn’t have to worry about missing a call.

TL;DR

It’s the 21st century; there are many options to remove the need for anyone (certainly among IT staff) to be physically tied to an office location. To get higher levels of support, companies really need to embrace telecommuting options beyond just emergency support.

Late Note: After 80 hours PSE were able to restore power to my neighborhood. Throughout the outage PSE did a fantastic job of handling things. Ultimately over 375,000 were without power at one time or another. PSE provided constant information wherever possible via their Twitter feed and website. They brought in crews from around the country to assist in the restoration efforts. All in all I think they handled things really well. I have no complaints with how they handled any of this. (I just wish I’d been closer to the top of the restore list)

How To Not Be a Commodity DBA – Part 1

A couple of months ago I wrote about the concept of the DBA as a commodity (DBAaaC), and how it can be so easy for management to consider any DBA a basic and replaceable resource within their organization. This got me to thinking (and it would seem that my thinking takes a long time) that maybe a little guidance would be in order to help prevent you from becoming a DBAaaC.

Not everyone is in the same situation, so I’ve broken this down into a couple of parts: one for those of you who have the time and inclination to get things done, and one for those of you who can barely find a half hour to yourselves.

First up…when time for you isn’t a problem…

Continue reading “How To Not Be a Commodity DBA – Part 1”

The Nature of Absurdity

The other night I was adjusting some quorum settings for the WSFC underlying a couple of AGs at work. Quorum needed fixing up as we were going to perform a site failover while some network switches were updated at the primary site, and I didn’t want to risk an outage should a WAN link go down between the primary DC and the DR DC. Fortunately PS makes this pretty simple: I just ensured that quorum would be maintained by the votes in the DR DC by removing the votes from a couple of machines in the primary DC:

(Get-ClusterNode -Name MySQLNode1).NodeWeight=0;

(Get-ClusterNode -Name MySQLNode2).NodeWeight=0;

 

And then checking the votes on the WSFC to ensure that the DR site would stay up no matter what:

Get-ClusterNode | ft Name, NodeWeight -auto

 

Then I failed over the AG. I could have used T-SQL to do this in SSMS or SQLCMD:

ALTER AVAILABILITY GROUP MyAG MODIFY REPLICA ON 'MyDrSQLNode3' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP MyAG MODIFY REPLICA ON 'MyDrSQLNode3' WITH (FAILOVER_MODE = AUTOMATIC);

ALTER AVAILABILITY GROUP MyAG FAILOVER;

But I’ve written a GUI interface to do this sort of thing in C#. This provides our NOC with the ability to perform failovers themselves, so they can apply Windows CUs without having to call someone up to fail over. The C# originally used T-SQL under the covers, but I’ve recently changed it to use SMO (although I have had to work around an issue by using T-SQL):

AvailabilityGroup.Failover();
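The same SMO call is reachable from PowerShell too, which is handy for quick tests. A minimal sketch, with placeholder server and AG names, and with Failover() run while connected to the replica you are failing over to:

# Connect to the target secondary with SMO and fail the AG over to it
Import-Module SqlServer
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "MyDrSQLNode3"
$ag = $srv.AvailabilityGroups["MyAG"]
$ag.Failover()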

 

Once the FEX and switch work was completed, I added the votes back to the primary DC and failed everything back again.

It was while I was doing all this that I got to thinking about just how absurd all of this stuff is.

  • 20 years ago I would have laughed at the thought of all this random jargon and what it might mean
  • 15 years ago I would have laughed at the thought of Microsoft creating a proper shell language, with them preferring Dim-heavy VBScript
  • 10 years ago I’d have told you that you needed a disk quorum for your MSCS because otherwise you were going to have FCI problems when a machine was restarted
  • 5 years ago I’d have laughed at the idea of using AGs for production (that laughter has turned to tears many times)
  • 3 years ago the closest I would have gotten to C# would have been a tart orange
  • 2 years ago the concept of being a major contributor to a large open source project would have been laughable
  • 1 year ago I’d have given you an evil look at the merest mention of SMO

 

This stuff all moves so fast it’s just ridiculous; there’s just no way to keep up with it all. Think of all the other changes over the last dozen years of SQL Server and Windows…things like DMVs, Azure, AWS, compression, TVPs, temp variables, CTEs, output clauses, lag and lead, partition by, partitioned tables, etc…

Spend a moment to think about all the stuff you’ve been through, for however long you’ve been doing this nonsense, and realize the absurd nature of it all.

 

By the way, this was deliberately acronym heavy, so here are some definitions:

  • AG – Availability Group
  • AWS – Amazon Web Services
  • CTE – Common Table Expression
  • DC – Data Center
  • DMV – Dynamic Management View
  • DR – Disaster Recovery
  • FEX – Cisco Fabric Extender (network stuffs)
  • FCI – Failover Cluster Instances
  • GUI – Graphical User Interface
  • MSCS – Microsoft Cluster Service (precursor to WSFC)
  • NOC – Network Operations Center (those folks who are the first in line when something goes bad, and also the ones that have the lousy job of waking you up in the middle of the night because something went wrong)
  • PS – PowerShell
    • ft – Format-Table (in Get-ClusterNode command)
    • -auto – automatically size data columns returned to the client
  • Quorum – a majority of votes in a WSFC that is required to ensure that the clustering services stay online to ensure that there is not a risk of a split-brain scenario which could cause massive data problems (oversimplification)
  • SMO – (SQL) Server Management Objects
  • SQL – Structured Query Language
  • SQLCMD – SQL Server Command Line Interface
  • SSMS – SQL Server Management Studio
  • T-SQL – Transact-SQL
  • TVP – Table-Valued Parameters
  • VBScript – Microsoft Visual Basic Scripting Edition
  • WAN – Wide Area Network
  • WSFC – Windows Server Failover Cluster

I’m pretty sure just going through all the SQL Server word salad is enough to drive a person to drink.

Bug – Getting Read-Routing Data Using SMO

I was futzing around with SMO recently, and wanted to grab information around the read-routing list in an Availability Group. Things went fine for 2012 and 2014, but when it came to SQL Server 2016 I ran into a problem.

The read-routing property in SMO is a string collection (reference https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.availabilityreplica.readonlyroutinglist.aspx), which is fine for 2012/4, but SQL Server 2016 introduced the idea of load-balanced read-routing. When used, this gives you the option of having multiple read-only replicas handle traffic in a load-balanced fashion (it only uses a basic round-robin algorithm, but that’s a lot better than the single-replica option that exists in the earlier versions). In order to correctly know how read-routing is configured, you need to know which replicas are in a load-balanced group. SMO does not provide you with the ability to get at this information (nor does it give you the chance to configure read-routing in a load-balanced scenario), as it is a basic collection and has no further property information around it.

I’ve created a Connect item (Gathering Read Routing information using SMO is inaccurate in SQL 2016) to try and get MS to look into this and maybe provide a fix; please upvote if you can.

If you want to try this for yourself you can use the C# below; try it against the lower versions, and then against a load-balanced 2016 configuration. While it’s still possible to get this data from T-SQL, I find it annoying that the server management objects that are specifically written to deal with this stuff don’t do the job.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Common;

namespace SmoTesting
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Enter the servername");
            string connectServer = Console.ReadLine();
            Console.WriteLine("Enter the AG name");
            string agName = Console.ReadLine();

            Server srv = new Server();
            try
            {
                srv = new Server(connectServer);
                srv.ConnectionContext.StatementTimeout = 60; // timeout after 60 seconds running the query

                foreach (AvailabilityGroup ag in srv.AvailabilityGroups)
                {
                    if (ag.Name == agName)
                    {
                        Console.WriteLine(ag.PrimaryReplicaServerName);
                        foreach (AvailabilityReplica ar in ag.AvailabilityReplicas)
                        {
                            if (ar.Name == connectServer)
                            {
                                // ReadonlyRoutingList is a plain string collection, so any load-balanced
                                // grouping configured in SQL Server 2016 is not represented here
                                foreach (string routingTarget in ar.ReadonlyRoutingList)
                                {
                                    Console.WriteLine(" {0}", routingTarget);
                                }
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
            finally
            {
                srv.ConnectionContext.Disconnect();
            }
            Console.WriteLine("press a key");
            Console.Read();
        }
    }
}