Wednesday, 25 March 2020

Finding Duplicate DNS Records

Internal DNS in many organisations is usually a little messy, and if tasked with cleaning it up, my first step would be to identify duplicate records in DNS: either an IP address that is recorded against multiple host names, or a host name with multiple host (A) records. Assuming you have a file containing one record per line, this is fairly simple to report on.

The following examples use dnscmd.exe, which assumes Windows 2000/2003 DNS, but you could use any input as long as it contains a name and an IP.

The two commands:

  1. dnscmd %dnsServer% /enumrecords test.local @ /additional /continue | find /i /v "Aging:" | find /i "192.168" > DNSRecords.txt
  2. echo. > DuplicateIPs.txt & (for /f "tokens=1,4" %i in (DNSRecords.txt) do @if "%j" NEQ "" @find /i "%j" DuplicateIPs.txt >nul & if errorlevel 1 for /f "tokens=1" %m in ('"findstr /i "%j$" DNSRecords.txt | find /i /c "%j""') do @if %m GTR 1 (@echo %j,%m: & findstr /i "%j$" DNSRecords.txt & echo.) >> DuplicateIPs.txt) & type DuplicateIPs.txt

The first command uses dnscmd to enumerate the records from the root of the test.local zone, excluding aging (dynamic DNS) records. The last find command further filters the output by IP, which can be useful when targeting specific subnets/sites. You might also want to check aging dynamic records instead of static ones to see how well scavenging is working; if you do, just change the tokens to 1,5 instead of 1,4 (as the aging data is an extra token separating the name and the IP).

The second command:

  1. Creates a new file called DuplicateIPs.txt in the current working directory
  2. Iterates through each line in the DNS record dump, extracting the first and fourth tokens (name and IP)
  3. Within the FOR loop, checks that an IP value exists and that it hasn't already been processed into the duplicate list (otherwise you'd have duplicates of each duplicate), then counts the duplicates and appends them to the file
  4. Types the duplicate IP file built up by the FOR loop

For example, supposing your DNS export contained the following records:


printer1 3600 A 192.168.10.100
printer2 3600 A 192.168.10.101
printer3 3600 A 192.168.10.100
printer4 3600 A 192.168.10.102
printer5 3600 A 192.168.10.103
printer6 3600 A 192.168.10.100
printer7 3600 A 192.168.10.102

After running the second command above, a file called DuplicateIPs.txt would be created and then typed to the prompt:


192.168.10.100,3:
printer1 3600 A 192.168.10.100
printer3 3600 A 192.168.10.100
printer6 3600 A 192.168.10.100

192.168.10.102,2:
printer4 3600 A 192.168.10.102
printer7 3600 A 192.168.10.102
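
If the batch syntax is hard to follow, here is a rough Python sketch of the same duplicate-IP report, assuming DNSRecords.txt uses the space-separated name/TTL/type/IP layout shown above:

from collections import defaultdict

records_by_ip = defaultdict(list)

# Group every record line by its IP address (the 4th field).
with open("DNSRecords.txt") as records:
    for line in records:
        fields = line.split()
        if len(fields) >= 4:          # skip blank or malformed lines
            records_by_ip[fields[3]].append(line.rstrip())

# Report every IP that appears more than once, mirroring DuplicateIPs.txt.
report_lines = []
for ip, lines in sorted(records_by_ip.items()):
    if len(lines) > 1:
        report_lines.append(f"{ip},{len(lines)}:")
        report_lines.extend(lines)
        report_lines.append("")

report = "\n".join(report_lines)
with open("DuplicateIPs.txt", "w") as output:
    output.write(report)
print(report)                         # same effect as the final "type" step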

The command below is modified to report duplicate names instead of IP addresses. This is done by using the name token (%i) instead of the IP token (%j), and by modifying the findstr command to use a literal string search ending with a space rather than the regular expression end-of-line anchor:

echo. > DuplicateIPs.txt & (for /f "tokens=1,4" %i in (DNSRecords.txt) do @if "%i" NEQ "" @find /i "%i" DuplicateIPs.txt >nul & if errorlevel 1 for /f "tokens=1" %m in ('"findstr /i /c:"%i " DNSRecords.txt | find /i /c "%i""') do @if %m GTR 1 (@echo %i,%m: & findstr /i /c:"%i " DNSRecords.txt & echo.) >> DuplicateIPs.txt) & type DuplicateIPs.txt


If you wanted a summary rather than the detail of each duplicate, you could also run the following command:
echo. > DuplicateIPSummary.txt & (for /f "tokens=1,4" %i in (DNSRecords.txt) do @if "%j" NEQ "" @find /i "%j" DuplicateIPSummary.txt >nul & if errorlevel 1 for /f "tokens=1" %m in ('"findstr /i "%j$" DNSRecords.txt | find /i /c "%j""') do @if %m GTR 1 (@echo %j,%m) >> DuplicateIPSummary.txt) & type DuplicateIPSummary.txt


In the example above, this would produce the following report: 
192.168.10.100,3
192.168.10.102,2

I use this sort of command to generate reports on duplicates, in this case from DNS, but it could also be useful in DHCP, WINS, or any number of Active Directory objects/attributes. People (myself included) are often wary of automated processes that make changes, but this is an excellent example of how powerful read-only automated commands can be – you can take thousands of objects and produce a report in seconds to quickly identify inconsistencies in an environment.

Dnscmd Overview
http://technet.microsoft.com/en-us/library/cc778513.aspx

Tuesday, 17 March 2020

High Availability – A Storage Architecture


Hello all, so I’ve been doing a lot of work around availability in the cloud and how to build applications that are architected for resiliency. One of the common elements that comes up is how to architect for resiliency around storage.

So the scenario is this, and it’s a common one: I need to be able to write new files to blob storage and read from my storage accounts, and I need it to be as resilient as possible.

So let’s start with the SLA. Currently, if you are running LRS storage, your SLA is 99.9%, which from a resiliency perspective isn’t ideal for a lot of applications. But if I use RA-GRS, my SLA goes up to 99.99%.

Now, I want to be clear about what the storage SLA covers: it says that I will be able to read data from blob storage, and that it will be available 99.99% of the time when using RA-GRS.

For those who are new to blob storage, let’s talk about the different types of storage available:

  •     Locally Redundant Storage (LRS): the 3 copies of the data you put in blob storage are stored within a single data centre in one region.
  •     Zone Redundant Storage (ZRS): the copies of the data you put in blob storage are stored across availability zones within the same region.
  •     Geo Redundant Storage (GRS): the copies of the data you put in blob storage are also replicated to a second region, following Azure region pairings.
  •     Read Access Geo Redundant Storage (RA-GRS): the same replication as GRS, but you also get a read-only secondary endpoint for the copies held in the paired region.

So based on the above, the recommendation for the best availability is RA-GRS, a feature that is unique to Azure. RA-GRS gives you a secondary endpoint from which you get read-only access to the backup copies held in the secondary region.

For more details, see the Azure documentation on storage redundancy.

So based on that, if your storage account is called storagexyz.blob.core.windows.net, your secondary read-access endpoint would be storagexyz-secondary.blob.core.windows.net.
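
As a quick sketch (the account name is just a placeholder), the secondary endpoint is simply the account name with a “-secondary” suffix:

# Derive the read-only secondary endpoint from the account name.
# "storagexyz" is a placeholder, not a real account.
account_name = "storagexyz"

primary_endpoint = f"https://{account_name}.blob.core.windows.net"
secondary_endpoint = f"https://{account_name}-secondary.blob.core.windows.net"

print(primary_endpoint)    # https://storagexyz.blob.core.windows.net
print(secondary_endpoint)  # https://storagexyz-secondary.blob.core.windows.net
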
So the next question is, “That’s great AV Tech, but I need to be able to write and read”, and I have an architecture pattern I recommend for that:

The architecture is oversimplified, but it focuses on the storage account configuration for higher availability: a web application deployed behind Traffic Manager, with an instance in a primary region and an instance in a secondary region.
Additionally, we have an Azure SQL database that is geo-replicated to a backup region.
Let’s say for the sake of argument, with the above:
  • Region A => East US
  • Region B => West US
But for storage we do the following: Storage Account A will be in East US, which means it will automatically geo-replicate to West US.
Storage Account B will be in West US, which means it replicates to East US.
So let’s look at the Region A side:
  • New Blobs are written to Storage Account A
  • Blobs are read based on database entries.
  • The application tries to read from the blob storage account identified in the database; if that fails, it uses the “-secondary” endpoint.
And for the Region B side:
  • New Blobs are written to Storage Account B
  • Blobs are read based on database entries.
  • The application tries to read from the blob storage account identified in the database; if that fails, it uses the “-secondary” endpoint (see the sketch after this list).
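
A minimal sketch of that read path, using the azure-storage-blob SDK, might look something like this. The account, container, and blob names, the helper name, and the use of DefaultAzureCredential are placeholders for illustration:

# Read a blob from the primary endpoint, falling back to the geo-replicated
# "-secondary" endpoint if the primary region is unavailable.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

credential = DefaultAzureCredential()

def read_blob(account_name: str, container: str, blob_name: str) -> bytes:
    endpoints = [
        f"https://{account_name}.blob.core.windows.net",             # primary
        f"https://{account_name}-secondary.blob.core.windows.net",   # RA-GRS secondary
    ]
    last_error = None
    for endpoint in endpoints:
        try:
            client = BlobClient(
                account_url=endpoint,
                container_name=container,
                blob_name=blob_name,
                credential=credential,
            )
            return client.download_blob().readall()
        except Exception as error:    # e.g. a primary region outage
            last_error = error
    raise last_error

# Usage: the three names come from the database entry for the blob.
data = read_blob("storagexyz", "documents", "blob1.pdf")
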
So in our databases I would recommend the following fields for every blob saved:
  • Storage Account Name
  • Container Name
  • Blob Name
This allows me to easily switch to the “-secondary” endpoint when it is required.
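
As a rough sketch (the class and field names are placeholders), those three fields are enough to rebuild either the primary or the “-secondary” URL on demand:

# Minimal shape for the per-blob record stored in the database.
from dataclasses import dataclass

@dataclass
class BlobRecord:
    storage_account_name: str   # e.g. "storagexyz"
    container_name: str         # e.g. "documents"
    blob_name: str              # e.g. "blob1.pdf"

    def url(self, secondary: bool = False) -> str:
        # Build the primary URL, or the "-secondary" URL when the primary region is down.
        suffix = "-secondary" if secondary else ""
        return (
            f"https://{self.storage_account_name}{suffix}"
            f".blob.core.windows.net/{self.container_name}/{self.blob_name}"
        )
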
So based on the above, let’s play out a series of events:
  • We are writing blobs to Storage Account A. (1,2,3)
  • There is a failure, so we fail over to Region B.
  • We start writing new blobs to Storage Account B. (4,5,6)
  • If we want to read Blob 1, we do so through the “-secondary” endpoint from Storage Account A.
  • The issue resolves.
  • We read Blobs 1-3 from Storage Account A (primary endpoint).
  • If we read Blobs 4-6, they come from the “-secondary” endpoint of Storage Account B.
Now some would ask the question, “when do we migrate the blobs from B back to A?” I would argue that you don’t: at the end of the day, a storage account itself costs nothing (you only pay for the data in it), and you would incur additional charges to move the data to the other account for no benefit. As long as you store the account, container, and blob name for each piece of data, you can always find the blobs, so I don’t see a benefit in merging them.
