Copying VHDs in Azure

Copying VHDs locally to machines in Azure

This was from back when RemoteApp didn’t support creating an image directly from a VM.

  • A1 Std machine, copying a 127 GB VHD to a local drive (not the temp D:\) via AzCopy took 6.5 hours
  • A4 Std machine, copying a 127 GB VHD to D:\ via AzCopy took 5 min 20 sec
  • A4 Std machine, copying a 127 GB VHD to D:\ via Save-AzureVhd took 10 min 39 sec
  • A4 Std machine, copying a 127 GB VHD to a local drive (not the temp D:\) via AzCopy took 25 min 21 sec
  • A4 Std machine, copying a 127 GB VHD to a local drive (not the temp D:\) via Save-AzureVhd took 52 min 11 sec

Copying files into a VM via either command is very CPU intensive due to the threading they use, so use a larger box no matter your method. The hands-down winner is AzCopy into the local temp D:\ (it avoids an extra storage account hop). However, if you want a status bar, use Save-AzureVhd.
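
For reference, the downloads looked roughly like the below; the account, container, and blob names are placeholders, and the thread count is just an example:

AzCopy /Source:https://myaccount.blob.core.windows.net/vhds /Dest:D:\vhds /SourceKey:<storageKey> /Pattern:myvm.vhd

Save-AzureVhd -Source https://myaccount.blob.core.windows.net/vhds/myvm.vhd -LocalFilePath D:\vhds\myvm.vhd -NumberOfThreads 32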

Copying VHDs between Storage Accounts

Due to a storage cluster issue in AU East, we were advised to create new storage accounts and migrate our VHDs over to them.  MSFT had provided us with a script, but it was taking hours/days to copy (and kept timing out).

Instead, we spun up a D4v2 machine in the AU East region, and I was able to have 6 AzCopy sessions running all at once with the /SyncCopy switch.  Each was running at >100 MB/sec, whereas the async methods were running at <5 MB/sec.  You will see a ton of CPU utilization during this, but the faster the machine, the better.  Additionally, AzCopy supports resume.  To allow multiple instances of AzCopy to run on one machine, use the /Z:<folderpath> switch to give each instance its own journal file.
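
Each session looked something like the below (accounts, keys, and blob names are placeholders); note the unique /Z journal folder per instance:

AzCopy /Source:https://oldaccount.blob.core.windows.net/vhds /Dest:https://newaccount.blob.core.windows.net/vhds /SourceKey:<oldKey> /DestKey:<newKey> /Pattern:myvm.vhd /SyncCopy /Z:C:\journals\job1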

Stop Azure Blob with Copy Pending

Prior to getting all our copies going with /SyncCopy, we had a few that were running async.  Unfortunately, after stopping one of those with a Ctrl-C, the blobs still had a copy pending action on them.  This resulted in errors when attempting to re-run the copy with /SyncCopy on a separate machine: HTTP error 409, copy pending.

To fix this, you can force-stop the copy.  As these were new storage accounts containing only these VHDs, we were able to run it against the full container.  However, MSFT has an article on how to do it against individual blobs.

Set-AzureSubscription -SubscriptionName <name> -CurrentStorageAccountName <affectedStorageAccount>
Get-AzureStorageBlob -Container <containerName> | Stop-AzureStorageBlobCopy -Force

Nginx + WordPress + Infinite Redirects

As I was migrating my websites to a new host (I may blog about that later as it’s been an interesting ride), I had this lovely issue where one of my websites would go into an infinite redirect loop when sitting behind the Azure CDN (custom origin).

Of course, it worked fine for all pages except the root, and it also worked fine when it wasn’t behind the Azure CDN.  For whatever reason, adding a bit of code to the theme’s functions.php seemed to fix it.

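// Disable WordPress's canonical redirect, which was causing the loop behind the CDN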
remove_filter('template_redirect', 'redirect_canonical');

I then had to add a manual redirect in nginx via the below.  Still no idea why it doesn’t just “work” as it did before, but whatever. Now that it’s working, I should go back and figure out why it wasn’t with redirect_canonical…

server {
   listen 80;
   server_name test.com;
   rewrite ^ $scheme://www.test.com$request_uri? permanent;
}

SP3 issues

Last night my SP3 decided to stop working with any Touch Covers.  They would work on other SP3s, just not this one.  It definitely made work a lot of fun today.

Anyways, there is a “button reset” procedure that has worked in the past when it wouldn’t start.  Turns out, it solved this problem too.

Solution 3 is the answer.


Surface Pro 3 Bootable USB

Ugh, this has taken me way too long to finally figure out/fix.  I’ve been trying to wipe my Surface Pro 3 with TH2 – as I upgraded to RTM.  However, I’ve had a bear of a time getting my USB key bootable.

Now, I’ve done it before, but for whatever reason previous ways haven’t been working.  Turns out, there are 2 key things (one of which I was missing):

  • GPT partitioning
  • FAT32 formatting

To make it easier, you can use Rufus.  Just make sure after you select the ISO, you reselect GPT and FAT32.

(Rufus settings screenshot)

Once the key is formatted, you can boot from it either by restarting via advanced startup from within Windows or by holding the volume-down button when you turn it on.
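
If Rufus isn’t handy, a rough PowerShell equivalent would be something like the below. This is a sketch assuming the key shows up as disk 1 (verify with Get-Disk first, as this wipes the disk) and that the key is 32 GB or smaller, since Windows won’t format larger volumes as FAT32:

# WARNING: destroys everything on disk 1; double-check the number with Get-Disk first
Clear-Disk -Number 1 -RemoveData -Confirm:$false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem FAT32

Then mount the ISO and copy its contents onto the key.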

*sigh* So much time wasted on this one.


Associating a reserved IP to a running deployment

Microsoft has finally enabled the ability to associate a reserved IP with an already-created cloud service (VMs).  This is great news, as we have a few externally accessible VMs that were either built before this functionality existed or where we just plumb forgot during build.
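
For reference, creating and associating the reserved IP is done with the Service Management PowerShell module along these lines (the IP, service, and location names are placeholders):

New-AzureReservedIP -ReservedIPName "myReservedIP" -Location "Australia East"
Set-AzureReservedIPAssociation -ReservedIPName "myReservedIP" -ServiceName "myCloudService"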

While it’s logical, Microsoft doesn’t call out that this causes an outage, so it should be done during a normal change window.  Sadly, while the IP change itself takes very little time, DNS updates typically sit behind a 20-minute TTL.

Other items that cause small network blips that may require a downtime window (all V1):

  • Adding new endpoints to a VM
  • Adding subnets to an already created virtual network

Breaking Blob Leases via PowerShell

We are utilizing SQL backup to Azure blob storage and had a meltdown today where the log backups were erroring out, leaving us with 1 TB files up in Azure that were locked.  Needless to say, it happened late last night, so there were multiple hourly files in multiple folder structures all over our storage accounts.  It took a bit, but the following script clears out all the locks on blobs within a container, across all directories.  Please use it carefully, and don’t run it against your “vhds” container!

Also, it requires the Microsoft.WindowsAzure.Storage.dll assembly from the Windows Azure Storage NuGet package.  You can grab this by downloading the command-line nuget.exe and running the below.  Note, it will dump the file you need into .\WindowsAzure.Storage.<ver>\lib\net40

nuget.exe install WindowsAzure.Storage

Break-lease script below – a one-line modification of https://msdn.microsoft.com/en-us/library/jj919145.aspx

param(
    [Parameter(Mandatory=$true)]
    [string]$storageAccount,
    [Parameter(Mandatory=$true)]
    [string]$storageKey,
    [Parameter(Mandatory=$true)]
    [string]$blobContainer
)

$storageAssemblyPath = $pwd.Path + "\Microsoft.WindowsAzure.Storage.dll"

# Well-known restore lease ID
$restoreLeaseId = "BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2"

# Load the storage assembly without locking the file for the duration of the PowerShell session
$bytes = [System.IO.File]::ReadAllBytes($storageAssemblyPath)
[System.Reflection.Assembly]::Load($bytes)

$cred = New-Object 'Microsoft.WindowsAzure.Storage.Auth.StorageCredentials' $storageAccount, $storageKey

$client = New-Object 'Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient' "https://$storageAccount.blob.core.windows.net", $cred

$container = $client.GetContainerReference($blobContainer)

# List all the blobs in the container, including subdirectories (flat listing)
$allBlobs = $container.ListBlobs($null, $true)

# Filter blobs that have a lease status of "Locked"
$lockedBlobs = @()
foreach ($blob in $allBlobs)
{
    if ($blob.Properties.LeaseStatus -eq "Locked")
    {
        $lockedBlobs += $blob
    }
}

if ($lockedBlobs.Count -eq 0)
{
    Write-Host "There are no blobs with locked lease status"
}
else
{
    Write-Host "Breaking leases"
    foreach ($blob in $lockedBlobs)
    {
        try
        {
            # A restore lease can be re-acquired with the well-known lease ID
            $blob.AcquireLease($null, $restoreLeaseId, $null, $null, $null)
            Write-Host "The lease on $($blob.Uri) is a restore lease"
        }
        catch [Microsoft.WindowsAzure.Storage.StorageException]
        {
            if ($_.Exception.RequestInformation.HttpStatusCode -eq 409)
            {
                Write-Host "The lease on $($blob.Uri) is not a restore lease"
            }
        }

        Write-Host "Breaking lease on $($blob.Uri)"
        $blob.BreakLease($(New-TimeSpan), $null, $null, $null) | Out-Null
    }
}


Adding Additional Azure Disks to a VM's StoragePool that is part of a SQL Server AlwaysOn Cluster

To get optimal performance out of your Azure VMs running SQL Server, MS recommends using Storage Spaces and striping multiple Azure disks [1]. The nice thing about storage pools in Storage Spaces is that they allow you to add disks behind the scenes without impacting the actual volume.

Now let’s say you have a SQL AlwaysOn cluster (2+ nodes), and for performance reasons (IOPS) you realize that you need to add more disks.  As Storage Spaces shows all disks (physical, virtual, and storage pools) across the whole cluster, it is possible you won’t be able to simply add them due to a naming mismatch.  Fear not though, it is still possible if you follow the steps below:

  • Add the new disks to the VM
  • Log into the VM
  • Failover SQL to a secondary if the current VM is the primary
  • Stop clustering service on the VM
  • Run Get-PhysicalDisk to get the disk names
  • Run Add-PhysicalDisk -StoragePoolFriendlyName <storagepool> -PhysicalDisks (Get-PhysicalDisk -FriendlyName <disks>)
  • Run Update-HostStorageCache (if we don’t do this, sometimes the virtual disk resize doesn’t work)
  • Run Resize-VirtualDisk -FriendlyName <diskName> -Size <size>
  • Run Update-HostStorageCache (if we don’t do this, sometimes the partition resize doesn’t work)
  • Run Resize-Partition -Size <size> -DriveLetter <letter> (the full sequence is sketched below this list)
  • Start the clustering service on the machine
  • Failback SQL to the VM if required
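
Strung together, the PowerShell portion of the steps above looks roughly like the below. This is a sketch with placeholder names (SQLPool, SQLData, drive F:, 2 TB), and it uses -CanPool to pick up the newly added, un-pooled disks rather than naming them individually:

# Grab the new, not-yet-pooled disks and add them to the existing pool
$newDisks = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "SQLPool" -PhysicalDisks $newDisks

# Refresh the cache so the virtual disk resize sees the new capacity
Update-HostStorageCache
Resize-VirtualDisk -FriendlyName "SQLData" -Size 2TB

# Refresh again, then grow the partition to fill the virtual disk
Update-HostStorageCache
Resize-Partition -DriveLetter F -Size (Get-PartitionSupportedSize -DriveLetter F).SizeMax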

Hopefully this helps someone, as we were beating our heads against this for quite a few days (along with MS).

[1] https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx


SQL AlwaysOn Avail Group Failover and Client Disconnects

We had an issue recently where an application was not properly disconnected from SQL during a failover of an AlwaysOn Availability Group (AOAG).  Some background: the application was accessing the primary node, and after the failover the application continued to access that same node.  Unfortunately, as it was now read-only, the app was not very happy.

Turns out it was due to the read-only configuration of the secondary.  We had it set to “Yes”, which allows any connection to continue accessing the secondary, with the assumption that the application is smart enough to know it can only read.  It appears that with this setting, connections aren’t forcefully closed, causing all sorts of issues.

Setting it to either “No” or “Read-Intent Only” properly severed the connections for us.  Yay!
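
If you’d rather script the change than click through SSMS, the SQL Server PowerShell module can do it along these lines (the instance, AG, and replica names below are placeholders):

# Only allow read-intent connections to the replica when it is in the secondary role
Set-SqlAvailabilityReplica `
    -Path "SQLSERVER:\Sql\SQLNODE2\DEFAULT\AvailabilityGroups\MyAG\AvailabilityReplicas\SQLNODE2" `
    -ConnectionModeInSecondaryRole AllowReadIntentConnectionsOnly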

For more info.