Category: computers

  • CrashPlan leaving home market

    Boo, just got the email today that CrashPlan is leaving the home market.  After I don’t know how many years as a customer, it looks like I’ll have to find another provider.  There are a few alternatives, but with no computer-to-computer option baked in, they’ll all be a step back.  *sigh*

    **Update 8/23/2017**

    I’ve been following a lot of different threads on this.  Sadly, there are no direct competitors.  Turns out CrashPlan (even with the crappy Java app) was the best for a lot of reasons including the following:

    1. Unlimited storage – I’m not a super heavy user (~1TB of total storage spanning the last 10 years of use/versions), but it’s always nice to know it’s there.
    2. Unlimited versions – This is key and has saved my bacon a few times after a migration (computer/drive/other backup to NAS) when you think you have everything, but it turns out you don’t until a year later when you go looking for it.
    3. Family plan (i.e. more than one computer) – nice as I have 3 machines, plus my NAS, that I can back up.
    4. Peer-to-peer – one backup solution to rule them all that works across remote networks.  Unfortunately, it uses gross ports so it doesn’t work everywhere (like in corporate environments), and you can’t push peer-to-peer backups to the cloud; those peers have to upload directly themselves.
    5. Ability to not back up on specific networks…like when I’m tethered to my phone.

    Total sidebar, but speaking of crappy Java apps, I had just migrated to using a docker image of CrashPlan too, due to the continued pain of updating it with Patter’s awesome SPK.  Yay to running everything in docker now instead of native Synology apps.

    My current setup consists of 3 Windows machines and a Synology NAS.  I had the CrashPlan family account so each of those machines would sync to the cloud, and all the windows machines would sync to the NAS.  Nothing crazy, and yes, I know I was missing a 3rd location for NAS storage for those following the 3-2-1 method.

    The other cloud options I’ve looked at so far:

    • Carbonite – no Linux client, so it’s a non-starter as that’s where I’d like to centralize my data.  I used to use them before CrashPlan and wasn’t a fan.  I know things change in 10 years, but…
    • Backblaze – I want to like Backblaze, but no Linux client and limited versions (which they say they are working on – see the comments section) keep me away.  They do have B2 integrations via 3rd-party backup/sync partners, but after doing some digging, they all look hard.  I have set up a CloudBerry docker image to play with later and see how good it could be.  Using B2 storage, it would be a similar price to CrashPlan as I don’t have tons of data.
    • iDrive – Linux client (!) and multiple hosts, but it only allows 32 versions, and dedupe seems to be missing, so I’m not sure what that would mean for my ~1TB of data.  They have a 2TB plan for super cheap right now ($7 for the first year), which could fill all my needs.
    • CrashPlan Small Business – Same as home, but a single computer and no peer-to-peer.

    So where does that leave me?  I’m hopeful that providers will get closer to feature parity, and thankfully my subscription doesn’t expire until July of 2018.  Therefore, while I’m doing some legwork, I’m firmly in the “wait and see” camp at this point.  However, if I were to move right now, this is what my setup would look like:

    • Install Synology Cloud Station Backup and configure the 3 Windows systems to back up to the Synology NAS.  Similar to CrashPlan, I can UPnP a port through the firewall for external connectivity (I can even use 443 if I really want/need to).  This is my peer-to-peer backup and is basically like-for-like with CrashPlan peer-to-peer.  This stores up to 32 versions of files, which, while not ideal, is OK considering…
    • Upgrade to CrashPlan Small Business on the NAS.  While I’m not thrilled about the way this was handled, I understand it (especially after seeing the “OMG I HAVE 30TB IN PERSONAL CRASHPLAN” redditor posts), and it means I don’t have to re-upload anything.  Send both the Cloud Station Backups and other NAS data to CrashPlan.  This gets me the unlimited versions, plus I have 3-2-1 protection for my laptops/desktops.
    • Use Synology Cloud Sync (not a backup) or CloudBerry to B2 for anything I deem needs that extra offsite location for the NAS.  This would be an improvement over my current setup, and I could be more selective about what goes there to keep costs way down.

    Hopefully this helps others, and I’ll keep updating this post based on what I see/move towards.  Feel free to add your ideas into the comments too.

    Just saw this announcement from MSFT.  Could be an interesting archival strategy if tools start to utilize it – https://azure.microsoft.com/en-us/blog/announcing-the-public-preview-of-azure-archive-blob-storage-and-blob-level-tiering/

    **Update 10/11/2017**

    A quick update on some things that have changed.  I’ve moved away from Comcast and now have fiber!  That means no more caps (and 1Gbps speeds), so I’m more confident going with my ideas above.  So far this is what I’ve done:

    1. Set up Synology Cloud Station Backup.  To ensure I get the best coverage everywhere, I’ve created a new domain name and mapped 443 externally to the internal Synology software’s port.  When setting it up in the client, you need to specify <domain>:443, otherwise it attempts to use the default port (it even works with 2FA).  CPU utilization isn’t great on the client software, but that’s primarily because the filtering criteria aren’t great (if you just add your Windows user folder, all the temporary internet files and caches constantly get uploaded).  It would be nice if you could filter file paths too, similar to how CrashPlan does it – https://support.code42.com/CrashPlan/4/Troubleshooting/What_is_not_backing_up (duplicated below in case that ever goes away).  I’ll probably file a ticket about that and about increasing the version limit…just because.
    2. I still have CrashPlan Home installed on most of my computers at this point as I migrate, but now that I know the Synology backup works, I’ll start decommissioning it (yay to getting lots of Java-stolen memory back!).
    3. I’ve played around with a cloudberry docker, but I’m not impressed.  I still want to find something else for my NAS stuff to maintain 3 copies (it would be <50GB of stuff).  Any ideas?

    CrashPlan’s Windows Exclusions – based on Java Regex

    .*/(?:42|\d{8,}).*/(?:cp|~).*
    (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
    .*\.part
    .*/iPhoto Library/iPod Photo Cache/.*
    .*\.cprestoretmp.*
    .*/Music/Subscription/.*
    (?i).*/Google/Chrome/.*cache.*
    (?i).*/Mozilla/Firefox/.*cache.*
    .*/Google/Chrome/Safe Browsing.* 
    .*/(cookies|permissions).sqllite(-.{3})?
    
    .*\$RECYCLE\.BIN/.*
    .*/System Volume Information/.*
    .*/RECYCLER/.*
    .*/I386.*
    .*/pagefile.sys
    .*/MSOCache.*
    .*UsrClass\.dat\.LOG
    .*UsrClass\.dat
    .*/Temporary Internet Files/.*
    (?i).*/ntuser.dat.*
    .*/Local Settings/Temp.*
    .*/AppData/Local/Temp.*
    .*/AppData/Temp.*
    .*/Windows/Temp.*
    (?i).*/Microsoft.*/Windows/.*\.log
    .*/Microsoft.*/Windows/Cookies.*
    .*/Microsoft.*/RecoveryStore.*
    .*/Windows/Installer.*
    .*/Application Data/Application Data.*
    (?i).:/Config\.Msi.*
    (?i).*\.rbf
    (?i).*/Microsoft.*/Windows/.*\.edb 
    (?i).*/Google/Chrome/User Data/Default/Cookies(-journal)?
    (?i).*/Safari/Library/Caches/.*
    .*\.tmp
    .*\.tmp/.*
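
    If you want to sanity-check one of these patterns against a real path, here’s a minimal PowerShell sketch (assuming .NET regex handles these simple patterns the same way Java regex does; the pattern and path below are just examples).  Note the patterns use forward slashes, so normalize Windows paths first:

    # Normalize the Windows path to forward slashes, then test it against an exclusion pattern
    $pattern = '(?i).*/AppData/Local/Temp.*'
    $path    = 'C:\Users\me\AppData\Local\Temp\foo.tmp' -replace '\\', '/'
    $path -match $pattern   # True means the file would be excluded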

     

  • New Hosting

    Well, as the Azure credits from my MSDN account will surely run out sometime soon, I needed to find new hosting.  After a lot of searching for the right place, my new home is at TMD Hosting.

    I didn’t want a full host to manage, and going by the reviews, they’re some of the best.  The import took a bit longer than anticipated (issues with the Softaculous script), but so far, so good!

    Next steps are to enable HTTPS via Let’s Encrypt.

  • Copying VHDs in Azure

    Copying VHDs locally to machines in Azure

    This was from when RemoteApp didn’t support creating an image directly from a VM.

    • A1 Std machine, copying a 127GB VHD to a local drive (not temp D:\) via azcopy took 6 hr 30 min
    • A4 Std machine, copying a 127GB VHD to D:\ via azcopy took 5 min 20 sec
    • A4 Std machine, copying a 127GB VHD to D:\ via save-azurevhd took 10 min 39 sec
    • A4 Std machine, copying a 127GB VHD to a local drive (not temp) via azcopy took 25 min 21 sec
    • A4 Std machine, copying a 127GB VHD to a local drive (not temp) via save-azurevhd took 52 min 11 sec

    Copying files into a VM via either of these two commands is very CPU intensive due to the threading they use, so use a larger box no matter which method you choose.  The hands-down winner is AzCopy to the local temp D:\ (it avoids an extra storage account hop).  However, if you want a progress bar, use Save-AzureVhd.
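
    For reference, the kind of AzCopy command (classic syntax) used to pull a VHD down to D:\ looks roughly like this – account, key, container, and file names are placeholders:

    AzCopy /Source:https://<storageaccount>.blob.core.windows.net/vhds /Dest:D:\vhds /SourceKey:<storagekey> /Pattern:"myvm.vhd"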

    Copying VHDs between Storage Accounts

    Due to a storage cluster issue in AU East, it was advised to create new storage accounts and migrate VHDs to them.  MSFT had provided us with a script, but it was taking hours/days to copy (and kept timing out).

    Instead, we spun up a D4v2 machine in the AU East region, and I was able to have 6 AzCopy sessions running at once with the /SyncCopy switch.  Each was running at >100MB/sec, whereas the async methods were running at <5MB/sec.  You will see a ton of CPU utilization during this, so the faster the machine, the better.  Additionally, AzCopy supports resume.  To allow multiple instances of AzCopy to run on a machine, use the /Z:<folderpath> switch to give each its own journal file.
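
    A rough example of one of those sessions (old-style AzCopy switches; account names and keys are placeholders) – each instance gets its own /Z journal folder so they don’t step on each other:

    AzCopy /Source:https://<srcaccount>.blob.core.windows.net/vhds /Dest:https://<dstaccount>.blob.core.windows.net/vhds /SourceKey:<srckey> /DestKey:<dstkey> /Pattern:"disk1.vhd" /SyncCopy /Z:C:\journal\disk1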

    Stop Azure Blob with Copy Pending

    Prior to getting all our copies going with /SyncCopy, we had a few that were running async.  Unfortunately, after stopping azcopy with a CTRL-C, the blobs still had a copy pending action on them.  This resulted in errors when attempting to re-run the copy with /SyncCopy on a separate machine: HTTP error 409, copy pending.

    To fix this, you can force stop the copy.  As these were new storage accounts with only these VHDs, we were able to run it against the full container.  However, MSFT has an article on how you can do it against individual blobs.

    Set-AzureSubscription -SubscriptionName <name> -CurrentStorageAccountName <affectedStorageAccount>
    Get-AzureStorageBlob -Container <containerName> | Stop-AzureStorageBlobCopy -Force
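
    If you only need to clear a single blob rather than a whole container, something along these lines should do it (untested here; the -Blob parameter on Get-AzureStorageBlob takes the blob name):

    Get-AzureStorageBlob -Container <containerName> -Blob <blobName> | Stop-AzureStorageBlobCopy -Force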
    
  • SP3 issues

    Last night my SP3 decided to stop working with any Touch covers.  They would work on other SP3s, just not this one.  It definitely made work a lot of fun today.

    Anyways, there is a “button reset” procedure that has worked in the past when it wouldn’t start.  Turns out, it solved this problem too.

    Solution 3 is the answer.

     

     

  • Surface Pro 3 Bootable USB

    Ugh, this has taken me way too long to finally figure out/fix.  I’ve been trying to wipe my Surface Pro 3 with TH2 – as I upgraded to RTM.  However, I’ve had a bear of a time getting my USB key bootable.

    Now, I’ve done this before, but for whatever reason my previous methods haven’t been working.  Turns out, there are 2 key things (one of which I was missing):

    • GPT partitioning
    • FAT32 formatting

    To make it easier, you can use Rufus.  Just make sure that after you select the ISO, you reselect GPT and FAT32.

    [Rufus screenshot: partition scheme set to GPT, file system set to FAT32]
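
    If you’d rather skip Rufus, the same thing can be done with the built-in storage cmdlets – a rough sketch, assuming the USB key shows up as disk 1 (check with Get-Disk first; Format-Volume can only do FAT32 up to 32GB):

    Get-Disk                                              # confirm which disk number the USB key is
    Clear-Disk -Number 1 -RemoveData -Confirm:$false      # wipe the existing partitions
    Initialize-Disk -Number 1 -PartitionStyle GPT         # GPT partitioning
    New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem FAT32 -NewFileSystemLabel "SP3BOOT"
    # then copy the contents of the mounted ISO onto the new volume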

    Once the key is formatted, you can boot from it either by restarting via advanced mode from within Windows or by holding the volume-down button when you turn the Surface on.

    *sigh* So much time wasted on this one.

  • Associating a reserved IP to a running deployment

    Microsoft has finally enabled the ability to associate a reserved IP with an already-created cloud service (VMs).  This is great news, as we have a few externally accessible VMs that were either built prior to this functionality existing or where we just plumb forgot during build.

    While an outage is the logical result, Microsoft doesn’t call out that the association will cause one, so it should be done during a normal change window.  Sadly, while the IP change itself takes very little time, DNS updates typically have a 20-minute TTL.
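
    The association itself should just be a one-liner in the classic (ASM) PowerShell module – names below are placeholders:

    # Associate an existing reserved IP with a running cloud service deployment
    Set-AzureReservedIPAssociation -ReservedIPName "MyReservedIP" -ServiceName "MyCloudService"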

    Other items that cause small network blips that may require a downtime window (all V1):

    • Adding new endpoints to a VM
    • Adding subnets to an already created virtual network
  • Breaking Blob Leases via PowerShell

    We are using SQL Server backup to Azure blob storage and had a meltdown today where the log backups were erroring out, leaving us with locked 1TB files up in Azure.  Needless to say, it happened late last night, so there were multiple hourly files in multiple folder structures all over our storage accounts.  It took a bit, but the following script clears out all the locks on blobs within a container, across all directories.  Please use it carefully and don’t run it against your “vhds” container!

    Also, it requires the Microsoft.WindowsAzure.Storage.dll assembly from the Windows Azure Storage NuGet package.  You can grab this by downloading the command-line nuget.exe and running the below.  Note, it will dump the file you need into .\WindowsAzure.Storage.<ver>\lib\net40

    nuget.exe install WindowsAzure.Storage
    

    Break-lease script below – one-line modification from https://msdn.microsoft.com/en-us/library/jj919145.aspx

    param(
        [Parameter(Mandatory=$true)]
        [string]$storageAccount,
        [Parameter(Mandatory=$true)]
        [string]$storageKey,
        [Parameter(Mandatory=$true)]
        [string]$blobContainer
    )

    $storageAssemblyPath = $pwd.Path + "\Microsoft.WindowsAzure.Storage.dll"

    # Well known Restore Lease ID
    $restoreLeaseId = "BAC2BAC2BAC2BAC2BAC2BAC2BAC2BAC2"

    # Load the storage assembly without locking the file for the duration of the PowerShell session
    $bytes = [System.IO.File]::ReadAllBytes($storageAssemblyPath)
    [System.Reflection.Assembly]::Load($bytes)

    $cred = New-Object 'Microsoft.WindowsAzure.Storage.Auth.StorageCredentials' $storageAccount, $storageKey

    $client = New-Object 'Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient' "https://$storageAccount.blob.core.windows.net", $cred

    $container = $client.GetContainerReference($blobContainer)

    # List all the blobs in the container, including subdirectories (flat listing)
    $allBlobs = $container.ListBlobs($null, 1)

    $lockedBlobs = @()
    # Filter blobs that have a lease status of "Locked"
    foreach ($blob in $allBlobs)
    {
        $blobProperties = $blob.Properties
        if ($blobProperties.LeaseStatus -eq "Locked")
        {
            $lockedBlobs += $blob
        }
    }

    if ($lockedBlobs.Count -eq 0)
    {
        Write-Host "There are no blobs with locked lease status"
    }
    if ($lockedBlobs.Count -gt 0)
    {
        Write-Host "Breaking leases"
        foreach ($blob in $lockedBlobs)
        {
            try
            {
                $blob.AcquireLease($null, $restoreLeaseId, $null, $null, $null)
                Write-Host "The lease on $($blob.Uri) is a restore lease"
            }
            catch [Microsoft.WindowsAzure.Storage.StorageException]
            {
                if ($_.Exception.RequestInformation.HttpStatusCode -eq 409)
                {
                    Write-Host "The lease on $($blob.Uri) is not a restore lease"
                }
            }

            Write-Host "Breaking lease on $($blob.Uri)"
            $blob.BreakLease($(New-TimeSpan), $null, $null, $null) | Out-Null
        }
    }
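
    Usage is straightforward – save it as something like Break-BlobLeases.ps1 (the name is arbitrary) in the same folder as the extracted Microsoft.WindowsAzure.Storage.dll, then:

    .\Break-BlobLeases.ps1 -storageAccount <accountName> -storageKey <accountKey> -blobContainer <containerName>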
    

     

  • Adding Additional Azure Disks to a VM's StoragePool that is part of a SQL Server AlwaysOn Cluster

    To get optimal performance out of your Azure VMs running SQL Server, MS recommends using Storage Spaces and striping multiple Azure disks[1]. The nice thing about storage pools in Storage Spaces is that they allow you to add disks behind the scenes without impacting the actual volume.

    Now let’s say you have a SQL AlwaysOn cluster (2+ nodes), and for performance reasons (IOPS) you realize that you need to add more disks.  As Storage Spaces shows all disks (physical, virtual, and storage pools) across the whole cluster, it is possible you won’t be able to simply add them due to a naming mismatch.  Fear not though, it is still possible if you follow the steps below (a consolidated sketch follows the list):

    • Add the new disks to the VM
    • Log into the VM
    • Failover SQL to a secondary if the current VM is the primary
    • Stop clustering service on the VM
    • Run Get-PhysicalDisk to get the disk names
    • Run Add-PhysicalDisk -StoragePoolFriendlyName <storagepool> -PhysicalDisks (Get-PhysicalDisk -FriendlyName <disks>)
    • Run Update-HostStorageCache (if we don’t do this sometimes the volume resize doesn’t work)
    • Run Resize-VirtualDisk -FriendlyName <diskName> -Size <size>
    • Run Update-HostStorageCache (if we don’t do this sometimes the disk resize doesn’t work)
    • Run Resize-Partition -Size <size> -DriveLetter <letter>
    • Start the clustering service on the machine
    • Failback SQL to the VM if required
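
    As promised, here’s what that looks like end-to-end as a rough sketch – the pool, virtual disk, drive letter, and physical disk names are all hypothetical, so substitute your own (and fail SQL over first):

    Stop-Service -Name ClusSvc                        # stop the clustering service on this node
    Get-PhysicalDisk                                  # note the friendly names of the new disks
    Add-PhysicalDisk -StoragePoolFriendlyName "SQLPool" `
        -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk4", "PhysicalDisk5")
    Update-HostStorageCache                           # without this the resize sometimes fails
    Resize-VirtualDisk -FriendlyName "SQLData" -Size 2TB
    Update-HostStorageCache
    Resize-Partition -DriveLetter F `
        -Size (Get-PartitionSupportedSize -DriveLetter F).SizeMax
    Start-Service -Name ClusSvc                       # bring clustering back online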

    Hopefully this helps someone as we were beating our heads in for quite a few days (along with MS).

    [1] https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx

  • Update Azure Internal Load Balancer Endpoints

    Looking to update your Azure ILB endpoints, but are struggling with the Set-AzureEndpoint cmdlet?  You should be using the Set-AzureLoadBalancedEndpoint cmdlet instead!
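
    Something along these lines should do it (classic/ASM module; the service name, LB set name, ports, and probe values below are placeholders):

    Set-AzureLoadBalancedEndpoint -ServiceName "MyCloudService" -LBSetName "WebLBSet" `
        -Protocol tcp -LocalPort 80 -ProbeProtocolTCP -ProbePort 8080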

  • Surface Pro 3 Function Keys

    One of the bigger things I see as a complaint about the Surface Pro 3 (SP3) is that by default the Function keys are not the primary button press.  Instead the defaults are the shortcut keys.  Additionally, it’s not really documented anywhere how to switch what is primary.

    Well, you can.  Just use CAPS + Fn to switch between what you want to be primary.

    Unfortunately, I use a lot of the Fn keys, but also Home and End.  Oh well, guess I can’t have my cake and eat it too.