SQL Database Backup on IaaS using Azure Automation

I had a need to take a full SQL database backup from a virtual machine running SQL Server hosted on Azure. This is done via an Azure Automation account, executing a runbook on a hybrid worker. This is a great way to take an offline copy of your production SQL and store it someplace safe.

To accomplish this we will use the PowerShell module 'sqlps', which should be installed with SQL Server, and run the command Backup-SqlDatabase.

Backup-SqlDatabase (SqlServer) | Microsoft Docs

Store SQL Storage Account Credentials

Before we can run the Backup-SqlDatabase command, we must have a credential for the Storage Account saved in SQL Server using New-SqlCredential.

New-SqlCredential (SqlServer) | Microsoft Docs

Import-Module sqlps

# set parameters
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccountName>"
$storageKey = "<storageAccountKey>"
$secureString = ConvertTo-SecureString $storageKey -AsPlainText -Force
$credentialName = "azureCredential-" + $storageAccount

Write-Host "Generate credential: " $credentialName

# cd to the SQL Server provider and get instances
cd $sqlPath
$instances = Get-ChildItem

# loop through instances and create a SQL credential, output any errors
foreach ($instance in $instances) {
    try {
        $path = "$($sqlPath)\$($instance.DisplayName)\credentials"
        New-SqlCredential -Name $credentialName -Identity $storageAccount -Secret $secureString -Path $path -ErrorAction Stop | Out-Null
        Write-Host "...generated credential $($path)\$($credentialName)."
    }
    catch {
        Write-Host $_.Exception.Message
    }
}
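To confirm the credential landed, you can query sys.credentials on the instance; a minimal check (assuming the default instance and that Invoke-Sqlcmd is available alongside the sqlps module):

# Sketch: list stored credentials on the default instance
Invoke-Sqlcmd -ServerInstance $env:COMPUTERNAME -Query "SELECT name, credential_identity FROM sys.credentials"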

Backup SQL Databases with an Azure Runbook

The runbook below loops through the instances found on the server (the DEFAULT instance in my case) and excludes both tempdb and model from backup.

Import-Module sqlps

# set parameters
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccount>"
$blobContainer = "<containerName>"
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"
$credentialName = "azureCredential-" + $storageAccount
$prefix = Get-Date -Format yyyyMMdd

Write-Host "Using credential: " $credentialName
Write-Host "Backup container: " $backupUrlContainer

# cd to the SQL Server provider and get instances
cd $sqlPath
$instances = Get-ChildItem

# loop through instances and back up all databases (excluding tempdb and model)
foreach ($instance in $instances) {
    $path = "$($sqlPath)\$($instance.DisplayName)\databases"
    $databases = Get-ChildItem -Force -Path $path | Where-Object { $_.Name -ne "tempdb" -and $_.Name -ne "model" }

    foreach ($database in $databases) {
        try {
            $databasePath = "$($path)\$($database.Name)"
            Write-Host "...starting backup: " $databasePath
            $fileName = $prefix + "_" + $database.Name + ".bak"
            $backupFileURL = $backupUrlContainer + $fileName
            Write-Host "...backup URL: " $backupFileURL
            Backup-SqlDatabase -Database $database.Name -Path $path -BackupFile $backupFileURL -SqlCredential $credentialName -CompressionOption On
            Write-Host "...backup complete."
        }
        catch {
            Write-Host $_.Exception.Message
        }
    }
}


NOTE: You will notice a performance hit on the SQL Server, so schedule this runbook in a maintenance window.
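As a sketch of the scheduling side (the Automation account, runbook, schedule and worker group names below are placeholders, assuming the Az.Automation module and an existing schedule), the runbook can be tied to a schedule and pinned to the hybrid worker group like this:

# Hypothetical names; substitute your own Automation account, runbook, schedule and worker group
Register-AzAutomationScheduledRunbook `
    -AutomationAccountName "prod-automation" `
    -ResourceGroupName "prod-automation-rg" `
    -RunbookName "Backup-SqlDatabases" `
    -ScheduleName "Weekly-Maintenance-Window" `
    -RunOn "SqlHybridWorkerGroup"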


Azure Migrate – Additional Firewall Rules

When deploying an Azure Migrate appliance to discover servers, the appliance needs outbound internet access. In many IT environments, servers are disallowed internet access unless it is restricted to a prescribed set of URLs. Thankfully, Microsoft has published the list of URLs the appliance needs whitelisted. This can be found here:

https://docs.microsoft.com/en-us/azure/migrate/migrate-appliance#public-cloud-urls

Issue

Once the appliance has booted up and you're onto the GUI, you must enter the Azure Migrate project key from your subscription and then authenticate to your subscription. We encountered the following error when attempting to resolve the initial key:

Azure Migrate Error

Failed to connect to the Azure Migrate project. Check the errors details, follow the remediation steps and click on ‘Retry’ button

The Azure Migrate key doesn't have an expiration on it, so this wasn't the issue. We had whitelisted the URLs, but on the firewall we were seeing dropped packets:

Time      Rule          Proto  Source            Destination
13:40:41  Default DROP  TCP    10.0.0.10:50860   204.79.197.219:443
13:40:41  Default DROP  TCP    10.0.0.10:50861   204.79.197.219:80
13:40:41  Default DROP  TCP    10.0.0.10:50857   152.199.39.242:443
13:40:42  Default DROP  TCP    10.0.0.10:50862   204.79.197.219:80
13:40:42  Default DROP  TCP    10.0.0.10:50858   104.74.50.201:80
13:40:43  Default DROP  TCP    10.0.0.10:50863   52.152.110.14:443
13:40:44  Default DROP  TCP    10.0.0.10:50860   204.79.197.219:443
13:40:44  Default DROP  TCP    10.0.0.10:50861   204.79.197.219:80
13:40:45  Default DROP  TCP    10.0.0.10:50862   204.79.197.219:80
13:40:46  Default DROP  TCP    10.0.0.10:50863   52.152.110.14:443
13:40:46  Default DROP  TCP    10.0.0.10:50859   204.79.197.219:443
13:40:47  Default DROP  TCP    10.0.0.10:50864   40.90.189.152:443
13:40:47  Default DROP  TCP    10.0.0.10:50865   52.114.36.3:443
13:40:49  Default DROP  TCP    10.0.0.10:50864   40.90.189.152:443
13:40:50  Default DROP  TCP    10.0.0.10:50865   52.114.36.3:443
13:40:50  Default DROP  TCP    10.0.0.10:50860   204.79.197.219:443
13:40:50  Default DROP  TCP    10.0.0.10:50861   204.79.197.219:80
13:40:51  Default DROP  TCP    10.0.0.10:50862   204.79.197.219:80
13:40:52  Default DROP  TCP    10.0.0.10:50863   52.152.110.14:443
Subset of the dropped packets based on destination IP during the connection failure

Reviewing the SSL certificates on these IP addresses, they are all Microsoft services with multiple SAN entries. We also had a look at the traffic from the developer tools in the browser:

We can see that the browser is trying to start an AAD workflow for device login, which is articulated in the onboarding documentation. Our issue was that the JavaScript loaded inside the browser session wasn't covered by the whitelisted URLs. Reviewing the SAN entries in the certificates presented by the destination IPs in the table above, we looked for 'CDN' or 'Edge' URLs.
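If you want to do the same review without a browser, a minimal sketch like the following will dump the SAN entries a destination IP presents (certificate validation is deliberately bypassed because we only want to read the certificate, not trust it):

# Sketch: dump the SAN entries of the certificate presented by a dropped destination IP
$ip  = "204.79.197.219"
$tcp = New-Object System.Net.Sockets.TcpClient($ip, 443)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, { $true })
$ssl.AuthenticateAsClient($ip)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
($cert.Extensions | Where-Object { $_.Oid.FriendlyName -eq "Subject Alternative Name" }).Format($true)
$ssl.Dispose()
$tcp.Dispose()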

The fix

The following URLs were added to the whitelist group for the appliance and the problems went away.

IP address       URL
204.79.197.219   *.azureedge.net
152.199.39.242   *.azureedge.net
152.199.39.242   *.wpc.azureedge.net
152.199.39.242   *.wac.azureedge.net
152.199.39.242   *.adn.azureedge.net
152.199.39.242   *.fms.azureedge.net
152.199.39.242   *.azureedge-test.net
152.199.39.242   *.ec.azureedge.net
152.199.39.242   *.wpc.ec.azureedge.net
152.199.39.242   *.wac.ec.azureedge.net
152.199.39.242   *.adn.ec.azureedge.net
152.199.39.242   *.fms.ec.azureedge.net
152.199.39.242   *.aspnetcdn.com
152.199.39.242   *.azurecomcdn.net
152.199.39.242   cdnads.msads.net
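To sanity-check the change from the appliance, something like this confirms a CDN endpoint is now reachable (Test-NetConnection ships with Windows; ajax.aspnetcdn.com is just one concrete host behind the wildcards above, picked for illustration):

# Sketch: verify outbound 443 to a representative CDN host after whitelisting
Test-NetConnection -ComputerName "ajax.aspnetcdn.com" -Port 443 | Select-Object ComputerName, RemoteAddress, TcpTestSucceeded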


Undocumented ARM Oddities – .Net Core App Services

Every once in a while, when working with ARM templates you come across something that is missing from the official Microsoft ARM template reference. In my case yesterday, I was looking to update the configuration of an Azure App Service to use the DotNetCore stack (rather than .NET 4.8).

While I initially thought this would be a quick job of simply looking up the ARM reference and making the required changes, I found that there was nothing about DotNetCore in the ARM reference. Funnily enough, there is a value for "netFrameworkVersion", but don't be deceived: if you are looking to set up DotNetCore, this value is not for you (it is for regular .Net only).

To better understand the problem, I Clickly Clicky'd an App Service and configured it for DotNetCore (Clickly Clicky is our lingo for deploying infrastructure using the portal rather than a CLI or template). With this, I attempted my usual trick of exporting a template and observing the JSON it spits out. However, much to my amazement, I couldn't see any reference to DotNetCore in there at all.

In the end it was the Azure Resource Explorer which came to my rescue. I used the tool to explore the example I created and found a value called "CURRENT_STACK" in the properties of the "Microsoft.Web/sites/config" resource type.

After playing with this for a while, I was able to translate it into my ARM template with the following JSON.

{
    "type": "Microsoft.Web/sites",
    "name": "[variables('WebSiteName')]",
    "apiVersion": "2020-06-01",
    "location": "[resourceGroup().location]",
    "kind": "app",
    "properties": {
        "siteConfig": {
            "metadata": [{
                "name": "CURRENT_STACK",
                "value": "dotnetcore"
            }]
        }
    }
}
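For completeness, a minimal sketch of pushing the updated template out (the resource group and file names are placeholders):

# Hypothetical deployment of a template containing the snippet above
New-AzResourceGroupDeployment -ResourceGroupName "my-webapp-rg" -TemplateFile ".\webapp.json"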

Hopefully this helps anyone who encounters this problem.

Cheers,

Joel


Azure Bastion – Unable to query Bastion data.

I've recently set up Azure Bastion to give external users/vendors access to resources via RDP or SSH, following these instructions:

https://docs.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal

The key permissions outlined in the prerequisites at point 3 are:

  • A virtual network.
  • A Windows virtual machine in the virtual network.
  • The following required roles:
    • Reader role on the virtual machine.
    • Reader role on the NIC with private IP of the virtual machine.
    • Reader role on the Azure Bastion resource.
  • Ports: To connect to the Windows VM, you must have the following ports open on your Windows VM:
    • Inbound ports: RDP (3389)

My scenario is to invite a guest AAD account, add them to a group and grant the group access as per below:

  • Grant Contributor role to the resource group that has the VMs for the application.
  • Grant Reader role to the resource group that has the Bastion host.

This way the guest user logs into the Azure Portal, complying with our conditional access policy, and is presented with only the resources they have read or higher access to. In this scenario that is the two resource groups outlined above.
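As a sketch of that setup (the AzureAD and Az modules are assumed, and every name below is a placeholder), the invite, group membership and role assignments look roughly like this:

# Hypothetical guest invite and access grants; substitute your own names
$invite = New-AzureADMSInvitation -InvitedUserEmailAddress "vendor@example.com" `
    -InviteRedirectUrl "https://portal.azure.com" -SendInvitationMessage $true
$group = Get-AzureADGroup -SearchString "App-Vendors"
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $invite.InvitedUser.Id

# Contributor on the application resource group, Reader on the Bastion resource group
New-AzRoleAssignment -ObjectId $group.ObjectId -RoleDefinitionName "Contributor" -ResourceGroupName "APP-RG"
New-AzRoleAssignment -ObjectId $group.ObjectId -RoleDefinitionName "Reader" -ResourceGroupName "BASTION-RG"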

When the guest user locates the virtual machine they wish to connect to and chooses Connect > Bastion > Use Bastion, the following error message is presented.

Error Message:

“Unable to query Bastion data”

Initially, working with Microsoft support, we found that granting Reader access at the subscription level gave the user permission to invoke the Bastion service, which then simply gives a username and password input.

These permissions were too lax as a workaround and exposed a lot of production data to accounts that didn't really have any business looking at it.

Workaround

[12/11/2020] The case is ongoing inside Microsoft and I will leave a definitive response when I get the information. I've done some further investigation into the least amount of additional 'Reader' permissions required, and I've found the following are needed in my scenario:

  • Reader permissions on the Virtual Network that has the ‘AzureBastionSubnet‘ subnet.
  • Reader permissions on the Virtual Network that has the connected virtual machine network interface.

In my scenario, the virtual machines are located in a development Virtual Network that is peered with the production Virtual Network which has the 'AzureBastionSubnet' subnet, so I had two sets of permissions to add. After applying the permissions you may need to get a coffee and return to the portal, as it took 5-10 minutes to kick in for me.
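A minimal sketch of those grants (the group and VNet names are placeholders for my scenario):

# Hypothetical names; Reader on both the Bastion VNet and the VNet holding the VM NICs
$group = Get-AzADGroup -DisplayName "App-Vendors"
$vnets = @(
    Get-AzVirtualNetwork -Name "prod-vnet" -ResourceGroupName "PROD-NET-RG"   # contains the 'AzureBastionSubnet' subnet
    Get-AzVirtualNetwork -Name "dev-vnet" -ResourceGroupName "DEV-NET-RG"     # contains the VM network interfaces
)
foreach ($vnet in $vnets) {
    New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Reader" -Scope $vnet.Id
}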

Hope this helps someone that has done some googling but is still scratching their head with this error message.


Get all restore points for an Azure VM

Getting restore points out of Azure can be like getting blood from a stone. The portal likes to always set a custom filter showing only ~90 days, and the PowerShell cmdlet only allows a 30-day interval between retrieval dates. When running 'Get-AzRecoveryServicesBackupRecoveryPoint' you get the following:

Get-AzRecoveryServicesBackupRecoveryPoint : Time difference should not be more than 30 days 

Sigh... I just want all my restore points for a virtual machine please! All of them, because it's my butt if for some reason I don't have them. Something like this can be useful to audit your backups against business needs for data retention.

Example: Get recovery points from the last two years for a single VM

# ------Variables--------------#
$retentionDays = 730
$vaultName = "PROD-RSV"
$vaultResourceGroup = "PROD-RSV-RG"
$friendlyName = "Server1"
#------------------------------#


$vault = Get-AzRecoveryServicesVault -ResourceGroupName $vaultResourceGroup -Name $vaultName 
$Container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered -FriendlyName $friendlyName -VaultId $vault.ID
$BackupItem = Get-AzRecoveryServicesBackupItem -Container $Container -WorkloadType AzureVM -VaultId $vault.ID

$startingPoint = -25
$finishingPoint = 0
$jobsArray = @()

# walk backwards in 25-day windows to stay under the cmdlet's 30-day limit
Do {
    $StartDate = (Get-Date).AddDays($startingPoint)
    $EndDate = (Get-Date).AddDays($finishingPoint)
    $RP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $BackupItem -StartDate $StartDate.ToUniversalTime() -EndDate $EndDate.ToUniversalTime() -VaultId $vault.ID
    $jobsArray += $RP
    $startingPoint = $startingPoint - 25
    $finishingPoint = $finishingPoint - 25
} until ($startingPoint -le -$retentionDays)

$jobsArray | Format-Table -AutoSize -Property RecoveryPointId, RecoveryPointTime, RecoveryPointType


The example above will go back 2 years (730 days). This outputs to a table, but you can quite easily export to a CSV via:

$jobsArray | Export-Csv c:\temp\restores.csv -NoTypeInformation 
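And since the point is auditing against a retention target, here is a quick sketch of a check (reusing the variables above) that flags when the oldest recovery point falls short:

# Sketch: warn if the oldest recovery point doesn't reach the retention target
$oldest = ($jobsArray | Sort-Object RecoveryPointTime | Select-Object -First 1).RecoveryPointTime
if ($oldest -gt (Get-Date).AddDays(-$retentionDays)) {
    Write-Warning "Oldest recovery point ($oldest) does not cover the $retentionDays-day retention target."
}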

Enjoy.