Migrate to AWS EC2 with SQL licensing included

While performing a lift-and-shift migration of a Windows SQL Server using the AWS Application Migration Service, I was challenged with wanting the newly migrated instance to have the Windows OS license ‘included’ and, additionally, the SQL Server Standard license billed to the account. The customer was moving away from their current hosting platform, where both licenses were covered under SPLA. Rather than going to a license reseller and purchasing SQL Server, it was preferred to have all the Windows OS and SQL Server software licensing paid through their AWS account.

In the Application Migration Service, under Launch settings > Operating System Licensing, we can see all we have are OS license options to toggle between license-included and BYOL.

Choose whether you want to Bring Your Own Licenses (BYOL) from the source server into the Test or Cutover instance. This defines whether the launched test or cutover instance will include the license for the operating system (License-included), or if the licensing will be based on that of the migrated server (BYOL: Bring Your Own License).

If we review a migrated instance where ‘license-included’ was selected during launch, using PowerShell on the instance itself we see only a single ‘BillingProduct = bp-6ba54002’ for Windows:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002 
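Note that this unauthenticated metadata call assumes IMDSv1 is still enabled; if the instance enforces IMDSv2 you need to request a session token first. A quick sketch of the equivalent:

# IMDSv2: request a session token, then present it when querying the identity document
$token = Invoke-RestMethod -Method PUT -Uri http://169.254.169.254/latest/api/token -Headers @{ "X-aws-ec2-metadata-token-ttl-seconds" = "21600" }
(Invoke-RestMethod -Uri http://169.254.169.254/latest/dynamic/instance-identity/document -Headers @{ "X-aws-ec2-metadata-token" = $token }).billingProducts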

AWS Preferred Approach

There are lots of options for migrating SQL Server to AWS, so we weren’t without choices.

  1. Leverage the AWS Database Migration Service (DMS) to migrate on-premises Windows SQL Server to the Relational Database Service (RDS).
  2. Leverage the AWS Database Migration Service (DMS) to migrate on-premises Windows SQL Server to AWS EC2 Instance provisioned from a Marketplace AMI which includes SQL licensing.
  3. Leverage SQL Server native tooling between an on-premises Windows SQL Server and an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing. Use one of:
    1. Native backup and restore
    2. Log shipping
    3. Database mirroring
    4. Always On availability groups
    5. Basic Always On availability groups
    6. Distributed availability groups
    7. Transactional replication
    8. Detach and attach
    9. Import/export

The only concern our customer had with all the above approaches was that there was technical configuration on the source server that wasn’t well understood. The risk of reimplementing on a new EC2 instance and missing configuration was perceived to be high impact.

Solution

The solution was to create a new EC2 instance from an AWS Marketplace AMI that we would like to be billed for. In my case I chose ‘Microsoft Windows Server 2019 with SQL Server 2017 Standard – ami-09ee4321c0e1218c3’.
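If that AMI ID has been superseded in your region, something like the following should locate the current AWS-published image (the name filter is an assumption based on AWS’s image naming convention):

# Find the latest Windows Server 2019 + SQL Server 2017 Standard image published by Amazon
Get-EC2Image -Owner amazon -Filter @{ Name = "name"; Values = "Windows_Server-2019-English-Full-SQL_2017_Standard*" } |
    Sort-Object CreationDate -Descending |
    Select-Object -First 1 ImageId, Name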

The procedure is to detach all the volumes (including root) from the migrated EC2 instance that has all the lovely SQL data and attach them to the newly created instance, which has the updated BillingProducts of ‘bp-6ba54002’ for Windows and ‘bp-6ba54003’ for SQL Standard assigned to it.

If we review a Marketplace EC2 instance where SQL Server Standard was selected, using PowerShell on the instance:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002
bp-6ba54003 

How will it work?

This process will require a short outage, as both EC2 instances have to be stopped to detach and re-attach the volumes. This all happens pretty fast, so expect it to last only a minute or so.

NOTE: The primary ENI cannot be changed, so there will be an IP swap. Be aware of any DNS updates you may need to make afterwards so that other servers can still resolve the SQL Server by hostname.
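If the hostname happens to live in a Route 53 private hosted zone, the record can be repointed from the same AWS Tools session. A minimal sketch, with placeholder values for the zone ID, record name and new private IP:

# UPSERT the A record so other servers resolve the SQL Server at its new private IP
$rr = New-Object Amazon.Route53.Model.ResourceRecord
$rr.Value = "10.0.1.25"                               # placeholder: new private IP
$rrs = New-Object Amazon.Route53.Model.ResourceRecordSet
$rrs.Name = "sql01.corp.example.com."                 # placeholder: SQL Server hostname
$rrs.Type = "A"
$rrs.TTL = 300
$rrs.ResourceRecords.Add($rr)
$change = New-Object Amazon.Route53.Model.Change
$change.Action = "UPSERT"
$change.ResourceRecordSet = $rrs
Edit-R53ResourceRecordSet -HostedZoneId "Z0123456789EXAMPLE" -ChangeBatch_Change $change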

The high level process of the script:

  1. Get Original Instance EBS mappings
  2. Stop the instances
  3. Detach the volumes from both instances
  4. Add the Original Instance’s EBS mappings to the New Instance
  5. Tag the New Instance with the Original Instance’s tags
  6. Tag the New Instance with the tag ‘Key=convertedFrom’ and ‘Value=<Original Instance ID>’
  7. Update the Name tag on the Original Instance with ‘Key=Name’ and ‘Value=<OldValue>.old’
  8. Update the Original Instance tags with its original BlockMapping for reference e.g. ‘Key=xvdc’ and ‘Value=vol-0c2174621f7fc2e4c’
  9. Start the New Instance

After the script completes, the Original Instance is renamed with a ‘.old’ suffix and tagged with its original block mappings for failback, while the New Instance carries the Original Instance’s tags, a ‘convertedFrom’ tag, and all of the migrated volumes attached.

$orginalInstanceID = "i-0ca332b0b062dbe76"
$newInstanceID = "i-0ce3eeadfa27e2f64"
$AccessKey = ""
$Secret = ""
$Region = "ap-southeast-2"

If (!(get-module -ListAvailable | ? {$_.Name -like "*AWS.Tools.EC2*"}))
{                
    Write-Output "WARNING: EC2 AWS Modules Not Installed Yet..." 
    Exit
}
$getModuleResults = Get-Module "AWS.Tools.EC2"
If (!$getModuleResults) 
{
    Write-Output "INFO: Loading AWS Module..."
    Import-Module AWS.Tools.Common -ErrorAction SilentlyContinue -Force
    Import-Module AWS.Tools.EC2 -ErrorAction SilentlyContinue -Force
}
else{
    Write-Output "INFO: AWS Module Already Loaded"
}

Set-AWSCredential -AccessKey $AccessKey -SecretKey $Secret
Set-DefaultAWSRegion -Region $Region
Write-Output "INFO: Getting details $($orginalInstanceID)"
$originalInstance = (Get-EC2Instance -InstanceId $orginalInstanceID).Instances
$orginalBlockMappings = $originalInstance.BlockDeviceMappings
$originalVolumes = @()
Write-Output "INFO: Getting EBS volumes from $($orginalInstanceID)"
ForEach($device in $orginalBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $originalVolumes += $Object
}
Write-Output $originalVolumes | Format-Table
$tempInstance = (Get-EC2Instance -InstanceId $newInstanceID).Instances
$tempBlockMappings = $tempInstance.BlockDeviceMappings
$tempVolumes = @()
Write-Output "INFO: Getting details $($newInstanceID)"
ForEach($device in $tempBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $tempVolumes += $Object
}
Write-Output $tempVolumes | Format-Table
#Lets do the work
Write-Output "INFO: Stop the instance $($orginalInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $orginalInstanceID -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $orginalInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}
Write-Output "INFO: Stop the instance $($newInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}

Write-Output "INFO: detaching the EBS volumes from $($orginalInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $orginalInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: detaching the EBS volumes from $($newInstanceID)...."
ForEach($volume in $tempVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Migrating $($orginalInstanceID) to $($newInstanceID) with $($originalVolumes.Count) connected volumes"
Write-Output "INFO: attaching the EBS volumes to $($newInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Add-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Tagging the $($newInstanceID) with original instance tags"
$orginalInstanceTags = $originalInstance.tags
ForEach($T in $orginalInstanceTags){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $T.Key
        $value = $T.Value
        $tag.Value = $value
        New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Try{
    $tag = New-Object Amazon.EC2.Model.Tag
    $tag.Key = "convertedFrom"
    $value = $orginalInstanceID
    $tag.Value = $value
    New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
}

Write-Output "INFO: Marking the $($orginalInstanceID) as old"
$orginalInstanceName = ($originalInstance.tags | ? {$_.Key -like "Name"}).Value
If($orginalInstanceName){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = "Name"
        $value = $orginalInstanceName+".old"
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Tagging the $($orginalInstanceID) with original volumes for failback"
ForEach($device in $orginalBlockMappings){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $device.DeviceName
        $value = $device.ebs.VolumeId
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Starting the instance $($newInstanceID) with newly attached drives...."
try{
    Start-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'running'){
    Write-Verbose "INFO: Waiting for instance to start..."
    Start-Sleep -s 10
}
$filterENI = New-Object Amazon.EC2.Model.Filter -Property @{Name = "attachment.instance-id"; Values = $newInstanceID}
$newInterface = Get-EC2NetworkInterface -Filter $filterENI
Write-Output "INFO: Conversion complete to $($newInstanceID)"
Write-Output "SUCCESS: Try logging into $($newInterface.PrivateIpAddress)"

Thanks Rene and Evan for passing on the idea.


SQL Database Backup on IaaS using Azure Automation

I had a need to take a full SQL database backup from a virtual machine running SQL Server hosted on Azure. This is done via an Azure Automation account executing a runbook on a hybrid worker. This is a great way to take an offline copy of your production SQL and store it someplace safe.

To accomplish this we will use the PowerShell module ‘sqlps’ (installed with SQL Server) and run the command Backup-SqlDatabase.

Backup-SqlDatabase (SqlServer) | Microsoft Docs

Store SQL Storage Account Credentials

Before we can run the Backup-SqlDatabase command we must have a saved credential stored in SQL for the Storage Account using New-SqlCredential.

New-SqlCredential (SqlServer) | Microsoft Docs

Import-Module sqlps
# set parameters
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccountName>"  
$storageKey = "<storageAccountKey>"  
$secureString = ConvertTo-SecureString $storageKey -AsPlainText -Force  
$credentialName = "azureCredential-"+$storageAccount

Write-Host "Generate credential: " $credentialName
  
#cd to sql server and get instances  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and create a SQL credential, output any errors
foreach ($instance in $instances)  {
    try {
        $path = "$($sqlPath)\$($instance.DisplayName)\credentials"
        New-SqlCredential -Name $credentialName -Identity $storageAccount -Secret $secureString -Path $path -ea Stop | Out-Null
        Write-Host "...generated credential $($path)\$($credentialName)."  }
    catch { Write-Host $_.Exception.Message } }

Backup SQL Databases with an Azure Runbook

The runbook below works on the DEFAULT instance and excludes both tempdb and model from backup.

Import-Module sqlps
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccount>"  
$blobContainer = "<containerName>"  
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"  
$credentialName = "azureCredential-"+$storageAccount
$prefix = Get-Date -Format yyyyMMdd

Write-Host "Generate credential: " $credentialName

Write-Host "Backup database: " $backupUrlContainer
  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and backup all databases (excluding tempdb and model)
foreach ($instance in $instances)  {
    $path = "$($sqlPath)\$($instance.DisplayName)\databases"
    $databases = Get-ChildItem -Force -Path $path | Where-object {$_.name -ne "tempdb" -and $_.name -ne "model"}

    foreach ($database in $databases) {
        try {
            $databasePath = "$($path)\$($database.Name)"
            Write-Host "...starting backup: " $databasePath
            $fileName = $prefix+"_"+$($database.Name)+".bak"
            $destinationBakFileName = $fileName
            $backupFileURL = $backupUrlContainer+$destinationBakFileName
            Write-Host "...backup URL: " $backupFileURL
            Backup-SqlDatabase -Database $database.Name -Path $path -BackupFile $backupFileURL -SqlCredential $credentialName -CompressionOption On 
            Write-Host "...backup complete."  }
        catch { Write-Host $_.Exception.Message } } }


NOTE: You will notice a performance hit on the SQL Server, so schedule this runbook in a maintenance window.
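To run it on a schedule, the runbook can be linked to an Automation schedule with the Az.Automation cmdlets; a sketch, with the resource names and hybrid worker group as placeholders:

# Create a weekly Sunday 2am schedule and link the backup runbook to it on the hybrid worker
New-AzAutomationSchedule -ResourceGroupName "rg-automation" -AutomationAccountName "aa-prod" `
    -Name "weekly-sql-backup" -StartTime (Get-Date "02:00").AddDays(1) -WeekInterval 1 `
    -DaysOfWeek Sunday -TimeZone "Australia/Adelaide"
Register-AzAutomationScheduledRunbook -ResourceGroupName "rg-automation" -AutomationAccountName "aa-prod" `
    -RunbookName "Backup-SqlDatabases" -ScheduleName "weekly-sql-backup" -RunOn "hybrid-worker-group"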


Assign Teams phone numbers using Microsoft Forms, Logic Apps and Azure Automation

Sometimes provisioning users into Office 365 services requires custom settings to be executed with PowerShell. This can present a problem when the teams responsible for managing the ongoing process have varying levels of understanding. How do you provide a front-end user interface for your custom code without the operators needing to know PowerShell?

This is the case for Microsoft Teams. The Microsoft Phone System ‘Direct Routing’ feature lets you connect your telephony gateway (SBC) to Microsoft Phone System. With this capability you can configure on-premises telephone numbers with the Microsoft Teams client. A subtle difference when using Direct Routing for your PSTN connectivity over Microsoft Calling (Telstra Calling in AU) is the inability to assign phone numbers to users via the Teams Admin Portal. The only way to assign the phone number is through a PowerShell cmdlet with the parameter ‘OnPremLineURI’:

Set-CsUser -Identity $UPN -EnterpriseVoiceEnabled $true -HostedVoiceMail $true -OnPremLineURI $lineURI

So herein lies my problem. Let’s fix it.

Components

  • Microsoft Forms – The front end UI with required input fields.
  • Logic App – The glue and manages the process.
  • Azure Runbook – where my code lives to perform the steps against Office 365 APIs.

Microsoft Forms

This is a pretty basic form. I just need enough information as inputs to execute my PowerShell. The great thing about Microsoft Forms is that access is authenticated, and because it’s built into Office 365, that authentication is all handled by Azure Active Directory.

Mobile Preview of the Form

Note: Unfortunately the simplicity of this form is also its shortcoming. I would love it if we could do some validation on the input string before it was submitted, especially on the phone number format and length.

Create the Logic App

Open a new Blank Template in the Logic App Designer and search for Microsoft Forms and use the option ‘When a new response is submitted‘.

Start by getting the form data into the Logic App.

Assign all of the form inputs as variables in your Logic App to then be passed to our Runbook.

Azure Runbook

Create a Runbook and make sure you have defined the required parameters (the Param block at the top of the script below). The Logic App will reference these automatically for you when working in the designer.

Note: All the settings we need are part of the Skype for Business PowerShell module, which isn’t available in the Azure Automation Gallery. If you install the Microsoft Teams module version 1.1.6 you will have the ability to execute New-CsOnlineSession and pull down all the cmdlets into the PS session. At the time of writing I don’t know a way of using a managed identity or client secret for New-CsOnlineSession, so it’s just a standard user account with bypass MFA (yuck).

 Param (
[Parameter (Mandatory = $true)][string]$upn,
[Parameter (Mandatory = $true)][string]$lineURI,
[Parameter (Mandatory = $true)][string]$dialPlan
)

$debug = $false

import-module MicrosoftTeams


if($debug -like $true){
    Write-Output "Connecting to Skype Online..."
}
$creds = Get-AutomationPSCredential -Name "SkypeCreds"
try{
    $sfboSession = New-CsOnlineSession -Credential $creds -OverrideAdminDomain "domain.onmicrosoft.com"
}
Catch{
    $errOutput = [PSCustomObject]@{
        status = "failed" 
        error = $_.Exception.Message
        step = "Connecting to Skype Online"
        cmdlet = "New-CsOnlineSession"
    }
    Write-Output ( $errOutput | ConvertTo-Json)
    exit
}
if($debug -like $true){
    Write-Output "Importing PS Session..."
}
try{
    Import-PSSession $sfboSession -AllowClobber
}
Catch{
    $errOutput = [PSCustomObject]@{
        status = "failed" 
        error = $_.Exception.Message
        step = "Importing PS Session"
        cmdlet = "Import-PSSession"
    }
    Write-Output ( $errOutput | ConvertTo-Json)
    exit
}
if($debug -like $true){
    Write-Output "Processing line: $($upn) "
}
    #Correct User: $upn is a mandatory parameter, so just guard against an empty value slipping through
    if([string]::IsNullOrEmpty($upn)){
        Write-Output "ERROR: upn parameter was empty"
        exit
    }
    #Correct Number: normalise to an E.164 'tel:' URI (the length checks assume AU numbers)
    if($lineURI -notlike "tel:*"){
        if($lineURI.Length -eq 12){
            # already has the leading '+', e.g. +61412345678
            $lineURI = "tel:"+$lineURI
        }
        elseif($lineURI.Length -eq 11){
            # missing the leading '+', e.g. 61412345678
            $lineURI = "tel:+"+$lineURI
        }
    }
if($debug -like $true){
    Write-Output "  INFO: Using values - $($upn) with $($lineURI)" 
    Write-Output "  INFO: Attempting to remove Skype for Business Online settings: VoiceRoutingPolicy" 
}    
    try{
        Grant-CsVoiceRoutingPolicy -PolicyName $NULL -Identity $upn
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "VoiceRoutingPolicy"
            cmdlet = "Grant-CsVoiceRoutingPolicy"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }
if($debug -like $true){
    Write-Output "  INFO: Attempting to remove Skype for Business Online settings: UserPstnSettings" 
}    
    try{
        Set-CsUserPstnSettings -Identity $upn -AllowInternationalCalls $false -HybridPSTNSite $null | out-null
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "UserPstnSettings"
            cmdlet = "Set-CsUserPstnSettings"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }
    # https://docs.microsoft.com/en-us/powershell/module/skype/grant-csteamsupgradepolicy?view=skype-ps
if($debug -like $true){    
    Write-Output "  INFO: Attempting to grant Teams settings: user to UpgradeToTeams (TeamsOnly)." #Upgrades the user to Teams and prevents chat, calling, and meeting scheduling in Skype for Business
}    
    try{
        Grant-CsTeamsUpgradePolicy -PolicyName UpgradeToTeams -Identity $upn
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "UpgradeToTeams"
            cmdlet = "Grant-CsTeamsUpgradePolicy"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }
if($debug -like $true){
    Write-Output "  INFO: Attempting to set Teams settings: Enabling Telephony Features & Configure Phone Number"
}
    try{
        Set-CsUser -Identity $UPN -EnterpriseVoiceEnabled $true -HostedVoiceMail $true -OnPremLineURI $lineURI
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "SetUser"
            cmdlet = "Set-CsUser"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }
if($debug -like $true){
    Write-Output "  INFO: Attempting to grant Teams settings: TeamsCallingPolicy" #Policies designate which users are able to use calling functionality within teams and determine the interoperability state with Skype for Business
}
    try{
        Grant-CsTeamsCallingPolicy -PolicyName Tag:AllowCalling -Identity $upn
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "TeamsCallingPolicy"
            cmdlet = "Grant-CsTeamsCallingPolicy"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }
if($debug -like $true){
    Write-Output "  INFO: Attempting to grant Teams settings: Assign the Online Voice Routing Policy"
}
    try{
        Grant-CsOnlineVoiceRoutingPolicy -Identity $upn -PolicyName Australia
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "VoiceRoutingPolicy"
            cmdlet = "Grant-CsOnlineVoiceRoutingPolicy"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }
if($debug -like $true){
    Write-Output "  INFO: Set Dial"
}
    try{
        
        if($dialPlan -eq "National"){
            Grant-CsTenantDialPlan -PolicyName $null -Identity $upn
        }else{
            Grant-CsTenantDialPlan -PolicyName $dialPlan -Identity $upn
        }
        
    }
    Catch{
        $errOutput = [PSCustomObject]@{
            status = "failed" 
            error = $_.Exception.Message
            step = "DialPlan"
            cmdlet = "Get-CsEffectiveTenantDialPlan"
        }
        Write-Output ( $errOutput | ConvertTo-Json)
        exit
    }

    #Completion Output
    $errOutput = [PSCustomObject]@{
        status = "Completed" 
        error = "None"
        step = "endOfJob"
        cmdlet = "None"
    }
    Write-Output ( $errOutput | ConvertTo-Json)
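
For testing outside the Logic App, the runbook can be invoked directly with the Az.Automation cmdlets; a quick sketch, with the account and runbook names as placeholders:

# Kick off the runbook with the three expected parameters
$params = @{ upn = "jane.citizen@example.com"; lineURI = "tel:+61881234567"; dialPlan = "National" }
Start-AzAutomationRunbook -ResourceGroupName "rg-automation" -AutomationAccountName "aa-prod" `
    -Name "Assign-TeamsNumber" -Parameters $params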
 

Link the Runbook to your Logic App

Now we can update the Logic App with our Runbook information.

Output the details via Email

I found the best way to get consistent, structured results is to have error handling in your Runbook and pass the output back to the Logic App as JSON with a known schema/structure. A sample output of the JSON can be used to generate a schema, like the example below.

{
    "status":  "failed",
    "error":  "One or more errors occurred.: Unable to find an entry point named \u0027GetPerAdapterInfo\u0027 in DLL \u0027iphlpapi.dll\u0027.",
    "step":  "Connecting to Skype Online",
    "cmdlet":  "New-CsOnlineSession"
}

This enables you to have a sufficient level of diagnostic logging as part of the output. In this case I’m using an email.

The example workflow is below.

Additions

Additional functionality you could include might be:

  • Check for licenses
    • AAD Module in PowerShell, or
    • AAD Group Membership in Logic App
  • License the user via PowerShell or Graph
  • Send the response as a Teams notification or post to a Teams channel, rather than email.
  • Email the user on successful completion detailing they have a new phone number.
  • More error handling
  • Smaller, more specific Runbooks that are executed rather than one large script block, allowing for more conditions to be considered per step.

Let’s Talk Teams!

We have years of experience deploying unified communications in the Microsoft stack. Reach out: we have a rapid deployment solution for Teams Direct Routing leveraging the public cloud, and we have tried and tested a number of flavours of SIP providers. Trial or PoC a voice solution with minimal effort, leveraging public cloud deployments.

Learn More


Automating Azure Site Recovery with PowerShell

In a recent consulting engagement, I’ve needed to perform a large-scale migration of a company’s virtual machine (VM) fleet from an on-premises datacenter to Microsoft Azure. Thinking about what that actually means: we’re picking up many compute workloads that are (in most cases) essential for day-to-day business operation and re-homing them to a new slice of a Microsoft-managed datacenter. After coming out the other end and completing the project, I thought I would shed some light on the tools that I used and developed to make the vision a reality.

In this particular engagement the customer is a large enterprise with a VMware environment servicing 300+ VMs. When we consider the business value behind each of these compute workloads, it quickly becomes apparent that selecting the right tooling and approach is vital to deliver a successful outcome whilst causing as little disruption to the business as possible.

Enter Azure Site Recovery

Azure Site Recovery (ASR) is Microsoft’s Disaster Recovery as a Service solution, which can replicate workloads running on physical and virtual machines from one location to a secondary location. As a disaster recovery platform, it’s possible for workloads to fail over and successfully fail back in a disaster scenario. ASR can also be used to migrate workloads to Azure by completing the failover component without failing back.

Why Should We Automate Azure Site Recovery?

I like to automate things like this because a computer following a process that someone writes will always perform it the same way; we know what the output will look like. In this case that means a like-for-like VM that looks and feels like it did in its previous life, before being migrated. When we introduce an operator into the mix we also introduce the human element. Things like resource names and groups, VM specs, disk settings, network location and IP addresses all need to be configured for each VM migration.

To have success running migrations at scale, it is important to use known, well-tested, repeatable processes. For me, that means figuring out the best way to use a tool and then automating it so that you (or anyone else) can use it the right way, every time, easily.

How Can We Automate Azure Site Recovery?

I use PowerShell as an automation tool on top of ASR for a couple of reasons. The main reason being that Microsoft provide and maintain a set of PowerShell modules for interacting with Azure resources, including ASR. This is known as the Az module; see our previous post on the Azure PowerShell Az module for a deeper explanation. PowerShell can also run almost anywhere thanks to PowerShell Core, a cross-platform edition of PowerShell that runs on Windows, macOS and Linux.

Armed with PowerShell and the Az module, we can get cracking on the fun stuff: bashing out some lines of code. My approach and methodology here usually involve a fair bit of back and forth, playing with the commands that are available to me and learning the best ways to drive them. Importantly, you don’t want to do this with live data; setting up an isolated sandpit with dummy data will go a long way in allowing you to upskill on the tools while making sure your production systems remain untouched.

Once I’ve got a handle on the commands that are needed and how they fit together, I make an MVP (minimum viable product) script. The idea here is to demonstrate that it’s possible for the tooling to work (it’s not pretty, but it works). To paint a picture, one of my MVP scripts will usually have a bunch of variables at the start declaring all the info that is required: things like VM name, source location, target location etc. From here, I usually design the script to be run line by line; this is mostly for simplicity’s sake. Complexity can come later; right now it just needs to be as simple as possible. At this stage, we can demonstrate our capability to perform a migration with PowerShell. A quick example of this is setting up a replication job. Preceding this line, I do a series of get statements to build up all the variables seen in the command below.

$replicationJob = New-AzRecoveryServicesAsrReplicationProtectedItem -VMwareToAzure `
    -ProtectableItem $vm `
    -Name (New-Guid).Guid `
    -ProtectionContainerMapping $replicationPolicy `
    -ProcessServer $ProcessServer `
    -Account $Account `
    -RecoveryResourceGroupId $ResourceGroup.ResourceId `
    -LogStorageAccountId $LogStorageAccount.Id `
    -RecoveryAzureNetworkId $vnet.Id `
    -RecoveryAzureSubnetName $failoverSubnetName
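The ‘get statements’ mentioned above might look something like this sketch (the vault, fabric, policy and target resource names are placeholders for your environment):

# Point the ASR cmdlets at the Recovery Services vault
$vault = Get-AzRecoveryServicesVault -Name "rsv-migration"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# Walk down from the fabric to the protectable VM and its replication policy mapping
$fabric            = Get-AzRecoveryServicesAsrFabric -FriendlyName "OnPremConfigServer"
$container         = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric
$vm                = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $container -FriendlyName "SQL01"
$replicationPolicy = Get-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainer $container -Name "ReplicationPolicy-Mapping"

# VMware-specific details hang off the fabric: the process server and run-as account
$ProcessServer = $fabric.FabricSpecificDetails.ProcessServers[0]
$Account       = $fabric.FabricSpecificDetails.RunAsAccounts[0]

# Target-side resources for the failover
$ResourceGroup      = Get-AzResourceGroup -Name "rg-migrated-vms"
$LogStorageAccount  = Get-AzStorageAccount -ResourceGroupName "rg-asr" -Name "asrcachesa01"
$vnet               = Get-AzVirtualNetwork -Name "vnet-prod" -ResourceGroupName "rg-network"
$failoverSubnetName = "snet-servers"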

From here, I like to put some lipstick on it and make it feel like a more polished product. Personally, I like to use a series of questions and prompts to generate the variables I described in the last paragraph. I also add status checks and operator prompts to continue. An example of this could be when performing a failover: once the operator confirms they are ready to begin, the command executes the failover, then continuously checks the failover job status until it has completed; once completed, it tells the user running the script that it’s complete. Here is an example of a status check that I wrote for checking the progress of a failover job.

do {
    Clear-Host
    Write-Host "======== Monitoring Failover ========"
    Write-Host "This will refresh every 5 seconds."
    try {
        $failoverJob = Get-ASRJob -Name $failoverJob.Name
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of Failover job"
        Write-Host -ForegroundColor Red "ERROR - $_"
        log "ERROR" "Unable to get status of Failover job"
        log "ERROR" $_
        exit 
    }
    Write-Host "Failover status for $($VMName.FriendlyName) is $($failoverJob.State)"
    Start-Sleep -Seconds 5
} while (($failoverJob.State -eq "InProgress") -or ($failoverJob.State -eq "NotStarted"))
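The loop above leans on a small log helper for the audit trail; here is a minimal sketch of what mine looks like (the file path and format are assumptions):

function log {
    param([string]$Level, [string]$Message)
    # Append a timestamped line to a log file alongside the script
    $line = "{0} [{1}] {2}" -f (Get-Date -Format "yyyy-MM-dd HH:mm:ss"), $Level, $Message
    Add-Content -Path (Join-Path $PSScriptRoot "asr-migration.log") -Value $line
}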

Once you get this far, the sky is the limit. Like most things, it can evolve over time. I like to add error handling and logging so we can elegantly handle a failure and have an audit trail of operations. I take this approach with most of the processes I automate; I think it’s important to start small and work up from there.


Azure PowerShell ‘Az’ Module

https://azure.microsoft.com/en-us/blog/azure-powershell-az-module-version-1/

Microsoft released a new PowerShell module specifically for Azure late last year called “Az”. On the plus side, Az ensures that Windows PowerShell and PowerShell Core users can get the latest Azure tooling from PowerShell on every platform, be it Windows PowerShell on Windows or PowerShell Core on my preferred operating system, macOS.

Microsoft state that the Az module will be updated on a two-week cadence and will always be up to date, so that’s nice.
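Checking what you have installed and picking up the latest fortnightly release is a one-liner each:

Get-InstalledModule -Name Az | Select-Object Name, Version   # what's installed now
Update-Module -Name Az                                       # grab the latest from the PowerShell Gallery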

I’ve resisted upgrading to the new Az module until the completion of a recent customer engagement so as to avoid any complexity that a switch in modules may introduce. Call me risk averse, I know. . . So now that the project is complete, I’m excited to make the switch.

Ok so how do I upgrade from AzureRM to Az?

If you’ve been using PowerShell for Azure, you undoubtedly already have the AzureRM module installed. So it’s out with the old and in with the new. . . To accomplish this task I made use of some simple PowerShell to find the installed modules with a name like AzureRM and then uninstall them. Here is the code I lazily leeched from my colleague Arran Peterson after he successfully uninstalled the old modules.

Remove all the old AzureRM modules first . . .

$azurerm = Get-Module -ListAvailable | ? {$_.Name -like "AzureRM*"}

ForEach ($module in $azurerm) {
    $name = $module.Name
    $version = $module.Version
    Uninstall-Module -Name $name -MaximumVersion $version -Force
}

At the time of writing this blog the latest version available from the PowerShell Gallery is 1.5.0 https://www.powershellgallery.com/packages/Az/1.5.0

To install the module simply open PowerShell on your machine and enter:

Install-Module -Name Az

Boom, it’s that easy. . .

Ok great, but won’t this break all my scripts?

So when I first heard of the new module and the change in cmdlet namespace, my first reaction was shock. . . I’ve produced loads of PowerShell for customers over the past couple of years that use the “AzureRM” named cmdlets.

Microsoft state on their PowerShell Az blog that ‘Users are not required to migrate from AzureRM, as AzureRM will continue to be supported. However, it is important to note that all new Azure PowerShell features will appear only in ‘Az’.’  So my old stuff would continue to work, but they also state ‘Az and AzureRM cannot be executed in the same PowerShell session.’ So I’d need to make customers aware that they cannot mix AzureRM and Az cmdlets within a single session.

This all sounds like a bunch of annoying conversations and explanations I’d be faced with. I began to feel frustrated and was questioning why Microsoft saw the need to rename all of their cmdlets. I could feel a hate blog brewing. . .

However, as I read more I came across a diamond in the rough. . .AzureRM Aliases. Ah someone at Microsoft has considered my pain. . I could feel the catharsis as I read the official migration guide https://github.com/Azure/azure-powershell/blob/master/documentation/migration-guides/Az.1.0.0-migration-guide.md and came across the following statement. ‘To make the transition to these new cmdlet names simpler, Az introduces two new cmdlets, Enable-AzureRmAlias and Disable-AzureRmAlias. Enable-AzureRmAlias creates aliases from the older cmdlet names in AzureRM to the newer Az cmdlet names. The cmdlet allows creating aliases in the current session, or across all sessions by changing your user or machine profile.’
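Turning the aliases on is a single line. For example, to enable them for all of your future sessions:

# Create AzureRM -> Az aliases for the current user across sessions
Enable-AzureRmAlias -Scope CurrentUser
# Old scripts keep working, e.g. Get-AzureRmVM now invokes Get-AzVM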

What Now?

It’s time for a coffee then back to more PowerShell. . Happy Days. .