AWS EventBridge Triggering SSM Automation IAM Role Error

I recently wanted to create an Amazon EventBridge rule that runs an SSM Automation document on a schedule.

A rule watches for certain events (a cron schedule in my case) and then routes them to AWS targets that you choose. You can create a rule that performs an AWS action automatically when another AWS action happens, or one that performs an AWS action regularly on a set schedule.

EventBridge needs permission to call SSM StartAutomationExecution with the supplied Automation document and parameters. The rule-creation wizard offers to generate a new IAM role for this task.

In my case I received the error below:

Error Output

The Automation definition for an SSM Automation target must contain an AssumeRole that evaluates to an IAM role ARN.

If you receive this error, you can create the role manually using the following CloudFormation template.

AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template IAM Roles for Event Bridge | SSM Automation

Resources:
  AutomationServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - events.amazonaws.com
          Action: sts:AssumeRole
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole
      Path: "/"
      RoleName: EventBridgeAutomationServiceRole
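
If you create the rule programmatically rather than through the console, the role is supplied as the target's RoleArn. Below is a minimal sketch using the AWS.Tools.EventBridge PowerShell module; the rule name, schedule, document name and account/region values are all placeholders to adapt.

# Assumes AWS.Tools.EventBridge is installed and credentials/region are configured
Import-Module AWS.Tools.EventBridge

# Scheduled rule (6:00 PM UTC daily in this hypothetical example)
Write-EVRule -Name 'NightlyAutomation' -ScheduleExpression 'cron(0 18 * * ? *)'

# Point the rule at the Automation document, assuming the role created above
$target = New-Object Amazon.EventBridge.Model.Target
$target.Id = '1'
$target.Arn = 'arn:aws:ssm:ap-southeast-2:111111111111:automation-definition/MyAutomationDoc:$DEFAULT'
$target.RoleArn = 'arn:aws:iam::111111111111:role/EventBridgeAutomationServiceRole'
Write-EVTarget -Rule 'NightlyAutomation' -Target $target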

Migrate to AWS EC2 with SQL licensing included

While performing a lift-and-shift migration of a Windows SQL Server using the AWS Application Migration Service, I wanted the newly migrated instance to have the Windows OS license 'included' and, additionally, the SQL Server Standard license billed to the account. The customer was moving away from their current hosting platform, where both licenses were covered under SPLA. Rather than going to a license reseller and purchasing SQL Server, it was preferred to have all the Windows OS and SQL Server software licensing paid through their AWS account.

In the Application Migration Service, under Launch settings > Operating System Licensing, all we have is an OS licence option to toggle between license-included and BYOL:

Choose whether you want to Bring Your Own Licenses (BYOL) from the source server into the Test or Cutover instance. This defines whether the launched test or cutover instance will include the license for the operating system (License-included), or if the licensing will be based on that of the migrated server (BYOL: Bring Your Own License).

If we review a migrated instance where 'license-included' was selected during launch, using PowerShell on the instance itself we see only a single billing product, 'bp-6ba54002', for Windows:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002 

AWS Preferred Approach

There are lots of options for migrating SQL Server to AWS, so we weren't without choices.

  1. Leverage the AWS Database Migration Service (DMS) to migrate the on-premises Windows SQL Server to the Relational Database Service (RDS).
  2. Leverage the AWS Database Migration Service (DMS) to migrate the on-premises Windows SQL Server to an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing.
  3. Leverage SQL Server native tooling between the on-premises Windows SQL Server and an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing, using one of:
    1. Native backup and restore
    2. Log shipping
    3. Database mirroring
    4. Always On availability groups
    5. Basic Always On availability groups
    6. Distributed availability groups
    7. Transactional replication
    8. Detach and attach
    9. Import/export

The only concern our customer had with all the above approaches was that there was technical configuration on the source server that wasn't well understood. The risk of reimplementing on a new EC2 instance and missing configuration was perceived to be high impact.

Solution

The solution was to create a new EC2 instance from an AWS Marketplace AMI that we would like to be billed for. In my case I chose 'Microsoft Windows Server 2019 with SQL Server 2017 Standard' (ami-09ee4321c0e1218c3).
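
If you need to find the equivalent AMI in your own region, here is a quick lookup sketch with AWS.Tools.EC2; the name filter below follows AWS's public AMI naming convention and is an assumption to adapt.

# Hypothetical lookup: latest Amazon-published Windows Server 2019 with
# SQL Server 2017 Standard (license-included) AMI in the current region
Get-EC2Image -Owner amazon -Filter @{Name="name"; Values="Windows_Server-2019-English-Full-SQL_2017_Standard*"} |
    Sort-Object CreationDate -Descending |
    Select-Object -First 1 ImageId, Name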

The procedure is to detach all the volumes (including root) from the migrated EC2 instance that has all the lovely SQL data and attach them to the newly created instance, which has the updated BillingProducts of 'bp-6ba54002' for Windows and 'bp-6ba54003' for SQL Standard assigned to it.

If we review a Marketplace EC2 instance where SQL Server Standard was selected, using PowerShell on the instance:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002
bp-6ba54003 

How will it work?

This process requires a short outage, as both EC2 instances have to be stopped to detach and re-attach the volumes. This all happens pretty fast, so expect it to last only a minute or so.

NOTE: The primary ENI cannot be changed, so there will be an IP swap. Be aware of any DNS updates you may need to make afterwards so that the SQL Server remains resolvable by hostname to other servers.
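
If the hostname lives in AD-integrated DNS, that repoint can be scripted. A sketch using the Windows DnsServer module; the zone, host and IP below are placeholders.

# Hypothetical example: update the SQL Server's A record to the new private IP
$old = Get-DnsServerResourceRecord -ZoneName "corp.example.com" -Name "sqlserver01" -RRType A
$new = $old.Clone()
$new.RecordData.IPv4Address = [System.Net.IPAddress]::Parse("10.0.1.25")
Set-DnsServerResourceRecord -ZoneName "corp.example.com" -OldInputObject $old -NewInputObject $new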

The high-level process of the script:

  1. Get Original Instance EBS mappings
  2. Stop the instances
  3. Detach the volumes from both instances
  4. Add the Original Instance’s EBS mappings to the New Instance
  5. Tag the New Instance with the Original Instance’s tags
  6. Tag the New Instance with 'Key=convertedFrom' and 'Value=<Original Instance ID>'
  7. Update the Name tag on the Original Instance to 'Value=<OldValue>.old'
  8. Update the Original Instance tags with its original block mappings for reference, e.g. 'Key=xvdc' and 'Value=vol-0c2174621f7fc2e4c'
  9. Start the New Instance

After the script completes, the Original Instance is renamed with a '.old' suffix and tagged with its original block mappings for failback, while the New Instance carries the original instance's tags plus the 'convertedFrom' tag and has the migrated volumes attached (screenshots omitted). The full script:

$orginalInstanceID = "i-0ca332b0b062dbe76"
$newInstanceID = "i-0ce3eeadfa27e2f64"
$AccessKey = ""
$Secret = ""
$Region = "ap-southeast-2"

If (!(get-module -ListAvailable | ? {$_.Name -like "*AWS.Tools.EC2*"}))
{                
    Write-Output "WARNING: EC2 AWS Modules Not Installed Yet..." 
    Exit
}
$getModuleResults = Get-Module "AWS.Tools.EC2"
If (!$getModuleResults) 
{
    Write-Output "INFO: Loading AWS Module..."
    Import-Module AWS.Tools.Common -ErrorAction SilentlyContinue -Force
    Import-Module AWS.Tools.EC2 -ErrorAction SilentlyContinue -Force
}
else{
    Write-Output "INFO: AWS Module Already Loaded"
}

Set-AWSCredential -AccessKey $AccessKey -SecretKey $Secret
Set-DefaultAWSRegion -Region $Region
Write-Output "INFO: Getting details $($orginalInstanceID)"
$originalInstance = (Get-EC2Instance -InstanceId $orginalInstanceID).Instances
$orginalBlockMappings = $originalInstance.BlockDeviceMappings
$originalVolumes = @()
Write-Output "INFO: Getting EBS volumes from $($orginalInstanceID)"
ForEach($device in $orginalBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $originalVolumes += $Object
}
Write-Output $originalVolumes | Format-Table
$tempInstance = (Get-EC2Instance -InstanceId $newInstanceID).Instances
$tempBlockMappings = $tempInstance.BlockDeviceMappings
$tempVolumes = @()
Write-Output "INFO: Getting details $($newInstanceID)"
ForEach($device in $tempBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $tempVolumes += $Object
}
Write-Output $tempVolumes | Format-Table
#Let's do the work
Write-Output "INFO: Stop the instance $($orginalInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $orginalInstanceID -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $orginalInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}
Write-Output "INFO: Stop the instance $($newInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}

Write-Output "INFO: detaching the EBS volumes from $($orginalInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $orginalInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: detaching the EBS volumes from $($newInstanceID)...."
ForEach($volume in $tempVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Migrating $($orginalInstanceID) to $($newInstanceID) with $($originalVolumes.Count) connected volumes"
Write-Output "INFO: attaching the EBS volumes to $($newInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Add-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Tagging the $($newInstanceID) with original instance tags"
$orginalInstanceTags = $originalInstance.tags
ForEach($T in $orginalInstanceTags){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $T.Key
        $value = $T.Value
        $tag.Value = $value
        New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Try{
    $tag = New-Object Amazon.EC2.Model.Tag
    $tag.Key = "convertedFrom"
    $value = $orginalInstanceID
    $tag.Value = $value
    New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
}

Write-Output "INFO: Marking the $($orginalInstanceID) as old"
$orginalInstanceName = ($originalInstance.tags | ? {$_.Key -like "Name"}).Value
If($orginalInstanceName){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = "Name"
        $value = $orginalInstanceName+".old"
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Tagging the $($orginalInstanceID) with original volumes for failback"
ForEach($device in $orginalBlockMappings){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $device.DeviceName
        $value = $device.ebs.VolumeId
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Starting the instance $($newInstanceID) with newly attached drives...."
try{
    Start-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'running'){
    Write-Verbose "INFO: Waiting for instance to start..."
    Start-Sleep -s 10
}
$filterENI = New-Object Amazon.EC2.Model.Filter -Property @{Name = "attachment.instance-id"; Values = $newInstanceID}
$newInterface = Get-EC2NetworkInterface -Filter $filterENI
Write-Output "INFO: Conversion complete to $($newInstanceID)"
Write-Output "SUCCESS: Try logging into $($newInterface.PrivateIpAddress)"

Thanks Rene and Evan for passing on the idea.


Google Cloud’s Second Region in Australia

Google Cloud Platform (GCP) has extended its reach in Australia and New Zealand (ANZ) with a second region in Melbourne.

Why does this matter?

Having two regions inside Australia allows customers to extend their architecture for highly available or disaster-recoverable solutions. Google now joins Azure (which has three) in having multiple regions inside Australia, and no doubt we will keep a close eye on AWS, who were the first public cloud provider to enter Sydney many moons ago.

What’s different about Google’s Regions?

The distinguishing network feature that sets GCP apart from its rivals is how it lets customers design their Virtual Private Cloud (VPC). GCP allows subnets in a single VPC to span as many regions as you'd like, so you can create a single globally distributed VPC with subnets in the Americas, Asia and Europe, or build a logical DMZ zone with subnets in each region for your globally distributed web services. This is unlike the other cloud providers, whose software-defined networks are region-specific and require peering that incurs bandwidth usage and connection charges. Thoughts go through my head as to how you would build a globally distributed VPC with local on-ramp for each region's on-premises networks. Nonetheless, letting your traffic from Asia to Europe traverse Google's backhaul could make life easier. The devil is in the detail, though: also unique to Google are the cross-region VM-VM egress charges, which amount to much the same thing as peering charges, just putting the price on a different object. All things to think about carefully when planning your cloud deployments. Maybe in some circumstances, based on the pricing model, Google outweighs the other heavyweight hitters.

What will be interesting to see is whether, with Sydney established and Melbourne onboarding soon, the VM-VM egress pricing will be updated to cover egress between Google Cloud regions within Australia. Currently, going by what is written on the tin, I'd assume it falls under 'Egress between Google Cloud regions within Oceania (per GB)' at $0.08, where Oceania includes Australia, New Zealand, and surrounding Pacific Ocean islands such as Papua New Guinea and Fiji. This region excludes Hawaii.
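
As a rough worked example: replicating, say, 500 GB a month between Melbourne and Sydney at that rate would cost about 500 GB × $0.08/GB = $40 per month in inter-region egress.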

Google has a nice Google Cloud inter-region latency and throughput pivot table that gets its metrics from a PerfKit test. The current lowest-latency link is to asia-southeast1 (Singapore) at ~92 ms; Melbourne to Sydney will definitely knock the socks off that within ANZ.

Google Cloud Inter-Region Latency and Throughput › Inter-region latency and throughput

Google's VPC network example (here)


Anatomy of a Successful Cloud Migration

Free eBook Download


“Through 2024, 80% of companies that are unaware of the mistakes made in their cloud adoption will overspend by 20 to 50%.” (4 Lessons Learned From Cloud Infrastructure Adopters – gartner.com)

It’s 2021 and you’re ready to move to the cloud. But first, you’ll want to do your research.

This eBook will walk you through:

  • Challenges faced by individuals within an organisation when migrating to the cloud
  • How to solve these challenges efficiently and effectively with powerful tools
  • How to best align business and IT goals to make sure everything runs smoothly

With years of experience working with Government and Enterprise organisations at the State and Local level, we want to share the lessons we've learnt. From procurement to funding to security challenges, we've covered everything you need to know to migrate to the cloud successfully.

Complete the form for instant access to this eBook.


SQL Database Backup on IaaS using Azure Automation

I had a need to take a full SQL database backup from a virtual machine with SQL Server hosted on Azure, via an Azure Automation account executing a runbook on a hybrid worker. This is a great way to take an offline copy of your production SQL and store it someplace safe.

To accomplish this we will use the PowerShell module 'sqlps', which should be installed with SQL Server, and run the command Backup-SqlDatabase.

Backup-SqlDatabase (SqlServer) | Microsoft Docs

Store SQL Storage Account Credentials

Before we can run the Backup-SqlDatabase command, we must have a saved credential stored in SQL for the Storage Account, created using New-SqlCredential.

New-SqlCredential (SqlServer) | Microsoft Docs

Import-Module sqlps
# set parameters
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccountName>"  
$storageKey = "<storageAccountKey>"  
$secureString = ConvertTo-SecureString $storageKey -AsPlainText -Force  
$credentialName = "azureCredential-"+$storageAccount

Write-Host "Generate credential: " $credentialName
  
#cd to sql server and get instances  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and create a SQL credential, output any errors
foreach ($instance in $instances) {
    try {
        $path = "$($sqlPath)\$($instance.DisplayName)\credentials"
        New-SqlCredential -Name $credentialName -Identity $storageAccount -Secret $secureString -Path $path -ea Stop | Out-Null
        Write-Host "...generated credential $($path)\$($credentialName)."
    }
    catch { Write-Host $_.Exception.Message }
}

Backup SQL Databases with an Azure Runbook

The runbook below runs against the instances found on the server (the DEFAULT instance in my case) and excludes both tempdb and model from the backup.

Import-Module sqlps
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccount>"  
$blobContainer = "<containerName>"  
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"  
$credentialName = "azureCredential-"+$storageAccount
$prefix = Get-Date -Format yyyyMMdd

Write-Host "Generate credential: " $credentialName

Write-Host "Backup database: " $backupUrlContainer
  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and backup all databases (excluding tempdb and model)
foreach ($instance in $instances) {
    $path = "$($sqlPath)\$($instance.DisplayName)\databases"
    $databases = Get-ChildItem -Force -Path $path | Where-Object {$_.name -ne "tempdb" -and $_.name -ne "model"}

    foreach ($database in $databases) {
        try {
            $databasePath = "$($path)\$($database.Name)"
            Write-Host "...starting backup: " $databasePath
            $fileName = $prefix+"_"+$($database.Name)+".bak"
            $backupFileURL = $backupUrlContainer+$fileName
            Write-Host "...backup URL: " $backupFileURL
            Backup-SqlDatabase -Database $database.Name -Path $path -BackupFile $backupFileURL -SqlCredential $credentialName -Compression On
            Write-Host "...backup complete."
        }
        catch { Write-Host $_.Exception.Message }
    }
}


NOTE: You will notice a performance hit on the SQL Server, so schedule this runbook in a maintenance window.
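
To run it automatically in that window, the runbook can be linked to an Automation schedule targeting the hybrid worker group. A sketch using the Az.Automation module; all names below are placeholders.

# Hypothetical example: weekly 2 AM Sunday run on a hybrid worker group
$schedule = New-AzAutomationSchedule -ResourceGroupName "rg-automation" `
    -AutomationAccountName "aa-sqlbackup" -Name "WeeklySqlBackup" `
    -StartTime (Get-Date "02:00").AddDays(1) -WeekInterval 1 `
    -DaysOfWeek Sunday -TimeZone "Australia/Sydney"

Register-AzAutomationScheduledRunbook -ResourceGroupName "rg-automation" `
    -AutomationAccountName "aa-sqlbackup" -RunbookName "Backup-SqlDatabases" `
    -ScheduleName $schedule.Name -RunOn "HybridWorkerGroup01"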


Deploy Craft CMS with Azure App Service for Linux Containers

Here are some key points for deploying a Craft CMS installation on an Azure Web App using container images. In this blog we will step through some of the modifications needed to make the container image run in Azure, and the deployment steps to run it in an Azure DevOps pipeline.

Craft CMS has reference material for its Docker deployments here:
GitHub – craftcms/docker: Craft CMS Docker images

Components

The components required are:

  • Azure Web App for Linux Containers
  • Azure Database for MySQL
  • Azure Storage Account
  • Azure Front Door with WAF
  • Azure Container Registry

Custom Docker Image

To make this work in an Azure Web App we have to do the following additional steps:

  • Install OpenSSH and enable the SSH daemon on port 2222 at startup
  • Set the password for root to "Docker!"
  • Install the Azure Database for MySQL root certificate for SSL connections from the container

We do this in the Dockerfile. We are customising the NGINX implementation of Craft CMS so that the front end can service the HTTP/HTTPS requests from the App Service.

# composer dependencies
FROM composer:1 as vendor
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --ignore-platform-reqs --no-interaction --prefer-dist

FROM craftcms/nginx:7.4
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
USER root
RUN apk add openssh sudo \
     && echo "root:Docker!" | chpasswd 
# Copy the sshd_config file to the /etc/ directory
COPY sshd_config /etc/ssh/
COPY start.sh /etc/start.sh
COPY BaltimoreCyberTrustRoot.crt.pem /etc/BaltimoreCyberTrustRoot.crt.pem 
RUN ssh-keygen -A
RUN addgroup sudo
RUN adduser www-data sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# the user is `www-data`, so we copy the files using the user and group
USER www-data
COPY --chown=www-data:www-data --from=vendor /app/vendor/ /app/vendor/
COPY --chown=www-data:www-data . .

EXPOSE 8080 2222
ENTRYPOINT ["sh", "/etc/start.sh"]

The corresponding 'start.sh':

#!/bin/bash
sudo /usr/sbin/sshd &
/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisor.conf

Build the Web App

The Azure Web App resource is deployed using an ARM template. Here is a snippet of the template; the key is to have your environment variables defined:

{
            "comments": "This is the docker web app running craftcms/custom Docker image",
            "type": "Microsoft.Web/sites",
            "name": "[parameters('siteName')]",
            "apiVersion": "2020-06-01",
            "location": "[parameters('location')]",
            "tags": "[parameters('tags')]",
            "dependsOn": [
                "[variables('hostingPlanName')]",
                "[variables('databaseName')]"
            ],
            "properties": {
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "DOCKER_REGISTRY_SERVER_URL",
                            "value": "[reference(variables('registryResourceId'), '2019-05-01').loginServer]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_USERNAME",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').username]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').passwords[0].value]"
                        },
                        {
                            "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
                            "value": "false"
                        },
                        {
                            "name": "DB_DRIVER",
                            "value": "mysql"
                        },
                        {
                            "name": "DB_SERVER",
                            "value": "[reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName]"
                        },
                        {
                            "name": "DB_PORT",
                            "value": "3306"
                        },
                        {
                            "name": "DB_DATABASE",
                            "value": "[variables('databaseName')]"
                        },
                        {
                            "name": "DB_USER",
                            "value": "[variables('databaseUserName')]"
                        },
                        {
                            "name": "DB_PASSWORD",
                            "value": "[parameters('administratorLoginPassword')]"
                        },
                        {
                            "name": "DB_SCHEMA",
                            "value": "public"
                        },
                        {
                            "name": "DB_TABLE_PREFIX",
                            "value": ""
                        },
                        {
                            "name": "SECURITY_KEY",
                            "value": "[parameters('cmsSecurityKey')]"
                        },
                        {
                            "name": "WEB_IMAGE",
                            "value": "[parameters('containerImage')]"
                        },
                        {
                            "name": "WEB_IMAGE_PORTS",
                            "value": "80:8080"
                        }

                    ],
                    "linuxFxVersion": "[variables('linuxFxVersion')]",
                    "scmIpSecurityRestrictions": [
                        
                    ],
                    "scmIpSecurityRestrictionsUseMain": false,
                    "minTlsVersion": "1.2",
                    "scmMinTlsVersion": "1.0"
                },
                "name": "[parameters('siteName')]",
                "serverFarmId": "[variables('hostingPlanName')]",
                "httpsOnly": true      
            },
            "resources": [
                {
                    "apiVersion": "2020-06-01",
                    "name": "connectionstrings",
                    "type": "config",
                    "dependsOn": [
                        "[resourceId('Microsoft.Web/sites/', parameters('siteName'))]"
                    ],
                    "tags": "[parameters('tags')]",
                    "properties": {
                        "dbstring": {
                            "value": "[concat('Database=', variables('databaseName'), ';Data Source=', reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName, ';User Id=', parameters('administratorLogin'),'@', variables('serverName'),';Password=', parameters('administratorLoginPassword'))]",
                            "type": "MySQL"
                        }
                    }
                }
            ]
        },

All other resources should be ARM defaults; no customisation required. Either put them all in a single ARM template or separate them out on their own. Your choice to be creative.
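
Before wiring the templates into the pipeline, it's worth validating them locally. A sketch using the Az.Resources module; the resource group name is a placeholder, and the file names match the pipeline variables below.

# Hypothetical validation run against the target resource group
Test-AzResourceGroupDeployment -ResourceGroupName "rg-craftcms" `
    -TemplateFile .\singleCraftCMSTemplate.json `
    -TemplateParameterFile .\singleCraftCMSTemplate.parameters.json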

Build Pipeline

The infrastructure build pipeline looks something like the below:

# Infrastructure pipeline
trigger: none

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  CMSSINGLE: 'singleCraftCMSTemplate.json'
  CMSSINGLEPARAM: 'singleCraftCMSTemplate.parameters.json'
  CMSFILEREG: 'ContainerRegistry.json'
  CMSFRONTDOOR: 'frontDoor.json'
  CMSFILEREGPARAM: 'ContainerRegistry.parameters.json'
  CMSFRONTDOORPARAM: 'frontDoor.parameters.json'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  AZURECLISPID: ''
  TENANTID: ''
  RGNAME: ''
  TOKEN: ''
  ACS : 'registryName.azurecr.io'
resources:
  repositories:
    - repository: coderepo
      type: git
      name: Project/craftcms
stages:
- stage: BuildContainerRegistry
  displayName: BuildRegistry
  jobs:
  - job: BuildContainerRegistry
    displayName: Azure Git Repository
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: AzureFileCopy@4
      inputs:
        SourcePath: '$(Build.Repository.LocalPath)\CMS\template\*'
        azureSubscription: ''
        Destination: 'AzureBlob'
        storage: ''
        ContainerName: 'templates'
        BlobPrefix: portal
        AdditionalArgumentsForBlobCopy: --recursive=true
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM template for Azure Container Registry"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: 'azureDeployCLI-SP'
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFILEREG)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFILEREGPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Import the public docker images to the Azure Container Repository'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\dockerImages.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'

- stage: BuildGeneralImg
  dependsOn: BuildContainerRegistry
  displayName: BuildImages
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: craftcms
        command: buildAndPush
        dockerfile: 'craftcms/Dockerfile'
        tags: |
          craftcms
          latest

- stage: Deploy 
  dependsOn: BuildGeneralImg
  displayName: DeployWebService
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for remaining assets"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSSINGLE)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSSINGLEPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'

- stage: Secure 
  dependsOn: Deploy
  displayName: DeployFrontDoor
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for Front Door"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFRONTDOOR)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFRONTDOORPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Apply Front Door service tags to Web App ACLs'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\enableFrontDoorOnWebApp.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'    

Enable Front Door with WAF

The pipeline stage DeployFrontDoor runs enableFrontDoorOnWebApp.ps1:

$azFrontDoorName = ""
$webAppName = ""
$resourceGroup = ""

Write-Host "INFO: Restrict access to a specific Azure Front Door instance"
try{
    $afd = Get-AzFrontDoor -Name $azFrontDoorName -ResourceGroupName $resourceGroup
}
catch{
    Write-Host "ERROR: $($_.Exception.Message)"
}

Write-Host "INFO: Setting the IP ranges defined in the AzureFrontDoor.Backend service tag to the Web App"
try{
    Add-AzWebAppAccessRestrictionRule -ResourceGroupName $resourceGroup -WebAppName $webAppName -Name "Front Door Restrictions" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend -HttpHeader @{'x-azure-fdid' = $afd.FrontDoorId}
}
catch{
    Write-Host "ERROR: $($_.Exception.Message)"
}


You should now have a Craft CMS web app that is only available through the Front Door URL.
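
A quick way to verify the restriction (a sketch assuming PowerShell 7 for -SkipHttpErrorCheck; the host names are placeholders):

# Direct hits on the Web App should now be blocked; Front Door should serve the site
(Invoke-WebRequest -Uri "https://mycraftapp.azurewebsites.net" -SkipHttpErrorCheck).StatusCode   # expect 403
(Invoke-WebRequest -Uri "https://mycraftapp.azurefd.net").StatusCode                             # expect 200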

Continuous Deployment

There are many ways to deploy updates to your website; an Azure Web App has a beautiful thing called slots that can be used. The pipeline below builds the latest image and updates the test slot, and a slot-swap sketch follows it.

# Trigger on commit
# Build and push an image to Azure Container Registry
# Update Web App Slot

trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - pipelines
      - README.md
  batch: true

resources:
- repo: self

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  RGNAME: ''
  TOKEN: ''
  SASTOKEN: ''
  TAG: '$(Build.BuildId)'
  CONTAINERREGISTRY: 'registryName.azurecr.io'
  IMAGEREPOSITORY: 'craftcms'
  APPNAME: ''

stages:
- stage: BuildImg
  displayName: BuildLatestImage
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: $(IMAGEREPOSITORY)
        command: buildAndPush
        dockerfile: 'Dockerfile'
        tags: |
          $(IMAGEREPOSITORY)
          $(TAG)


- stage: UpdateApp 
  dependsOn: BuildImg
  displayName: UpdateTestSlot
  jobs:
  - job:
    displayName: 'Update Web App Slot'
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: AzureWebAppContainer@1
      displayName: 'Update Web App Container Image Reference' 
      inputs:
        azureSubscription: ''
        appName: $(APPNAME)
        containers: $(CONTAINERREGISTRY)/$(IMAGEREPOSITORY):$(TAG)
        deployToSlotOrASE: true
        resourceGroupName: $(RGNAME)
        slotName: test
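
Once the test slot has been validated, promoting it is a slot swap. A sketch using the Az.Websites module (resource names are placeholders), which could equally run as an AzurePowerShell@5 task at the end of the pipeline:

# Hypothetical example: swap the validated 'test' slot into production
Switch-AzWebAppSlot -ResourceGroupName "rg-craftcms" -Name "mycraftapp" `
    -SourceSlotName "test" -DestinationSlotName "production"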