AWS EventBridge Triggering SSM Automation IAM Role Error

I recently wanted to create an Amazon EventBridge rule that runs an SSM Automation document on a schedule.

A rule watches for certain events (a cron schedule in my case) and then routes them to AWS targets that you choose. You can create a rule that performs an AWS action automatically when another AWS action happens, or a rule that performs an AWS action regularly on a set schedule.

EventBridge needs permission to call SSM StartAutomationExecution with the supplied Automation document and parameters. When you create the rule, the console offers to generate a new IAM role for this task.

In my case I received the following error:

Error Output

The Automation definition for an SSM Automation target must contain an AssumeRole that evaluates to an IAM role ARN.

If you receive this error, you can create the role manually using the following CloudFormation template.

AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template IAM Roles for Event Bridge | SSM Automation

Resources:
  AutomationServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - events.amazonaws.com
          Action: sts:AssumeRole
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole
      Path: "/"
      RoleName: EventBridgeAutomationServiceRole
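
Once the role exists, you reference it on the rule's target. Below is a minimal sketch in Python/boto3; the rule name, schedule, document name, account ID and parameters are placeholders, so substitute your own:

import json
import boto3

# Sketch: create a scheduled rule and point it at an Automation document,
# passing the role created by the CloudFormation template above.
events = boto3.client("events", region_name="ap-southeast-2")

events.put_rule(
    Name="nightly-automation",
    ScheduleExpression="cron(0 18 * * ? *)",  # 18:00 UTC daily
)

events.put_targets(
    Rule="nightly-automation",
    Targets=[{
        "Id": "ssm-automation",
        # SSM Automation targets are addressed by automation-definition ARN
        "Arn": "arn:aws:ssm:ap-southeast-2:111122223333:automation-definition/MyAutomationDocument:$DEFAULT",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeAutomationServiceRole",
        # Document parameters, each value supplied as a list of strings
        "Input": json.dumps({"InstanceId": ["i-0123456789abcdef0"]}),
    }],
)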

Migrate to AWS EC2 with SQL licensing included

While performing a lift-and-shift migration of a Windows SQL Server using the AWS Application Migration Service, I was challenged with wanting the newly migrated instance to have a Windows OS license ‘included’ but additionally the SQL Server Standard license billed to the account. The customer was moving away from their current hosting platform, where both licenses were covered under SPLA. Rather than going to a license reseller and purchasing SQL Server, it was preferred to have all the Windows OS and SQL Server software licensing paid through their AWS account.

In the Application Migration Service, under Launch settings > Operating System Licensing, we can see that all we have is an OS license option to toggle between license-included and BYOL.

Choose whether you want to Bring Your Own Licenses (BYOL) from the source server into the Test or Cutover instance. This defines whether the launched test or cutover instance will include the license for the operating system (License-included), or if the licensing will be based on that of the migrated server (BYOL: Bring Your Own License).

If we review a migrated instance where ‘license-included’ was selected during launch, using PowerShell on the instance itself we see only a single ‘BillingProduct = bp-6ba54002’ for Windows:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002 
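
If IMDSv1 is disabled on the instance, the same check works over IMDSv2 with a session token. A minimal sketch in Python:

import json
import urllib.request

# IMDSv2: fetch a short-lived session token, then read the identity document.
token_request = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

document_request = urllib.request.Request(
    "http://169.254.169.254/latest/dynamic/instance-identity/document",
    headers={"X-aws-ec2-metadata-token": token},
)
document = json.loads(urllib.request.urlopen(document_request).read())
print(document.get("billingProducts"))  # e.g. ['bp-6ba54002']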

AWS Preferred Approach

There are lots of options for migrating SQL Server to AWS, so we weren’t without choices.

  1. Leverage the AWS Database Migration Service (DMS) to migrate the on-premises Windows SQL Server to the Relational Database Service (RDS).
  2. Leverage the AWS Database Migration Service (DMS) to migrate the on-premises Windows SQL Server to an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing.
  3. Leverage SQL Server native tooling between the on-premises Windows SQL Server and an AWS EC2 instance provisioned from a Marketplace AMI which includes SQL licensing, using one of:
    1. Native backup and restore
    2. Log shipping
    3. Database mirroring
    4. Always On availability groups
    5. Basic Always On availability groups
    6. Distributed availability groups
    7. Transactional replication
    8. Detach and attach
    9. Import/export

The only concern our customer had with all the above approaches was that there was technical configuration on the source server that wasn’t well understood. The risk of reimplementing on a new EC2 instance and missing configuration was perceived to be high.

Solution

The solution was to create a new EC2 instance from an AWS Marketplace AMI that we would like to be billed for. In my case I chose ‘Microsoft Windows Server 2019 with SQL Server 2017 Standard – ami-09ee4321c0e1218c3’.

The procedure is to detach all the volumes (including root) from the migrated EC2 instance that has all the lovely SQL data and attach them to the newly created instance, which has the updated BillingProducts of ‘bp-6ba54002’ for Windows and ‘bp-6ba54003’ for SQL Standard assigned to it.

If we review a Marketplace EC2 instance where SQL Server Standard was selected, using PowerShell on the instance:

((Invoke-WebRequest http://169.254.169.254/latest/dynamic/instance-identity/document).Content | ConvertFrom-Json).billingProducts

bp-6ba54002
bp-6ba54003 

How will it work?

This process will require a brief outage, as both EC2 instances have to be stopped to detach and re-attach the volumes. This all happens pretty fast, so expect it to last only a minute or so.

NOTE: The primary ENI cannot be changed, so there will be an IP swap. Be aware of any DNS updates you may need to make afterwards so that the SQL Server remains resolvable by hostname to other servers.
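
If the hostname lives in Route 53, the post-cutover DNS fix can be scripted as well. A sketch with boto3, where the hosted zone ID, record name and IP are placeholders for your own values:

import boto3

# Sketch: repoint the SQL Server's A record at the new instance's private IP.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder
    ChangeBatch={
        "Comment": "Repoint SQL Server hostname after volume swap",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "sql01.example.internal.",  # placeholder
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "10.0.1.50"}],  # new private IP
            },
        }],
    },
)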

The high-level process of the script:

  1. Get Original Instance EBS mappings
  2. Stop the instances
  3. Detach the volumes from both instances
  4. Add the Original Instance’s EBS mappings to the New Instance
  5. Tag the New Instance with the Original Instance’s tags
  6. Tag the New Instance with the tag ‘Key=convertedFrom’ and ‘Value=<Original Instance ID>’
  7. Update the Name tag on the Original Instance with ‘Key=Name’ and ‘Value=<OldValue>.old’
  8. Update the Original Instance tags with its original BlockMapping for reference e.g. ‘Key=xvdc’ and ‘Value=vol-0c2174621f7fc2e4c’
  9. Start the New Instance

After the script completes, the Original Instance is renamed with a ‘.old’ suffix and tagged with its original block mappings for reference, the New Instance carries the Original Instance’s tags, and the original volumes are attached to the New Instance.

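# Swaps the EBS volumes (including root) from the migrated instance onto the
# Marketplace instance carrying the SQL billing product: stop both instances,
# detach and re-attach the volumes, copy the tags across, and record the
# original block mappings as tags for failback.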
$orginalInstanceID = "i-0ca332b0b062dbe76"
$newInstanceID = "i-0ce3eeadfa27e2f64"
$AccessKey = ""
$Secret = ""
$Region = "ap-southeast-2"

If (!(get-module -ListAvailable | ? {$_.Name -like "*AWS.Tools.EC2*"}))
{                
    Write-Output "WARNING: EC2 AWS Modules Not Installed Yet..." 
    Exit
}
$getModuleResults = Get-Module "AWS.Tools.EC2"
If (!$getModuleResults) 
{
    Write-Output "INFO: Loading AWS Module..."
    Import-Module AWS.Tools.Common -ErrorAction SilentlyContinue -Force
    Import-Module AWS.Tools.EC2 -ErrorAction SilentlyContinue -Force
}
else{
    Write-Output "INFO: AWS Module Already Loaded"
}

Set-AWSCredential -AccessKey $AccessKey -SecretKey $Secret
Set-DefaultAWSRegion -Region $Region
Write-Output "INFO: Getting details $($orginalInstanceID)"
$originalInstance = (Get-EC2Instance -InstanceId $orginalInstanceID).Instances
$orginalBlockMappings = $originalInstance.BlockDeviceMappings
$originalVolumes = @()
Write-Output "INFO: Getting EBS volumes from $($orginalInstanceID)"
ForEach($device in $orginalBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $originalVolumes += $Object
}
Write-Output $originalVolumes | Format-Table
$tempInstance = (Get-EC2Instance -InstanceId $newInstanceID).Instances
$tempBlockMappings = $tempInstance.BlockDeviceMappings
$tempVolumes = @()
Write-Output "INFO: Getting details $($newInstanceID)"
ForEach($device in $tempBlockMappings){
    $Object = New-Object System.Object
    #Get EBS volumes for the machine
    $Object | Add-Member -type NoteProperty -name "DeviceName" -Value $device.DeviceName
    $Object | Add-Member -type NoteProperty -name "VolumeId" -Value $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "Status" -Value $device.ebs.Status
    $volume = Get-EC2Volume -VolumeId $device.ebs.VolumeId
    $Object | Add-Member -Type NoteProperty -name "AvailabilityZone" -Value $volume.AvailabilityZone
    $Object | Add-Member -Type NoteProperty -name "Iops" -Value $volume.Iops
    $Object | Add-Member -Type NoteProperty -name "CreateTime" -Value $volume.CreateTime
    $Object | Add-Member -Type NoteProperty -name "Size" -Value $volume.Size
    $Object | Add-Member -Type NoteProperty -name "VolumeType" -Value $volume.VolumeType
    $tempVolumes += $Object
}
Write-Output $tempVolumes | Format-Table
#Lets do the work
Write-Output "INFO: Stop the instance $($orginalInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $orginalInstanceID -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $orginalInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}
Write-Output "INFO: Stop the instance $($newInstanceID)...."
try{
    Stop-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'stopped'){
    Write-Verbose "INFO: Waiting for instance to stop..."
    Start-Sleep -s 10
}

Write-Output "INFO: detaching the EBS volumes from $($orginalInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $orginalInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: detaching the EBS volumes from $($newInstanceID)...."
ForEach($volume in $tempVolumes){
    try{
        Dismount-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Migrating $($orginalInstanceID) to $($newInstanceID) with $($originalVolumes.Count) connected volumes"
Write-Output "INFO: attaching the EBS volumes to $($newInstanceID)...."
ForEach($volume in $originalVolumes){
    try{
        Add-EC2Volume -VolumeId $volume.VolumeId -InstanceId $newInstanceID -Device $volume.DeviceName -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
        exit
    }
}

Write-Output "INFO: Tagging the $($newInstanceID) with original instance tags"
$orginalInstanceTags = $originalInstance.tags
ForEach($T in $orginalInstanceTags){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $T.Key
        $value = $T.Value
        $tag.Value = $value
        New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Try{
    $tag = New-Object Amazon.EC2.Model.Tag
    $tag.Key = "convertedFrom"
    $value = $orginalInstanceID
    $tag.Value = $value
    New-EC2Tag -Resource $newInstanceID -Tag $tag -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
}

Write-Output "INFO: Marking the $($orginalInstanceID) as old"
$orginalInstanceName = ($originalInstance.tags | ? {$_.Key -like "Name"}).Value
If($orginalInstanceName){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = "Name"
        $value = $orginalInstanceName+".old"
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Tagging the $($orginalInstanceID) with original volumes for failback"
ForEach($device in $orginalBlockMappings){
    try{
        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = $device.DeviceName
        $value = $device.ebs.VolumeId
        $tag.Value = $value
        New-EC2Tag -Resource $orginalInstanceID -Tag $tag -ErrorAction Stop
    }catch{
        Write-Output "ERROR: $_"
    }
}

Write-Output "INFO: Starting the instance $($newInstanceID) with newly attached drives...."
try{
    Start-EC2Instance -InstanceId $newInstanceID -Force -ErrorAction Stop
}catch{
    Write-Output "ERROR: $_"
    exit
}
While((Get-EC2Instance -InstanceId $newInstanceID).Instances[0].State.Name -ne 'running'){
    Write-Verbose "INFO: Waiting for instance to start..."
    Start-Sleep -s 10
}
$filterENI = New-Object Amazon.EC2.Model.Filter -Property @{Name = "attachment.instance-id"; Values = $newInstanceID}
$newInterface = Get-EC2NetworkInterface -Filter $filterENI
Write-Output "INFO: Conversion complete to $($newInstanceID)"
Write-Output "SUCCESS: Try logging into $($newInterface.PrivateIpAddress)"

Thanks Rene and Evan for passing on the idea.


Fault Tolerant Multi AZ EC2, On a beer budget – live from AWS Meetup

Filmed on the 18th of March 2021 at the Adelaide AWS User Group, where Arran Peterson presented on how to put together best-practice (and cheap!) cloud architecture for business continuity. The title:

“Enterprise grade fault tolerant multi-AZ server deployment on a beer budget”

Recording

RATED ‘PG’ – Mild coarse language.

Presenter

Arran Peterson

Arran is an Infrastructure Consultant with a passion for Microsoft Unified Communications and the true flexibility and scalability of cloud-based solutions.
As a Senior Consultant, Arran brings his expertise in enterprise environments to work with clients on the Microsoft Unified Communications product portfolio of Office 365, Exchange and Skype/Teams, along with expertise in transitioning to cloud-based platforms including AWS, Azure and Google.

More Reading

Amazon Elastic Block Store

https://aws.amazon.com/ebs/

AWS Sydney outage prompts architecture rethink

https://www.itnews.com.au/news/aws-sydney-outage-prompts-architecture-rethink-420506

Chalice Framework

https://aws.github.io/chalice/

Adelaide AWS User Group

https://www.meetup.com/en-AU/Amazon-Web-Services-User-Group-Adelaide/events/276728885/


Pimp my VS Code

Those who know me, know that I have a keen interest in software tools and exploring the various different ways that people use them. I take great joy in exploring custom or 3rd party plugins and add-ons to get the most out of the tools I use every day. From OS automation tools (like BetterTouchTool) to custom screen savers (Brooklyn is my current favourite), I love it all.

On a good day, I spend quite a bit of time in Visual Studio Code, my IDE of choice. VS Code has all that you need right out of the box, but why stop there? Here’s a list of some of my favourite VS Code extensions that I now consider essential when doing a fresh install.

Indent-Rainbow and Bracket Pair Colorizer 2 are must-installs for me. Both are really simple: they change the colours of indents and brackets so you can easily see them at a glance. Always useful when working with indent-heavy languages like YAML.

GitLens is another essential if you are working with Git repositories. GitLens integrates lots of Git tools and information into the editor. My favourite feature of GitLens is the current line blame: an unobtrusive annotation at the end of the current line as you select it, showing commit information for that piece of code.

Beautify helps you make your code beautiful. Beautify can automatically indent JavaScript, JSON, CSS, and HTML.

Better Comments makes your comments human-readable by changing the colour of comments based on an opening tag. You can even define your own.

Source: Better Comments Documentation in Visual Studio Code

Next up, some extensions that I install to match the work I’m doing. In my day to day work, I’m regularly authoring infrastructure templates for Azure and AWS (ARM and CloudFormation). To assist with making this as simple as possible I install some specific extensions for syntax highlighting, autocompletion and even do some code snippet referencing.

Azure Resource Manager (ARM) Tools is a collection of extensions for working with Azure made by Microsoft. This one has lots of features so I’ll just pick a few. You can use the ‘arm!’ shortcut to create a blank ARM template with all the properties you need. This one makes life so much better: spend less time lining up brackets in JSON and more time defining resources!

Image showing the arm template scaffolding snippet
Source: Azure Resource Manager (ARM) Tools Documentation in Visual Studio Code

Each time you use a snippet, you can also use tab complete to go through commonly modified configurations. Again, less time reading documentation, more time writing code!

Image showing snippet tab stops with pre-populated parameter values
Source: Azure Resource Manager (ARM) Tools Documentation in Visual Studio Code

CloudFormation Template Linter and CloudFormation Resource Snippets add some similar functionality for working with AWS CloudFormation templates. While neither of these is created by Amazon, they both do a good job of implementing similar functionality to the above ARM tools.

Next up is one of my new favourites, Dash. Sorry Windows gurus, this one’s only on Mac. Dash is an API documentation browser which hooks into VS Code to quickly search documentation (from their 200+ built-in doc sets, or add your own GitHub doc sets). Sounds boring, but I think it’s far from it. I’ve loaded mine up with lots of Microsoft Azure and AWS documentation. It’s really handy to be able to highlight a resource type or PowerShell command, hit control + H and have the document reference instantly pop up; each time it saves me minutes.

Dash - Visual Studio Marketplace
Source: Dash Documentation in Visual Studio Code

Finally, for my icon and colour theme I use vscode-icons and Atom One Dark. This really comes down to personal preference. I like the syntax colour coding included in the Atom One Dark theme; I find it useful especially when writing PowerShell. vscode-icons is the most popular icon extension, and I’ve had no issues since installation.

Source: Atom One Dark Theme Documentation in Visual Studio Code

That’s my round-up of my must-have extensions. Are there any missing from this list that you think should be here? Comment below with your must-have extensions.

Cheers, Joel


Cognito authentication integration with Django using authorization code grant

Note: AWS Cognito backend configuration and underlying concepts are assumed knowledge; this post mostly talks about the setup from an application integration perspective.

Recently we have been working on a Django project where a secure and flexible authentication system was required; as most of our existing infrastructure is on AWS, we chose Cognito as the backend.

Below are the steps we took to get this working and some insights learned on the way.

Django Warrant

The first attempt was using django_warrant; this is probably going to be the first thing that comes up when you google ‘how to django and cognito’.

Django_warrant works by injecting an authentication backend into Django which does some magic that allows your username/password to be submitted and checked against a configured user pool; on success it authenticates you and, if required, creates a stub Django user.

The basics of this were very easy to get working and integrated, but it had a few issues:

  • We still see username/password requests and have to send them on.
  • By default it can only be configured for one user pool.
  • It does not support federated identity provider workflows.
  • The GitHub project did not seem super active or updated.

Ultimately we chose not to use this module; however, inspiration was taken from its source code for some of the user handling we implemented later on.

Custom authorization_code workflow implementation

This involves using the Cognito hosted login form, which does both user pool and connected identity provider authentication (O365/Azure, Google, Facebook, Amazon).

The form can be customised with HTML, CSS and images and put behind a custom URL; other aspects of the process and events can be changed and reacted upon using triggers and Lambda.

Once you are authenticated in Cognito, it redirects you back to the page of your choosing (usually your application’s login page or a custom endpoint) with a set of tokens; using these tokens you then grab the authenticated user’s details and authenticate them within the context of your app.

The differences between authorization code grant and implicit grant are:

  • Implicit grant
    • Intended for client-side authentication (mostly JavaScript applications)
    • Sends both the id_token (JWT) and access_token in the redirect response
    • Sends the tokens after an #anchor so they are not seen by the web server
    • https://your-app/login#id_token=n&access_token=n
  • Authorization code grant
    • Intended for server-side authentication
    • Sends an authorization code in the redirect response
    • Sends this as a normal GET parameter
    • https://your-app/login?code=n
    • Your application holds a preconfigured secret
    • Code + secret get turned into an id_token and access_token via the oauth2/token endpoint

We chose to use the authorization code grant workflow; it takes a bit more effort to set up, but it is generally more secure and alleviates any hacky JavaScript shenanigans that would be needed to get implicit grant working with a Django server-based backend.

After these steps you can use boto3 or helpers to turn those tokens into a set of attributes (email, name, other custom attributes) kept by Cognito. Then you simply hook this up to your internal user/session logic by matching on your chosen attributes like email, username, etc.

I was unable to find any specific library support to handle some aspects of this, like the token handling in Python or the Django integration, so I have included some code which may be useful.

Code

This can be integrated into a view to get the user details from Cognito based on a token; the view will be sitting at the redirect URL that Cognito returns to.

import warrant
import cslib.aws

def tokenauth(request):
    authorization_code = request.GET.get("code")
    token_grabber = cslib.aws.CognitoToken(
        <client_id>,
        <client_secret>,
        <domain>,
        <redir>,
        <region>,  # optional, defaults to ap-southeast-2
    )

    id_token, access_token = token_grabber.get(authorization_code)

    if id_token and access_token:
        # This uses warrant (different than django_warrant)
        # A helper lib that wraps cognito
        # Plain boto3 can do this also.  
        cognito = warrant.Cognito(
            <user_pool_id>,
            <client_id>,
            id_token=id_token,
            access_token=access_token,
        )

        # Their lib is a bit broken: because we don't supply a username it won't
        # build a legit user object for us, so we reach into the cookie jar....
        # {'given_name': 'Joe', 'family_name': 'Smith', 'email': 'joe@jtwo.solutions'}
        data = cognito.get_user()._data
        return data
    else:
        return None
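
With the attributes returned above, hooking into Django’s own user and session logic can be a simple get-or-create. A sketch, assuming email is the matching attribute; adjust to suit your user model:

from django.contrib.auth import get_user_model, login

def cognito_login(request, data):
    # Match the Cognito attributes to a local Django user, creating a stub
    # user on first login (matching on email is an assumption).
    User = get_user_model()
    user, _created = User.objects.get_or_create(
        username=data["email"],
        defaults={
            "email": data["email"],
            "first_name": data.get("given_name", ""),
            "last_name": data.get("family_name", ""),
        },
    )
    # Name the backend explicitly since we authenticated outside Django.
    login(request, user, backend="django.contrib.auth.backends.ModelBackend")
    return user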

Class that handles the oauth2/token workflow; this is mysteriously missing from the boto3 library, which seems to handle everything else quite well…

from http.client import HTTPSConnection
from base64 import b64encode
import urllib.parse
import json

class CognitoToken(object):
    """
    Why you no do this boto3...
    """
    def __init__(self, client_id, client_secret, domain, redir, region="ap-southeast-2"):
        self.client_id = client_id
        self.client_secret = client_secret
        self.redir = redir
        self.token_endpoint = "{0}.auth.{1}.amazoncognito.com".format(domain, region)
        self.token_path = "/oauth2/token"

    def get(self, authorization_code):
        headers = {
            "Authorization" : "Basic {0}".format(self._encode_auth()),
            "Content-type": "application/x-www-form-urlencoded",
        }

        query = urllib.parse.urlencode({
                "grant_type" : "authorization_code",
                "client_id" : self.client_id,
                "code" : authorization_code,
                "redirect_uri" : self.redir,
            }
        )

        con = HTTPSConnection(self.token_endpoint)
        con.request("POST", self.token_path, body=query, headers=headers)
        response = con.getresponse()

        if response.status == 200:
            respdata = str(response.read().decode('utf-8'))
            data = json.loads(respdata)
            return (data["id_token"], data["access_token"])

        return None, None

    def _encode_auth(self):
        # Auth is a base64 encoded client_id:secret
        string = "{0}:{1}".format(self.client_id, self.client_secret)
        return b64encode(bytes(string, "utf-8")).decode("ascii")

Further reading