Earlier this week Amazon Web Services made an announcement indicating that the battle between tier-one public cloud providers is still heating up. Yesterday Matthew Graham (AWS Head of Security Assurance for Australia and New Zealand) announced that the Australian Cyber Security Centre (ACSC) had awarded PROTECTED certification to AWS for 42 of its cloud services.
The move appears tactical, coming hot on the heels of Microsoft announcing its PROTECTED-accredited Azure Central regions in the back half of last year. It clearly demonstrates that AWS isn't prepared to reduce the boil to a gentle simmer any time soon.
Graham announced “You will find AWS on the ACSC’s Certified Cloud Services List (CCSL) at PROTECTED for AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Key Management Service (AWS KMS), and Amazon GuardDuty.”
He continued: “We worked with the ACSC to develop a solution that meets Australian government security requirements while also offering a breadth of services so you can run highly sensitive workloads on AWS at scale. These certified AWS services are available within our existing AWS Asia-Pacific (Sydney) Region and cover service categories such as compute, storage, network, database, security, analytics, application integration, management and governance.”
Finally, he delivered a seemingly well-orchestrated jab: “Importantly, all certified services are available at current public prices, which ensures that you are able to use them without paying a premium for security.”
It is no secret that the blue team currently charges a premium for entry into their PROTECTED-level facility (upon completion of a lengthy eligibility assessment process) due to the finite capacity available.
Both vendors state that consumers must configure services in line with the guidance in the respective ACSC certification report and consumer guidelines. This highlights that additional security controls must be implemented to ensure workloads are secured head to toe whilst storing PROTECTED-level data. In other words, workloads are not automatically certified simply by virtue of consuming accredited services.
AWS has released the IRAP assessment reports under NDA within its Artifact repository. For more information, review the official press release here.
Amazon Web Services is a well-established cloud provider. In this blog, I am going to explore how we can interface with the orange cloud titan programmatically. First of all, let's explore why we might want to do this. You might be thinking, “But hey, the folks at AWS have built a slick web interface which offers all the capability I could ever need.” Whilst this is true, repetitive tasks quickly become onerous, and manual repetition introduces the opportunity for human error. That sounds like something we should avoid, right? After all, many of the core tenets of the DevOps movement are built on these principles (“to increase the speed, efficiency and quality of software delivery”, amongst others).
From a technology perspective, we achieve this by establishing automated services. This presents a significant speed advantage as automated processes are much faster than their manual counterparts. The quality of the entire release process improves because steps in the pipeline become standardised, thus creating predictable outcomes.
Here at cloudstep, this is one of our core beliefs when operating a cloud infrastructure platform. Simply put, the portal is a great place to look around and check reporting metrics; however, any services should be provisioned as code, once again to realise efficiency and improve overall quality.
“How do we go about this and what are some example use cases?”
So let's get into it… The first thing you'll want to do is walk through the process of aligning your operating environment with any mandatory prerequisites, then you can install the AWS CLI tools in a flavour of your choice. The process is well documented, so I won't cover it off here.
Once you have the tools installed, you will need to provide the CLI with a base level of configuration, which is stored in a profile of your choice. Running "aws configure" from a terminal of your choice is the fastest way to do this. Here you will provide IAM credentials to interface with your account, a default region and an output format. For the purpose of this example I've set my region to "ap-southeast-2" and my output format to "JSON".
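For reference, "aws configure" simply writes these values to two plain-text files in your home directory. A sketch of what they contain after answering the prompts (the access keys below are AWS's documented placeholder examples, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = ap-southeast-2
output = json
```

Named profiles land in the same files under their own section headers, which is handy when you juggle multiple accounts.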
From here I could run "aws ec2 describe-instances" to validate that my profile had been defined correctly within the AWS CLI tools. The expected return is a list of the EC2 instances hosted within my AWS account, as shown below.
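Trimmed heavily for brevity, the JSON response looks something like the following sketch (the instance ID and address are illustrative, and the real payload carries many more fields):

```json
{
    "Reservations": [
        {
            "Instances": [
                {
                    "InstanceId": "i-0abcd1234example",
                    "InstanceType": "t3.medium",
                    "State": { "Code": 16, "Name": "running" },
                    "PublicIpAddress": "203.0.113.25"
                }
            ]
        }
    ]
}
```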
This shouldn't take more than 5 minutes to get you up and running. However, don't stop here. The AWS CLI supports almost all of the capability found within the management portal. Therefore, if you're in an operations role and your company is investing in AWS in 2019, you should spend some time learning how to interface with services such as DynamoDB, EC2, S3/Glacier, IAM, SNS and SWF using the AWS CLI.
Let's have a look at a more practical example where automating a simple task can potentially save you hours each year. As a Mac user (you've probably already picked up on that), I often need to fire up a Windows PC for Visual Studio or Visio. AWS is a great fit for this: I simply fire up my machine when I need it and shut it down when I'm done. I pay them a couple of bucks a month for some storage costs and some compute hours, and I'm a happy camper. Simple, right?
Let's unpack it further. I am not only a happy camper, I'm also a lazy camper. Firing up my VM to do my day job means:
Opening my browser and navigating to the AWS management console
Authenticating to the console
Navigating to the EC2 service
Scrolling through a long list of instances looking for my jumpbox
Starting my VM
Waiting for the network interface to refresh so I can get the public IP for RDP purposes.
This is all getting too hard, right? All of this has to happen before I can even do my job, and sometimes I have to do it a few times each day. Maybe it's time to practice what I preach? I could automate all of this using the AWS Tools for PowerShell, running a script which saves me hours each year (employers love that). Whilst this example won't necessarily increase the overall quality of my work, it does provide me with a predictable outcome every single time.
For a measly 20 lines of PowerShell I was able to define an executable script which authenticates to the AWS EC2 service and checks the power state of the VM in question. If the VM is already running, it returns the connectivity details for my RDP client. If the VM is not running, it fires up my instance, waits for the NIC to refresh and then returns the connectivity details for my RDP client. I then have a script based on the same logic to shut down my VM to save money when I'm not using the service. All of this takes less than 5 seconds to execute.
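My actual script is PowerShell, but the same logic is easy to sketch in Python with boto3. The function below is an illustrative sketch, not my production script (the instance ID in the usage note is a placeholder); it accepts any EC2 client object, which keeps the logic easy to exercise without live credentials:

```python
def ensure_running(ec2, instance_id):
    """Start the instance if it isn't running, then return its public IP for RDP."""
    inst = ec2.describe_instances(InstanceIds=[instance_id])['Reservations'][0]['Instances'][0]
    if inst['State']['Name'] != 'running':
        ec2.start_instances(InstanceIds=[instance_id])
        # Block until the instance is up and its NIC has a fresh public IP
        ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])
        inst = ec2.describe_instances(InstanceIds=[instance_id])['Reservations'][0]['Instances'][0]
    return inst.get('PublicIpAddress')

# Usage (assumes boto3 is installed and credentials are configured):
#   import boto3
#   ec2 = boto3.client('ec2', region_name='ap-southeast-2')
#   print(ensure_running(ec2, 'i-0abcd1234example'))
```

A matching stop script is the same shape with stop_instances and the 'instance_stopped' waiter.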
The AWS CLI tools provide an interface for interacting with the cloud provider programmatically. In this simple example we looked at automating a manual process, which has the potential to save hours each year whilst also ensuring a predictable outcome on every execution. Each of the serious public cloud players offers similar capability. If you are looking to increase your overall efficiency, improve the quality of your work and automate monotonous tasks, consider investing some effort into learning how to interface with your cloud provider of choice programmatically. You will be surprised how many repetitive tasks you can bowl over when you maximise the usage of the tools available to you.
AWS WorkSpaces VDI solution has two pricing options that you need to choose between for your implementation:
Monthly (Always-On)
Hourly (On-Demand)
In my opinion it is always worth attempting to run your WorkSpaces VDI deployment on-demand, where there is a chance of cost savings: when the virtual desktops are turned off, you are not charged the hourly rate.
With hourly billing you pay a small fixed monthly fee per WorkSpace to cover infrastructure and storage costs, plus a low hourly rate for each hour the WorkSpace is used during the month. Hourly billing works best when Amazon WorkSpaces are used, on average, for less than a full working day or for just a few days a month, making it ideal for part-time workers, job sharing, road warriors, short-term projects, corporate training, and education.
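As a rough sketch of the break-even maths (all prices below are hypothetical placeholders, not AWS's actual rates), you can work out how many hours of use per month a WorkSpace needs before the monthly Always-On option becomes the cheaper choice:

```python
def break_even_hours(monthly_price, hourly_base_fee, hourly_rate):
    """Hours of use per month at which hourly billing costs the same as monthly."""
    return (monthly_price - hourly_base_fee) / hourly_rate

# Hypothetical rates: $35/month Always-On vs $9.75/month base fee + $0.30/hour
hours = break_even_hours(35.00, 9.75, 0.30)
print(round(hours, 1))  # 84.2 - below this many hours, hourly billing wins
```

Plug in the current prices for your region to see where your users actually sit.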
Turning the VDI off is handled by AWS using a per-VDI setting called Running Mode:
Always On – billed monthly. Instant access to an always-running WorkSpace.
AutoStop – billed by the hour. WorkSpaces start automatically when you log in, and stop when no longer being used (after a configurable idle period of 1-48 hours).
In my opinion a turn-off period of 1 hour is too short; it doesn't cover a user who has a long lunch or a meeting that runs slightly over. A 2-hour cool-down period seems perfect for cost optimisation. With this in mind, all your VDIs will be off at the beginning of the working day. To eliminate the 60-90 second boot-up time imposed by AWS for cold starts, we can pre-warm the VDIs using a Lambda function on a schedule. The process is as follows:
A CloudWatch Events rule runs based on a cron schedule.
The event triggers the execution of a Lambda function.
The Lambda function runs python that starts WorkSpaces based on a set of conditions using the boto3 library to interact with the service.
The Python code block below wakes all VDIs in a region that are in the 'STOPPED' state, but there is no reason why you couldn't be more granular with per-VDI tagging.
import boto3

region = 'ap-southeast-2'
directory_id = 'd-xxxxxxxxxx'  # replace with your WorkSpaces directory ID

def lambda_handler(event, context):
    session = boto3.session.Session(region_name=region)
    ws = session.client('workspaces')

    # Gather every WorkSpace in the directory, following pagination
    workspaces = []
    resp = ws.describe_workspaces(DirectoryId=directory_id)
    workspaces += resp['Workspaces']
    while 'NextToken' in resp:
        resp = ws.describe_workspaces(DirectoryId=directory_id, NextToken=resp['NextToken'])
        workspaces += resp['Workspaces']

    # Wake anything that AutoStop has powered down
    for workspace in workspaces:
        if workspace['State'] == 'STOPPED':
            try:
                ws.start_workspaces(StartWorkspaceRequests=[{'WorkspaceId': workspace['WorkspaceId']}])
                print('Starting WorkSpace for user: ' + workspace['UserName'])
            except Exception:
                print('Could not start WorkSpace for user: ' + workspace['UserName'])
To start the WorkSpaces on a schedule, the Lambda function can be invoked using a cron expression:
cron(0 22 ? * SUN-THU *)
The cron schedule runs in GMT, so in this case 10:00 PM GMT is 8:00 AM AEST on the following day (GMT+10:00).
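You can sanity-check the conversion with a few lines of Python, using a fixed UTC+10 offset (as above, this ignores daylight saving; the sample date is just an arbitrary Sunday):

```python
from datetime import datetime, timedelta, timezone

AEST = timezone(timedelta(hours=10))  # fixed UTC+10, no daylight-saving shift

# The rule fires Sunday at 22:00 GMT...
fire_time = datetime(2019, 3, 3, 22, 0, tzinfo=timezone.utc)
local = fire_time.astimezone(AEST)
print(local.strftime('%A %H:%M'))  # Monday 08:00
```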
The end result is that the WorkSpaces you have chosen to wake start at 8am and shut down again at 10am if unused. If you have departments or user groups that are heavy users versus occasional users, this is where your code might look at tags you've set per VDI.
AWS WorkSpaces is a great low-cost virtual desktop experience. It is extremely easy to get started and build quick images to support your needs. During the implementation you are going to want to provide a Quality of Service (QoS) policy, much like you would with Citrix or VMware Horizon on-premises. WorkSpaces differs slightly from other VDI solutions in that it uses the PCoIP protocol and essentially streams the desktop to your endpoint, much like a video conference would. With video conferencing in mind, the biggest design flaw is not prioritising the packets across your managed network segments.
WorkSpaces real-time traffic is sensitive to packet loss, delay and jitter, which occur frequently in congested networks. Quality of Service (QoS) – sometimes called Class of Service – must be deployed on managed external WANs, managed internal LANs, and enterprise WiFi networks. This will help to properly prioritise VDI real-time streaming over other non-real-time traffic on local networks and over the WAN, creating a better experience for end users. There are two types of clients we need to accommodate in an environment:
WorkSpaces Soft Client on Windows and Mac computers, Chromebooks, iPads, Fire tablets, and Android tablets.
Teradici Zero Clients
Does WorkSpaces need any Quality of Service configurations to be updated on my network?
If you wish to implement Quality of Service on your network for WorkSpaces traffic, you should prioritize the WorkSpaces interactive video stream which is comprised of real time traffic on UDP port 4172. If possible, this traffic should be prioritized just after VoIP to provide the best user experience. https://aws.amazon.com/workspaces/faqs/
What service class should WorkSpaces use?
PCoIP traffic should be set to a QoS priority below Voice-over-IP (VOIP) traffic (if used), but above the priority level for any TCP traffic. For most networks, this translates to a DSCP value of AF41 or AF31 (if interactive video is prioritized above PCoIP traffic) https://help.teradici.com/s/article/1590
WorkSpaces streaming should be placed in the AF41 (Assured Forwarding – DSCP 34) queue. Streaming media happens on TCP/UDP 4172; below is how we can enable DSCP tagging for the soft client on your network.
Create a QoS Group Policy
Create a GPO using Group Policy Management Console and link it to your workstations/computer Organizational Unit.
Computer Configuration > Policies > Windows Settings > Policy-based QoS.
Right-click and create a new policy.
Give the policy a name like "WorkSpaces Client QoS" and assign the DSCP value of 34.
Change the application the policy applies to from all applications to a specific one, entering "workspaces.exe".
Add a destination IP address range as per this link
From the protocol selection, choose "TCP and UDP", select "From this destination port number or range" and enter 4172.
To test whether the packets are being tagged, install Wireshark on your PC that has AWS Workspaces and take a capture while you have a VDI session active. Stop the capture and filter by the below expression.
udp.port eq 4172
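To confirm the marking itself rather than just the port, you can narrow the filter to packets carrying AF41; to the best of my knowledge the Wireshark display-filter field for the DSCP bits is ip.dsfield.dscp, so the combined filter would be:

```
udp.port eq 4172 and ip.dsfield.dscp == 34
```

Packets matching this filter are both WorkSpaces streaming traffic and correctly tagged.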
Looking at the UDP packets, we can compare the DSCP field before and after the policy is enabled.
Handy Hints for Traffic
Bypass proxies and WAN optimization devices
All streaming traffic is encrypted and typically cannot be inspected by proxy or firewall devices. For this reason I'd recommend bypassing proxy devices and not attempting to decrypt packets for any WorkSpaces network traffic.
Keep my traffic private
If you would like to keep your traffic completely private and as low-latency as humanly possible, implement an AWS Direct Connect public peering session to have the streaming media IP ranges for your region advertised as routes via Border Gateway Protocol (BGP) on your network.