Using the AWS CLI for Process Automation

Amazon Web Services is a well-established cloud provider. In this blog, I am going to explore how we can interface with the orange cloud titan programmatically. First of all, let’s explore why we may want to do this. You might be thinking, “But hey, the folks at AWS have built a slick web interface which offers all the capability I could ever need.” Whilst this is true, repetitive tasks quickly become onerous. Additionally, manual repetition introduces the opportunity for human error. That sounds like something we should avoid, right? After all, the core tenets of the DevOps movement are built on these principles (“to increase the speed, efficiency and quality of software delivery” – amongst others.)

From a technology perspective, we achieve this by establishing automated services. This presents a significant speed advantage as automated processes are much faster than their manual counterparts. The quality of the entire release process improves because steps in the pipeline become standardised, thus creating predictable outcomes.

Here at cloudstep, this is one of our core beliefs when operating a cloud infrastructure platform. Simply put, the portal is a great place to look around and check reporting metrics; however, any services should be provisioned as code – once again, to realise efficiency and improve overall quality.

How do we go about this, and what are some example use cases?

AWS provide an open-source CLI bundle which enables you to interface directly with their public APIs. Typically, this is done using a terminal of your choice (Linux shells, the Windows command line, PowerShell, PuTTY, remote sessions… you name it, it’s there.) Additionally, they also offer SDKs which provide a great starting point for developing applications on top of their services in many different languages (PowerShell, Java, .NET, JavaScript, Ruby, Python, PHP and Go.)

So let’s get into it… The first thing you’ll want to do is walk through the process of aligning your operating environment with any mandatory prerequisites; then you can install the AWS CLI tools in a flavour of your choice. The process is well documented, so I won’t cover it here.

Link – https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

Once you have the tools installed, you will need to provide the CLI with a base level of configuration, which is stored in a profile of your choice. Running “aws configure” from a terminal of your choice is the fastest way to do this. Here you will provide IAM credentials to interface with your account, a default region and an output format. For the purpose of this example I’ve set my region to “ap-southeast-2” and my output format to “JSON.”

aws configure example
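Behind the scenes, “aws configure” simply writes the values you enter into two plain-text files in your home directory, so you can also inspect or edit them by hand. A rough illustration of what they end up looking like (the access keys below are AWS’s documented placeholder examples, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = ap-southeast-2
output = json
```

Named profiles (via “aws configure --profile myprofile”) appear as additional sections in the same files.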

From here I could run “aws ec2 describe-instances” to validate that my profile had been defined correctly within the AWS CLI tools. The expected return is a list of EC2 instances hosted within my AWS account, as shown below.

aws ec2 describe-instances example
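Because I set my output format to JSON, the response is easy to consume from a script. As a quick sketch, here is how you might pull the instance IDs and power states out of a describe-instances response (the response body below is a trimmed, representative shape for illustration, not output from a real account):

```python
import json

# A trimmed, representative describe-instances response (illustrative only).
response = json.loads("""
{
  "Reservations": [
    {
      "Instances": [
        {
          "InstanceId": "i-0123456789abcdef0",
          "State": {"Name": "running"},
          "PublicIpAddress": "54.206.0.10"
        }
      ]
    }
  ]
}
""")

# Flatten the reservations/instances nesting into (id, state) pairs.
instances = [
    (i["InstanceId"], i["State"]["Name"])
    for r in response["Reservations"]
    for i in r["Instances"]
]
print(instances)  # → [('i-0123456789abcdef0', 'running')]
```

The same flattening works on real output piped from the CLI, since describe-instances always nests instances inside reservations.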

This shouldn’t take more than five minutes to get you up and running. However, don’t stop here. The AWS CLI supports almost all of the capability found within the management portal. Therefore, if you’re in an operations role and your company is investing in AWS in 2019, you should be spending some time learning how to interface with services such as DynamoDB, EC2, S3/Glacier, IAM, SNS and SWF using the AWS CLI.

Let’s have a look at a more practical example, whereby automating a simple task can potentially save you hours of time each year. As a Mac user (you’ve probably already picked up on that) I often need to fire up a Windows PC for Visual Studio or Visio. AWS is a great fit for this. I simply fire up my machine when I need it and shut it down when I’m done. I pay them a couple of bucks a month for some storage costs and some compute hours and I’m a happy camper. Simple, right?

Let’s unpack it further. I am not only a happy camper; I’m also a lazy camper. Firing up my VM to do my day job means:

  • Opening my browser and navigating to the AWS management console
  • Authenticating to the console
  • Navigating to the EC2 service
  • Scrolling through a long list of instances looking for my jumpbox
  • Starting my VM
  • Waiting for the network interface to refresh so I can get the public IP for RDP purposes.

This is all getting too hard, right? All of this has to happen before I can even do my job, and sometimes I have to do it a few times each day. Maybe it’s time to practice what I preach? I could automate all of this using the AWS Tools for PowerShell, running a script which saves me hours each year (employers love that.) Whilst this example won’t necessarily increase the overall quality of my work, it does provide me with a predictable outcome every single time.

For a measly 20 lines of PowerShell I was able to define an executable script which authenticates to the AWS EC2 service and checks the power state of the VM in question. If the VM is already running, it returns the connectivity details for my RDP client. If the VM is not running, it fires up my instance, waits for the NIC to refresh and then returns the connectivity details for my RDP client. I then have a script based on the same logic to shut down my VM to save money when I’m not using the service. All of this takes less than five seconds to execute.

PowerShell Automation Example
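The actual script described above uses the AWS Tools for PowerShell. As a language-neutral sketch of the same logic, here is roughly what it does, with the EC2 calls abstracted behind a client object so the flow is clear (the stub client and instance ID are hypothetical stand-ins, not a real AWS SDK):

```python
class StubEC2:
    """In-memory stand-in for an EC2 client, purely for illustration."""
    def __init__(self, state, ip):
        self.state, self.ip = state, ip
        self.started = False

    def get_state(self, instance_id):
        return self.state

    def start(self, instance_id):
        self.started = True
        self.state = "pending"

    def wait_until_running(self, instance_id):
        self.state = "running"

    def get_public_ip(self, instance_id):
        return self.ip


def ensure_running(ec2, instance_id):
    """Start the instance if it isn't running, then return its public IP."""
    if ec2.get_state(instance_id) != "running":
        ec2.start(instance_id)               # fire up the VM
        ec2.wait_until_running(instance_id)  # wait for the NIC to refresh
    return ec2.get_public_ip(instance_id)    # connectivity details for RDP


ec2 = StubEC2(state="stopped", ip="54.206.0.10")
print(ensure_running(ec2, "i-0123456789abcdef0"))  # → 54.206.0.10
```

The shutdown script is the mirror image: check the state, stop the instance if it is running, and exit.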

The AWS CLI tools provide an interface for interacting with the cloud provider programmatically. In this simple example we looked at automating a manual process which has the potential to save hours of time each year whilst also ensuring a predictable outcome for each execution. Each of the serious public cloud players offers similar capability. If you are looking to increase your overall efficiency and improve the quality of your work whilst automating monotonous tasks, consider investing some effort into learning how to interface with your cloud provider of choice programmatically. You will be surprised how many repetitive tasks you can bowl over when you maximise the usage of the tools available to you.



Lazy Afternoons with Azure ARM Conditionals

Like many IT professionals, I spent my early years in the industry working in customer-oriented support roles. Working in a support role can be challenging. You are the interface between the people who consume IT services and the complexity of the IT department, and you interact with a broad range of people who use the IT services you are responsible for to do the real work of the business. Positives include broad exposure to a wide range of issues and personality types. Some of the roles I worked in had the downside of repetitive, reactive work; the flip side, however, is that this is where I really started to understand the value proposition of automation.

Unfortunately, the nature of being subjected to reactive support work and grumpy, frustrated consumers is that you can fall into the trap of just going through the motions, caught in a cycle of doing a good-enough job and not striving to make a real difference. “Not this late in the day. . .” This phrase was used by a co-worker who, for anonymity purposes, we will call “Barry”. Barry had grown tired of changes to the cyclical rhythm of his daily support routine, and the slightest suggestion of any activity or initiative that came remotely close to threatening a lazy afternoon would be rebutted with “Not this late in the day. . .” Sometimes “not this late in the day” could have been interchanged with “not this late in the year”. . . sigh.

What I learnt from Barry is that there is no need to do repetitive tasks. Humans are terrible at them; despite best intentions we make mistakes, which introduce subtle differences into environments, and this almost always causes stability and reliability issues – and no one wants that. We all like lazy afternoons, so in the spirit of doing more with less I’d like to share some examples of how the use of conditionals within ARM templates will change your life. . . well, maybe just make your afternoon a little easier and the consumers of your services happier. . . Win!

Conditional support in ARM has been around for a little while now, but it is not as widely used as I believe it should be. Conditionals are useful in scenarios where you may want to optionally include a resource that would previously have required the use of separate templates or complex nesting scenarios.

In the following example I am provisioning a virtual machine and I want to optionally include a data disk and/or a public IP.

In the parameters section of the ARM template I’ve included a couple of parameters to determine if we want to create a public IP (pip) and a Data Disk (dataDisk) as follows:

"pip": {
  "type": "string",
  "allowedValues": [
    "Yes",
    "No"
  ],
  "metadata": {
    "description": "Select whether the VM should have a public ip."
  }
},
"dataDisk": {
  "type": "string",
  "allowedValues": [
    "Yes",
    "No"
  ],
  "metadata": {
    "description": "Select whether the VM should have an additional data disk."
  }
},

In the variables section I’ve defined a public IP object and a dataDisk array. I’ll use these values later if “Yes” is chosen for either my pip or dataDisk.

"pipObject": {
  "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]"
},
"dataDisks": [
  {
    "name": "[concat(parameters('virtualMachineName'), '_Data1')]",
    "caching": "None",
    "diskSizeGB": "[parameters('vmDataDisk1Size')]",
    "lun": 0,
    "createOption": "Empty",
    "managedDisk": {
      "storageAccountType": "[parameters('vmDataDisk1Sku')]"
    }
  }
]

Condition is an optional keyword that you can use with any of the resource objects within your ARM templates. To make a conditionally deployed resource easily identifiable, I’d suggest placing it above the other required keywords of the resource.
Conditionally create a public IP resource by adding a “condition”: keyword. I’m using the “equals” comparison function to determine whether the pip parameter is set to “Yes” or “No”. If it’s set to “Yes”, the resource is created.

{
  "condition": "[equals(parameters('pip'), 'Yes')]",
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "[variables('publicIPAddressName')]",
  "location": "[parameters('location')]",
  "properties": {
    "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
    "dnsSettings": {
      "domainNameLabel": "[variables('dnsNameForPublicIP')]"
    }
  }
},

OK, so far that’s pretty straightforward. The tricky part is how we tell other resources that consume our public IP (like the network interface) about it. In the case where we have created a public IP, we need to provide a pipObject containing a resourceId for the “id”: keyword; in the case where we have not, we need to provide a null value. For this I’ve chosen to use the “if” logical function in conjunction with the “equals” comparison function to provide either the aforementioned resourceId or a JSON null value.

{
  "name": "[toLower(concat('nic', parameters('virtualMachineName')))]",
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2018-04-01",
  "location": "[parameters('location')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "subnet": {
            "id": "[variables('subnetRef')]"
          },
          "privateIPAllocationMethod": "Dynamic",
          "publicIPAddress": "[if(equals(parameters('pip'), 'Yes'), variables('pipObject'), json('null'))]"
        }
      }
    ]
  },

Similarly, I am doing much the same thing with the dataDisk:

{
  "name": "[parameters('virtualMachineName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-06-01",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces/', toLower(concat('nic', parameters('virtualMachineName'))))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('virtualMachineSize')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "SUSE",
        "offer": "SLES-Standard",
        "sku": "12-SP3",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(parameters('virtualMachineName'), '_OsDisk')]",
        "createOption": "FromImage"
      },
      "dataDisks": "[if(equals(parameters('dataDisk'), 'Yes'), variables('dataDisks'), json('null'))]"
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces/', toLower(concat('nic', parameters('virtualMachineName'))))]"
        }
      ]
    },
    "osProfile": {
      "computerName": "[parameters('virtualMachineName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    }
  }
}

So there we have it: the use of conditionals has reduced the need for more complex approaches like nested templates or multiple templates for different scenarios. Next time you get a request that’s not the status quo, perhaps you can accommodate it with a few conditionals. . . Now back to my lazy afternoon.


Azure ARM, DevOps and Happy Days

Any application workload you deploy, be it on-premises or in the cloud, requires supporting infrastructure: things like network, storage, web servers and database servers. In the good old days we built each layer piece by piece. Virtualisation and then the cloud made this easier, reducing the need to laboriously wrangle with hardware on a day-to-day basis; however, we still need to build the various layers. The Azure Resource Manager (ARM) portal has improved significantly in recent years, but we are still largely building things one by one.

What’s wrong with this you may ask? I don’t have to crawl around in a data centre and I haven’t rack mounted anything in years. . . Happy Days right?

Well. . . It’s still a pretty complicated way of building things. There are loads of manual steps, it’s error-prone, and duplicating infrastructure in a consistent manner is difficult, time-consuming and almost always results in subtle inconsistencies.

Subtle means small or minor differences, right? That can’t be a big deal. . . Most well-run IT departments run some variant of Dev, QA, UAT, pre-Prod and Prod environments. We all understand this: everything preceding Prod is where we test and iron out bugs before placing our workloads into Prod for the business to consume. “Subtle”, no big deal, minor, whatever. . . This almost always results in a conversation between an application owner, a project manager and an infrastructure manager where the application owner’s app works in one environment but not another. . . Head scratching usually follows. One common symptom of the subtle differences that result from this approach is an erosion of trust and unwanted friction between the IT department and the business, and nobody wants that.

“Erosion” – that sounds bad. . . How do we fix this problem and get back to Happy Days? Azure DevOps, ARM templates and Infrastructure as Code. These are your new friends. In the words of the Fonz. . . “Step into my office”. Whilst almost anyone who has played with Azure is familiar with the Azure Portal – the clicky-clicky way of quickly deploying infrastructure – ARM templating in conjunction with well-defined DevOps methodologies is where the subtle differences disappear.

When you describe your infrastructure as code, deploying entire workloads, or even an entire data centre, becomes an easily repeatable process. “Subtle” and its differences don’t live here. Build an exact replica of the UAT workload in pre-Prod? No worries. . . What’s the catch? What does this take. . ? I won’t lie, it involves some new thinking, but it’s easier than you might think. . .

Microsoft provide free, cross-platform, high-quality tools to accomplish this task. If you are thinking, “Infrastructure as Code. . . hang on a sec, I’m not a programmer. . . What is this guy on about. . ?”, keep reading – I promise it’s really not all that difficult. With some different thinking you can easily accomplish great things. If you are still with me and are thinking “Exactamundo, that’s what we need”, but are unsure where to start, I have some tips for where to begin.

Step 1 – Establish your DevOps Development Environment

You’ll need some tools to get started. Whilst there are many different approaches, I like the MS tool set, and best of all it’s cross-platform, so I can use it with my Mac.

Azure DevOps Subscription

Azure DevOps makes collaboration and code deployment easy. It is an online suite of continuous integration / continuous delivery (CI/CD) tools that can be used to develop, host and deploy ARM templates. Azure DevOps can host private Git repositories to securely store and maintain version control of your ARM templates throughout development. You can get started with Azure DevOps for free at: https://azure.microsoft.com/en-au/services/devops/

Visual Studio Code

Visual Studio Code is a free code editor that supports multiple languages; it’s cross-platform, rock solid and easy to use. It also sports third-party extension support that can make the development process even easier. You can read all about it here: https://code.visualstudio.com/docs/editor/whyvscode

I personally use the following extensions:

PowerShell and Azure

PowerShell is the Swiss Army knife for all things Microsoft these days, and Azure is no exception. Your existing PowerShell skills are transferable and complement what you’ll soon achieve with ARM templates. Azure (Resource Manager) can be managed using PowerShell and the AzureRM module. In addition to native support in Windows, PowerShell is now cross-platform and can be installed on macOS and Linux.

If you are a macOS user like me, PowerShell Core supports macOS 10.12 and higher. All packages are available from the GitHub releases page: https://github.com/PowerShell/PowerShell/releases/latest. Once the package is installed, run “pwsh” from a terminal. Detailed installation instructions can be found here: https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-macos?view=powershell-6

  • The AzureRM module for Windows can be installed by running Install-Module -Name AzureRM -AllowClobber
  • The AzureRM.Netcore module for macOS can be installed by running Install-Module -Name AzureRM.Netcore

Detailed instructions for Windows can be found here: https://docs.microsoft.com/en-us/powershell/azure/install-azurerm-ps?view=azurermps-6.13.0 and for macOS here: https://www.powershellgallery.com/packages/AzureRM.Netcore/0.13.1

Step 2 – ARM Template Fundamentals

ARM templates provide an elegant way of deploying Infrastructure as Code to Azure; however, getting started can be overwhelming, especially if you are not from a development background. Before you try to author your first template, it’s helpful to run through the fundamentals and know where to look for more information.

An ARM template defines the objects you want – their types, names and properties – in a JavaScript Object Notation (JSON) file which can be interpreted by the ARM REST API. To begin authoring ARM templates, it is helpful to have an understanding of some fundamental concepts. Whilst not a full-featured programming language, ARM does have some advanced function capabilities over and above the general descriptive nature of ordinary JSON.

Template Structure

ARM templates have a defined structure. In its simplest form a template has the following elements:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}

You can read all about the template structure here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

ARM Template Reference

OK, so how do I describe the objects that make up my workload? Objects in ARM are defined in a consistent way. The “type” identifies what kind of object we are creating. Microsoft maintain an ARM template reference for each object type, which can be reviewed to determine the required properties of an object and the data type of the value that is expected.

{
  "name": "string",
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2018-08-01",
  "location": "string",
  "tags": {},
  "properties": {}
}

The template reference for each object can easily be found by concatenating the base reference URL https://docs.microsoft.com/en-gb/azure/templates/ with the type “Microsoft.Network/virtualNetworks” to create the URL: https://docs.microsoft.com/en-gb/azure/templates/Microsoft.Network/virtualNetworks

ARM Template Functions

In programming, a function can be described as a named section of a program that performs a specific task. In this sense, a function is a type of procedure or routine which usually returns a value. ARM has comprehensive function capabilities which assist with looking up objects, deriving values and reducing the number of lines of code. ARM has functions in the following categories:

  • Array and object functions
  • Comparison functions
  • Deployment value functions
  • Logical functions
  • Numeric functions
  • Resource functions
  • String functions

You add functions to your templates by enclosing them within square brackets: [ and ]. The expression is evaluated during deployment. While written as a string literal, the result of evaluating the expression can be of a different JSON type, such as an array, object or integer. Just like in JavaScript, function calls are formatted as functionName(arg1, arg2, arg3). You reference properties by using the dot and [index] operators. Further reading on ARM functions can be found here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions
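As a quick illustration combining a few of these categories, here is a hypothetical variables section (assuming a vmName parameter exists in the template) using string, resource and numeric functions:

```json
"variables": {
  "nicName": "[toLower(concat('nic', parameters('vmName')))]",
  "location": "[resourceGroup().location]",
  "diskCount": "[add(1, 2)]"
}
```

Each bracketed expression is evaluated at deployment time, so nicName becomes a lower-cased string, location picks up the resource group’s region, and diskCount evaluates to the integer 3.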

Step 3 – Play, Practice, Happy Days

Like anything new, it takes time, but with some practice and persistence, you can become a DevOps ARM template guru and remove “Subtle” from your IT department. . . Happy Days.