YAML: It Rhymes with Camel

I’ve blogged before about my passion for automation and the use of ARM templating in the Azure world to eradicate dull and mundane tasks from the daily routine of the system administrators I consult for.

I loathe repetitive tasks; it’s in this space where subtle differences and inconsistency love to live. Recently I was asked to help out with a simple task: provisioning a couple of EC2 Windows servers in AWS. In the spirit of infrastructure as code, I thought there was no better time to try out AWS CloudFormation to describe my EC2 instances. I’ve used CloudFormation before, but always describing my stack in JSON. CloudFormation also supports YAML, so challenge accepted and away I went. . .

So what is YAML anyway. . . Yet Another Markup Language? Interestingly, the official YAML website (https://yaml.org) says it isn’t: YAML stands for “YAML Ain’t Markup Language” and is described as a “human friendly data serialisation standard for all programming languages”.

What attracted me to YAML is its simplicity: there are no curly braces {}, just indenting. It’s also super easy to read. So if JSON looks a bit too “cody” for your liking, YAML may be a more palatable alternative.

So how would you get started? As you’d expect, AWS have extensive CloudFormation documentation. The AWS::EC2::Instance resource is described here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-volumes. You’ll notice that there is a syntax description for both JSON and YAML. The YAML looks like this:


Type: AWS::EC2::Instance
Properties: 
  Affinity: String
  AvailabilityZone: String
  BlockDeviceMappings: 
    - EC2 Block Device Mapping
  CreditSpecification: CreditSpecification
  DisableApiTermination: Boolean
  EbsOptimized: Boolean
  ElasticGpuSpecifications: [ ElasticGpuSpecification, ... ]
  ElasticInferenceAccelerators: 
    - ElasticInferenceAccelerator
  HostId: String
  IamInstanceProfile: String
  ImageId: String
  InstanceInitiatedShutdownBehavior: String
  InstanceType: String
  Ipv6AddressCount: Integer
  Ipv6Addresses:
    - IPv6 Address Type
  KernelId: String
  KeyName: String
  LaunchTemplate: LaunchTemplateSpecification
  LicenseSpecifications: 
    - LicenseSpecification
  Monitoring: Boolean
  NetworkInterfaces: 
    - EC2 Network Interface
  PlacementGroupName: String
  PrivateIpAddress: String
  RamdiskId: String
  SecurityGroupIds: 
    - String
  SecurityGroups: 
    - String
  SourceDestCheck: Boolean
  SsmAssociations: 
    - SSMAssociation
  SubnetId: String
  Tags: 
    - Resource Tag
  Tenancy: String
  UserData: String
  Volumes: 
    - EC2 MountPoint
  AdditionalInfo: String

With this as a starting point I was quickly able to build an EC2 instance and customise my YAML to do some extra things.

If you’ve got this far and YAML is starting to look like it might be the ticket for you, it’s worth familiarising yourself with the CloudFormation built-in (intrinsic) functions. You can use these to do things like assign values to properties that are not available until runtime.

Fn::Base64
Fn::Cidr
Condition Functions
Fn::FindInMap
Fn::GetAtt
Fn::GetAZs
Fn::Join
Fn::Select
Fn::Split
Fn::Sub
Fn::Transform
Ref

The link to the complete Intrinsic Function Reference can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html

With a learning curve of a couple of hours, including a bit of googling and messing around, I was able to achieve my goal. I built an EC2 instance, applied tagging and installed some Windows features post build via a PowerShell script (downloaded from S3 and launched with AWS::CloudFormation::Init and cfn-init.exe), all without having to log on to the server or touch the console. You’ll notice several of the intrinsic functions (!Ref, !Sub, !Join, !FindInMap, !GetAtt and !Base64, in their short form) used throughout. Here is a copy of my YAML. . .


AWSTemplateFormatVersion: "2010-09-09"
Description: CloudFormation Template to deploy an EC2 instance
Parameters: 
  Hostname: 
    Type: String
    Description: Hostname - maximum 15 characters
    MaxLength: '15'    
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base
  InstanceSize: 
    Type: String
    Description: Instance Size
    Default: t2.micro
    AllowedValues:
      - "t2.micro" 
      - "t2.small"
      - "t2.medium"
  AvailabilityZone:
    Type: String
    Description: Default AZ
    AllowedValues: 
      - ap-southeast-2a
      - ap-southeast-2b
      - ap-southeast-2c
    Default: ap-southeast-2a
  KeyPair: 
    Type: String
    Description: KeyPair Name
    Default: jtwo
  S3BucketName:
    Default: NotARealBucket
    Description: S3 bucket containing boot artefacts
    Type: String
  
  # tag values
  awPurpose: 
    Type: String
    Description: A plain English description of what the object is for.
    Default: WindowsServer2019 Domain Controller
  awChargeTo: 
    Type: String
    Description: Billing Code for charge back of resource.
    Default: IT-123
  awRegion: 
    Type: String
    Description: Accolade Wines Region not AWS. 
    Default: Australia
  awExpiry: 
    Type: String
    Description: The date when the resource(s) can be considered for decommissioning.
    Default: 01-01-2022
  awBusinessSegment: 
    Type: String
    Description: Agency code.
    Default: ICT
  awEnvironment: 
    Type: String
    Description: Specific environment for resource.
    AllowedValues: 
      - prod
      - prodServices
      - nonprod
      - uat
      - dev
      - test 
  awApplication: 
    Type: String
    Description: A single or multiple word with the name of the application that the infrastructure supports. "JDE", "AD", "Apache", "Utility", "INFOR", "PKI".
    Default: AD

Mappings:
  SubnetMap: 
    ap-southeast-2a:
      prodServices: "subnet-idGoesHere"
    ap-southeast-2b:
      prodServices: "subnet-idGoesHere"
    ap-southeast-2c:
      prodServices: "subnet-idGoesHere"
      
# Resources
Resources:
  # IAM Instance Profile
  Profile:
    Type: 'AWS::IAM::InstanceProfile'
    Properties:
      Roles:
        - !Ref HostRole
      Path: /
      InstanceProfileName: !Join
        - ''
        - - 'instance-profile-'
          - !Ref S3BucketName
  HostRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join
        - ''
        - - 'role-s3-read-'
          - !Ref S3BucketName
      Policies:
        - PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Action:
                  - 's3:GetObject'
                Resource: !Join
                  - ''
                  - - 'arn:aws:s3:::'
                    - !Ref S3BucketName
                    - '/*'
                Effect: Allow
          PolicyName: s3-policy-read
      Path: /
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - 'sts:AssumeRole'
            Principal:
              Service:
                - ec2.amazonaws.com
            Effect: Allow
        Version: 2012-10-17  

  # ENI
  NIC1:
    Type: AWS::EC2::NetworkInterface
    Properties: 
      Description: !Sub 'ENI for EC2 instance: ${Hostname}-${awEnvironment}'
      GroupSet:
          - sg-050cadbf0e159b0ac
      SubnetId: !FindInMap [SubnetMap, !Ref AvailabilityZone, !Ref awEnvironment]
      Tags:
        - Key: Name
          Value: !Sub '${Hostname}-eni'
  
  # EC2 Instance
  Instance:
    Type: 'AWS::EC2::Instance'
    Metadata:
      'AWS::CloudFormation::Authentication':
        S3AccessCreds:
          type: S3
          buckets:
            - !Ref S3BucketName
          roleName: !Ref HostRole
      'AWS::CloudFormation::Init':
        configSets: 
          config:
            - get-files 
            - configure-instance
        get-files:
          files:
            'c:\s3-downloads\scripts\Add-WindowsFeature.ps1':
              source: https://NotARealBucket.s3.amazonaws.com/scripts/Add-WindowsFeature.ps1
              authentication: S3AccessCreds
        configure-instance:
          commands:
            1-set-powershell-execution-policy:
              command: >-
                powershell.exe -Command "Set-ExecutionPolicy UnRestricted -Force"
              waitAfterCompletion: '0'
            2-rename-computer:
              command: !Join
                - ''
                - - 'powershell.exe -Command "Rename-Computer -Restart -NewName '
                  - !Ref Hostname
                  - '"'
              waitAfterCompletion: forever
            3-install-windows-components:
              command: >-
                powershell.exe -Command "c:\s3-downloads\scripts\Add-WindowsFeature.ps1"
              waitAfterCompletion: '0'


    Properties:
      DisableApiTermination: 'false'
      AvailabilityZone: !Sub "${AvailabilityZone}"
      InstanceInitiatedShutdownBehavior: stop
      IamInstanceProfile: !Ref Profile
      ImageId: !Ref LatestAmiId
      InstanceType: !Sub "${InstanceSize}"
      KeyName: !Sub "${KeyPair}"
      UserData: !Base64
        'Fn::Join': 
          - ''
          - - "\n"
            - "cfn-init.exe "
            - " --stack "
            - "Ref": "AWS::StackId"
            - " --resource Instance"
            - " --region "
            - "Ref": "AWS::Region"
            - " --configsets config"
            - " -v \n"
            - "cfn-signal.exe  "
            - " ---exit-code 0"
            - " --region "
            - "Ref": "AWS::Region"
            - " --resource Instance" 
            - " --stack "
            - "Ref": "AWS::StackName"
            - "\n"           
            - "\n"
      Tags:
        - Key: Name
          Value: !Sub "${Hostname}"
        - Key: awPurpose
          Value: !Sub "${awPurpose}"
        - Key: awChargeTo
          Value: !Sub "${awChargeTo}"
        - Key: awRegion
          Value: !Sub "${awRegion}"
        - Key: awExpiry
          Value: !Sub "${awExpiry}"
        - Key: awBusinessSegment
          Value: !Sub "${awBusinessSegment}"
        - Key: awEnvironment
          Value: !Sub "${awEnvironment}"
        - Key: awApplication
          Value: !Sub "${awApplication}"

      NetworkInterfaces:
        - NetworkInterfaceId: !Ref NIC1
          DeviceIndex: 0

Outputs:
  InstanceId:
    Description: 'InstanceId'
    Value: !Ref Instance
    Export:
      Name: !Sub '${Hostname}-${awEnvironment}-InstanceId'
  InstancePrivateIP:
    Description: 'InstancePrivateIP'
    Value: !GetAtt Instance.PrivateIp
    Export:
      Name: !Sub '${Hostname}-${awEnvironment}-InstancePrivateIP'
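
If you’d rather not click through the console, the stack can also be created from PowerShell. Here’s a minimal sketch, assuming the AWS Tools for PowerShell module is installed and credentials/region are already configured; the file path, stack name and parameter values are placeholders:

[code language="powershell"]
# Minimal sketch, assuming the AWSPowerShell module is installed and a default
# region/credential profile is configured. File path, stack name and parameter
# values below are hypothetical.
Import-Module AWSPowerShell

$templateBody = Get-Content -Path .\ec2-instance.yaml -Raw

# Optional: validate the template before creating the stack
Test-CFNTemplate -TemplateBody $templateBody

# CloudFormation parameters are passed as Parameter objects
$params = @(
  @{ ParameterKey = 'Hostname';      ParameterValue = 'SERVER01' },
  @{ ParameterKey = 'awEnvironment'; ParameterValue = 'prodServices' }
) | ForEach-Object {
  $p = New-Object Amazon.CloudFormation.Model.Parameter
  $p.ParameterKey   = $_.ParameterKey
  $p.ParameterValue = $_.ParameterValue
  $p
}

# The template creates named IAM resources, so CAPABILITY_NAMED_IAM is required
New-CFNStack -StackName 'ec2-windows-demo' `
             -TemplateBody $templateBody `
             -Parameter $params `
             -Capability CAPABILITY_NAMED_IAM
[/code]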

So my question now is, why doesn’t Azure also support YAML?


Get your Flow on. .

I’ve talked before about my passion for automation. I loathe doing repetitive tasks and fear the inconsistency that creeps in whilst undertaking them. It’s not that I’m lazy; I recognise that people are generally busy and it’s hard to maintain focus on repetitive tasks. It’s easy to forget a step here and there amongst everything else that’s going on in your day.

Software has evolved over the years to the point where all decent software includes a public Application Programming Interface (API), which provides consumers with access to functions and procedures to obtain and manipulate data and generally perform useful tasks. If you are thinking “yeah, this is awesome, but I’m not a developer, I don’t know how to invoke an API, this all sounds too difficult. . .” let me introduce you to Microsoft Flow.

What is Flow?

Flow isn’t a new concept; it’s been around for a while, and Zapier and IFTTT are both awesome, mature products in this space that do much the same sort of thing. What makes Flow stand out is that it’s included as part of an Office 365 subscription. It’s something you likely already have access to. This is awesome, because you don’t need to ask for permission to purchase another app or subscription. The barriers to entry that stifle innovation likely aren’t there. . . You can get started and experiment right away.

That said, like most software there are multiple plans and entitlements; detailed information can be found here: https://australia.flow.microsoft.com/en-us/pricing/

So what can Flow do for me?

To a degree, this is really only limited by your imagination and the quality of the software products you interact with. The good news is that, as I’m writing this in 2018, most organisations I interact with use modern software and cloud services that will definitely work with Flow. Furthermore, it’s not limited to just the Microsoft stack; you can use Flow with third party software.

I like to think of Flow as a means to glue otherwise disparate software together. The concept is pretty simple: you choose a starting point to be your trigger, and the trigger then results in actions somewhere else. Put simply, an action in one place lets you trigger a sequence of events somewhere else.

Here at cloudstep we use Flow with our WordPress blog: every time we publish a new post, Flow detects this, posts a link to it on our LinkedIn company page and sends a tweet on Twitter. Sounds like magic. . . it’s not really, it’s just using the APIs behind the scenes, no coding required. . . Painless awesomeness.

If you think about your daily activities, there are likely several workflows just like this. Just remember automation doesn’t have to be elaborate to make a real difference. 

Flow provides a nice dashboard view with the state of your connections and traffic light status on the run history.

Another nice feature of Flow is ‘Team Flows’, which allows you to share your automated workflows with others inside your organisation, removing another pet hate of mine: single points of failure within a workflow.

Still struggling for inspiration?  Microsoft provide hundreds of examples within their template library: https://australia.flow.microsoft.com/en-us/templates/

So as the year draws to a close, if you are fortunate enough to have some idle time, have a play with Flow, get automating and put some time in the bank for next year!


Cross-Region Peering Pitfalls. . .

Ah, if only all pitfalls were fun. Remember Pitfall on the Atari 2600? It was the second best selling game after Pac-Man. Pitfall Harry had to negotiate a jungle full of hazardous quicksand, rolling logs, fire and rattlesnakes to recover precious treasures.

Recently we did some work with a customer who made use of two Azure regions (Australia East and Australia Southeast) for their IaaS workloads. The eastern region housed their production IaaS workloads and the southeastern region was treated as a failover region, to be used in failover / disaster recovery situations. Each region had a couple of virtual networks, and virtual network peering had been configured between them. Unbeknownst to us, we were about to encounter a slightly less entertaining pitfall whilst attempting to utilise two specific properties of virtual network peering:

  • Allow Gateway Transit
  • Use Remote Gateways 

If you are not familiar with peering: virtual network peering seamlessly connects two Azure virtual networks, merging them into one for connectivity purposes. Virtual network peering also has an appealing feature, gateway transit. Gateway transit is a peering property that enables one virtual network to utilise the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity.

You can read up on peering and the Gateway transit feature here: https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-peering-gateway-transit
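
For context, these two properties are normally set when the peering is created (or updated afterwards). Below is a minimal AzureRM sketch of what that looks like for a hub-and-spoke pair in the same region; the virtual network names, resource group and peering names are hypothetical:

[code language="powershell"]
# Minimal sketch, assuming the AzureRM module and that both virtual networks already
# exist, with the VPN gateway sitting in the hub VNet. Names are placeholders.
$hubVnet   = Get-AzureRmVirtualNetwork -Name 'hub-vnet'   -ResourceGroupName 'networking-rg'
$spokeVnet = Get-AzureRmVirtualNetwork -Name 'spoke-vnet' -ResourceGroupName 'networking-rg'

# Hub side: allow the spoke to transit this VNet's gateway
Add-AzureRmVirtualNetworkPeering -Name 'hub-to-spoke' `
    -VirtualNetwork $hubVnet `
    -RemoteVirtualNetworkId $spokeVnet.Id `
    -AllowGatewayTransit

# Spoke side: consume the hub's gateway
Add-AzureRmVirtualNetworkPeering -Name 'spoke-to-hub' `
    -VirtualNetwork $spokeVnet `
    -RemoteVirtualNetworkId $hubVnet.Id `
    -UseRemoteGateways -AllowForwardedTraffic
[/code]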

This works great within a region; however, when trying to enable it between our two Azure regions we discovered that the “Allow Gateway Transit” and “Use Remote Gateways” properties are unavailable.

When we tried to enable this we were greeted with the following error:

“Failed to save virtual network peering ‘<peeringName>’. Error: AllowGatewayTransit and UseRemoteGateways options are currently supported only when both peered virtual networks are in the same region.”

A corresponding warning is also displayed in the Azure portal.

Whilst this isn’t a bug (Microsoft actually describe the limitation here: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview), in our case it was still a less than desirable outcome. Fear not, if you really require transit across the gateway in your remote virtual network, we may have a possible solution for you. . .

Workaround:

Establish a site-to-site VPN between the regions and control routing via route tables and/or BGP, depending on your configuration. The VPN gateways and connections can be quickly created via ARM template, the portal or PowerShell:

[code language="powershell"]
$prodSharedKey = 'mysharedkeygoeshere'

# AE variables
$AERG = "internal-networking-ae-rg"
$AELocation = "australiaeast"
$AEVNetName = "internal-vnet"
$AEGWName = "internal-vnet-ae-vpn-vng"
$AEGWIPName = "internal-vnet-ae-vpn-vng-pip"
$AEGWIPconfName = "gwipconfAE"
$ConnectionAEASE = "internalVNetAEtoInternalVNetASE"

# ASE variables
$ASERG = "internal-networking-ase-rg"
$ASELocation = "australiasoutheast"
$ASEVnetName = "internal-vnet"
$ASEGWName = "internal-vnet-ase-vpn-vng"
$ASEGWIPName = "internal-vnet-ase-vpn-vng-pip"
$ASEGWIPconfName = "gwipconfASE"
$ConnectionASEAE = "internalVNetASEtoInternalVNetAE"

# Australia East side
$AEgwpip = New-AzureRmPublicIpAddress -Name $AEGWIPName -ResourceGroupName $AERG -Location $AELocation -AllocationMethod Dynamic
$AEvnet = Get-AzureRmVirtualNetwork -Name $AEVNetName -ResourceGroupName $AERG
$AEsubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $AEvnet
$AEgwipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name $AEGWIPconfName -Subnet $AEsubnet -PublicIpAddress $AEgwpip
New-AzureRmVirtualNetworkGateway -Name $AEGWName -ResourceGroupName $AERG -Location $AELocation -IpConfigurations $AEgwipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# Australia Southeast side
$ASEgwpip = New-AzureRmPublicIpAddress -Name $ASEGWIPName -ResourceGroupName $ASERG -Location $ASELocation -AllocationMethod Dynamic
$ASEvnet = Get-AzureRmVirtualNetwork -Name $ASEVnetName -ResourceGroupName $ASERG
$ASEsubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $ASEvnet
$ASEgwipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name $ASEGWIPconfName -Subnet $ASEsubnet -PublicIpAddress $ASEgwpip
New-AzureRmVirtualNetworkGateway -Name $ASEGWName -ResourceGroupName $ASERG -Location $ASELocation -IpConfigurations $ASEgwipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# Connections
$AEvnetgw = Get-AzureRmVirtualNetworkGateway -Name $AEGWName -ResourceGroupName $AERG
$ASEvnetgw = Get-AzureRmVirtualNetworkGateway -Name $ASEGWName -ResourceGroupName $ASERG

New-AzureRmVirtualNetworkGatewayConnection -Name $ConnectionAEASE -ResourceGroupName $AERG -VirtualNetworkGateway1 $AEvnetgw -VirtualNetworkGateway2 $ASEvnetgw -Location $AELocation -ConnectionType Vnet2Vnet -SharedKey $prodSharedKey
New-AzureRmVirtualNetworkGatewayConnection -Name $ConnectionASEAE -ResourceGroupName $ASERG -VirtualNetworkGateway1 $ASEvnetgw -VirtualNetworkGateway2 $AEvnetgw -Location $ASELocation -ConnectionType Vnet2Vnet -SharedKey $prodSharedKey
[/code]
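
Once both connections have been created, it’s worth checking that the tunnels actually come up before adjusting any routing. A quick verification sketch, again assuming the AzureRM module and reusing the variables from the script above:

[code language="powershell"]
# Check the tunnel status from each side; ConnectionStatus should report 'Connected'
Get-AzureRmVirtualNetworkGatewayConnection -Name $ConnectionAEASE -ResourceGroupName $AERG |
    Select-Object Name, ConnectionStatus

Get-AzureRmVirtualNetworkGatewayConnection -Name $ConnectionASEAE -ResourceGroupName $ASERG |
    Select-Object Name, ConnectionStatus
[/code]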

Once you’ve solved this issue, you can play Pitfall online here: https://www.retrogames.cz/play_029-Atari2600.php?language=EN

Enjoy!


Lazy Afternoons with Azure ARM Conditionals

Like many IT professionals I spent my early years in the industry working in customer oriented support roles. Working in a support role can be challenging. You are the interface between the people who consume IT services and the complexity of the IT department, and you interact with a broad range of people who use the IT services you are responsible for to do the real work of the business. The positives include broad exposure to a wide range of issues and personality types. Some of the roles I worked in had the downside of repetitive, reactive work; the flip side is that this is where I really started to understand the value proposition of automation.

Unfortunately, the nature of being subjected to reactive support work and grumpy, frustrated consumers is that you can fall into the trap of just going through the motions, caught in a cycle of doing a passable job rather than striving to make a real difference. “Not this late in the day. . .” This phrase was used by a co-worker who, for anonymity purposes, we will call “Barry”. Barry had grown tired of changes to the cyclical rhythm of his daily support routine, and the slightest suggestion of any activity or initiative that came remotely close to threatening a lazy afternoon would be rebutted with “Not this late in the day. . .” Sometimes “not this late in the day” could have been interchanged with “not this late in the year”. . . sigh.

What I learnt from Barry is that there is no need to do repetitive tasks. Humans are terrible at them; despite best intentions we make mistakes, which introduce subtle differences into environments, and this almost always causes stability and reliability issues. No one wants that. We all like lazy afternoons, so in the spirit of doing more with less I’d like to share some examples of how the use of conditionals within ARM templates will change your life. . . well, maybe just make your afternoon a little easier and the consumers of your services happier. . . Win!

Conditionals have been supported in ARM templates for a little while now, but they are not as widely used as I believe they should be. Conditionals are useful in scenarios where you want to optionally include a resource that would previously have required separate templates or complex nesting.

In the following example I am provisioning a virtual machine and I want to optionally include a data disk and/or a public IP.

In the parameters section of the ARM template I’ve included a couple of parameters to determine if we want to create a public IP (pip) and a Data Disk (dataDisk) as follows:

"pip": {
"type": "string",
"allowedValues": [
"Yes",
"No"
],
"metadata": {
"description": "Select whether the VM should have a public ip."
}
},
"dataDisk": {
"type": "string",
"allowedValues": [
"Yes",
"No"
],
"metadata": {
"description": "Select whether the VM should have an additional data disk."
}
},

In the variables section I’ve defined a pipObject and a dataDisks array. I’ll use these values later if “Yes” is chosen for either pip or dataDisk.

"pipObject": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]"
},
"dataDisks": [
{
"name": "[concat(parameters('virtualMachineName'),'_Data1')]",
"caching": "None",
"diskSizeGB": "[parameters('vmDataDisk1Size')]",
"lun": 0,
"createOption": "Empty",
"managedDisk": {
"storageAccountType": "[parameters('vmDataDisk1Sku')]"
}
}
]

Condition is an optional keyword that you can use with any of the resource objects within your ARM templates. To make it easily identifiable as a conditionally deployed resource, I’d suggest placing it above the other required keywords of the resource.
Conditionally create a public IP resource by adding a “condition” keyword. I’m using the equals comparison function to determine whether the pip parameter is set to “Yes” or “No”. If it’s set to “Yes”, the resource is created.

{
  "condition": "[equals(parameters('pip'), 'Yes')]",
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "[variables('publicIPAddressName')]",
  "location": "[parameters('location')]",
  "properties": {
    "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
    "dnsSettings": {
      "domainNameLabel": "[variables('dnsNameForPublicIP')]"
    }
  }
},

OK, so far that’s pretty straightforward. The tricky part is how we tell other resources that consume our public IP (like the network interface) about it. In the case where we have created a public IP, we need to provide a pipObject containing a resourceId for the “id” keyword; in the case where we haven’t, we need to provide a null value. For this I’ve chosen to use the “if” logical function in conjunction with the “equals” comparison function to return either the aforementioned pipObject or a JSON null value.

{
  "name": "[toLower(concat('nic',parameters('virtualMachineName')))]",
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2018-04-01",
  "location": "[parameters('location')]",
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "subnet": {
            "id": "[variables('subnetRef')]"
          },
          "privateIPAllocationMethod": "Dynamic",
          "publicIPAddress": "[if(equals(parameters('pip'), 'Yes'), variables('pipObject'), json('null'))]"
        }
      }
    ]
  }
},

Similarly, I’m doing much the same thing with the dataDisks:

{
  "name": "[parameters('virtualMachineName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-06-01",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceid('Microsoft.Network/networkInterfaces/', toLower(concat('nic',parameters('virtualMachineName'))))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('virtualMachineSize')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "SUSE",
        "offer": "SLES-Standard",
        "sku": "12-SP3",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(parameters('virtualMachineName'),copyIndex(1))]",
        "createOption": "fromImage"
      },
      "dataDisks": "[if(equals(parameters('dataDisk'), 'Yes'), variables('dataDisks'), json('null'))]"
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceid('Microsoft.Network/networkInterfaces/', toLower(concat('nic',parameters('virtualMachineName'))))]"
        }
      ]
    },
    "osProfile": {
      "computerName": "[parameters('virtualMachineName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    }
  }
}
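
To exercise both paths, you can deploy the same template twice with different parameter values. A minimal sketch using the AzureRM PowerShell module; the resource group and template file names are hypothetical, and the template’s remaining required parameters are assumed to be supplied via prompts or a parameter file:

[code language="powershell"]
# Minimal sketch, assuming the AzureRM module and an existing resource group.
# 'conditional-demo-rg' and the template file name are placeholders. The -pip and
# -dataDisk switches are surfaced as dynamic parameters from the template itself;
# any other mandatory parameters will be prompted for.

# Deployment 1: VM with a public IP and an extra data disk
New-AzureRmResourceGroupDeployment -ResourceGroupName 'conditional-demo-rg' `
    -TemplateFile .\vm-conditional.json `
    -pip 'Yes' -dataDisk 'Yes'

# Deployment 2: same template, no public IP and no data disk
New-AzureRmResourceGroupDeployment -ResourceGroupName 'conditional-demo-rg' `
    -TemplateFile .\vm-conditional.json `
    -pip 'No' -dataDisk 'No'
[/code]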

So there we have it: the use of conditionals has reduced the need for more complex approaches like nested templates or multiple templates for different scenarios. Next time you get a request that’s not the status quo, perhaps you can accommodate it with a few conditionals. . . Now back to my lazy afternoon.


Azure ARM, DevOps and Happy Days

Any application workload you deploy, be it on-premises or in the cloud, requires supporting infrastructure: things like networks, storage, web servers and database servers. In the good old days we built each layer piece by piece. Virtualisation and then the cloud made this easier, reducing the need to laboriously wrangle with hardware on a day-to-day basis; however, we still need to build the various layers. The Azure Resource Manager (ARM) portal has improved significantly in recent years, but we are still largely building things one by one.

What’s wrong with this, you may ask? I don’t have to crawl around in a data centre and I haven’t rack-mounted anything in years. . . Happy Days, right?

Well. . . it’s still a pretty complicated way of building things. There are loads of manual steps, it’s error prone, and duplicating infrastructure in a consistent manner is difficult, time consuming and almost always results in subtle inconsistencies.

“Subtle” means small or minor differences, right? That can’t be a big deal. . . Most well-run IT departments run some variant of Dev, QA, UAT, pre-Prod and Prod environments. We all understand this: everything preceding Prod is where we test and iron out bugs before placing our workloads into Prod for the business to consume. “Subtle”, no big deal, minor, whatever. . . This almost always results in a conversation between an application owner, a project manager and an infrastructure manager where the application owner’s app works in one environment but not another. . . Head scratching usually follows. One of the common symptoms of the subtle differences that result from this approach is an erosion of trust and unwanted friction between the IT department and the business, and nobody wants that.

“Erosion”, that sounds bad. . . How do we fix this problem and get back to Happy Days? Azure DevOps, ARM templates and Infrastructure as Code. These are your new friends. In the words of the Fonz. . . “Step into my office”. Almost anyone who has played with Azure is familiar with the Azure Portal, the clicky-clicky way of quickly deploying infrastructure. ARM templating, in conjunction with well defined DevOps methodologies, is where the subtle differences disappear.

When you describe your infrastructure as code, deploying entire workloads or even an entire data centre becomes an easily repeatable process. “Subtle” and its differences don’t live here. Build an exact replica of the UAT workload in pre-Prod? No worries. . . What’s the catch? What does this take. . ? I won’t lie, it involves some new thinking, but it’s easier than you might think. . .

Microsoft provide free, cross platform, high quality tools to accomplish this task. If you are thinking “Infrastructure as Code. . . hang on a sec, I’m not a programmer. . . what is this guy on about. . .” keep reading, I promise it’s really not all that difficult. With some different thinking you can easily accomplish great things. If you are still with me and are thinking “Exactamundo, that’s what we need”, but are unsure where to start, I have some tips for where to begin.

Step 1 – Establish your DevOps Development Environment

You’ll need some tools to get started. Whilst there are many different approaches, I like the Microsoft tool set; best of all it’s cross platform, so I can use it with my Mac.

Azure DevOps Subscription

Azure DevOps makes collaboration and code deployment easy. It is an online suite of continuous integration / continuous delivery (CI/CD) tools that can be used to develop, host and deploy ARM templates. Azure DevOps can host private Git repositories to securely store and maintain version control of your ARM templates throughout development. You can get started with Azure DevOps for free at: https://azure.microsoft.com/en-au/services/devops/

Visual Studio Code

Visual Studio Code is a free code editor that supports multiple languages; it’s cross platform, rock solid and easy to use. It also sports third party extension support that can make the development process even easier. You can read all about it here: https://code.visualstudio.com/docs/editor/whyvscode

I personally use the following extensions:

PowerShell and Azure

PowerShell is the Swiss Army knife for all things Microsoft these days, and Azure is no exception. Your existing PowerShell skills are transferable and complement what you’ll soon achieve with ARM templates. Azure (Resource Manager) can be managed using PowerShell and the AzureRM module. In addition to native support in Windows, PowerShell is now cross platform and can be installed on macOS and Linux.

If you are a macOS user like me, PowerShell Core supports macOS 10.12 and higher. All packages are available from the GitHub releases page: https://github.com/PowerShell/PowerShell/releases/latest. Once the package is installed, run “pwsh” from a terminal. Detailed installation instructions can be found here: https://docs.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-macos?view=powershell-6

  • The AzureRM module for Windows can be installed by running Install-Module -Name AzureRM -AllowClobber
  • The AzureRM.Netcore module for macOS can be installed by running Install-Module -Name AzureRM.Netcore

Detailed instructions for Windows can be found here: https://docs.microsoft.com/en-us/powershell/azure/install-azurerm-ps?view=azurermps-6.13.0 and for macOS here: https://www.powershellgallery.com/packages/AzureRM.Netcore/0.13.1
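
Once the module is installed, it only takes a few lines to connect to Azure and check that a template deploys cleanly. A minimal sketch using the AzureRM cmdlets; the resource group name, location and template file name are hypothetical:

[code language="powershell"]
# Minimal sketch, assuming the AzureRM (or AzureRM.Netcore) module is installed.
# Resource group name, location and template file are placeholders.
Connect-AzureRmAccount    # interactive sign-in to your subscription

New-AzureRmResourceGroup -Name 'arm-demo-rg' -Location 'australiaeast'

# Validate the template against the resource group without deploying anything
Test-AzureRmResourceGroupDeployment -ResourceGroupName 'arm-demo-rg' `
    -TemplateFile .\azuredeploy.json

# When validation passes, deploy it
New-AzureRmResourceGroupDeployment -ResourceGroupName 'arm-demo-rg' `
    -TemplateFile .\azuredeploy.json
[/code]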

Step 2 – ARM Template Fundamentals

ARM templates provide an elegant way of deploying Infrastructure as Code to Azure; however, getting started can be overwhelming, especially if you are not from a development background. Before you try to author your first template it’s helpful to run through the fundamentals and know where to look for more information.

An ARM template defines the objects you want, their types, names and properties in a JavaScript Object Notation (JSON) file which can be interpreted by the ARM REST API. To begin authoring ARM templates it is helpful to understand some fundamental concepts. Whilst not a full featured programming language, ARM does have some advanced function capabilities over and above the general descriptive nature of ordinary JSON.

Template Structure

ARM templates have a defined structure. In its simplest form a template has the following elements:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}

You can read all about the template structure here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

ARM Template Reference

OK, so how do I describe the objects that make up my workload? Objects in ARM are defined in a consistent way: the “type” identifies what kind of object we are creating. Microsoft maintain an ARM template reference for each object type, which can be reviewed to determine the required properties of an object and the data type of each expected value.

{
  "name": "string",
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2018-08-01",
  "location": "string",
  "tags": {},
  "properties": {}
}

The template reference for each object can easily be found by concatenating the base reference URL https://docs.microsoft.com/en-gb/azure/templates/ with the type “Microsoft.Network/virtualNetworks” to create the URL: https://docs.microsoft.com/en-gb/azure/templates/Microsoft.Network/virtualNetworks

ARM Template Functions

In programming, a function can be described as a named section of a program that performs a specific task. In this sense, a function is a type of procedure or routine, which usually returns a value. ARM has comprehensive function capabilities which assist with looking up objects, deriving values and reducing the number of lines of code. ARM has functions in the following categories:

  • Array and object functions
  • Comparison functions
  • Deployment value functions
  • Logical functions
  • Numeric functions
  • Resource functions
  • String functions

You add functions to your templates by enclosing them within brackets, [ and ]. The expression is evaluated during deployment. While written as a string literal, the result of evaluating the expression can be of a different JSON type, such as an array, object or integer. Just like in JavaScript, function calls are formatted as functionName(arg1,arg2,arg3). You reference properties by using the dot and [index] operators. Further reading on ARM functions can be found here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions

Step 3 – Play, Practice, Happy Days

Like anything new, it takes time, but with some practice and persistence, you can become a DevOps ARM template guru and remove “Subtle” from your IT department. . . Happy Days.