The Cloud – Anagnorisis and Peripeteia

In my work here at Cloudstep we have two distinct sides to our business: a consulting practice, “Jtwo Solutions”, and a cloud modelling software and services practice, “Cloudstep”. Working across both businesses affords me the benefit of hands-on consulting, technical architecture and implementation, as well as scenario-based cost modelling with a wide range of government and commercial customers.

Recently, I’ve been reflecting on what it is that makes me happy about working with customers within these businesses. I decided to set myself the challenge of coming up with just two words that could articulate this in a concise form.

After reflecting on this for some time, two words came to mind: “anagnorisis and peripeteia”. After sleeping on it for a few days, these words have stuck.

So what the hell are anagnorisis and peripeteia. . . ? In short, Aristotle made these words famous (for me, anyway).

Aristotle

Anagnorisis:  the transition or change from ignorance to knowledge.

Peripeteia: a sudden or unexpected reversal of circumstances or situation.

When considering the meaning of these two words, I think they elegantly describe the two-way street that is IT consulting and cost modelling. I’ve always enjoyed the excitement of the changing IT landscape: ever evolving, disruptive yet inspiring, and endlessly yielding new opportunities.

Opportunity is what business thrives on; competitive advantage can be found here. Businesses that capitalise on the right new knowledge or technology win. The trouble is that “new” is short lived, and you have to stay ahead of the curve. In the fast-paced, evolving IT space, anagnorisis is something you are constantly chasing.

I repeatedly find myself in the position of both educator and student, assisting clients with the relentless learning while doing plenty of learning myself. This is delightful, challenging and terrifying all at the same time, but it’s what makes IT interesting and enjoyable for me.

This brings me to the second word. . . peripeteia. Cloudstep provides customers with a multi-dimensional view of the cost of delivering application workloads. We do this by modelling teams of people, the functions they carry out, the applications they use, the infrastructure those applications live on, and the underlying hosting costs of that infrastructure (servers, storage, networks, data centres).

With this data we can accurately articulate the true cost of a specific workload and conduct a fair comparison with alternative delivery models such as software as a service or a public cloud implementation.

Anagnorisis happens here too, but what is really beautiful is the peripeteia that this knowledge can enable. Cloudstep provides businesses with clarity and can help them see the most cost-effective path forward. For me, happiness is the situation where a business can shift its focus away from undifferentiated workloads and direct its IT resources towards workloads that are specific to its core business, channelling effort into innovation in its own space.

The future of IT, I imagine, is one where we don’t have to spend as much time on undifferentiated workloads; rather, one where we have more time to thrive on the new opportunities that are yet to come.


Azure PowerShell ‘Az’ Module

https://azure.microsoft.com/en-us/blog/azure-powershell-az-module-version-1/

Microsoft released a new PowerShell module specifically for Azure late last year called “Az”. On the plus side, Az ensures that Windows PowerShell and PowerShell Core users can get the latest Azure tooling on every platform, be it Windows PowerShell on Windows or PowerShell Core on my preferred operating system, macOS.

Microsoft state that the Az module will be updated on a two-week cadence and will always be up-to-date, so that’s nice.

I’ve resisted upgrading to the new Az module until the completion of a recent customer engagement, so as to avoid any complexity that a switch in modules might introduce. Call me risk averse, I know. . . So now that the project is complete, I’m excited to make the switch.

OK, so how do I upgrade from AzureRM to Az?

If you’ve been using PowerShell for Azure, you undoubtedly already have the AzureRM module installed. So it’s out with the old and in with the new. . . To accomplish this task I made use of some simple PowerShell to find the installed modules with a name like AzureRM and then uninstall them. Here is the code I lazily leeched from my colleague Arran Peterson after he successfully uninstalled the old modules.

Remove all the old AzureRM modules first. . .

$azurerm = Get-Module -ListAvailable | ? {$_.Name -like "AzureRM*"}
ForEach ($module in $azurerm) {
    $name = $module.Name
    $version = $module.Version
    Uninstall-Module -Name $name -MaximumVersion $version -Force
}

At the time of writing, the latest version available from the PowerShell Gallery is 1.5.0: https://www.powershellgallery.com/packages/Az/1.5.0

To install the module simply open PowerShell on your machine and enter:

Install-Module -Name Az

Boom, it’s that easy. . .

OK great, but won’t this break all my scripts?

So when I first heard of the new module and the change in cmdlet namespace, my first reaction was shock. . . I’ve produced loads of PowerShell for customers over the past couple of years that uses the “AzureRM” named cmdlets.

Microsoft state on their PowerShell Az blog that ‘Users are not required to migrate from AzureRM, as AzureRM will continue to be supported. However, it is important to note that all new Azure PowerShell features will appear only in ‘Az’.’ So my old stuff would continue to work, but they also state ‘Az and AzureRM cannot be executed in the same PowerShell session.’ So I’d need to make customers aware that they cannot mix AzureRM and Az cmdlets within a single session.

This all sounded like a bunch of annoying conversations and explanations I’d be faced with. I began to feel frustrated and questioned why Microsoft saw the need to rename all of their cmdlets. I could feel a hate blog brewing. . .

However, as I read more I came across a diamond in the rough. . . AzureRM aliases. Ah, someone at Microsoft has considered my pain. . . I could feel the catharsis as I read the official migration guide https://github.com/Azure/azure-powershell/blob/master/documentation/migration-guides/Az.1.0.0-migration-guide.md and came across the following statement: ‘To make the transition to these new cmdlet names simpler, Az introduces two new cmdlets, Enable-AzureRmAlias and Disable-AzureRmAlias. Enable-AzureRmAlias creates aliases from the older cmdlet names in AzureRM to the newer Az cmdlet names. The cmdlet allows creating aliases in the current session, or across all sessions by changing your user or machine profile.’
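Based on that guidance, enabling the aliases is a one-liner. A quick sketch (the -Scope values follow the migration guide; pick the one that suits you):

```powershell
# Create AzureRM-named aliases for the new Az cmdlets and persist them
# to your user profile so they survive new sessions.
Enable-AzureRmAlias -Scope CurrentUser

# Existing scripts can now call the old names, e.g. Get-AzureRmVM,
# which the alias resolves to Get-AzVM under the covers.

# Once your scripts are migrated, remove the aliases again.
Disable-AzureRmAlias
```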

What Now?

It’s time for a coffee, then back to more PowerShell. . . Happy days. . .


IPv6 – slowly but surely

I first blogged about IPv6 and the reasons for its slow adoption way back in 2014. A lot can change in the world of ICT over the course of five years, but interestingly, I believe the reasons for slow adoption have remained somewhat constant. I’ve updated my post to include some new thoughts.

The first time I recall there being a lot of hype about IPv6 was way back in the early 2000s. Ever since then, the topic seems to get attention every once in a while and then disappear into insignificance alongside more exciting IT news.

The problem with IPv4 is that there are only about 3.7 billion public IPv4 addresses. Whilst this may initially sound like a lot, take a moment to think about how many devices you currently have that connect to the Internet. Globally we have already experienced a rapid uptake of Internet-connected smartphones, and the recent hype surrounding the Internet of Things (IoT) promises to connect an even larger array of devices to the Internet. With a global population (according to http://www.worldometers.info/world-population/) of approx. 7.7 billion people, we just don’t have enough addresses to go around.
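To put those numbers in concrete terms, here’s a quick back-of-the-envelope sketch (Python used purely for illustration):

```python
# The IPv4 address space is 32 bits; IPv6 is 128 bits.
ipv4_total = 2 ** 32    # ~4.3 billion, before reserved/private ranges are
                        # subtracted to reach the ~3.7 billion public figure
ipv6_total = 2 ** 128   # a practically inexhaustible ~3.4e38

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.3e}")

# At roughly 7.7 billion people, IPv4 can't even manage one address each.
per_person = ipv4_total / 7_700_000_000
print(f"IPv4 addresses per person: {per_person:.2f}")
```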

Back in the early 2000s there was limited hardware and software support for IPv6. Now that widespread hardware and software support exists, why is it that we haven’t all switched?

Like most things in the world, it’s often determined by the capacity to monetise an event. Surprisingly, not all carriers and ISPs are on board, and some are reluctant to spend money to drive the switch. APNIC publish stats (https://stats.labs.apnic.net/ipv6/) that suggest Australia is currently sitting at 14% uptake, lagging behind other developed countries.

Network address translation (NAT) and Classless Inter-Domain Routing (CIDR) have made it much easier to live with IPv4. NAT, used on firewalls and routers, lets many nodes in a network sit behind a single public IP address. CIDR, sometimes referred to as supernetting, is a way to allocate and specify the Internet addresses used in inter-domain routing in a much more flexible manner than the original system of Internet Protocol (IP) address classes. As a result, the number of available Internet addresses has been greatly increased, and service providers can conserve addresses by divvying up pieces of a full range of IP addresses to multiple customers.
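The flexibility CIDR brings is easy to see with Python’s standard `ipaddress` module. A small sketch (the prefixes are made up for illustration):

```python
import ipaddress

# A hypothetical provider allocation of 1,024 addresses (a /22)...
block = ipaddress.ip_network("10.0.0.0/22")
print(block.num_addresses)                  # 1024

# ...divvied up into four /24s for four different customers.
customers = list(block.subnets(new_prefix=24))
for net in customers:
    print(net)                              # 10.0.0.0/24 ... 10.0.3.0/24

# NAT compounds the saving: a whole RFC 1918 private network can sit
# behind a single public address.
office = ipaddress.ip_network("192.168.0.0/16")
print(office.num_addresses)                 # 65536 private addresses
```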

Unsurprisingly, enterprise adoption in Australia is slow; perceived risk comes into play. It is plausible that many companies view the introduction of IPv6 as somewhat unnecessary and potentially risky in terms of the effort required to implement it and the loss of productivity during implementation. Most corporations are simply not feeling any pain with IPv4, so it’s not on their short-term radar as being of any level of criticality to their business. From a business perspective, the successful adoption of a new technology is typically accompanied by some form of reward or competitive advantage associated with early adoption. The potential for financial reward is often what drives significant change.

To IPv6’s detriment, from the layperson’s perspective it has little to distinguish itself from IPv4 in terms of services and service costs. Many of IPv4’s shortcomings have been addressed, and financial incentives to commence widespread deployment just don’t exist.

We have all heard the doom and gloom stories associated with the impending end of IPv4. Surely this should be reason enough for accelerated implementation of IPv6? Why isn’t everyone rushing to implement IPv6 and mitigate future risk? The feared situation where exhaustion of IPv4 addresses causes a rapid escalation in costs to consumers hasn’t really happened yet, and so has failed to be a significant factor in encouraging further deployment of IPv6 across the Internet.

Another factor to consider is backward compatibility. IPv4 hosts are unable to address IP packets directly to an IPv6 host, and vice versa. This means it is not realistic to simply switch a network over from IPv4 to IPv6. When implementing IPv6, a significant period of dual-stack IPv4 and IPv6 coexistence needs to take place, where IPv6 is turned on and run in parallel with the existing IPv4 network. Again, from an enterprise perspective, I suspect this just sounds like two networks instead of one, and double the administrative overhead, to most IT decision makers.
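The incompatibility is visible right down at the address level. A small illustration with Python’s `ipaddress` module: an IPv4 address only “fits” inside IPv6 via the special IPv4-mapped range that dual-stack software uses internally, which is a host-side convention rather than on-the-wire interoperability:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")      # IPv4 documentation range
v6 = ipaddress.ip_address("2001:db8::1")    # IPv6 documentation range

# The two are entirely different address types with different sizes.
print(type(v4).__name__, v4.max_prefixlen)  # IPv4Address 32
print(type(v6).__name__, v6.max_prefixlen)  # IPv6Address 128

# Dual-stack hosts can represent an IPv4 peer as an IPv4-mapped IPv6
# address, but an IPv4-only host still cannot receive IPv6 packets.
mapped = ipaddress.ip_address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)                   # 192.0.2.1
```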

Networks need to provide continued support for IPv4 for as long as there are significant levels of IPv4 only networks and services still deployed. Many IT decision makers would rather spend their budget elsewhere and ignore the issue for another year.

Only once the majority of the Internet supports a dual stack environment can networks start to turn off their continued support for IPv4. Therefore, while there is no particular competitive advantage to be gained by early adoption of IPv6, the collective internet wide decommissioning of IPv4 is likely to be determined by the late adopters.

So what should I do?

It’s important to understand where you are now and arm yourself with enough information to plan accordingly.

  • Check if your ISP is currently supporting IPv6 by visiting a website like http://testmyipv6.com/. There is a dual stack test which will let you know if you are using IPv4 alongside IPv6.
  • Understand if the networking equipment you have in place supports IPv6.
  • Understand if all your existing networked devices (everything that consumes an IP address) supports IPv6.
  • Ensure that all new device acquisitions fully support IPv6.
  • Understand if the services you consume support IPv6. If you are making use of public cloud providers, check whether the services you consume support IPv6 or have a roadmap to it.

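As a quick first data point on the machine in front of you, here’s a small Python sketch (results will obviously vary from host to host):

```python
import socket

# Whether this Python/OS build was compiled with IPv6 support at all.
print("IPv6 support compiled in:", socket.has_ipv6)

# Try opening an IPv6 socket; failure here usually means the IPv6 stack
# is disabled on this host even though support was compiled in.
try:
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.close()
    print("IPv6 socket created OK")
except OSError as exc:
    print("IPv6 socket failed:", exc)
```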
Whilst there is no official switch-off date for IPv4, the reality is that IPv6 isn’t going away, and as IT decision makers we can’t postpone planning for its implementation indefinitely. Take the time now to understand where your organisation is at. Make your transition to IPv6 a success story!


YAML it Rhymes with Camel

I’ve blogged before about my passion for automation and the use of ARM templating in the Azure world to eradicate the burden of dull and mundane tasks from the daily routine of the system administrators for whom I consult.

I loathe repetitive tasks; it’s in this space that subtle differences and inconsistency love to live. Recently I was asked to help out with a simple task: provisioning a couple of EC2 Windows servers in AWS. So, in the spirit of infrastructure as code, I thought there was no better time to try out AWS CloudFormation to describe my EC2 instances. I’ve used CloudFormation in the past, but always describing my stack in JSON. CloudFormation also supports YAML, so challenge accepted and away I went. . .

So what is YAML anyway. . . Yet Another Markup Language? Interestingly, it’s described at the official YAML website (https://yaml.org) as “YAML Ain’t Markup Language”: a “human friendly data serialisation standard for all programming languages”.

What attracted me to YAML is its simplicity: there are no curly braces {}, just indenting. It’s also super easy to read. So if JSON looks a bit too code-like for your liking, YAML may be a more palatable alternative.
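To see the difference side by side, here’s the same made-up record in both notations:

```yaml
# The JSON version: {"server": {"name": "web01", "ports": [80, 443]}}
# The equivalent YAML: no braces, brackets or quoting, just indentation.
server:
  name: web01
  ports:
    - 80
    - 443
```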

So how would you get started? As you’d expect, AWS have extensive CloudFormation documentation. The AWS::EC2::Instance resource is described here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-volumes. You’ll notice that there is a syntax description for both JSON and YAML. The YAML looks like this:


Type: AWS::EC2::Instance
Properties: 
  Affinity: String
  AvailabilityZone: String
  BlockDeviceMappings: 
    - EC2 Block Device Mapping
  CreditSpecification: CreditSpecification
  DisableApiTermination: Boolean
  EbsOptimized: Boolean
  ElasticGpuSpecifications: [ ElasticGpuSpecification, ... ]
  ElasticInferenceAccelerators: 
    - ElasticInferenceAccelerator
  HostId: String
  IamInstanceProfile: String
  ImageId: String
  InstanceInitiatedShutdownBehavior: String
  InstanceType: String
  Ipv6AddressCount: Integer
  Ipv6Addresses:
    - IPv6 Address Type
  KernelId: String
  KeyName: String
  LaunchTemplate: LaunchTemplateSpecification
  LicenseSpecifications: 
    - LicenseSpecification
  Monitoring: Boolean
  NetworkInterfaces: 
    - EC2 Network Interface
  PlacementGroupName: String
  PrivateIpAddress: String
  RamdiskId: String
  SecurityGroupIds: 
    - String
  SecurityGroups: 
    - String
  SourceDestCheck: Boolean
  SsmAssociations: 
    - SSMAssociation
  SubnetId: String
  Tags: 
    - Resource Tag
  Tenancy: String
  UserData: String
  Volumes: 
    - EC2 MountPoint
  AdditionalInfo: String

With this as a starting point I was quickly able to build an EC2 instance and customise my YAML to do some extra things.

If you’ve got this far and YAML is starting to look like it might be the ticket for you, it’s worth familiarising yourself with the CloudFormation built-in functions. You can use these to do things like assign values to properties that are not available until runtime.

Fn::Base64
Fn::Cidr
Condition Functions
Fn::FindInMap
Fn::GetAtt
Fn::GetAZs
Fn::Join
Fn::Select
Fn::Split
Fn::Sub
Fn::Transform
Ref

The link to the complete Intrinsic Function Reference can be found here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
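As a taste, here’s a small hypothetical fragment (resource names invented for illustration) using `Ref`, `Fn::Sub` and `Fn::Join` in their short-form YAML syntax:

```yaml
Resources:
  ArtefactBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Fn::Sub interpolates runtime values into a string...
      BucketName: !Sub '${AWS::StackName}-artefacts'

Outputs:
  ArtefactBucketArn:
    # ...while Fn::Join concatenates a list with a delimiter,
    # and Ref returns the bucket's name at runtime.
    Value: !Join ['', ['arn:aws:s3:::', !Ref ArtefactBucket]]
```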

With a learning curve of a couple of hours, including a bit of googling and messing around, I was able to achieve my goal. I built an EC2 instance, applied tagging and installed some Windows features post-build via a PowerShell script (downloaded from S3 and launched with AWS::CloudFormation::Init cfn-init.exe), all without having to log on to the server or touch the console. Here is a copy of my YAML. . .


AWSTemplateFormatVersion: "2010-09-09"
Description: CloudFormation Template to deploy an EC2 instance
Parameters: 
  Hostname: 
    Type: String
    Description: Hostname - maximum 15 characters
    MaxLength: '15'    
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base
  InstanceSize: 
    Type: String
    Description: Instance Size
    Default: t2.micro
    AllowedValues:
      - "t2.micro" 
      - "t2.small"
      - "t2.medium"
  AvailabilityZone:
    Type: String
    Description: Default AZ
    AllowedValues: 
      - ap-southeast-2a
      - ap-southeast-2b
      - ap-southeast-2c
    Default: ap-southeast-2a
  KeyPair: 
    Type: String
    Description: KeyPair Name
    Default: jtwo
  S3BucketName:
    Default: NotARealBucket
    Description: S3 bucket containing boot artefacts
    Type: String
  
  # tag values
  awPurpose: 
    Type: String
    Description: A plain English description of what the object is for.
    Default: WindowsServer2019 Domain Controller
  awChargeTo: 
    Type: String
    Description: Billing Code for charge back of resource.
    Default: IT-123
  awRegion: 
    Type: String
    Description: Accolade Wines Region not AWS. 
    Default: Australia
  awExpiry: 
    Type: String
    Description: The date when the resource(s) can be considered for decommissioning.
    Default: 01-01-2022
  awBusinessSegment: 
    Type: String
    Description: Agency code.
    Default: ICT
  awEnvironment: 
    Type: String
    Description: Specific environment for resource.
    AllowedValues: 
      - prod
      - prodServices
      - nonprod
      - uat
      - dev
      - test 
  awApplication: 
    Type: String
    Description: A single or multiple word with the name of the application that the infrastructure supports. "JDE", "AD", "Apache", "Utility", "INFOR", "PKI".
    Default: AD

Mappings:
  SubnetMap: 
    ap-southeast-2a:
      prodServices: "subnet-idGoesHere"
    ap-southeast-2b:
      prodServices: "subnet-idGoesHere"
    ap-southeast-2c:
      prodServices: "subnet-idGoesHere"
      
# Resources
Resources:
  # IAM Instance Profile
  Profile:
    Type: 'AWS::IAM::InstanceProfile'
    Properties:
      Roles:
        - !Ref HostRole
      Path: /
      InstanceProfileName: !Join
        - ''
        - - 'instance-profile-'
          - !Ref S3BucketName
  HostRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Join
        - ''
        - - 'role-s3-read-'
          - !Ref S3BucketName
      Policies:
        - PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Action:
                  - 's3:GetObject'
                Resource: !Join
                  - ''
                  - - 'arn:aws:s3:::'
                    - !Ref S3BucketName
                    - '/*'
                Effect: Allow
          PolicyName: s3-policy-read
      Path: /
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - 'sts:AssumeRole'
            Principal:
              Service:
                - ec2.amazonaws.com
            Effect: Allow
        Version: 2012-10-17  

  # ENI
  NIC1:
    Type: AWS::EC2::NetworkInterface
    Properties: 
      Description: !Sub 'ENI for EC2 instance: ${Hostname}-${awEnvironment}'
      GroupSet:
          - sg-050cadbf0e159b0ac
      SubnetId: !FindInMap [SubnetMap, !Ref AvailabilityZone, !Ref awEnvironment]
      Tags:
        - Key: Name
          Value: !Sub '${Hostname}-eni'
  
  # EC2 Instance
  Instance:
    Type: 'AWS::EC2::Instance'
    Metadata:
      'AWS::CloudFormation::Authentication':
        S3AccessCreds:
          type: S3
          buckets:
            - !Ref S3BucketName
          roleName: !Ref HostRole
      'AWS::CloudFormation::Init':
        configSets: 
          config:
            - get-files 
            - configure-instance
        get-files:
          files:
            'c:\s3-downloads\scripts\Add-WindowsFeature.ps1':
              source: https://NotARealBucket.s3.amazonaws.com/scripts/Add-WindowsFeature.ps1
              authentication: S3AccessCreds
        configure-instance:
          commands:
            1-set-powershell-execution-policy:
              command: >-
                powershell.exe -Command "Set-ExecutionPolicy UnRestricted -Force"
              waitAfterCompletion: '0'
            2-rename-computer:
              command: !Join
                - ''
                - - 'powershell.exe -Command "Rename-Computer -Restart -NewName '
                  - !Ref Hostname
                  - '"'
              waitAfterCompletion: forever
            3-install-windows-components:
              command: >-
                powershell.exe -Command "c:\s3-downloads\scripts\Add-WindowsFeature.ps1"
              waitAfterCompletion: '0'


    Properties:
      DisableApiTermination: 'false'
      AvailabilityZone: !Sub "${AvailabilityZone}"
      InstanceInitiatedShutdownBehavior: stop
      IamInstanceProfile: !Ref Profile
      ImageId: !Ref LatestAmiId
      InstanceType: !Sub "${InstanceSize}"
      KeyName: !Sub "${KeyPair}"
      UserData: !Base64
        'Fn::Join':
          - ''
          - - "<script>\n"
            - "cfn-init.exe "
            - " --stack "
            - "Ref": "AWS::StackId"
            - " --resource Instance"
            - " --region "
            - "Ref": "AWS::Region"
            - " --configsets config"
            - " -v \n"
            - "cfn-signal.exe "
            - " --exit-code 0"
            - " --region "
            - "Ref": "AWS::Region"
            - " --resource Instance"
            - " --stack "
            - "Ref": "AWS::StackName"
            - "\n"
            - "</script>"
      Tags:
        - Key: Name
          Value: !Sub "${Hostname}"
        - Key: awPurpose
          Value: !Sub "${awPurpose}"
        - Key: awChargeTo
          Value: !Sub "${awChargeTo}"
        - Key: awRegion
          Value: !Sub "${awRegion}"
        - Key: awExpiry
          Value: !Sub "${awExpiry}"
        - Key: awBusinessSegment
          Value: !Sub "${awBusinessSegment}"
        - Key: awEnvironment
          Value: !Sub "${awEnvironment}"
        - Key: awApplication
          Value: !Sub "${awApplication}"

      NetworkInterfaces:
        - NetworkInterfaceId: !Ref NIC1
          DeviceIndex: 0

Outputs:
  InstanceId:
    Description: 'InstanceId'
    Value: !Ref Instance
    Export:
      Name: !Sub '${Hostname}-${awEnvironment}-InstanceId'
  InstancePrivateIP:
    Description: 'InstancePrivateIP'
    Value: !GetAtt Instance.PrivateIp
    Export:
      Name: !Sub '${Hostname}-${awEnvironment}-InstancePrivateIP'

So my question now is, why doesn’t Azure also support YAML?


Get your Flow on. .

I’ve talked before about my passion for automation. I loathe doing repetitive tasks and fear inconsistency whilst undertaking them. It’s not that I’m lazy; I recognise that people are generally busy, and sometimes it’s hard to maintain focus on repetitive tasks. It’s easy to forget a step here and there amongst everything else that’s going on in your day.

Software has evolved over the years to the point where all decent software includes a public Application Programming Interface (API) that provides consumers with access to functions and procedures to obtain and manipulate data, and generally perform useful tasks. If you are thinking “yeah, this is awesome, but I’m not a developer, I don’t know how to invoke an API, this all sounds too difficult”. . . let me introduce you to Microsoft Flow.

What is Flow?

Flow isn’t a new concept; it’s been around for a while, and Zapier and IFTTT are both awesome, mature products in this space that do much the same sort of thing. What makes Flow stand out is that it’s included as part of an Office 365 subscription. It’s something you likely already have access to. This is awesome, because you don’t need to ask for permission to purchase another app or subscription. The barriers to entry that stifle innovation likely aren’t there. . . You can get started and experiment right away.

That said, like most software, there are multiple plans and entitlements. Detailed information can be found here: https://australia.flow.microsoft.com/en-us/pricing/

So what can flow do for me?

To a degree, this is really only limited by your imagination and the quality of the software products you interact with. The good news is that, as I write this in 2018, most organisations I interact with use modern software and cloud services that will definitely work with Flow. Furthermore, it’s not limited to just the Microsoft stack; you can use Flow with third-party software.

I like to think of Flow as a means to glue otherwise disparate software together. The concept is pretty simple: you choose a starting point to be your trigger, and the trigger then results in actions somewhere else. Put simply, an action in one place lets you trigger a sequence of events somewhere else.

Here at Cloudstep we use Flow with our WordPress blog: every time we publish a new post, Flow detects this, posts a link to it on our LinkedIn company page and sends a tweet on Twitter. Sounds like magic. . . it’s not really; it’s just using the APIs behind the scenes, no coding required. . . Painless awesomeness.

If you think about your daily activities, there are likely several workflows just like this. Just remember automation doesn’t have to be elaborate to make a real difference. 

Flow provides a nice dashboard view with the state of your connections and traffic light status on the run history.

Another nice feature of Flow is ‘Team Flows’, which allows you to share your automated workflows with others inside your organisation, removing another pet hate of mine: single points of failure within a workflow.

Still struggling for inspiration?  Microsoft provide hundreds of examples within their template library: https://australia.flow.microsoft.com/en-us/templates/

So as the year draws to a close, if you are fortunate enough to have some idle time, have a play with Flow, get automating and put some time in the bank for next year!