Google Cloud’s Second Region in Australia

Google Cloud Platform (GCP) has extended its reach in Australia and New Zealand (ANZ) with a second region in Melbourne.

Why does this matter?

Having two regions inside Australia allows customers to extend their architecture for highly available or disaster-recovery solutions. Google now joins Azure (which has three) in having multiple regions inside Australia, and no doubt we will keep a close eye on AWS, which was the first public cloud provider to open a Sydney region many moons ago.

What’s different about Google’s Regions?

The distinguishing network feature that sets GCP apart from its rivals is how it lets customers design their Virtual Private Cloud (VPC). GCP allows subnets in a single VPC to span as many regions as you like, so you can create one globally distributed VPC with subnets in the Americas, Asia and Europe, or build a logical DMZ zone with a subnet in each region for your globally distributed web services. The other cloud providers' software-defined networks are region-specific, and peering between them must be set up, which incurs bandwidth usage and connection charges. Thoughts go through my head as to how you would build a globally distributed VPC with local on-ramps for each region's on-premises networks. Nonetheless, allowing your traffic from Asia to Europe to traverse Google's backbone could make life easier. The devil is in the detail: what is also unique to Google is the VM-VM egress charge for cross-region traffic, which is really saying the same thing as peering, just putting the price on a different object. All things to think about carefully when planning your cloud deployments. Maybe in some circumstances, based on the pricing model, Google outweighs the other heavyweight hitters.
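
To make that concrete, here is a minimal sketch of a custom-mode VPC with subnets in two regions, created with the gcloud CLI (the network name, subnet names and IP ranges are placeholders, not from a real deployment):

# One VPC, custom subnet mode (names and ranges are illustrative only)
gcloud compute networks create global-vpc --subnet-mode=custom

# Subnets in different regions, same VPC - no peering required between them
gcloud compute networks subnets create dmz-syd --network=global-vpc --region=australia-southeast1 --range=10.1.0.0/24
gcloud compute networks subnets create dmz-euw --network=global-vpc --region=europe-west1 --range=10.2.0.0/24

Instances in those two subnets can talk to each other over Google's backbone without any peering configuration, although the cross-region VM-VM egress rates discussed below still apply.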

What will be interesting to see is whether, with Sydney and Melbourne both on board, the VM-VM egress pricing is updated to include egress between Google Cloud regions within Australia. Currently, going by what is written on the tin, I'd assume it falls under egress between Google Cloud regions within Oceania (per GB) at $0.08, where Oceania includes Australia, New Zealand, and surrounding Pacific Ocean islands such as Papua New Guinea and Fiji, but excludes Hawaii.

Google has a nice inter-region latency and throughput pivot table, with metrics from a PerfKit Benchmarker test. The current lowest-latency path from Sydney is to asia-southeast1 (Singapore) at ~92ms; Melbourne to Sydney will definitely knock the socks off that within ANZ.

Google Cloud Inter-region Latency and Throughput

Google's VPC network example (here)


Anatomy of a Successful Cloud Migration

Free eBook Download

Download Now

“Through 2024, 80% of companies that are unaware of the mistakes made in their cloud adoption will overspend by 20 to 50%.” (4 Lessons Learned From Cloud Infrastructure Adopters – gartner.com)

It’s 2021 and you’re ready to move to the cloud. But first, you’ll want to do your research.

This eBook will walk you through:

  • Challenges faced by individuals within an organisation when migrating to the cloud
  • How to solve these challenges efficiently and effectively with powerful tools
  • How to best align business and IT goals to make sure everything runs smoothly

With years of experience working with Government and Enterprise organisations at a State and Local level, we want to share the lessons we've learnt. From procurement to funding to security challenges, we've covered everything you need to know to migrate to the cloud successfully.

Complete the form for instant access to this eBook.


SQL Database Backup on IaaS using Azure Automation

I had a need to take a full SQL database backup from an Azure-hosted virtual machine running SQL Server. This is done via an Azure Automation account executing a runbook on a hybrid worker, and it is a great way to take an offline copy of your production SQL and store it someplace safe.

To accomplish this we will use the PowerShell module 'sqlps', which should already be installed with SQL Server, and run the command Backup-SqlDatabase.

Backup-SqlDatabase (SqlServer) | Microsoft Docs

Store SQL Storage Account Credentials

Before we can run the Backup-SqlDatabase command, we must store a credential for the Storage Account in SQL Server using New-SqlCredential.

New-SqlCredential (SqlServer) | Microsoft Docs

Import-Module sqlps
# set parameters
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccountName>"  
$storageKey = "<storageAccountKey>"  
$secureString = ConvertTo-SecureString $storageKey -AsPlainText -Force  
$credentialName = "azureCredential-"+$storageAccount

Write-Host "Generate credential: " $credentialName
  
#cd to sql server and get instances  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and create a SQL credential, output any errors
foreach ($instance in $instances) {
    try {
        $path = "$($sqlPath)\$($instance.DisplayName)\credentials"
        New-SqlCredential -Name $credentialName -Identity $storageAccount -Secret $secureString -Path $path -ErrorAction Stop | Out-Null
        Write-Host "...generated credential $($path)\$($credentialName)."
    }
    catch {
        Write-Host $_.Exception.Message
    }
}

Backup SQL Databases with an Azure Runbook

The runbook below works on the DEFAULT instance and excludes both tempdb and model from backup.

Import-Module sqlps
$sqlPath = "sqlserver:\sql\$($env:COMPUTERNAME)"
$storageAccount = "<storageAccount>"  
$blobContainer = "<containerName>"  
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"  
$credentialName = "azureCredential-"+$storageAccount
$prefix = Get-Date -Format yyyyMMdd

Write-Host "Generate credential: " $credentialName

Write-Host "Backup database: " $backupUrlContainer
  
cd $sqlPath
$instances = Get-ChildItem

#loop through instances and backup all databases (excluding tempdb and model)
foreach ($instance in $instances) {
    $path = "$($sqlPath)\$($instance.DisplayName)\databases"
    $databases = Get-ChildItem -Force -Path $path | Where-Object { $_.Name -ne "tempdb" -and $_.Name -ne "model" }

    foreach ($database in $databases) {
        try {
            $databasePath = "$($path)\$($database.Name)"
            Write-Host "...starting backup: " $databasePath
            $fileName = $prefix + "_" + $database.Name + ".bak"
            $destinationBakFileName = $fileName
            $backupFileURL = $backupUrlContainer + $destinationBakFileName
            Write-Host "...backup URL: " $backupFileURL
            Backup-SqlDatabase -Database $database.Name -Path $path -BackupFile $backupFileURL -SqlCredential $credentialName -CompressionOption On
            Write-Host "...backup complete."
        }
        catch {
            Write-Host $_.Exception.Message
        }
    }
}


NOTE: You will notice a performance hit on the SQL Server while the backup runs, so schedule this runbook in a maintenance window.
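
If you want that maintenance window handled for you, the runbook can be linked to an Automation schedule and pinned to the hybrid worker group. Here is a rough sketch using the Az.Automation cmdlets; the resource group, account, runbook, schedule and worker group names below are placeholders to substitute with your own:

# Weekly schedule that fires during the maintenance window (all names are placeholders)
$params = @{
    ResourceGroupName     = "rg-automation"
    AutomationAccountName = "aa-sqlbackup"
}
New-AzAutomationSchedule @params -Name "sql-backup-weekly" `
    -StartTime ((Get-Date).Date.AddDays(1).AddHours(2)) -WeekInterval 1 -DaysOfWeek Sunday

# Link the runbook to the schedule and target the hybrid worker group
Register-AzAutomationScheduledRunbook @params -RunbookName "Backup-SqlDatabases" `
    -ScheduleName "sql-backup-weekly" -RunOn "hybrid-worker-group"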


Deploy Craft CMS with Azure App Service for Linux Containers

Here are some key points for deploying a Craft CMS installation on an Azure Web App using container images. In this blog we will step you through some of the modifications needed to make the container image run in Azure, and the deployment steps to run it all in an Azure DevOps pipeline.

Craft CMS has reference material for its Docker deployments here:
GitHub – craftcms/docker: Craft CMS Docker images

Components

The components required are:

  • Azure Web App for Linux Containers
  • Azure Database for MySQL
  • Azure Storage Account
  • Azure Front Door with WAF
  • Azure Container Registry

Custom Docker Image

To make this work in an Azure Web App we have to do the following additional steps:

  • Install OpenSSH & Enable SSH daemon on 2222 at startup
  • Set the password for root to “Docker!”
  • Install the Azure Database for MySQL root certificates for SSL connections from the Container

We do this in the Dockerfile. We are customising the NGINX implementation of Craft CMS to allow the front end to service the HTTP/HTTPS requests from the App Service.

# composer dependencies
FROM composer:1 as vendor
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --ignore-platform-reqs --no-interaction --prefer-dist

FROM craftcms/nginx:7.4
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
USER root
RUN apk add openssh sudo \
     && echo "root:Docker!" | chpasswd 
# Copy the sshd_config file to the /etc/ directory
COPY sshd_config /etc/ssh/
COPY start.sh /etc/start.sh
COPY BaltimoreCyberTrustRoot.crt.pem /etc/BaltimoreCyberTrustRoot.crt.pem 
RUN ssh-keygen -A
RUN addgroup sudo
RUN adduser www-data sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# the user is `www-data`, so we copy the files using the user and group
USER www-data
COPY --chown=www-data:www-data --from=vendor /app/vendor/ /app/vendor/
COPY --chown=www-data:www-data . .

EXPOSE 8080 2222
ENTRYPOINT ["sh", "/etc/start.sh"]

The corresponding 'start.sh':

#!/bin/bash
sudo /usr/sbin/sshd &
/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisor.conf
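
Before handing the image to a pipeline, it is worth a quick local sanity check. A rough sketch (the image tag is a placeholder, and without the DB_* variables Craft itself won't fully boot, but you can confirm NGINX and sshd come up on the expected ports):

# Build the custom image and run it locally (tag is a placeholder)
docker build -t craftcms-azure:dev .
docker run --rm -p 8080:8080 -p 2222:2222 craftcms-azure:dev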

Build the Web App

The Azure Web App resource is deployed using an ARM template. Here is a snippet of the template; the key is to have your environment variables defined:

{
            "comments": "This is the docker web app running craftcms/custom Docker image",
            "type": "Microsoft.Web/sites",
            "name": "[parameters('siteName')]",
            "apiVersion": "2020-06-01",
            "location": "[parameters('location')]",
            "tags": "[parameters('tags')]",
            "dependsOn": [
                "[variables('hostingPlanName')]",
                "[variables('databaseName')]"
            ],
            "properties": {
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "DOCKER_REGISTRY_SERVER_URL",
                            "value": "[reference(variables('registryResourceId'), '2019-05-01').loginServer]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_USERNAME",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').username]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').passwords[0].value]"
                        },
                        {
                            "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
                            "value": "false"
                        },
                        {
                            "name": "DB_DRIVER",
                            "value": "mysql"
                        },
                        {
                            "name": "DB_SERVER",
                            "value": "[reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName]"
                        },
                        {
                            "name": "DB_PORT",
                            "value": "3306"
                        },
                        {
                            "name": "DB_DATABASE",
                            "value": "[variables('databaseName')]"
                        },
                        {
                            "name": "DB_USER",
                            "value": "[variables('databaseUserName')]"
                        },
                        {
                            "name": "DB_PASSWORD",
                            "value": "[parameters('administratorLoginPassword')]"
                        },
                        {
                            "name": "DB_SCHEMA",
                            "value": "public"
                        },
                        {
                            "name": "DB_TABLE_PREFIX",
                            "value": ""
                        },
                        {
                            "name": "SECURITY_KEY",
                            "value": "[parameters('cmsSecurityKey')]"
                        },
                        {
                            "name": "WEB_IMAGE",
                            "value": "[parameters('containerImage')]"
                        },
                        {
                            "name": "WEB_IMAGE_PORTS",
                            "value": "80:8080"
                        }

                    ],
                    "linuxFxVersion": "[variables('linuxFxVersion')]",
                    "scmIpSecurityRestrictions": [
                        
                    ],
                    "scmIpSecurityRestrictionsUseMain": false,
                    "minTlsVersion": "1.2",
                    "scmMinTlsVersion": "1.0"
                },
                "name": "[parameters('siteName')]",
                "serverFarmId": "[variables('hostingPlanName')]",
                "httpsOnly": true      
            },
            "resources": [
                {
                    "apiVersion": "2020-06-01",
                    "name": "connectionstrings",
                    "type": "config",
                    "dependsOn": [
                        "[resourceId('Microsoft.Web/sites/', parameters('siteName'))]"
                    ],
                    "tags": "[parameters('tags')]",
                    "properties": {
                        "dbstring": {
                            "value": "[concat('Database=', variables('databaseName'), ';Data Source=', reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName, ';User Id=', parameters('administratorLogin'),'@', variables('serverName'),';Password=', parameters('administratorLoginPassword'))]",
                            "type": "MySQL"
                        }
                    }
                }
            ]
        },

All other resources can use ARM defaults; no customisation required. Either put them all in a single ARM template or separate them out on their own. Your choice to be creative.
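
If you want to validate the template before wiring it into the pipeline, a minimal sketch with Az.Resources is below; the resource group name is a placeholder, and the file names assume you have the same template and parameter files referenced by the pipeline variables in the next section sitting locally:

# Test the ARM template against a sandbox resource group (names are placeholders)
New-AzResourceGroupDeployment -ResourceGroupName "rg-craftcms-test" `
    -TemplateFile ".\singleCraftCMSTemplate.json" `
    -TemplateParameterFile ".\singleCraftCMSTemplate.parameters.json" `
    -Mode Incremental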

Build Pipeline

The infrastructure build pipeline looks something like the below:

# Infrastructure pipeline
trigger: none

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  CMSSINGLE: 'singleCraftCMSTemplate.json'
  CMSSINGLEPARAM: 'singleCraftCMSTemplate.parameters.json'
  CMSFILEREG: 'ContainerRegistry.json'
  CMSFRONTDOOR: 'frontDoor.json'
  CMSFILEREGPARAM: 'ContainerRegistry.parameters.json'
  CMSFRONTDOORPARAM: 'frontDoor.parameters.json'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  AZURECLISPID: ''
  TENANTID: ''
  RGNAME: ''
  TOKEN: ''
  ACS : 'registryName.azurecr.io'
resources:
  repositories:
    - repository: coderepo
      type: git
      name: Project/craftcms
stages:
- stage: BuildContainerRegistry
  displayName: BuildRegistry
  jobs:
  - job: BuildContainerRegistry
    displayName: Azure Git Repository
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: AzureFileCopy@4
      inputs:
        SourcePath: '$(Build.Repository.LocalPath)\CMS\template\*'
        azureSubscription: ''
        Destination: 'AzureBlob'
        storage: ''
        ContainerName: 'templates'
        BlobPrefix: portal
        AdditionalArgumentsForBlobCopy: --recursive=true
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM template for Azure Container Registry"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: 'azureDeployCLI-SP'
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFILEREG)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFILEREGPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Import the public docker images to the Azure Container Repository'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\dockerImages.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'

- stage: BuildGeneralImg
  dependsOn: BuildContainerRegistry
  displayName: BuildImages
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: craftcms
        command: buildAndPush
        dockerfile: 'craftcms/Dockerfile'
        tags: |
          craftcms
          latest

- stage: Deploy 
  dependsOn: BuildGeneralImg
  displayName: DeployWebService
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for remaining assets"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSSINGLE)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSSINGLEPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'

- stage: Secure 
  dependsOn: Deploy
  displayName: DeployFrontDoor
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for Front Door"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFRONTDOOR)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFRONTDOORPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Apply Front Door service tags to Web App ACLs'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\enableFrontDoorOnWebApp.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'    

Enable Front Door with WAF

The pipeline stage DeployFrontDoor runs enableFrontDoorOnWebApp.ps1:

$azFrontDoorName = ""
$webAppName = ""
$resourceGroup = ""

Write-Host "INFO: Restrict access to a specific Azure Front Door instance"
try {
    $afd = Get-AzFrontDoor -Name $azFrontDoorName -ResourceGroupName $resourceGroup -ErrorAction Stop
}
catch {
    Write-Host "ERROR: $($_.Exception.Message)"
}

Write-Host "INFO: Setting the IP ranges defined in the AzureFrontDoor.Backend service tag on the Web App"
try {
    Add-AzWebAppAccessRestrictionRule -ResourceGroupName $resourceGroup -WebAppName $webAppName `
        -Name "Front Door Restrictions" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend `
        -HttpHeader @{ 'x-azure-fdid' = $afd.FrontDoorId } -ErrorAction Stop
}
catch {
    Write-Host "ERROR: $($_.Exception.Message)"
}


You should now have a Craft CMS web app that is only available through the Front Door URL.

Continuous Deployment

There are many ways to deploy updates to your website; an Azure Web App has a beautiful thing called deployment slots that can be used.

# Trigger on commit
# Build and push an image to Azure Container Registry
# Update Web App Slot

trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - pipelines
      - README.md
  batch: true

resources:
- repo: self

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  RGNAME: ''
  TOKEN: ''
  SASTOKEN: ''
  TAG: '$(Build.BuildId)'
  CONTAINERREGISTRY: 'registryName.azurecr.io'
  IMAGEREPOSITORY: 'craftcms'
  APPNAME: ''

stages:
- stage: BuildImg
  displayName: BuildLatestImage
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: $(IMAGEREPOSITORY)
        command: buildAndPush
        dockerfile: 'Dockerfile'
        tags: |
          $(IMAGEREPOSITORY)
          $(TAG)


- stage: UpdateApp 
  dependsOn: BuildImg
  displayName: UpdateTestSlot
  jobs:
  - job:
    displayName: 'Update Web App Slot'
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: AzureWebAppContainer@1
      displayName: 'Update Web App Container Image Reference' 
      inputs:
        azureSubscription: ''
        appName: $(APPNAME)
        containers: $(CONTAINERREGISTRY)/$(IMAGEREPOSITORY):$(TAG)
        deployToSlotOrASE: true
        resourceGroupName: $(RGNAME)
        slotName: test
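
Once the new image has been verified in the test slot, promoting it is just a slot swap. A minimal sketch with Az.Websites (the app and resource group names are placeholders):

# Swap the test slot into production after validation (names are placeholders)
Switch-AzWebAppSlot -ResourceGroupName "rg-craftcms" -Name "craftcms-webapp" `
    -SourceSlotName "test" -DestinationSlotName "production"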




How to Achieve Cloud Cost Savings by Avoiding These Cost Overruns

Any company that runs for long enough will inevitably run into cost overruns; the key is to minimise how often they happen and mitigate the damage when they do. One of the costliest overruns a company can face relates to cloud migration. In this article, we will tackle the ten most important strategies for avoiding a devastating financial blowout.

Elements of a Cloud Migration

When it comes to cloud migration, you’re really spoiled for choice. The market is fiercely competitive, which is great news for you, but picking the right vendor for cloud migration can be a difficult and potentially frustrating experience—worse yet, picking the wrong one could cost you massively in the long run. Cloudstep seeks to take the pain out of the decision-making process while simultaneously ensuring you get the best deal for your company’s specific needs.

10 Ways to Avoid a Financial Blowout

01. Have your migration plans ready

Cutting costs on cloud migration is only a reality if the cloud-based system is effective. If your new cloud-based system (or the migration itself) is plagued with issues, it can lead to exorbitant cost overruns, which is why planning your migration and performance analysis is so important, and where Cloudstep's state-of-the-art analysis comes into play, with plans to suit companies both big and small.

By tracking KPIs (Key Performance Indicators) and making note of suboptimal performance, you can tweak your original plan as you go. A cloud-based system is only useful if you can maximise the tangible benefits that come with the move. As an example, you should check out Cloud Infrastructure Monitoring Software so you can ensure everything is working to your expectations.

02. Implement continuous monitoring

This one is interesting, since it might not be immediately obvious why continually monitoring the migration period could potentially affect costs. Most companies have sensitive data that they would very much like to keep private. This could be anything from trade secrets to non-public financial data. It could also be employee data and a range of other things that aren’t intended for public (or rival) consumption.

If the cloud migration is botched, say, after a security breach, a hacker could steal this sensitive data and hold the company to ransom, abuse the information for their own (or their company’s own) benefit, or simply cause chaos as an act of malice or revenge by deleting the data. A topical security risk is that of the ransomware attack, which encrypts the data until a cryptocurrency ransom is paid. If a hacker is stealthy, you might not even know your sensitive data has been compromised until it’s too late.

03. Invest in automation

Automation makes our lives easy. We let automation set our clocks and alarms, we let automated processes trade stocks as bots, and we use automation to build most of our stuff. It has allowed our economy to boom while also reducing a lot of the need for back-breaking labour.

Cloud migration is no different. There already exists a variety of excellent tools and software to help you along the journey, including AWS Migration Services, Azure Migration Tools, Corent SurPaaS, and Carbonite Migrate. You can download our eBook for a more thorough understanding of the different tools that are available to you.

04. Reduce excess storage

We usually don't think about it much these days, but from the dawn of the computer age to the early 2000s, a lot of importance was placed on compressing files. Software packages like StuffIt and WinZip were created 30+ years ago to tackle bloated file archives using cutting-edge compression algorithms. The MP3, which helped lay the foundation for the digital music revolution triggered by iTunes and the iPod, was a game-changer. Compression on the internet is still useful to this day, although gone are the days when you needed to turn a lossless picture file into a lossy JPEG before uploading it online.

However, there is still a need to maintain good compression techniques and minimise bloat in the files your company has on hand. For instance, do you really need an uncompressed hour-long 4K video, whose only purpose is as a training or onboarding video, clogging up your server? (For reference, that's a stupidly high 318 GB, although it's an admittedly extreme example.) Video is highly compressible, and the same video would probably be just as serviceable in 1080p. If you used similar compression to YouTube, the file size plummets to about 1.65 GB. But even if you kept the video at 4K with the same compression technique, you'd still only have 2.7 GB. If such video content is sensitive, you could put the entire video on YouTube and require a valid email to watch it. This would save you a lot of storage space and bandwidth, especially when there is a lot of content involved.

05. Identify overprovisioning

If you’re going on holiday, you don’t pack your entire wardrobe (unless your name happens to be Mark Zuckerberg). Instead, you pack according to your destination. Simple enough, right?

If you only need 16 GB of server space, why pay for 64 GB? If your answer is “I might need it later”, consider that the price per gigabyte of storage is always going down. Have enough space to cover your overheads, sure, but don’t overprovision unless you have a good reason for doing so. Your company’s hypothetical overprovisioning might well be logical, but for many it is not.

06. Correct inefficient code

Inefficient code is problematic on a number of levels. While unorthodox (i.e., bloated) code might be okay in some esoteric instances, inefficient code in cloud migration can be disastrous.

According to APMdigest, inefficient apps are causing some companies to overspend by millions of dollars. It is estimated that $330 billion will be spent on the cloud by the end of 2022, meaning billions of dollars are being lost as the result of inefficient code. Having your code appraised now is a small price to pay to save you money and headaches down the line.

07. Assign an inventory owner

In the Cloud Asset API in Google Cloud, access control can be configured at the project level or organization level. In this environment, you can bestow certain individuals (or a group of developers) with access to all Cloud Asset Inventory resources within a project.
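
As a rough illustration of project-level access (the project ID and account below are placeholders, and this assumes the predefined Cloud Asset Viewer role fits your needs), the binding can be added with gcloud:

# Grant the nominated inventory owner read access to Cloud Asset Inventory for a project (placeholders)
gcloud projects add-iam-policy-binding my-project-id --member="user:inventory.owner@example.com" --role="roles/cloudasset.viewer"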

08. Manage shadow IT

The term shadow IT isn't as well known as it should be, although almost anyone who works in a company with computers has probably either encountered it or engaged in it. It also goes by a cavalcade of other names, including embedded IT, fake IT, stealth IT, rogue IT, feral IT, or client IT. Put simply, shadow IT is when employees who aren't part of the official IT department start implementing their own workarounds. Some have even created their own software just to bypass problematic official software.

While shadow IT can have its benefits in some aspects, including innovation and reactivity, it can also pose a risk to company control, security, and reliability. It is imperative that you keep any shadow IT efforts in check, as the road to hell is often paved with good intentions.

09. Review support contracts

Cloud service agreements can lock you into contracts that won't do you any favours. Make sure you have an expert read through the terms of service to ensure not only that you're not breaking any rules, but also that your support contract will actually get you out of a bind if something goes wrong.

10. Bring your own license

Google Cloud (and other cloud services) allows you to bring your own license (BYOL). That being said, as with any BYOL agreement, do your due diligence and ensure that you have read and understood the terms and conditions. To find out more about how to comply with these terms and how to carry out the steps in the correct order, please visit Google's support article or the supporting documentation for any cloud service you may wish to use with a BYOL agreement.

How Cloud Computing Leads to Cost Savings

There are numerous ways in which cloud computing can reduce costs. In the following five sections, we will take a look at five of the biggest points.

Requires No Setup Investments

One of the biggest pain points for any company looking to archive data, or simply process it, is the logistical hurdles and upfront costs. Server maintenance and physical storage can add up quickly. By incorporating cloud technology to solve your storage and processing concerns, many of these upfront costs are offset. This is because the cloud space is very competitive. Moreover, the largest cloud-hosting companies in the world have done a fantastic job of cutting costs through technological innovation and scaling up to nearly unfathomably large degrees. Even the biggest companies in the world have outsourced their cloud-hosting needs to pre-established cloud-hosting companies rather than use their own proprietary server farms.

Optimal Hardware Utilization

This is sort of a follow-on from the previous point. Perhaps the best idea here is to use a simple analogy. Imagine a typical office with, say, two dozen workstations. For much of the day, the computers are either operating at partial capacity or not at all. A cloud provider, by contrast, pools workloads from many customers, so its hardware rarely sits idle. With cloud storage, data is also processed and stored across various nodes for built-in redundancy; in simple terms, that means your data is backed up and always retrievable (on a competently run cloud server). So not only is this a cheaper option for most companies, but also a more secure one.

Energy Savings

Server farms often get criticised for their energy use; however, what is often overlooked is how scalability actually cuts down the total amount of energy required per byte stored. Indeed, the higher the demand for cloud computing, the more incentive cloud-hosting companies have to innovate and create more optimal energy-saving techniques. In any case, traditional on-site storage and data-processing techniques cannot match the efficiency per byte stored or processed.

No In-house Team

Depending on your company’s size, this could be the straw that breaks the camel’s back. By migrating to the cloud, you no longer need to keep a dedicated team devoted to maintaining server racks and other such problems that arise when you’re not harnessing the incredible utility of the cloud. Regardless of whether you have a dedicated IT team or intermittent server inspectors, your operating costs tend to become quite bloated when you’re handling everything yourself.

Eliminates Redundancies

If you've ever had to deal with magnetic tape backups, you know how antiquated and frustrating the experience can be. After all, storing data on tape doesn't just feel so 20th century; it is 20th century. Moreover, a lot of companies only create backups once a day! Imagine if your company lost an entire day's work! By migrating to the cloud, building redundancies and backups into your system isn't something you need to worry about. Having said that, we do encourage you to keep an onsite backup of your company's most important files (just in case).

Conclusion

Throughout this article, we’ve looked at all the incredible benefits that a cloud-based system can have on your business, including energy savings, cost cuts, hardware optimisation, code improvements, and taking advantage of automation; however, you don’t need to take our word for it! Just take a moment to look at the chart below from Research and Markets.

This says it all, really. In five years, the cloud market is projected to more than double. Companies have realised how much money there is to save by embracing cloud technologies, and much of this growth comes from companies moving more of their computing to the cloud. Whether you model, analyse, or plan, Cloudstep has everything you need to streamline your cloud-migration process, making it as pain-free and as efficient as you'd like it to be.


More About Us!

Check out our features page or download our free eBook to read further about how you can revolutionise your company's infrastructure. The eBook is a must-read for anyone who is serious about increasing their company's agility and scalability. We cover risk mitigation, digital transformation, and how to reduce your company's overall IT expenditure.

Get eBook

We have plans starting at just $49 per month for an exploratory plan, all the way up to $1,499 for our comprehensive enterprise plan. We also have a free 30-day trial. Plus, unlike many companies, we won't try to trick you into paying for your plan if you forget about the trial; we will only ask for your billing information once your trial period has concluded.


Fault Tolerant Multi AZ EC2, On a beer budget – live from AWS Meetup

Filmed on the 18th of March 2021 at the Adelaide AWS User Group, where Arran Peterson presented on how to put together best-practice (and cheap!) cloud architecture for business continuity. The title:

“Enterprise grade fault tolerant multi-AZ server deployment on a beer budget”

Recording

RATED 'PG' – Mild coarse language.

Presenter

Arran Peterson

Arran is an Infrastructure Consultant with a passion for Microsoft Unified Communications and the true flexibility and scalability of cloud-based solutions.
As a Senior Consultant, Arran brings his expertise in enterprise environments to work with clients around the Microsoft Unified Communications product portfolio of Office 365, Exchange and Skype/Teams, along with expertise in transitioning to cloud-based platforms including AWS, Azure and Google.

More Reading

Amazon Elastic Block Store

https://aws.amazon.com/ebs/

AWS Sydney outage prompts architecture rethink

https://www.itnews.com.au/news/aws-sydney-outage-prompts-architecture-rethink-420506

Chalice Framework

https://aws.github.io/chalice/

Adelaide AWS User Group

https://www.meetup.com/en-AU/Amazon-Web-Services-User-Group-Adelaide/events/276728885/