Fault-Tolerant Multi-AZ EC2 on a Beer Budget – Live from the AWS Meetup

Filmed on 18th of March 2021 at the Adelaide AWS User Group, where Arran Peterson presented on how to put together best practice (and cheap!) cloud architecture for business continuity. The title:

“Enterprise grade fault tolerant multi-AZ server deployment on a beer budget”


RATED ‘PG’ – Mild coarse language.


Arran Peterson

Arran is an Infrastructure Consultant with a passion for Microsoft Unified Communications and the true flexibility and scalability of cloud-based solutions.
As a Senior Consultant, Arran brings his expertise in enterprise environments to work with clients on the Microsoft Unified Communications product portfolio of Office 365, Exchange and Skype/Teams, along with expertise in transitioning to cloud-based platforms including AWS, Azure and Google.

More Reading

Amazon Elastic Block Store


AWS Sydney outage prompts architecture rethink


Chalice Framework


Adelaide AWS User Group


The role of Datacenters

The State of Play

Current enterprise ICT environments are a mix of various technology stacks.  Critical and second-tier systems are from different eras.  A mix of modern and legacy applications sit alongside each other.  The common challenges are security, manageability and integration of disparate parts.  There is some use of public cloud services, but most applications are hosted in on-premises or third-party datacenters.  Against this is the demand from the business for more responsive service provision, innovative use of data to solve problems and relentless downward pressure on spending.  This complexity and these conflicting demands carry inherent risk, and ICT departments across the world are wrestling with their response.

ICT departments are attempting to reduce their exposure to technical debt, operational risk and costs.  This could be called “Getting out of IT”.  It means still delivering applications and services to the business but avoiding the necessity of owning the risk.  This can be achieved in two main ways:

  • Outsourcing.  The risk of operating the environment is handed to a third party under a series of service level agreements the business accepts.  While this works, the reality is that ICT still wears the pain and risk when services fail.  Complexity is not reduced; it is merely abstracted.
  • Software as a Service (SaaS).  Applications become the commodity rather than infrastructure.  A tapestry of services is purchased from software vendors and service providers.  ICT’s role is to integrate these higher-level components.  This places risk where it is most easily mitigated: with the software vendor themselves.

It is not an either-or scenario.  Generally, there is a mix of both approaches with a progressive move towards SaaS over time.

How should we deliver services?

The business already consumes Software as a Service.  That is how they think of it, despite ICT departments having to deal with all the problems.  The applications the business uses sit at the top of a complex stack of services.  The key to a successful transition to lower risk and cost is to choose which parts of the stack you need to be responsible for and which should be consumed as a service.  The best place an ICT department can position itself is where it provides the maximum value to the business and its activities for the lowest cost.

The Application Delivery Stack shows, for each model, where the primary responsibilities lie.  This stack can be used to evaluate delivery models for the whole environment or for individual applications.

Application Delivery Stack

DIY Model

The ICT department owns everything in the stack.  There is support from software and hardware vendors, but most responsibility for delivery lies with the ICT department.

  • A large amount of effort and cost is spent on activities that do not offer value to the business.
  • You are required to be good at everything.
  • There is little capacity to scale.
  • Operations will be brittle, and the business will experience varying service levels.
  • You must orchestrate the various vertical support contracts to deliver service.

Outsource Model

The ICT department owns issues closest to the business and outsources lower layers of the stack to one or more service providers.

  • ICT is responsible for application delivery.
  • A variety of providers are responsible for different parts of the stack leading to unclear responsibilities.
  • ICT still owns the burden of complexity and technical debt despite having outsourced the lower-level components.

IaaS or Private Cloud Model

The ICT department owns application delivery and some platform services, but the majority of the stack is operated by a single vendor.

  • An option for hosting venerable systems where no SaaS model is available.
  • Depending on the vendor chosen, more or less of the platform services may be viable.
  • A good combination with SaaS, and useful as a stepping stone.

SaaS Model

The ICT department owns delivery of the application to the business.  ICT also owns the problem of integration between components. The effort and risks of making services available is placed with those best able to deal with it.

  • You own the problem of using the application to achieve business outcomes.
  • The application vendor provides the software in its own datacenters and owns its operational burden.
  • There is an aligned self-interest between the SaaS provider and the customer given that an outage or service interruption affects many customers.

It’s pretty clear that SaaS is the way to go, so the goal should be to move towards it over time. The role of the three main delivery models (collapsing outsourcing and IaaS together) looks like this…

Market Share Over Time

What is the role of the Datacenter?

After setting the scene for the current and future direction of ICT it’s possible to put the role of a datacenter in context.  They are critical parts of an ICT landscape, but their role is evolving.

The case for Datacenters

At the bottom of the Application Delivery Stack is the Datacenter.  Whether you operate on-premises or in public cloud, there is always a Datacenter.  Should a business operate its own Datacenters?  Only in limited circumstances.  Generally, a business will not be as good at datacenter operations as a dedicated third-party datacenter provider.  The required levels of security, availability and power are uneconomical to provide internally.  Pooling these costs and risks reduces the costs to users.

Beyond being mere “bit barns” for servers, storage and networking, the modern datacenter provider is best considered a real estate play.  Much as shopping centers aggregate demand from shoppers and sell that demand to shop owners, datacenters aggregate demand from tenants.  They are often not just places to house servers but marketplaces for valuable services.  The larger the datacenter, the more services available there.  It becomes convenient to connect to these services if your operations are collocated nearby.  The business model in many datacenters focuses on this market aggregation capability.  Revenue from cross-connection in datacenters is sometimes the source of profit, with racking/power charges merely cost recovery.

The case against Datacenters

The case against datacenters is more a case against the sort of services they offer.  In the Application Delivery Stack there are four delivery models presented.  The best way to deliver services to the business is to leverage SaaS offerings.  In this case the role of a datacenter is limited.  They are a critical part of the service delivery model for each SaaS provider.  More often, datacenters are critical to the public cloud provider upon which the SaaS offering is built.

So this is less a case against datacenters, more an argument that you should be in a position where they no longer matter to you directly.

SD-WAN made easy

I’ll start by asking you two questions:

Are you paying too much for your Wide Area Network (WAN)?

And, is it the best method of connecting to the public Cloud?

At cloudstep.io we are constantly looking for ways to improve our customers’ connectivity to the public cloud. We consider cloud network connectivity a foundation service that must be implemented at the beginning of a successful cloud journey. Getting it right at the start is imperative to allow any cloud service adoption to truly reach its potential and not suffer from network issues like latency, bandwidth constraints and round-trip times.

If the public cloud is going to become your new datacenter, then why not structure your network around it?

What if I could solve your cloud connectivity and WAN connectivity with a single solution? Azure Virtual WAN is a service that offers you a centralised, software-defined, managed network. Connect all your sites via VPN or ExpressRoute to Azure Virtual WAN and let Microsoft become the layer-3 network cloud that traditional telco providers are probably charging you hand over fist for. Who better to become the service provider for your software-defined network (SDN) than one of the largest software companies in the world: Microsoft!

Commodity business-grade internet services are becoming cheaper now thanks to things like the NBN, where it is truly a race to the bottom on price in my opinion, which is great for the consumer… finally! Procure NBN business-grade connections for each of your office locations and then use Azure Virtual WAN to quickly deploy a secure network for site-to-site and site-to-Azure connectivity.

I believe that a service like this is really here to disrupt traditional network service providers and add great value to existing or new Microsoft Azure customers.

We are always looking to save money in a move to the cloud, and your network cost could potentially be your biggest reduction. Get in contact with us at cloudstep.io to see if we can help you reform your network.

IPv6 – slowly but surely

I first blogged about IPv6 and the reasons for its slow adoption way back in 2014. A lot can change in the world of ICT over the course of five years, but interestingly, I believe the reasons for slow adoption have remained somewhat constant. I’ve updated my post to include some new thoughts.

The first time I recall there being a lot of hype about IPv6 was way back in the early 2000s. Ever since then, the topic seems to get attention every once in a while and then disappears into insignificance alongside more exciting IT news.

The problem with IPv4 is that there are only about 3.7 billion usable public IPv4 addresses (the 32-bit address space holds roughly 4.3 billion, but large ranges are reserved for private and special use). Whilst this may initially sound like a lot, take a moment to think about how many devices you currently have that connect to the Internet. Globally we have already experienced a rapid uptake of Internet-connected smartphones, and the recent hype surrounding the Internet of Things (IoT) promises to connect an even larger array of devices to the Internet. With a global population (according to http://www.worldometers.info/world-population/) of approx. 7.7 billion people, we just don’t have enough to go around.
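As a sanity check on that 3.7 billion figure, a short Python sketch using the standard ipaddress module can total up the major reserved blocks and subtract them from the full 32-bit space:

```python
import ipaddress

# Full 32-bit IPv4 address space
total = 2 ** 32  # 4,294,967,296

# Major reserved / special-use blocks: RFC 1918 private ranges,
# loopback, link-local, carrier-grade NAT, multicast, and the
# reserved 240.0.0.0/4 block.
reserved = [
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
    "224.0.0.0/4", "240.0.0.0/4",
]
unusable = sum(ipaddress.ip_network(n).num_addresses for n in reserved)

print(f"Total IPv4 space: {total:,}")
print(f"Reserved blocks:  {unusable:,}")
print(f"Publicly usable:  {total - unusable:,}")  # roughly 3.7 billion
```

The remainder comes out at about 3.7 billion addresses, which is where the figure above comes from.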

Back in the early 2000s there was limited hardware and software support for IPv6. So now that we have widespread hardware and software IPv6 support, why is it that we haven’t all switched?

Like most things in the world, it’s often determined by the capacity to monetise the change. Surprisingly, not all carriers/ISPs are on board, and some are reluctant to spend money to drive the switch. APNIC have stats (https://stats.labs.apnic.net/ipv6/) that suggest Australia is currently sitting at 14% uptake, lagging behind other developed countries.

Network address translation (NAT) and Classless Inter-Domain Routing (CIDR) have made it much easier to live with IPv4. NAT, used on firewalls and routers, lets many nodes in a network sit behind a single public IP address. CIDR, sometimes referred to as supernetting, is a way to allocate and specify the Internet addresses used in inter-domain routing in a much more flexible manner than with the original system of Internet Protocol (IP) address classes. As a result, the number of available Internet addresses has been greatly increased, and service providers can conserve addresses by divvying up pieces of a full range of IP addresses to multiple customers.
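To make the CIDR idea concrete, here is a small illustrative sketch using Python’s standard ipaddress module (the 198.18.0.0/22 block and the four-customer split are made up for the example): a provider carves one block into several smaller customer allocations instead of handing out a whole legacy class.

```python
import ipaddress

# Illustrative only: a provider holding a /22 (1,024 addresses) carves it
# into four /24 allocations for four customers, rather than assigning a
# whole legacy Class B (/16, 65,536 addresses) to each.
block = ipaddress.ip_network("198.18.0.0/22")
customers = list(block.subnets(new_prefix=24))

for net in customers:
    print(net, "-", net.num_addresses, "addresses")
```

Each customer gets 256 addresses, and the provider keeps the other 768 for later allocations; under classful addressing the same four customers would have consumed over 260,000 addresses.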

Unsurprisingly, enterprise adoption in Australia is slow; perceived risk comes into play. It is plausible that many companies view the introduction of IPv6 as somewhat unnecessary and potentially risky in terms of the effort required to implement and the loss of productivity during implementation. Most corporations are simply not feeling any pain with IPv4, so it’s not on their short-term radar as being of any level of criticality to their business. When considering IPv6 implementation from a business perspective, the successful adoption of a new technology is typically accompanied by some form of reward or competitive advantage associated with early adoption. The potential for financial reward is often what drives significant change.

To IPv6’s detriment, from the layperson’s perspective it has little to distinguish itself from IPv4 in terms of services and service costs. Many of IPv4’s shortcomings have been addressed, and financial incentives to commence widespread deployment just don’t exist.

We have all heard the doom and gloom stories associated with the impending end of IPv4. Surely this should be reason enough for accelerated implementation of IPv6? Why isn’t everyone rushing to implement IPv6 and mitigate future risk? The predicted scenario, where exhaustion of IPv4 addresses causes a rapid escalation in costs to consumers, hasn’t really happened yet, so it has failed to be a significant factor in encouraging further deployment of IPv6 on the Internet.

Another factor to consider is backward compatibility. IPv4 hosts are unable to address IP packets directly to an IPv6 host, and vice versa, so it is not realistic to simply switch a network over from IPv4 to IPv6. When implementing IPv6, a significant period of dual-stack IPv4 and IPv6 coexistence needs to take place, where IPv6 is turned on and run in parallel with the existing IPv4 network. From an enterprise perspective, I suspect this just sounds like two networks instead of one, and double the administrative overhead, to most IT decision makers.
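In practice, a dual-stack client looks up both record types and can prefer IPv6 with IPv4 as a fallback. Here is a minimal sketch using Python’s standard socket module (the helper name is mine, just for illustration):

```python
import socket

def resolve_dual_stack(host: str, port: int = 443) -> dict:
    """Collect both IPv4 (A) and IPv6 (AAAA) addresses for a host."""
    results = {"ipv4": [], "ipv6": []}
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            host, port, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            results["ipv4"].append(sockaddr[0])
        elif family == socket.AF_INET6:
            results["ipv6"].append(sockaddr[0])
    return results

# A dual-stack host populates both lists; an IPv4-only host leaves
# "ipv6" empty, and the client quietly falls back to IPv4.
print(resolve_dual_stack("localhost"))
```

This is exactly the coexistence period described above: both address families are queried and served side by side until IPv4 can eventually be retired.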

Networks need to provide continued support for IPv4 for as long as there are significant levels of IPv4 only networks and services still deployed. Many IT decision makers would rather spend their budget elsewhere and ignore the issue for another year.

Only once the majority of the Internet supports a dual-stack environment can networks start to turn off their support for IPv4. Therefore, while there is no particular competitive advantage to be gained by early adoption of IPv6, the collective, internet-wide decommissioning of IPv4 is likely to be determined by the late adopters.

So what should I do?

It’s important to understand where you are now and arm yourself with enough information to plan accordingly.

  • Check if your ISP is currently supporting IPv6 by visiting a website like http://testmyipv6.com/. There is a dual stack test which will let you know if you are using IPv4 alongside IPv6.
  • Understand if the networking equipment you have in place supports IPv6.
  • Understand if all your existing networked devices (everything that consumes an IP address) supports IPv6.
  • Ensure that all new device acquisitions are fully supportive of IPv6.
  • Understand if the services you consume support IPv6. (If you are making use of public cloud providers, understand if the services you consume support IPv6 or have a road map to IPv6.)
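The first check on that list can be scripted. A hedged sketch using Python’s standard socket module (the function name is mine; any dual-stack host can stand in for the test site):

```python
import socket

def can_connect_ipv6(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host can be opened over IPv6."""
    try:
        # Restrict resolution to AAAA records only.
        candidates = socket.getaddrinfo(
            host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record, or no IPv6 resolution at all
    for family, socktype, proto, _, sockaddr in candidates:
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return True
        except OSError:
            continue
        finally:
            s.close()
    return False

# Example usage on a live connection:
#   can_connect_ipv6("testmyipv6.com")
```

If this returns False everywhere, your ISP or internal network is the place to start asking questions; the browser-based test at http://testmyipv6.com/ gives the same answer with more detail.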

Whilst there is no official switch-off date for IPv4, the reality is that IPv6 isn’t going away, and as IT decision makers we can’t postpone planning for its implementation indefinitely. Take the time now to understand where your organisation is at. Make your transition to IPv6 a success story!