Flutter for mobile app development

Mobile apps have become an essential part of our daily lives, with over 6 billion smartphone users worldwide. As a result, there has been a significant increase in demand for mobile app development.

There are two main approaches to mobile app development:

  • Cross-platform development
  • Traditional/Native development

In this blog, I’m going to talk about the leading cross-platform framework, Flutter, and why we should use it. I will also be recommending some packages that I found quite useful during the development process.

What is Flutter?

Flutter is an open-source mobile app development framework created and supported by Google that allows developers to build high-performance, natively compiled user interfaces (UI) for applications across different platforms. When Flutter launched in 2018, it mainly supported mobile app development for iOS and Android. Since the release of Flutter 3 in 2022, it supports not only mobile platforms but also web and desktop app development on Windows, macOS, and Linux.

Why Flutter?

Cross-platform development

Flutter allows developers to create apps for multiple platforms using a single codebase and programming language (Dart), saving time and effort while delivering a consistent user experience across different devices.

Sometimes, it does require extra effort to make things work on different platforms, for example, when implementing routing for the app.

Rich set of pre-built widgets

Flutter provides a wide range of customisable pre-built widgets and tools that make it easy for developers to create beautiful and responsive user interfaces without having to build everything from scratch. You can find a complete list of all the widgets and learn how to use them at https://docs.flutter.dev/ui/widgets.
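
To give a sense of what composing these widgets looks like, here's a minimal sketch of a small custom widget built entirely from standard Material widgets (the HelloCard name and the text are made up for illustration):

import 'package:flutter/material.dart';

// A tiny custom widget composed entirely from built-in Material widgets.
class HelloCard extends StatelessWidget {
  const HelloCard({super.key});

  @override
  Widget build(BuildContext context) {
    return Card(
      child: Padding(
        padding: const EdgeInsets.all(16),
        child: Row(
          mainAxisSize: MainAxisSize.min,
          children: const [
            Icon(Icons.flutter_dash),
            SizedBox(width: 8),
            Text('Hello from pre-built widgets'),
          ],
        ),
      ),
    );
  }
}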

Performance

Flutter uses Dart, which is a fast, object-oriented programming language that allows for ahead-of-time (AOT) compilation. This means that Dart code can be compiled into machine code, resulting in faster performance.

Some nice tools and great VS Code integration via extensions

Here’s a list of highly recommended tools that will make your development process faster and your life easier.

  • Flutter inspector: It helps you visualise your widget tree and diagnose layout issues.
  • Hot reload: It only updates the widget that has been changed instead of rerunning the whole main() function and rebuilding the app. This allows you to debug and view changes quickly without losing state.
  • Flutter (extension): It nicely integrates with the debugger and helps you easily refactor code, such as wrapping a widget around a block of code or removing a widget.
  • Flutter Widget Snippets (extension): It provides a set of snippets that help you complete boilerplate code that you use repeatedly.

Recommended packages

All Flutter developers should know Pub.dev (https://pub.dev), which is the central repository for Flutter packages. It contains a vast collection of useful packages that can greatly simplify your development process. Here are some of my personal favourites:

Flutter Riverpod:

Flutter Riverpod is a state management library for Flutter from the author of Provider that simplifies managing the app’s state. It provides an alternative to Flutter’s built-in state management and offers a more modern approach. It stores data or state globally and lets any widget consume it without the state being passed around, which keeps your code cleaner. It’s quite similar to Redux if you have ever used React before.
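
As a rough illustration of the style Riverpod encourages (the counterProvider name is made up, and the exact API may vary between Riverpod versions), a widget can read shared state directly from a provider rather than having it passed down through constructors:

import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// A provider holding a simple piece of app-wide state.
// (Hypothetical example; the app root must be wrapped in a ProviderScope.)
final counterProvider = StateProvider<int>((ref) => 0);

class CounterLabel extends ConsumerWidget {
  const CounterLabel({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    // Any widget can watch the provider directly, with no need to pass
    // the state down through constructors.
    final count = ref.watch(counterProvider);
    return Text('Count: $count');
  }
}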

Auto Route:

Flutter Auto Route is a package that simplifies defining and generating navigation routes in Flutter apps. It provides a type-safe, boilerplate-free approach to routing, which saves time and effort during implementation and means you spend far less time wiring up navigation by hand.
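
As a sketch only, route declarations look roughly like the snippet below. The exact annotations and base class have changed between major versions of auto_route, and the HomeRoute class is produced by the package's code generator (build_runner), so check the package docs for the current API:

import 'package:auto_route/auto_route.dart';
import 'package:flutter/material.dart';

// Generated by build_runner; contains the HomeRoute class used below.
part 'app_router.gr.dart';

// Screens are annotated and picked up by code generation.
@RoutePage()
class HomeScreen extends StatelessWidget {
  const HomeScreen({super.key});

  @override
  Widget build(BuildContext context) =>
      const Scaffold(body: Center(child: Text('Home')));
}

// All of the app's routes are declared in one type-safe place.
@AutoRouterConfig()
class AppRouter extends RootStackRouter {
  @override
  List<AutoRoute> get routes => [
        AutoRoute(page: HomeRoute.page, initial: true),
      ];
}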

http/dio

http and dio are two popular packages in the Flutter community that simplify making HTTP requests and handling responses in Flutter apps. Both provide intuitive APIs for network calls, and dio in particular adds support for features like interceptors, file uploading and downloading, and more.
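
Here's a minimal sketch of a GET request using dio (the URL is a placeholder, and the exception type name differs between dio versions):

import 'package:dio/dio.dart';

// A minimal GET request with dio; the URL below is a placeholder.
Future<void> fetchUsers() async {
  final dio = Dio();
  try {
    final response = await dio.get('https://example.com/api/users');
    print(response.data);
  } on DioException catch (e) {
    // Recent dio versions throw DioException (older versions use DioError).
    print('Request failed: ${e.message}');
  }
}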

Conclusion:

Flutter is a powerful and versatile mobile app development framework that offers many benefits, including fast development times, high-performance applications, and a large community with extensive resources. It provides a wide range of customisable pre-built widgets and tools that simplify the development process, plus many great packages that make development easier and save time. Overall, Flutter is a great framework for building beautiful, responsive apps across multiple platforms, and it can also help businesses save time and money when delivering projects.


Securing Born in the Cloud Businesses

Everyone’s had this happen recently: the organisations they partner with are becoming (justifiably) more stringent about their security. It creates some thorny problems though:

  • How do we get the security without bludgeoning our business to death?
  • How do we improve data protection without making our staff rage quit?
  • How do we align the initiatives we take with broader security standards?

Born in the Cloud

When we’re talking about a Born in the Cloud Business (BITC) we’re talking about this sort of company:

  • Not much in the way of legacy systems.
  • Mostly SaaS based tools.
  • A boat load of BYOD.
  • Loosey Goosey office security 🙂

Larger organisations like working with businesses like these. They’re small, agile and generally full of rock-star grade experts in their field. But large organisations are also terrified of working with these sorts of companies. The locked-down workday built around a Standard Operating Environment (SOE) that they’re used to, and which provides them with a measure of confidence, isn’t present in these BITC businesses. The large org wants all the warm fuzzy security but also wants to keep the innovation and the glint in their partner’s eye.

Security Standards

In Europe this is a lot more mature than it is in Australia. There are two different standards that get bandied about:

Essential 8

Here, there is a set of guidelines that the Australian Signals Directorate has adopted and provides as advice, called the Essential 8 Maturity Model. It covers several areas, and for each one there are four levels of maturity an organisation can reach (0-3). It was originally envisaged as a straightforward, practical approach to data security but has been “beefed up” to be a lot more complex over time.

ISO 27001

Another standard is ISO 27001. This is a heavyweight standard to attain and can take 6-18 months depending on your complexity, maturity and size.

It covers a range of different technology and policy “controls” that should be applied. You can self-assess your compliance and then have it audited externally.

Essential 8 Level 3 (the highest) is a sort of subset of the work you’d need to complete to get to ISO 27001. Essential 8 is used in Australian Federal and State Governments, while ISO 27001 is a global standard.

What do I need to do?

We at jtwo have been on the journey of achieving both and we have some general advice on how to get going.

We aren’t security consultants and our professional indemnity doesn’t allow us to be, so take this advice with a grain of salt. That should keep our insurers happy 🙂

So, with that out of the way: it’s a big beast, but here are some pointers on how to get started. We use Office 365 with E5 licensing, so a lot of the tools we need to build this stuff out are already there and already paid for.

Take it Seriously

You can’t fake this stuff. You have to embrace the idea of security in your bones or you won’t get anywhere. You have to think about the tools, processes and behaviours you use and think about them through a security lens. Once you’ve embraced the idea of security it all starts to look a bit more achievable.

Build Registers

In each of these security standards there is a set of lists and registers you need to keep. They include asset registers (physical and information based), and there are lots of them. This is particularly the case with ISO27k1.

We use Office365 so we built each of these registers as SharePoint Lists. They are easy to use and they can be used in reporting too.

Embrace a SOE

Everyone hates them, they suck. They make it hard for you to be flexible and innovative. Developers especially hate them. But you should consider them part of your new world order. We use E5 licensing for Microsoft 365, and as part of this we get Intune and Defender. Rolling these out together can help you tick lots of boxes and actually be secure to boot.

MFA Everywhere, All at Once

You probably already do this; in fact, if you don’t, do it as soon as you’ve read this. We use O365 and all our identities are in Azure AD. We’ve turned on MFA using Microsoft Authenticator and it does a lot of the heavy lifting.

Policies, Policies, Policies

You’ll need to write and maintain lots of policies. These are generally short (thankfully) but they need to be reviewed periodically and you need to record attestations that people have read, understood and agreed to the policies.

We write our policies as Word documents, and we built a PowerApp that lets people read and agree to them. The attestation records go into our SharePoint lists for record keeping.

Enforcement

You need to enforce the use of policies, practices and tools. Consider making security compliance part of your staff meetings. Reward people for good behaviour and following policies. Gently (at first) nudge people towards good behaviour if they’re lagging behind.

Office 365 and Purview are your friends

While many of the compliance activities you’ll need to do are policy and people based, there’s a lot of technology stuff too. As a BITC business you have a lot of this at your fingertips. We use Microsoft 365, and Purview is part of the E5 licensing we have. It’s got a bunch of great technology you can use to improve your security, and it arranges everything as a set of scores, so you get the dopamine rush when you move a score up too. If you use M365 and have E5 you should definitely explore this. It will help greatly.

Data Classification

This is a big one and can be hard. Data classification is generally difficult, but the Purview classification tools can use ML to do the classification work for you. Here’s what our Teams, email and other communications profile looks like…

We should probably tone down the fruity language.

This is also what our data looks like from the perspective of sensitive information.

You can see that we use what might be considered sensitive information in the content of our comms. This will vary from org to org but you don’t have to do anything to get this, it works out of the box.

Standards Mapping

Another interesting capability is the standards mapping. You can choose a standard like Essential 8 Level 3 or ISO 27001 and apply that template to the controls you have in O365. This will give you a (probably massive) checklist of changes you need to make to meet those standards.

Microsoft also has its own security standards, which are applied to your controls. Here’s an example of how it provides a gauge on your security compliance:

Moving this score up will move you along with various standards at the same time.


Azure Application Insights – No Client Source IP Address

Working with one of our customers this week who is implementing Azure API Management alongside their web applications, we are funnelling all the request logs into an Application Insights service to manage visibility of the end-to-end transaction data. We noticed that all the client GET requests had ‘0.0.0.0’ as the Client IP address.

Request property      Value
Client IP address     0.0.0.0

I have since learned that Microsoft obfuscates this data from Azure Monitor as it’s ingested into Application Insights, for what I’d call a ‘privacy policy’. As this was a corporate application, anonymity wasn’t needed, and the development team wanted to know whether a request was made from inside the corporate network or from an unknown internet address.

A good habit to get into is to first do a quick review of the latest API version for ‘Microsoft.Insights/components’, which does show a boolean value for DisableIpMasking.

{
  "name": "string",
  "type": "Microsoft.Insights/components",
  "apiVersion": "2020-02-02-preview",
  "location": "string",
  "tags": {},
  "kind": "string",
  "properties": {
    "Application_Type": "string",
    "Flow_Type": "Bluefield",
    "Request_Source": "rest",
    "HockeyAppId": "string",
    "SamplingPercentage": "number",
    "DisableIpMasking": "boolean",
    "ImmediatePurgeDataOn30Days": "boolean",
    "WorkspaceResourceId": "string",
    "publicNetworkAccessForIngestion": "string",
    "publicNetworkAccessForQuery": "string"
  }
}

Reviewing the property values of the ApplicationInsightsComponentProperties object, DisableIpMasking gave the following short but sweet answer.

Name              Type     Required  Description
DisableIpMasking  boolean  No        Disable IP masking.

Yeah I reckon that is worth a shot!

Update ApplicationInsightsComponentProperties value DisableIpMasking

As this value only seems to be exposed through the API, we have to either push a new incremental ARM template through the sausage maker or perform an API request directly. An API request seems like the quicker method, but doing this in a script with authentication and the correct structure takes time. I have a nice trick for updating or adding a value to an object when either of those feels like overkill.

  1. Navigate to the Azure Resource Explorer
  2. Find the Application Insights Resource Group
  3. Select Providers > Microsoft.Insights
  4. Select Components > ‘Application Insights Name’

You will be shown the JSON definition of your Application Insights Object. You can tell this by the line:

"type": "microsoft.insights/components"

To know you’re in the right place, look under properties: there will be many values, and you should see Application_Type, InstrumentationKey, ConnectionString and Retention, but what will be missing is DisableIpMasking. So it’s as simple as adding it.

  1. Up the top of the page toggle the blue switch to ‘Read/Write’ from ‘Read Only’.
  2. Select ‘Edit‘.
  3. Remember to add a ‘,’ to the previous last line (in my case “HockeyAppToken”) before adding your new property, "DisableIpMasking": true.

The final step is to use the PUT button to update the object, which in turn authenticates you to the API using your existing login token, constructs the JSON object and sends a ‘PUT’ request to the API endpoint ‘management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/microsoft.insights/components/<resourceName>?api-version=2015-05-01‘. Much simpler than writing a PowerShell or Bash script; what a clever little tool it is.

The result is that new requests in Application Insights will have the source NAT IP address. Unfortunately, all previous requests will remain scrubbed as ‘0.0.0.0’.

Closing thoughts

This is a great way to tweak services while you’re still working out whether it’s the correct knob to turn in the Azure service. But while it’s quick, it isn’t documented anywhere in your code. If you have a repository of deployment ARM templates, make sure you go back and amend the deployment JSON (in this case, adding "DisableIpMasking": true under the component’s properties). The day will come when it gets re-deployed and it won’t come out of the sausage maker the same. The finger will get pointed back at the Azure administrator who doesn’t follow good DevOps practices.


Upgrading Megaport Cloud Routers

Recently I had the pleasure of upgrading a Megaport Cloud Router (MCR) from version 1 to the new version 2. A version 2 MCR sits on a whole new code base, so a side-by-side migration is required. In this blog I’ll show you how we went about the process; it could also be used when migrating MCRs in general, or any cloud connectivity for that matter.

The aim is to create the smallest outage possible for the customer’s on-premises connectivity to the cloud datacenter. In a fault-tolerant environment this is usually done by having multiple routes advertised to the on-premises routers via a dynamic routing protocol. In my example BGP is used throughout the environment, and standard route propagation times are only a few minutes end-to-end without interference.

In my example I will be moving an Azure Express Route. The key to moving the Express Route is that there is a primary and a secondary BGP session (Green) for fault tolerance. I’ll move the secondary connection (2) from my active MCR to another MCRv2 in a staged approach to maintain connectivity for as long as possible. Once each Express Route peering session is connected to its own MCR, I will move the Megaport physical connection (Blue) from the MCR1 to the MCR2.

Create the new MCR

Create a new MCR in the correct datacenter location.

New MCR in NextDC M1

Add a connection to the cloud provider, using the existing service key out of the Express Route Virtual Circuit panel in your Azure subscription.

Add an Express Route with Service Key

We can see above that we have a secondary connection available. (This was completed ahead of starting this blog).

Finalise your connection, and after you select ‘order’ the designer view will deploy the Express Route connection for you.

Secondary Express Route Peering Session deployed to MCR2

Check the Connection

Give it all of an itch and a scratch and the BGP peers of Microsoft and the MCR will light up.

BGP Session status

Head over to Azure and we can check the ARP records to see the secondary peering endpoints now populating.

Here is the primary, which is still online with the existing MCR that is connected to the on-premises network.

Azure Express Route ARP records – Primary

We should now have some records for the secondary connection between the Azure Express Route Gateway and the MCR2. Select ‘show secondary’; in the interface row labelled ‘On-Prem’ you’ll see the MCR’s Express Route peering IP.

Azure Express Route ARP records – Secondary

All is looking good from a layer 2 (ARP) and layer 3 (BGP, below) perspective. From this point, if we go and look at the route tables, we would see that the primary BGP peering session has all the on-premises routes and Azure VNet routes. The secondary route table will only have the Azure routes and the peering /30 routes.

Azure Express Route route table – Secondary

If we go and check our Express Route Virtual Circuits we can validate that the peering IPs used in each session match.

Azure Express Route – Peering Overview

Delete the connection between the MCR and the on-premises router

Now we want to swing our on-premises router connectivity from the cross connect between the MCR1 (1) and the physical port (2). Back in the designer view we have all of the required routers and connection objects to view. I’ve also underlined the button to ‘delete’ the virtual cross connect (VXC) between the on-premises router and my MCR1. Note – in our deployment this is where the outage starts: we will lose connectivity between Azure and the on-premises router, as I’ve not used VLAN tagging on the physical port in this example.

Delete the Virtual Cross Connect

Add a connection between the MCR2 and the on-premises router

Quickly, go and create a connection between the MCR2 and your Megaport “Port” (aka. Physical Port).

Attach MCR to Virtual Cross Connect

Make sure it’s your physical port, not your other MCR 🙂

Select the Physical Port attachment

I’m using the exact same peering subnet for my new MCR2, so as long as I include my correct /30 subnet, the peering relationship with the on-premises router will come back willingly in a matter of seconds.

MCR to Physical Port details

Review what you have done in the designer view. You won’t have set anything into motion until you click ‘order’. So do it! From the below view you can see the following summary:

  1. Old MCR with a single Express Route Connection
  2. New MCR with single Express Route Connection
  3. Physical Port with the new connection to the MCR2

Switch the physical port from old to new MCR

Once you click order, you barely have time to scratch yourself again before the status moves from deploying (little red Megaport rocket icons) to deployed. Hurray!

Physical port to MCR connection – status deploying

Once that has all come up green, the rest of the work is done in your edge routers: your on-premises physical edge router connected to Megaport, and your Express Route Virtual Circuits/Gateway. Do a few checks to make sure you have established end-to-end connectivity. Here are some ideas:

  • Edge Router – Review the BGP Status of the MCR.
    • show ip bgp neighbors
    • Check the MCR neighbor exists and the BGP state is Established.
    • Remote AS of the MCR by default is 133937.
    • Remember the IP address of the neighbor
  • Edge Router – Review received routes from MCR.
    • show ip bgp neighbors x.x.x.x received-routes
    • You’ll no doubt see routes from your VNET with a path of your MCR+Microsoft e.g. 133937 12076. (Microsoft uses AS 12076 for Azure public, Azure private and Microsoft peering)
  • Azure Portal – Review the ARP records and route tables.
    • The secondary connection should show all your received routes to the Express Route Gateway from the on-premises router.
  • Branch Site Router – Go check what has been advertised down to your client sites. A good old trace route will show the IP addresses of the MCR in the hops.

Express Route secondary connection with on-premises routes received for MCR peer IP

Finishing Up

You’re pretty much done at this stage. The software-defined network engine has done its job, and you are now in control of your own destiny with on-premises route propagation.

How good is Megaport! We love networking, especially when it’s fast, scalable and consistent.


We love Megaport

Let us take the stress out of public cloud connectivity. Get in touch with us to understand the benefits of using a service like Megaport Cloud Router.