Every once in a while, when working with ARM templates you come across something that is missing from the official Microsoft ARM template reference. In my case yesterday, I was looking to update the configuration of an Azure App Service to use the DotNetCore stack (rather than .NET 4.8).
While I initially thought this would be a quick job of looking up the ARM reference and making the required changes, I found that there was nothing about DotNetCore in the ARM reference at all. Funnily enough, there is a value for “netFrameworkVersion”, but don’t be deceived: if you are looking to set up DotNetCore, this value is not for you (it is for regular .NET only).
To better understand the problem, I Clickly Clicky’d an App Service and configured it for DotNetCore (Clickly Clicky is our lingo for deploying infrastructure using the portal rather than a CLI or template). With this, I attempted my usual trick of exporting a template and observing the JSON it spits out. However, much to my amazement, I couldn’t see any reference to dotnetcore in there at all.
In the end, it was the Azure Resource Explorer that came to my rescue. I used the tool to explore the example I had created and found a value called “CURRENT_STACK” in the properties of the “Microsoft.Web/sites/config” resource type.
After playing with this for a while, I was able to translate it into my ARM template with the following JSON.
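A minimal sketch of what that config resource can look like as a child of the App Service (the site name parameter and apiVersion here are illustrative, so adjust them to match your template):

```json
{
  "type": "Microsoft.Web/sites/config",
  "apiVersion": "2020-06-01",
  "name": "[concat(parameters('siteName'), '/metadata')]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
  ],
  "properties": {
    "CURRENT_STACK": "dotnetcore"
  }
}
```

The important part is that “CURRENT_STACK” lives in the “metadata” config resource, not in the siteConfig block where “netFrameworkVersion” sits.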
The key permissions outlined in the prerequisites at point 3 are:
A virtual network.
A Windows virtual machine in the virtual network.
The following required roles:
Reader role on the virtual machine.
Reader role on the NIC with private IP of the virtual machine.
Reader role on the Azure Bastion resource.
Ports: To connect to the Windows VM, you must have the following ports open on your Windows VM:
Inbound ports: RDP (3389)
My scenario is to invite a guest AAD account, add them to a group and grant the group access as per below:
Grant Contributor role on the resource group that has the VMs for the application.
Grant Reader role to the resource group that has the Bastion host.
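Those two grants can be scripted with the Az PowerShell module. A rough sketch, assuming the guest has already been invited and added to a group (the group and resource group names below are hypothetical):

```powershell
# Hypothetical group and resource group names - substitute your own
$group = Get-AzADGroup -DisplayName "App-Guest-Users"

# Contributor on the resource group holding the application VMs
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "rg-application"

# Reader on the resource group holding the Bastion host
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "rg-bastion"
```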
This way the guest user logs into the Azure Portal in compliance with our conditional access policy, and they are presented with only the resources they have Reader or higher access to. In this scenario, that is the two resource groups outlined above.
The guest user locates the virtual machine they wish to connect to and chooses Connect > Bastion > Use Bastion, at which point the following error message is presented:
“Unable to query Bastion data”
Initially, working with Microsoft support, we found that granting Reader access at the subscription level gave the user permission to invoke the Bastion service, which then simply presents a username and password prompt.
As a workaround, these permissions were too lax and exposed a lot of production data to accounts that didn’t really have any business looking at it.
[12/11/2020] The case is ongoing inside Microsoft and I will post a definitive response when I get the information. I’ve done some further investigation into what the least amount of additional ‘Reader‘ permissions would be. I found the following permissions are required in my scenario:
Reader permissions on the Virtual Network that has the ‘AzureBastionSubnet‘ subnet.
Reader permissions on the Virtual Network that has the connected virtual machine network interface.
In my scenario, the virtual machines are located in a development Virtual Network that is peered with the production Virtual Network, which holds the ‘AzureBastionSubnet‘ subnet. So I had two sets of permissions to add. After applying the permissions you may need to go get a coffee and return to the portal, as it took 5-10 minutes to kick in for me.
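Scoping Reader down to just the two Virtual Networks can also be scripted rather than clicked. A sketch, again with hypothetical resource names:

```powershell
# Hypothetical names - the guest group from earlier, plus both peered VNets
$group = Get-AzADGroup -DisplayName "App-Guest-Users"

$bastionVnet = Get-AzVirtualNetwork -Name "vnet-prod" -ResourceGroupName "rg-network"
$vmVnet      = Get-AzVirtualNetwork -Name "vnet-dev"  -ResourceGroupName "rg-network"

# Reader scoped to each VNet resource, not the whole subscription
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Reader" -Scope $bastionVnet.Id
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Reader" -Scope $vmVnet.Id
```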
Hope this helps someone that has done some googling but is still scratching their head with this error message.
Getting restore points out of Azure can be like getting blood from a stone. The portal always applies a custom filter showing only ~90 days, and the PowerShell cmdlet only allows a 30-day interval between retrieval dates. When running ‘Get-AzRecoveryServicesBackupRecoveryPoint’ with a longer range, you get the following:
Get-AzRecoveryServicesBackupRecoveryPoint : Time difference should not be more than 30 days
Sigh… I just want all my restore points for a virtual machine, please! All of them, because it’s my butt on the line if for some reason I don’t have them. Something like this can be useful for auditing your backups against business needs for data retention.
Example: Get recovery points from the last two years for a single VM
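One way around the 30-day limit is to walk backwards through time in 30-day windows and stitch the results together. A sketch using the Az.RecoveryServices cmdlets (vault, resource group and VM names are hypothetical):

```powershell
# Hypothetical vault and VM names - adjust to your environment
$vault = Get-AzRecoveryServicesVault -Name "rsv-backups" -ResourceGroupName "rg-backup"
Set-AzRecoveryServicesVaultContext -Vault $vault

$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName "vm-app01"
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM

# Walk backwards in 30-day windows to stay under the cmdlet's limit
# ~25 windows of 30 days covers roughly two years
$end = (Get-Date).ToUniversalTime()
$allPoints = @()
for ($i = 0; $i -lt 25; $i++) {
    $start = $end.AddDays(-30)
    $allPoints += Get-AzRecoveryServicesBackupRecoveryPoint -Item $item `
        -StartDate $start -EndDate $end
    $end = $start
}

$allPoints | Sort-Object RecoveryPointTime |
    Format-Table RecoveryPointTime, RecoveryPointType
```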
I was working with one of our customers this week who is implementing Azure API Management alongside their web applications. We are funnelling all the request logs into an Application Insights service to manage visibility of the end-to-end transaction data. We noticed that all the client GET requests had ‘0.0.0.0’ as the client IP address.
Client IP address
Update ApplicationInsightsComponentProperties value DisableIpMasking
As this value only seems to be exposed through the API, we have to either push a new incremental ARM template through the sausage maker or make an API request directly. An API request seems quicker, but doing it in a script with authentication and the correct structure takes time. I have a nice trick for updating or adding a value to an object when either of those feels like overkill.
You will be shown the JSON definition of your Application Insights Object. You can tell this by the line:
To know you’re in the right place, under properties there will be many values: we should see Application_Type, InstrumentationKey, ConnectionString and Retention, but DisableIpMasking will be missing. So it’s as simple as adding it.
At the top of the page, toggle the blue switch from ‘Read Only’ to ‘Read/Write’.
Remember to add a ‘,’ to the previous last line (in my case “HockeyAppToken“) before adding your new property.
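After the edit, the properties block should look something like this (the values here are abbreviated and illustrative; your object will have more properties):

```json
"properties": {
    "Application_Type": "web",
    "InstrumentationKey": "00000000-0000-0000-0000-000000000000",
    "Retention": "P90D",
    "HockeyAppToken": "",
    "DisableIpMasking": true
}
```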
The final step is to use the PUT button to update the object, which in turn authenticates you to the API using your existing login token, constructs the JSON object and sends a ‘PUT’ request to the API endpoint at ‘management.azure.com/subscriptions/&lt;subscriptionId&gt;/resourceGroups/&lt;rgName&gt;/providers/microsoft.insights/components/&lt;resourceName&gt;?api-version=2015-05-01‘. Much simpler than writing a PowerShell or Bash script; what a clever little tool it is.
The result is that new requests in Application Insights will carry the source NAT IP address. Unfortunately, all previous requests will remain scrubbed as ‘0.0.0.0’.
This is a great way to tweak services while working out whether it’s the correct knob to turn in the Azure service. But while it’s quick, it isn’t documented. If you have a repository of deployment ARM templates, make sure you go back and amend the deployment JSON. The day will come when the service gets re-deployed and it won’t come out of the sausage maker the same, and the finger will get pointed back at that Azure administrator who doesn’t follow good DevOps practices.
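For completeness, here is a sketch of what that amendment might look like in an ARM template, using the same api-version as the endpoint above (the resource name parameter is illustrative):

```json
{
  "type": "microsoft.insights/components",
  "apiVersion": "2015-05-01",
  "name": "[parameters('appInsightsName')]",
  "location": "[resourceGroup().location]",
  "kind": "web",
  "properties": {
    "Application_Type": "web",
    "DisableIpMasking": true
  }
}
```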
Recently I had the pleasure of upgrading a Megaport Cloud Router (MCR) from version 1 to the new version 2. Version 2 of the MCR sits on a whole new code base, so a side-by-side migration is required. In this blog I’ll show you how we went about the process; it could also be used when migrating MCRs in general, or any cloud connectivity for that matter.
The aim is to create the smallest possible outage for the customer’s on-premises connectivity to the cloud datacenter. In a fault-tolerant environment this is usually done by having multiple routes advertised to the on-premises routers via a dynamic routing protocol. In my example BGP is used throughout the environment, and standard route propagation only takes a few minutes end-to-end without interference.
In my example I will be moving an Azure Express Route. The key to moving the Express Route is that there are primary and secondary BGP sessions (green) for fault tolerance. I’ll move the secondary connection (2) from my active MCR to another MCRv2 in a staged approach to maintain connectivity for as long as possible. Once each Express Route peering session is connected to its own MCR, I will move the Megaport physical connection (blue) from MCR1 to MCR2.
Create the new MCR
Create a new MCR in the correct datacenter location.
Add a connection to the cloud provider, using the existing service key out of the Express Route Virtual Circuit panel in your Azure subscription.
We can see above that we have a secondary connection available. (This was completed ahead of starting this blog).
Finalise your connection, and after you select ‘order’ the designer view will deploy the Express Route connection for you.
Check the Connection
Give it all of an itch and a scratch and the BGP peers of Microsoft and the MCR will light up.
Head over to Azure and we can check the ARP records to see the secondary peering endpoints now populating.
Here is the primary that is still online with the existing MCR that is connected to on-premises network.
We should now have some records for the secondary connection between the Azure Express Route Gateway and MCR2. Select ‘show secondary’; in the interface rows, ‘On-Prem’ is the MCR Express Route peering IP.
All is looking good from a layer 2 (ARP) and layer 3 (BGP, below) perspective. From this point, if we look at the route tables, we would see that the primary BGP peer session has all the on-premises routes and Azure VNET routes, while the secondary route table only has the Azure routes and the peering /30 routes.
If we check our Express Route Virtual Circuits, we can validate that the peering IPs used in each session match.
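If you prefer the shell to the portal, the same ARP and route table checks can be pulled with the Az.Network cmdlets. A sketch with hypothetical circuit names:

```powershell
# Hypothetical resource group and circuit names
$rg = "rg-connectivity"
$circuit = "er-circuit-01"

# ARP table for the secondary link - should now show the MCR2 peering IP
Get-AzExpressRouteCircuitARPTable -ResourceGroupName $rg `
    -ExpressRouteCircuitName $circuit `
    -PeeringType AzurePrivatePeering -DevicePath Secondary

# Route table - the secondary will only carry Azure routes until the cutover
Get-AzExpressRouteCircuitRouteTable -ResourceGroupName $rg `
    -ExpressRouteCircuitName $circuit `
    -PeeringType AzurePrivatePeering -DevicePath Secondary
```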
Delete the connection between the MCR to on-premises router
Now we want to swing our on-premises router connectivity from the cross connect between MCR1 (1) and the physical port (2). Back in the designer view we have all of the required router and connection objects in view. I’ve also underlined the button to ‘delete’ the virtual cross connect (VXC) between the on-premises router and MCR1. Note: in our deployment this is where the outage starts; we will lose connectivity between Azure and the on-premises router, as I’ve not used VLAN tagging on the physical port in this example.
Add a connection between the MCR2 and the on-premises router
Quickly, go and create a connection between the MCR2 and your Megaport “Port” (aka. Physical Port).
Make sure it’s your physical port, not your other MCR 🙂
I’m using the exact same peering subnet for my new MCR2, so as long as I include the correct /30 subnet, the peering relationship with the on-premises router will come back willingly in a matter of seconds.
Review what you have done in the designer view. You won’t have set anything into motion until you click ‘order’. So do it! From the below view you can see the following summary:
Old MCR with a single Express Route Connection
New MCR with single Express Route Connection
Physical Port with the new connection to the MCR2
Once you click order, you barely have time to scratch yourself again before the status moves from deploying (little red Megaport rocket icons) to deployed. Hurray!
Once that has all come up green, the rest of the work is done in your edge routers: your on-premises physical edge router connected to Megaport, and your Express Route Virtual Circuits/Gateway. Do a few checks to make sure you have established end-to-end connectivity. Here are some ideas:
Edge Router – Review the BGP Status of the MCR.
show ip bgp neighbors
Check the MCR neighbor exists and the BGP state is ‘Established’.
Remote AS of the MCR by default is 133937.
Remember the IP address of the neighbor
Edge Router – Review received routes from MCR.
show ip bgp neighbors x.x.x.x received-routes
You’ll no doubt see routes from your VNET with an AS path of your MCR plus Microsoft, e.g. 133937 12076. (Microsoft uses AS 12076 for Azure public, Azure private and Microsoft peering.)
Azure Portal – Review the ARP records and route tables.
The secondary connection should show all your received routes to the Express Route Gateway from the on-premises router.
Branch Site Router – Go check what has been advertised down to your client sites. A good old trace route will show the IP addresses of the MCR in the hops.
You’re pretty much done at this stage. The software-defined network engine has done its job, and you are now in control of your own destiny with on-premises route propagation.
How good is Megaport! We love networking, especially when it’s fast, scalable and consistent.
We love Megaport
Let us take the stress out of public cloud connectivity. Get in touch with us to understand the benefits of using a service like Megaport Cloud Router.