Azure Application Insights – No Client Source IP Address

This week I was working with one of our customers who is implementing Azure API Management alongside their web applications. We are funnelling all the request logs into an Application Insights service to gain visibility of the end-to-end transaction data. We noticed that all the client GET requests had ‘0.0.0.0’ recorded as the client IP address.

Request Property     Value
Client IP address    0.0.0.0

I have since learned that Microsoft obfuscates this data as it is ingested from Azure Monitor into Application Insights, for what I call a ‘privacy policy‘. As this was a corporate application, anonymity wasn’t needed, and the development team wanted to understand whether a request to their application came from inside the corporate network or from an unknown internet address.

A good habit to get into is to first do a quick review of the latest API version for ‘Microsoft.Insights/components’, which shows a boolean property named DisableIpMasking.

{
  "name": "string",
  "type": "Microsoft.Insights/components",
  "apiVersion": "2020-02-02-preview",
  "location": "string",
  "tags": {},
  "kind": "string",
  "properties": {
    "Application_Type": "string",
    "Flow_Type": "Bluefield",
    "Request_Source": "rest",
    "HockeyAppId": "string",
    "SamplingPercentage": "number",
    "DisableIpMasking": "boolean",
    "ImmediatePurgeDataOn30Days": "boolean",
    "WorkspaceResourceId": "string",
    "publicNetworkAccessForIngestion": "string",
    "publicNetworkAccessForQuery": "string"
  }
}

Reviewing the property values for the ApplicationInsightsComponentProperties object, DisableIpMasking gave the following short but sweet answer.

Name               Type      Required   Description
DisableIpMasking   boolean   No         Disable IP masking.

Yeah I reckon that is worth a shot!

Update ApplicationInsightsComponentProperties value DisableIpMasking

As this value only seems to be exposed through the API, we have to either push a new incremental ARM template through the sausage maker or perform an API request directly. An API request seems quicker, but doing it in a script with authentication and the correct structure takes time. I have a nice trick for updating or adding a value on an object when either of those feels like overkill.

  1. Navigate to the Azure Resource Explorer
  2. Find the Application Insights Resource Group
  3. Select Providers > Microsoft.Insights
  4. Select Components > ‘Application Insights Name‘

You will be shown the JSON definition of your Application Insights Object. You can tell this by the line:

"type": "microsoft.insights/components"

To know you’re in the right place, look under properties: there will be many values, including Application_Type, InstrumentationKey, ConnectionString and Retention, but DisableIpMasking will be missing. So it’s as simple as adding it.

  1. At the top of the page, toggle the blue switch from ‘Read Only’ to ‘Read/Write’.
  2. Select ‘Edit‘.
  3. Remember to add a ‘,’ to the previous last line (in my case “HockeyAppToken“) before adding your new property, as shown below.
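
After the edit, the tail of the properties object should look something like this (a trimmed sketch; your surrounding values will differ, and “HockeyAppToken“ just happened to be my previous last line):

  "HockeyAppToken": "...",
  "DisableIpMasking": true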

The final step is to use the PUT button to update the object. This, in turn, authenticates you to the API using your existing login token, constructs the JSON object and sends a ‘PUT’ request to the API endpoint at ‘management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/microsoft.insights/components/<resourceName>?api-version=2015-05-01‘. Much simpler than writing a PowerShell or Bash script, what a clever little tool it is.
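
If you later want to script the same change, a few lines of Az PowerShell will do it. A hedged sketch (the resource names are placeholders, and it assumes the Az module with an authenticated session):

#Fetch the Application Insights resource, add the property, and push it back
$appInsights = Get-AzResource -ResourceGroupName "<rgName>" -ResourceType "microsoft.insights/components" -Name "<resourceName>"
$appInsights.Properties | Add-Member -MemberType NoteProperty -Name "DisableIpMasking" -Value $true -Force
$appInsights | Set-AzResource -Force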

The result is that new requests in Application Insights will carry the source NAT IP address. Unfortunately, all previous requests will remain scrubbed to ‘0.0.0.0’.

Closing thoughts

This is a great way to tweak services while working out whether it’s the correct knob to turn in the Azure service. But while it’s quick, it isn’t documented. If you have a repository of deployment ARM templates, make sure you go back and amend the deployment JSON, as shown below. The day will come when the resource gets re-deployed, and it won’t come out of the sausage maker the same. The finger will get pointed back at the Azure administrator who doesn’t follow good DevOps practices.
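
For example, the properties fragment of the components resource in your template would gain one line (a trimmed fragment based on the schema shown earlier; “web“ is just an example value):

  "properties": {
    "Application_Type": "web",
    "DisableIpMasking": true
  }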


Automating Azure Site Recovery with PowerShell

In a recent consulting engagement, I needed to perform a large-scale migration of a company’s virtual machine (VM) fleet from an on-premises datacenter to Microsoft Azure. Thinking about what that actually means: we’re picking up many compute workloads that are (in most cases) essential for day-to-day business operations and re-homing them to a new slice of a Microsoft-managed datacenter. After coming out the other end and completing the project, I thought I would shed some light on the tools that I used and developed to make the vision a reality.

In this particular engagement the customer is a large enterprise with a VMware environment servicing 300+ VMs. When we consider the business value behind each of these compute workloads, it quickly becomes apparent that selecting the right tooling and approach is vital to delivering a successful outcome whilst causing as little disruption to the business as possible.

Enter Azure Site Recovery

Azure Site Recovery (ASR) is Microsoft’s Disaster Recovery as a Service solution, which can replicate workloads running on physical and virtual machines from one location to a secondary location. As a disaster recovery platform, workloads can fail over and successfully fail back in a disaster scenario. ASR can also be used to migrate workloads to Azure by completing the failover component without failing back.

Why Should We Automate Azure Site Recovery?

I like to automate processes like this because a computer following a written process will always perform it the same way; we know what the output will look like. In this case, that means a like-for-like VM that looks and feels like it did in its previous life, before being migrated. When we introduce an operator into the mix, we also introduce the human element: things like resource names and groups, VM specs, disk settings, network location and IP addresses all need to be configured for each VM migration.

To have success running migrations at scale, it is important to use known, well-tested, repeatable processes. For me, that means figuring out the best way to use a tool, then automating it so that you (or anyone else) can use it the right way, every time, easily.

How Can We Automate Azure Site Recovery?

I use PowerShell as an automation tool on top of ASR for a couple of reasons. The main reason being that Microsoft provide and maintain a set of PowerShell modules for interacting with Azure resources, including ASR. This is known as the Az module – See our previous post on the Azure PowerShell Az module for a deeper explanation.  PowerShell can also run almost anywhere thanks to PowerShell Core, a cross-platform edition of PowerShell that runs on Windows, macOS and Linux.
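
Getting set up is a one-time exercise; installing the module from the PowerShell Gallery and signing in takes two commands:

#Install the Az module from the PowerShell Gallery, then authenticate
Install-Module -Name Az -Scope CurrentUser
Connect-AzAccount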

Armed with PowerShell and the Az module, we can crack on with the fun stuff: bashing out some lines of code. My approach here usually involves a fair bit of back and forth, playing with the commands available to me and learning the best ways to drive them. Importantly, you don’t want to do this with live data; setting up an isolated sandpit with dummy data will go a long way in letting you build skills with the tools while making sure your production systems remain untouched.

Once I’ve got a handle on the commands that are needed and how they fit together, I make an MVP (minimum viable product) script. The idea here is to demonstrate that it’s possible for the tooling to work (it’s not pretty, but it works). To paint a picture, one of my MVP scripts will usually have a bunch of variables at the start declaring all the info that is required: things like VM name, source location, target location and so on. From there, I usually design the script to be run line by line. This is mostly for simplicity’s sake; complexity can come later, right now it just needs to be as simple as possible. At this stage, we can demonstrate our capability to perform a migration with PowerShell. A quick example of this is setting up a replication job. Preceding this line, I run a series of get statements to build up all the variables seen in the command below.

$replicationJob = New-AzRecoveryServicesAsrReplicationProtectedItem -VMwareToAzure `
    -ProtectableItem $vm `
    -Name (New-Guid).Guid `
    -ProtectionContainerMapping $replicationPolicy `
    -ProcessServer $ProcessServer `
    -Account $Account `
    -RecoveryResourceGroupId $ResourceGroup.ResourceId `
    -LogStorageAccountId $LogStorageAccount.Id `
    -RecoveryAzureNetworkId $vnet.Id `
    -RecoveryAzureSubnetName $failoverSubnetName
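
Those get statements depend on your environment, but a hedged sketch of the discovery calls that populate the variables above might look like this (every vault, server and resource name here is a placeholder):

#Set the vault context so the ASR cmdlets know which vault to talk to
$vault = Get-AzRecoveryServicesVault -Name "MigrationVault"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

#Discover the source VM and the replication policy mapping
$fabric    = Get-AzRecoveryServicesAsrFabric | Select-Object -First 1
$container = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric
$vm        = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $container -FriendlyName "MyServer01"
$replicationPolicy = Get-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainer $container -Name "MyPolicyMapping"

#Process server and run-as account come from the VMware fabric details
$ProcessServer = $fabric.FabricSpecificDetails.ProcessServers[0]
$Account       = $fabric.FabricSpecificDetails.RunAsAccounts[0]

#Target-side resources for the replicated VM
$ResourceGroup      = Get-AzResourceGroup -Name "target-rg"
$LogStorageAccount  = Get-AzStorageAccount -ResourceGroupName "target-rg" -Name "asrcachestorage"
$vnet               = Get-AzVirtualNetwork -ResourceGroupName "target-rg" -Name "target-vnet"
$failoverSubnetName = "workload-subnet"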

From here, I like to put some lipstick on it and make it feel like a more polished product. Personally, I like to use a series of questions and prompts to generate the variables I described in the last paragraph. I also add status checks and operator prompts to continue. An example of this is performing a failover: once the operator confirms they are ready to begin, the command executes the failover, then continuously checks the failover job status until it has completed and tells the user running the script that it’s complete. Here is an example of a status check that I wrote for monitoring the progress of a failover job.

do {
    Clear-Host
    Write-Host "======== Monitoring Failover ========"
    Write-Host "This will refresh every 5 seconds."
    try {
        # Refresh the job object to pick up the latest state
        $failoverJob = Get-ASRJob -Name $failoverJob.Name
    }
    catch {
        Write-Host -ForegroundColor Red "ERROR - Unable to get status of Failover job"
        Write-Host -ForegroundColor Red ("ERROR - " + $_)
        log "ERROR" "Unable to get status of Failover job"
        log "ERROR" $_
        exit
    }
    Write-Host "Failover status for $($VMName.FriendlyName) is $($failoverJob.State)"
    Start-Sleep -Seconds 5
} while (($failoverJob.State -eq "InProgress") -or ($failoverJob.State -eq "NotStarted"))

Once you get this far, the sky is the limit. Like most things, it can evolve over time. I like to add error handling and logging so we can elegantly handle a failure and have an audit trail of operations; the snippet above calls a small log helper for exactly that. I take this approach with most of the processes I automate. I think it’s important to start small and work up from there.
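
The log calls in the snippet above can be satisfied by a very small helper. A minimal sketch (the log file path is just an example):

function log {
    param (
        [string]$Level,
        [string]$Message
    )
    # Append a timestamped entry to a simple text log for an audit trail
    $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    Add-Content -Path "$PSScriptRoot\asr-migration.log" -Value "$timestamp [$Level] $Message"
}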


SD-WAN made easy

I’ll start by asking you two questions:

Are you paying too much for your Wide Area Network (WAN)?

And, is it the best method of connecting to the public Cloud?

At cloudstep.io we are constantly looking for ways to improve our customers’ connectivity to the public cloud. We consider cloud network connectivity a foundation service that must be implemented at the beginning of a successful cloud journey. Getting it right at the start is imperative to allow any cloud service adoption to truly reach its potential and not suffer from underserved network issues like latency, bandwidth and round-trip time.

If the public cloud is going to become your new datacenter, then why not structure your network around it?

What if I could solve your cloud connectivity and WAN connectivity in a single solution? Azure Virtual WAN is a service that offers you a centralised, software-defined, managed network. Connect all your sites via VPN or ExpressRoute to Azure Virtual WAN and let Microsoft become the layer 3 network cloud that traditional telco providers are probably charging you hand over fist for. Who better to become the service provider for your software-defined network (SDN) than one of the largest software companies in the world: Microsoft.

Commodity business-grade internet services are becoming cheaper now thanks to things like the NBN, where it is truly a race to the bottom on price in my opinion, which is great for the consumer… finally! Procure NBN business-grade connections for each of your office locations, then use Azure Virtual WAN to quickly deploy a secure network for site-to-site and site-to-Azure connectivity.

I believe that a service like this is really here to disrupt traditional network service providers and add great value to existing or new Microsoft Azure customers.

We are always looking to save money in a move to the cloud, and your network cost could potentially be your biggest reduction. Get in contact with us at cloudstep.io to see if we can help you reform your network.


Add VC Accounts to Microsoft Teams Channels with Azure Automation

At cloudstep.io® HQ, Microsoft Teams is a big part of how we organise digital assets in the business. We are a consulting firm by trade; as new prospects become paying customers, we add them as a team. The default General channel is used for administration and accounts, and additional channels are created per project name or scope of works. We find ourselves no longer needing to go into the dark corners of SharePoint administration (commonly referred to in the office as ‘SwearPoint!’). We have adopted Microsoft Teams as our preferred web, audio and video conferencing platform for internal and external meetings. Our board room video conferencing unit runs a full version of Windows 10 and Microsoft Teams that we set up as a ‘do it yourself‘ alternative to the off-the-shelf room systems. Our requirements for the VC unit were:

  • Our web application, cloudstep.io®, requires a full desktop browser experience.
  • Mouse and keyboard are preferred for web navigation inside the app.
  • A full OS on the VC unit is preferred, to eliminate employees having to BYOD and connect either physically or wirelessly for screen presentation.
  • We can connect to third-party conferencing platforms by installing the add-ons for guest access (Zoom, WebEx, GoToMeeting, Chime etc.) directly onto the machine for our partner-led meetings.
  • Wirelessly present our Macs, iPads, iPhones, Androids and Windows laptops.
  • We are all ‘power users‘ and can handle the meeting join experience in the Microsoft Teams client without the need for a single ‘click-to-join’ button on the table, which the Microsoft Teams Room (MTR) system provides via a touch device.

We have a boardroom account with an Office 365 license so it can leverage the desktop tools. Windows 10 automatically logs in each morning and the Microsoft Teams client starts automatically. The bill of materials is notably:

  • Intel NUC
  • Windows 10
    • Teams Client
    • Office 365 Pro Plus (Word, Excel, PowerPoint, OneNote)
    • Windows 10 Calendar (Connect to Office 365 Mailbox)
    • AirServer client (ChromeCast, MiraCast, AirPlay)
    • Chrome Browser
  • Office 365 user license
  • Logitech Meetup camera
  • Biggest screen we could fit in the room
  • Microsoft Bluetooth keyboard and mouse

The VC mailbox type is set to ‘room‘ with the following to enhance the experience for scheduled meetings in the board room:

#Add tips when searching in Outlook
Set-Mailbox -Identity $VC -MailTip "This room is equipped to support native Teams & Skype for Business Meetings. Remember to add meeting invite details via the Teams Outlook button in the ribbon."

#Auto Accept
Set-CalendarProcessing -Identity $VC -AutomateProcessing AutoAccept -AddOrganizerToSubject $false -RemovePrivateProperty $false -DeleteComments $false -DeleteSubject $false -AddAdditionalResponse $true -AdditionalResponse "Your meeting is now scheduled and if it was enabled as a Teams Meeting will be available from the conference room client."

This has worked well over the last 12 months. The only user experience problem we have had is when running a meeting from the VC unit where the account isn’t a member of the team in which the data being presented is stored, and therefore it cannot see or open the content. A simple solution for this is automation. We investigated two automation solutions available in the Microsoft services we already have:

  1. Flow (Office 365 Suite)
  2. Azure Automation (Azure Subscription)

Unfortunately, option 1 didn’t have any native integration for triggers based on Office 365 group or team creation. So we resorted to a quick Azure PowerShell Runbook that executes on a simple schedule. The steps it needed to run were:

  1. Get a list of all the teams.
  2. Query them against the UnifiedGroup properties to get…
    1. AccessType equals ‘Public‘
    2. CreationDate within 2 days
  3. Check the newly created teams group membership for the VC unit username.
  4. If it doesn’t exist, add the VC unit with the role ‘member‘.

Write-Verbose "Getting Credentials ..." -Verbose
$Credentials = Get-AutomationPSCredential -Name 'Admin-365'
Write-Verbose "Credential Imported : $($Credentials.UserName)" -Verbose

$o365Cred = New-Object System.Management.Automation.PSCredential ($Credentials.UserName, $Credentials.Password)
Write-Verbose "Credential Loaded : $($o365Cred.UserName)" -Verbose
Write-Verbose 'Connecting to 365 ...' -Verbose
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Credentials -Authentication Basic -AllowRedirection
Write-Verbose 'Importing UnifiedGroups PowerShell Commands ...' -Verbose
Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true | Out-Null
Write-Verbose 'Connecting to Teams ...' -Verbose
Connect-MicrosoftTeams -Credential $Credentials

# Only consider teams created within the last two days
$creationdate = ((Get-Date).AddDays(-2))
$teams = Get-Team
# Alternative pre-filter kept for reference:
#$groups = Get-UnifiedGroup | Where-Object {$_.WelcomeMessageEnabled -like "False" -and $_.AccessType -like "Public" -and $_.WhenCreated -ge $creationdate}
$TeamsOutput = @()
foreach ($Team in $teams) {
    $UnifiedGroup = Get-UnifiedGroup -Identity $Team.GroupId
    # Only process public teams created within the window
    if ($UnifiedGroup.AccessType -like "Public" -and $UnifiedGroup.WhenCreated -ge $creationdate) {
        Write-Verbose "Processing team named: $($UnifiedGroup.DisplayName)" -Verbose
        $VC = Get-TeamUser -GroupId $Team.GroupId | Where-Object { $_.User -like "user@domain.com" }
        if ($VC.Count -eq 0) {
            Write-Verbose "VC not member, adding..." -Verbose
            Add-TeamUser -GroupId $Team.GroupId -User "user@domain.com" -Role Member
        }
        else { Write-Verbose "VC is member already" -Verbose }
    }
    $TeamsOutput += $UnifiedGroup
}
Write-Verbose "Total teams processed for selection: $($TeamsOutput.Count)" -Verbose
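
To make the runbook run regularly, link it to a daily schedule. A hedged sketch using the Az.Automation cmdlets (the account, resource group and runbook names here are placeholders):

#Create a daily schedule and attach the runbook to it
New-AzAutomationSchedule -ResourceGroupName "automation-rg" -AutomationAccountName "MyAutomationAccount" -Name "Daily-TeamsVcSync" -StartTime (Get-Date).AddHours(1) -DayInterval 1
Register-AzAutomationScheduledRunbook -ResourceGroupName "automation-rg" -AutomationAccountName "MyAutomationAccount" -RunbookName "Add-VcToNewTeams" -ScheduleName "Daily-TeamsVcSync"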

The result is simple:

[Screenshot: additional member added via PowerShell]

The next day, when the board room account logs in, the Microsoft Teams client will have access to all the team’s channels, files, OneNote and apps. This is great for native Teams meetings, but also when we have customers in the board room without the need for an online meeting; the VC account can see the required team and channel data to present on the physical display.

This solution doesn’t have to be for video conferencing units; you may have some standardised members you want in all groups, or you might want to enforce a certain owner or member list.

Hello Microsoft Teams! Bye bye SwearPoint, may you remain in the background forever.


Backup Palo Alto VM Series Config with Azure Automation

If you have implemented a VM-Series firewall in Azure, AWS or on-premises but don’t have a Panorama server for your configuration backups, here is a solution for getting the firewall configuration into Azure Blob Storage. The same could be done with Lambda and S3 using Python and the boto3 library.

Why Do This?

If there are multiple administrators of the firewall and configuration changes are happening frequently, you may want a daily or hourly backup of the configuration to restore in the event that a recent commit has caused unwanted disruption to your network.

Azure Automation is a great place to start. We will have to interact with the API interface of the firewall to ask for a copy of the XML. Generally speaking, we don’t want to expose the API interface to the internet, nor is it easy to allow the Azure Automation public IPs through, so in this case a Hybrid Worker (a VM inside your trusted network) can execute the code against the internal trusted interface that has the API listening.

Depending on your version of PowerShell and Invoke-WebRequest, you may not be able to ignore a certificate error coming from the API interface, which is why I’m updating the system .NET class for X509 certificate policies.

The steps are pretty simple:

  1. Create a directory on the file system (I’m using the Azure VM with temporary D drive local storage)
  2. Request the XML from the URL
  3. Login to Azure with service credentials
  4. Map to the cold storage account I’m putting the files in
  5. Copy the file

#Ignore certificate errors from the firewall's self-signed API certificate
add-type @"
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    public class TrustAllCertsPolicy : ICertificatePolicy {
        public bool CheckValidationResult(
            ServicePoint srvPoint, X509Certificate certificate,
            WebRequest request, int certificateProblem) {
            return true;
        }
    }
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Ssl3, [Net.SecurityProtocolType]::Tls, [Net.SecurityProtocolType]::Tls11, [Net.SecurityProtocolType]::Tls12
$todaydate = Get-Date -Format yy-MM-dd
$File = "PaloConfig-"+$todaydate+".xml"  
$FilePath = "D:\Palo\"+$File 
#Create Directory 
New-Item -ItemType directory -Path D:\Palo -Force 
#Download Config
Invoke-WebRequest -Uri "https://PaloIPAddress/api/?type=export&category=configuration&key=<ONETIMEKEY>=" -OutFile $FilePath 
#Login with service principal account 
$TenantId = 'AzureTenantID' 
$ApplicationId = 'ServiceID' 
$Thumbprint = (Get-ChildItem cert:\LocalMachine\My\ | Where-Object {$_.Subject -match "CN=AzureHybridWorker" }).Thumbprint 
Connect-AzureRMAccount -ServicePrincipal -TenantId $TenantId -ApplicationId $ApplicationId -CertificateThumbprint $Thumbprint 
#Get key to storage account 
$acctKey = (Get-AzureRmStorageAccountKey -Name StorageAccountName -ResourceGroupName ResourceGroupName).Value[0] 
#Map to the backup BLOB context 
$storageContext = New-AzureStorageContext -StorageAccountName StorageAccountName -StorageAccountKey $acctKey 
#Copy the file to the storage account 
Get-ChildItem -LiteralPath $FilePath | Set-AzureStorageBlobContent -Container "paloconfigbackup" -BlobType "Block" -Context $storageContext -Verbose 

If you are not currently using a Hybrid Worker in your subscription, create one by following the link below:

https://docs.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker

Paste the code into an Azure PowerShell Runbook and create a recurring schedule.

You’ll have backups saved in cold storage for as long as you would like to retain the data. Creating a storage lifecycle policy can help you manage that retention automatically.
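
A hedged sketch of such a lifecycle rule using the newer Az.Storage module, deleting backups 90 days after their last modification (the account and container names match the placeholders used above):

#Delete config backups 90 days after last modification
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -DaysAfterModificationGreaterThan 90
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "paloconfigbackup"
$rule = New-AzStorageAccountManagementPolicyRule -Name "expire-palo-backups" -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "ResourceGroupName" -StorageAccountName "StorageAccountName" -Rule $rule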