Deploy Craft CMS with Azure App Service for Linux Containers

Here are some key points for deploying a Craft CMS installation on an Azure Web App using container images. In this blog we will step through some of the modifications needed to make the container image run in Azure, and the deployment steps to run it from an Azure DevOps pipeline.

Craft CMS has reference material for its Docker deployments here:
GitHub – craftcms/docker: Craft CMS Docker images

Components

The components required are:

  • Azure Web App for Linux Containers
  • Azure Database for MySQL
  • Azure Storage Account
  • Azure Front Door with WAF
  • Azure Container Registry

Custom Docker Image

To make this work in an Azure Web App we have to do the following additional steps:

  • Install OpenSSH and enable the SSH daemon on port 2222 at startup (a sample sshd_config is shown after start.sh below)
  • Set the password for root to “Docker!”
  • Install the Azure Database for MySQL root certificate for SSL connections from the container

We do this in the Dockerfile. We are customising the NGINX implementation of Craft CMS so that the front end can service the HTTP/HTTPS requests coming from the App Service.

# composer dependencies
FROM composer:1 as vendor
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install --ignore-platform-reqs --no-interaction --prefer-dist

FROM craftcms/nginx:7.4
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
USER root
RUN apk add openssh sudo \
     && echo "root:Docker!" | chpasswd 
# Copy the sshd_config file to the /etc/ directory
COPY sshd_config /etc/ssh/
COPY start.sh /etc/start.sh
COPY BaltimoreCyberTrustRoot.crt.pem /etc/BaltimoreCyberTrustRoot.crt.pem 
RUN ssh-keygen -A
RUN addgroup sudo
RUN adduser www-data sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# the user is `www-data`, so we copy the files using the user and group
USER www-data
COPY --chown=www-data:www-data --from=vendor /app/vendor/ /app/vendor/
COPY --chown=www-data:www-data . .

EXPOSE 8080 2222
ENTRYPOINT ["sh", "/etc/start.sh"]

The corresponding ‘start.sh’:

#!/bin/bash
sudo /usr/sbin/sshd &
/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisor.conf
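
The sshd_config copied into the image is not shown above. A minimal sketch, based on the Azure App Service requirements for SSH into custom containers (the daemon must listen on port 2222 and permit the root login that the platform uses), looks something like this:

Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp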

Build the Web App

The Azure Web App resource is deployed using an ARM template. Here is a snippet of the template; the key is to have your environment variables defined:

{
            "comments": "This is the docker web app running craftcms/custom Docker image",
            "type": "Microsoft.Web/sites",
            "name": "[parameters('siteName')]",
            "apiVersion": "2020-06-01",
            "location": "[parameters('location')]",
            "tags": "[parameters('tags')]",
            "dependsOn": [
                "[variables('hostingPlanName')]",
                "[variables('databaseName')]"
            ],
            "properties": {
                "siteConfig": {
                    "appSettings": [
                        {
                            "name": "DOCKER_REGISTRY_SERVER_URL",
                            "value": "[reference(variables('registryResourceId'), '2019-05-01').loginServer]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_USERNAME",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').username]"
                        },
                        {
                            "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
                            "value": "[listCredentials(variables('registryResourceId'), '2019-05-01').passwords[0].value]"
                        },
                        {
                            "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
                            "value": "false"
                        },
                        {
                            "name": "DB_DRIVER",
                            "value": "mysql"
                        },
                        {
                            "name": "DB_SERVER",
                            "value": "[reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName]"
                        },
                        {
                            "name": "DB_PORT",
                            "value": "3306"
                        },
                        {
                            "name": "DB_DATABASE",
                            "value": "[variables('databaseName')]"
                        },
                        {
                            "name": "DB_USER",
                            "value": "[variables('databaseUserName')]"
                        },
                        {
                            "name": "DB_PASSWORD",
                            "value": "[parameters('administratorLoginPassword')]"
                        },
                        {
                            "name": "DB_SCHEMA",
                            "value": "public"
                        },
                        {
                            "name": "DB_TABLE_PREFIX",
                            "value": ""
                        },
                        {
                            "name": "SECURITY_KEY",
                            "value": "[parameters('cmsSecurityKey')]"
                        },
                        {
                            "name": "WEB_IMAGE",
                            "value": "[parameters('containerImage')]"
                        },
                        {
                            "name": "WEB_IMAGE_PORTS",
                            "value": "80:8080"
                        }

                    ],
                    "linuxFxVersion": "[variables('linuxFxVersion')]",
                    "scmIpSecurityRestrictions": [
                        
                    ],
                    "scmIpSecurityRestrictionsUseMain": false,
                    "minTlsVersion": "1.2",
                    "scmMinTlsVersion": "1.0"
                },
                "name": "[parameters('siteName')]",
                "serverFarmId": "[variables('hostingPlanName')]",
                "httpsOnly": true      
            },
            "resources": [
                {
                    "apiVersion": "2020-06-01",
                    "name": "connectionstrings",
                    "type": "config",
                    "dependsOn": [
                        "[resourceId('Microsoft.Web/sites/', parameters('siteName'))]"
                    ],
                    "tags": "[parameters('tags')]",
                    "properties": {
                        "dbstring": {
                            "value": "[concat('Database=', variables('databaseName'), ';Data Source=', reference(resourceId('Microsoft.DBforMySQL/servers',variables('serverName'))).fullyQualifiedDomainName, ';User Id=', parameters('administratorLogin'),'@', variables('serverName'),';Password=', parameters('administratorLoginPassword'))]",
                            "type": "MySQL"
                        }
                    }
                }
            ]
        },

All other resources can use the ARM defaults; no customisation is required. Either put them all in a single ARM template or separate them out into their own templates. Your choice to be creative.

Build Pipeline

The infrastructure build pipeline looks something like this:

# Infrastructure pipeline
trigger: none

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  CMSSINGLE: 'singleCraftCMSTemplate.json'
  CMSSINGLEPARAM: 'singleCraftCMSTemplate.parameters.json'
  CMSFILEREG: 'ContainerRegistry.json'
  CMSFRONTDOOR: 'frontDoor.json'
  CMSFILEREGPARAM: 'ContainerRegistry.parameters.json'
  CMSFRONTDOORPARAM: 'frontDoor.parameters.json'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  AZURECLISPID: ''
  TENANTID: ''
  RGNAME: ''
  TOKEN: ''
  ACS: 'registryName.azurecr.io'
resources:
  repositories:
    - repository: coderepo
      type: git
      name: Project/craftcms
stages:
- stage: BuildContainerRegistry
  displayName: BuildRegistry
  jobs:
  - job: BuildContainerRegistry
    displayName: Azure Git Repository
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: AzureFileCopy@4
      inputs:
        SourcePath: '$(Build.Repository.LocalPath)\CMS\template\*'
        azureSubscription: ''
        Destination: 'AzureBlob'
        storage: ''
        ContainerName: 'templates'
        BlobPrefix: portal
        AdditionalArgumentsForBlobCopy: --recursive=true
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM template for Azure Container Registry"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: 'azureDeployCLI-SP'
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFILEREG)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFILEREGPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Import the public docker images to the Azure Container Repository'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\dockerImages.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'

- stage: BuildGeneralImg
  dependsOn: BuildContainerRegistry
  displayName: BuildImages
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: craftcms
        command: buildAndPush
        dockerfile: 'craftcms/Dockerfile'
        tags: |
          craftcms
          latest

- stage: Deploy 
  dependsOn: BuildGeneralImg
  displayName: DeployWebService
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - checkout: self
    - checkout: coderepo
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for remaining assets"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSSINGLE)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSSINGLEPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'

- stage: Secure 
  dependsOn: Deploy
  displayName: DeployFrontDoor
  jobs:
  - job:
    displayName: ARM Templates
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Azure ARM single template for Front Door"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: ''
        subscriptionId: '$(SUBSCRIPTIONID)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RGNAME)'
        location: '$(LOCATION)'
        templateLocation: 'URL of the file'
        csmFileLink: '$(TEMPLATEURI)$(CMSFRONTDOOR)$(TOKEN)' 
        csmParametersFileLink: '$(TEMPLATEURI)$(CMSFRONTDOORPARAM)$(TOKEN)' 
        deploymentMode: 'Incremental'
    - task: AzurePowerShell@5
      displayName: 'Apply Front Door service tags to Web App ACLs'
      inputs:
        azureSubscription: 'azureDeployCLI-SP'
        ScriptType: 'FilePath'
        ScriptPath: '$(Build.ArtifactStagingDirectory)\CMS\template\enableFrontDoorOnWebApp.ps1'
        errorActionPreference: 'silentlyContinue'
        azurePowerShellVersion: 'LatestVersion'    

Enable Front Door with WAF

The pipeline stage DeployFrontDoor runs enableFrontDoorOnWebApp.ps1:

$azFrontDoorName = ""
$webAppName = ""
$resourceGroup = ""

Write-Host "INFO: Restrict access to a specific Azure Front Door instance"
try{
    $afd = Get-AzFrontDoor -Name $azFrontDoorName -ResourceGroupName $resourceGroup
}
catch{
    Write-Host "ERROR: $($_.Exception.Message)"
}

Write-Host "INFO: Setting the IP ranges defined in the AzureFrontDoor.Backend service tag to the Web App"
try{
    Add-AzWebAppAccessRestrictionRule -ResourceGroupName $resourceGroup -WebAppName $webAppName -Name "Front Door Restrictions" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend -HttpHeader @{'x-azure-fdid' = $afd.FrontDoorId}
}
catch{
    Write-Host "ERROR: $($_.Exception.Message)"
}


You should now have a Craft CMS web app that is only available through the Front Door URL.

Continuous Deployment

There are many ways to deploy updates to your website; an Azure Web App has a handy feature called deployment slots that can be used here (a slot-swap example follows the pipeline below).

# Trigger on commit
# Build and push an image to Azure Container Registry
# Update Web App Slot

trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - pipelines
      - README.md
  batch: true

resources:
- repo: self

pool:
  vmImage: 'windows-2019'
variables:
  TEMPLATEURI: 'https://storageAccountName.blob.core.windows.net/templates/portal/'
  LOCATION: 'Australia East'
  SUBSCRIPTIONID: ''
  RGNAME: ''
  TOKEN: ''
  SASTOKEN: ''
  TAG: '$(Build.BuildId)'
  CONTAINERREGISTRY: 'registryName.azurecr.io'
  IMAGEREPOSITORY: 'craftcms'
  APPNAME: ''

stages:
- stage: BuildImg
  displayName: BuildLatestImage
  jobs:
  - job: BuildCraftCMSImage
    displayName: General Docker Image
    pool:
      vmImage: 'ubuntu-18.04'
    steps:
    - checkout: self
    - task: CopyFiles@2
      name: copyToBuildHost
      displayName: 'Copy files to the build host for execution'
      inputs:
        Contents: '**'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: Docker@2
      displayName: Build and push
      inputs:
        containerRegistry: ''
        repository: $(IMAGEREPOSITORY)
        command: buildAndPush
        dockerfile: 'Dockerfile'
        tags: |
          $(IMAGEREPOSITORY)
          $(TAG)


- stage: UpdateApp 
  dependsOn: BuildImg
  displayName: UpdateTestSlot
  jobs:
  - job:
    displayName: 'Update Web App Slot'
    pool:
      vmImage: 'windows-latest'
    steps:
    - task: AzureWebAppContainer@1
      displayName: 'Update Web App Container Image Reference' 
      inputs:
        azureSubscription: ''
        appName: $(APPNAME)
        containers: $(CONTAINERREGISTRY)/$(IMAGEREPOSITORY):$(TAG)
        deployToSlotOrASE: true
        resourceGroupName: $(RGNAME)
        slotName: test
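
Once the test slot looks healthy, the new image can be promoted by swapping it into production. A minimal sketch using Azure PowerShell (assuming the slot is named test and the swap targets the default production slot; $RGNAME and $APPNAME mirror the pipeline variables of the same name):

# Promote the 'test' slot (now running the new container image) to production
Switch-AzWebAppSlot -ResourceGroupName $RGNAME `
                    -Name $APPNAME `
                    -SourceSlotName "test" `
                    -DestinationSlotName "production"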




Fault Tolerant Multi-AZ EC2, on a Beer Budget – Live from AWS Meetup

Filmed on the 18th of March 2021 at the Adelaide AWS User Group, where Arran Peterson presented on how to put together best-practice (and cheap!) cloud architecture for business continuity. The title:

“Enterprise grade fault tolerant multi-AZ server deployment on a beer budget”

Recording

RATED ‘PG’ – Mild coarse language.

Presenter

Arran Peterson

Arran is an Infrastructure Consultant with a passion for Microsoft Unified Communications and the true flexibility and scalability of cloud-based solutions.
As a Senior Consultant, Arran brings his expertise in enterprise environments to work with clients on the Microsoft Unified Communications product portfolio of Office 365, Exchange and Skype/Teams, along with expertise in transitioning to cloud-based platforms including AWS, Azure and Google.

More Reading

Amazon Elastic Block Store

https://aws.amazon.com/ebs/

AWS Sydney outage prompts architecture rethink

https://www.itnews.com.au/news/aws-sydney-outage-prompts-architecture-rethink-420506

Chalice Framework

https://aws.github.io/chalice/

Adelaide AWS User Group

https://www.meetup.com/en-AU/Amazon-Web-Services-User-Group-Adelaide/events/276728885/


Pimp my VS Code

Those who know me, know that I have a keen interest in software tools and exploring the various different ways that people use them. I take great joy in exploring custom or 3rd party plugins and add-ons to get the most out of the tools I use every day. From OS automation tools (like BetterTouchTool) to custom screen savers (Brooklyn is my current favourite), I love it all.

On a good day, I spend quite a bit of time in Visual Studio Code, my IDE of choice. VS Code has all that you need right out of the box, but why stop there? Here's a list of some of my favourite VS Code extensions that I now consider essential when doing a fresh install.

Indent-Rainbow and Bracket Pair Colorizer 2 are must-installs for me. Both are really simple: they change the colours of indents and brackets so you can easily see them at a glance. Always useful when working with indent-heavy languages like YAML.

GitLens is another essential if you are working with Git repositories. GitLens integrates lots of Git tools and information into the editor. My favourite feature of GitLens is the current line blame: an unobtrusive annotation at the end of the line you have selected, showing commit information for that piece of code.

Beautify helps you make your code beautiful; it can automatically indent JavaScript, JSON, CSS, and HTML.

Better Comments makes your comments human readable by changing the colour of comments based on an opening tag. You can even define your own.

Source: Better Comments Documentation in Visual Studio Code
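
For example, with the default tags (assuming they have not been customised), the character following the comment marker drives the colour, roughly like this:

// * Highlighted information
// ! Alert or deprecation warning
// ? Question for a reviewer
// TODO: refactor this later
// a normal comment, left as-is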

Next up, some extensions that I install to match the work I'm doing. In my day-to-day work I'm regularly authoring infrastructure templates for Azure and AWS (ARM and CloudFormation). To make this as simple as possible I install some specific extensions for syntax highlighting, autocompletion and even some code snippet referencing.

Azure Resource Manager (ARM) Tools is a collection of extensions for working with Azure, made by Microsoft. This one has lots of features, so I'll just pick a few. You can use the 'arm!' shortcut to create a blank ARM template with all the properties you need (the generated scaffold is shown below): this one makes life so much better, with less time spent lining up brackets in JSON and more time defining resources!

Image showing the arm template scaffolding snippet
Source: Azure Resource Manager (ARM) Tools Documentation in Visual Studio Code
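
For reference, the scaffold that the 'arm!' snippet drops in is essentially the empty ARM template skeleton, something like:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "functions": [],
    "variables": {},
    "resources": [],
    "outputs": {}
}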

Each time you use a snippet, you can also use tab completion to step through the commonly modified configuration values. Again, less time reading documentation, more time writing code!

Image showing snippet tab stops with pre-populated parameter values
Source: Azure Resource Manager (ARM) Tools Documentation in Visual Studio Code

CloudFormation Template Linter and CloudFormation Resource Snippets add similar functionality for working with AWS CloudFormation templates. While neither of these is created by Amazon, they both do a good job of implementing functionality similar to the ARM tools above.

Next up is one of my new favourites, Dash. Sorry Windows gurus, this one is only on Mac. Dash is an API documentation browser that hooks into VS Code so you can quickly search documentation (from its 200+ built-in doc sets, or you can add your own GitHub doc sets). Sounds boring, but I think it's far from it. I've loaded mine up with lots of Microsoft Azure and AWS documentation. It's really handy to highlight a resource type or PowerShell command, hit control + H, and have the documentation reference instantly pop up; each time it saves me minutes.

Dash - Visual Studio Marketplace
Source: Dash Documentation in Visual Studio Code

Finally, for my icon and colour themes I use VSCode Icons and Atom One Dark. This really comes down to personal preference. I like the syntax colour coding included in the Atom One Dark theme; I find it useful especially when writing PowerShell. VSCode Icons is the most popular icon extension, and I've had no issues since installing it.

Source: Atom One Dark Theme Documentation in Visual Studio Code

That's my round-up of must-have extensions. Are there any missing from this list that you think should be here? Comment below with your must-have extensions.

Cheers, Joel


Cognito authentication integration with Django using authorization code grant.

Note: this assumes knowledge of AWS Cognito backend configuration and the underlying concepts; it is mostly the setup from an application integration perspective that is covered here.

Recently we have been working on a Django project where a secure and flexible authentication system was required; as most of our existing infrastructure is on AWS, we chose Cognito as the backend.

Below are the steps we took to get this working and some insights learned on the way.

Django Warrant

The first attempt was using django_warrant, which is probably the first thing that comes up when you google ‘how to django and cognito’.

django_warrant works by injecting an authentication backend into Django that allows your username/password to be submitted and checked against a configured user pool; on success it authenticates you and, if required, creates a stub Django user.

The basics of this were very easy to get working and integrated, but it had a few issues:

  • We still see username/password requests and have to send them on.
  • By default it can only be configured for one user pool.
  • It does not support federated identity provider workflows.
  • The GitHub project did not seem very active or well maintained.

Ultimately we chose not to use this module; however, we took inspiration from its source code for some of the user handling we implemented later on.

Custom authorization_code workflow implementation

This involves using the Cognito hosted login form, which handles both user pool and connected identity provider authentication (O365/Azure, Google, Facebook, Amazon).

The form can be customised with HTML, CSS and images and put behind a custom URL; other aspects of the process and its events can be changed and reacted to using triggers and Lambda.

Once you are authenticated in Cognito, it redirects you back to the page of your choosing (usually your application's login page or a custom endpoint) with a set of tokens; using these tokens you then grab the authenticated user's details and authenticate them within the context of your app.

The differences between the authorization code grant and the implicit grant are:

  • Implicit grant
    • Intended for client-side authentication (mostly JavaScript applications)
    • Sends both the id_token (JWT) and access_token in the redirect response
    • Sends the tokens after an #anchor so they are not seen by the web server
    • https://your-app/login#id_token=n&access_token=n
  • Authorization code grant
    • Intended for server-side authentication
    • Sends an authorization code in the redirect response
    • Sends this as a normal GET parameter
    • https://your-app/login?code=n
    • Your application holds a preconfigured secret
    • The code plus the secret are exchanged for an id_token and access_token via the oauth2/token endpoint

We chose to use the authorization code grant workflow; it takes a bit more effort to set up, but it is generally more secure and avoids the hacky JavaScript shenanigans that would be needed to get the implicit grant working with a Django server-based backend.

After these steps you can use boto3 or helper libraries to turn those tokens into the set of attributes (email, name, other custom attributes) kept by Cognito. Then you hook this up to your internal user/session logic by matching on your chosen attributes such as email or username; a sketch of this is included after the view code below.

I was unable to find any specific library support to handle some aspects of this, like the token handling in Python or the Django integration, so I have included some code which may be useful.

Code

This can be integrated into a view to get the user details from Cognito based on a token; the view sits at the redirect URL that Cognito returns to.

import warrant
import cslib.aws

def tokenauth(request):
    authorization_code = request.GET.get("code")
    token_grabber = cslib.aws.CognitoToken(
        <client_id>,
        <client_secret>,
        <domain>,
        <redir>,
        <region>,  # optional, defaults to "ap-southeast-2"
    )

    id_token, access_token = token_grabber.get(authorization_code)

    if id_token and access_token:
        # This uses warrant (different than django_warrant)
        # A helper lib that wraps cognito
        # Plain boto3 can do this also.  
        cognito = warrant.Cognito(
            <user_pool_id>,
            <client_id>,
            id_token=id_token,
            access_token=access_token,
        )

        # Their lib is a bit broken: because we don't supply a username it won't
        # build a legit user object for us, so we reach into the cookie jar....
        # {'given_name': 'Joe', 'family_name': 'Smith', 'email': 'joe@jtwo.solutions'}
        data = cognito.get_user()._data
        return data
    else:
        return None
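
From here, hooking the Cognito attributes into Django's own user and session handling can be as simple as matching on email. A rough sketch, assuming the tokenauth view above; the cognito_login name and the 'home'/'login' URL names are placeholders:

from django.contrib.auth import get_user_model, login
from django.shortcuts import redirect

def cognito_login(request):
    # Exchange the ?code= parameter for the Cognito user attributes (see tokenauth above)
    data = tokenauth(request)
    if data is None:
        # No code supplied or the token exchange failed
        return redirect("login")

    # Match (or create) a local Django user based on the email attribute from Cognito
    User = get_user_model()
    user, _created = User.objects.get_or_create(
        username=data["email"],
        defaults={
            "email": data["email"],
            "first_name": data.get("given_name", ""),
            "last_name": data.get("family_name", ""),
        },
    )

    # Establish the Django session for the matched user
    login(request, user, backend="django.contrib.auth.backends.ModelBackend")
    return redirect("home")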

The class that handles the oauth2/token workflow; this is mysteriously missing from the boto3 library, which otherwise seems to handle everything else quite well…

from http.client import HTTPSConnection
from base64 import b64encode
import urllib.parse
import json

class CognitoToken(object):
    """
    Why you no do this boto3...
    """
    def __init__(self, client_id, client_secret, domain, redir, region="ap-southeast-2"):
        self.client_id = client_id
        self.client_secret = client_secret
        self.redir = redir
        self.token_endpoint = "{0}.auth.{1}.amazoncognito.com".format(domain, region)
        self.token_path = "/oauth2/token"

    def get(self, authorization_code):
        headers = {
            "Authorization" : "Basic {0}".format(self._encode_auth()),
            "Content-type": "application/x-www-form-urlencoded",
        }

        query = urllib.parse.urlencode({
                "grant_type" : "authorization_code",
                "client_id" : self.client_id,
                "code" : authorization_code,
                "redirect_uri" : self.redir,
            }
        )

        con = HTTPSConnection(self.token_endpoint)
        con.request("POST", self.token_path, body=query, headers=headers)
        response = con.getresponse()

        if response.status == 200:
            respdata = str(response.read().decode('utf-8'))
            data = json.loads(respdata)
            return (data["id_token"], data["access_token"])

        return None, None

    def _encode_auth(self):
        # Auth is a base64 encoded client_id:secret
        string = "{0}:{1}".format(self.client_id, self.client_secret)
        return b64encode(bytes(string, "utf-8")).decode("ascii")

Further reading