Deploying Kubernetes Applications on Azure Stack Hub

Azure Stack Edge (left) and Azure Stack Hub (right)

Based on project experiences working with Azure Stack Hub in 2020, in this article we share a point of view on deploying Kubernetes applications on Azure Stack Hub to cluster(s) provisioned using AKS-E (Azure Kubernetes Service Engine), through DevOps with Azure DevOps Repos and Azure DevOps Pipelines (and/or GitOps with GitHub or GitLab).

If you are reading this article, we assume you are already familiar with Azure Stack and Azure Stack Hub; you can always find more information about Azure Stack Hub here.

If interested, you can explore purchasing paths (a system you manage or a managed service) for Azure Stack Hub here. Specifically, we used Azure Stack Hub version 2008 with 4 nodes with 144 cores, circa 3 TB RAM and plenty of storage, which allows us to deploy and run some sophisticated Artificial Intelligence (AI) apps on a 3-node Kubernetes cluster (the number of nodes for the Kubernetes cluster is configurable when deploying with AKS-E).

When implementing DevOps/DevSecOps and/or GitOps for the Software Development Lifecycle (SDLC), we will have to make certain technology choices. In this article we assume one feasible scenario with the following details:

  • App is a microservices-based containerized Artificial Intelligence (AI) app
  • Code lives in a private Azure DevOps Repo (or GitHub repo or private GitLab repo)
  • Container images live in a private Azure Container Registry (ACR) in Azure Cloud or in a private Container Registry on Azure Stack Hub (CSI*)
  • App is deployed on Azure Stack Hub by Azure DevOps Pipeline(s) with Azure DevOps Pipelines Agent installed on Azure Stack Hub (or by Azure Arc for Kubernetes through GitOps via Flux)
  • Azure Active Directory (AAD) is used for identity management across Azure Cloud and Azure Stack Hub

End-to-end scenario

The app we deployed on Azure Stack Hub is an Artificial Intelligence (AI) app implementing a knowledge mining workload on the edge. The app takes advantage of Microsoft’s first-party Cognitive Services containers, containerized Open-Source Software (OSS) and purpose-built custom components which have also been containerized. All necessary container images have been stored in a private Azure Container Registry (ACR) in Azure Cloud or in a private Container Registry on Azure Stack Hub (CSI*). For the custom components we leveraged Azure DevOps pipeline(s) to build the code and package them into containers as well. The following paths are depicted below:

  • Yellow path depicts how the necessary infrastructure for storing container images in the Cloud has been provisioned via Terraform pipeline (YAML) and how the purpose-built custom components’ code has been built and the corresponding container images have been pushed to a private Azure Container Registry (ACR) in Azure Cloud via Code pipeline (YAML)
  • Pink path depicts the activities performed to set up Azure Stack Hub to configure it and set up Kubernetes cluster using AKS-E (Azure Kubernetes Service Engine)
  • Green path depicts DevOps flow deploying your app on to Azure Stack Hub (only the part that leverages Azure DevOps Pipelines Agent)
  • Blue path (alternative to Green DevOps flow path leveraging Azure DevOps Pipelines Agent) depicts the GitOps flow deploying your app on to Azure Stack Hub via Azure Arc enabled Kubernetes with Flux

The following diagram illustrates the entire end-to-end sample scenario:

Sample deployment scenario

More information about Container Storage Interface on Azure Stack Hub can be found here.

Noteworthy: Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) can be leveraged for identity management in connected or disconnected scenarios on Azure Stack Hub, respectively. In practice, Azure Stack Hub networking and access may also be configured to require VPN for security purposes. In case you deploy a VM and can’t SSH into it (say, Connection timed out), you may want to check the networking and security setup of Azure Stack Hub with your Operator.

Azure Stack Hub appliance

When working with Azure Stack Hub there are typically a few roles involved, such as Operators (more information is here) and Users (more information is here). When configuring Azure Stack Hub, the Operator will create Plans and Offers, download various Marketplace items and set up Service offerings, which will define what services are available for use on Azure Stack Hub. If using Azure Active Directory (AAD) for identity, the Operator will invite users appropriately; if using ADFS, the Operator will set up user accounts for access. The Users will then be able to create Subscriptions, Resource Groups and Resources.

Noteworthy: Please note that configuration management of the Azure Stack Hub appliance is performed via the Administrator portal by the Azure Stack Hub Operator as described here, and resource management is performed via the User portal by the Azure Stack Hub User as described here.

Azure Stack Hub Kubernetes cluster

A Kubernetes cluster on Azure Stack Hub can be created in different ways depending on the goals of the project:

  • Using a Marketplace item for POCs (proof-of-concepts)
  • Using AKS-E (IaaS) for production
  • Using AKS + ACR (PaaS)* currently (as of 3/13/2021) in private preview as depicted in the Release notes > What’s New here: This release of Azure Stack Hub includes a private preview of Azure Kubernetes Service (AKS) and Azure Container Registry (ACR). The purpose of the private preview is to collect feedback about the quality, features, and user experience of AKS and ACR on Azure Stack Hub.

For the purposes of this article, we’ll focus on deploying a 3-node Kubernetes cluster using AKS-E (Azure Kubernetes Service Engine). You can find more information about AKS-E (AKS Engine - Units of Kubernetes in Azure) here; the specifics of using AKS-E on Azure Stack Hub are described here. To deploy a new Kubernetes cluster on Azure Stack Hub we can follow the guidance here and here, and in a nutshell the procedure will consist of the following steps:

1. Create client VM:

  • Create a Linux Server VM and enable the SSH (22) port (and use, say, PuTTYgen on Windows for SSH key generation) as described here and here
  • Connect to the client VM (and use, say, PuTTY on Windows to securely SSH into the VM)
  • Install AKS Engine from GitHub here as described here
  • Verify AKS Engine install (aks-engine version)
  • Install ASDK (Azure Stack Development Kit) if necessary

2. Deploy Kubernetes cluster:

  • Download and modify the AKS Engine API model from GitHub (for example, for v0.60.1 here) as necessary to define the composition of your cluster (how many masters, how many workers, which VM sizes to use, etc.). The default admin username in the API model file is ‘azureuser’
  • Kick off AKS Engine deployment (aks-engine deploy) with parameters

Noteworthy: The settings you will typically need to modify in the API model file are: orchestratorRelease (say, 1.17), orchestratorVersion (say, 1.17.17), customCloudProfile.portalURL, masterProfile.dnsPrefix, masterProfile.distro, masterProfile.count, masterProfile.vmSize (say, Standard_DS2_v2), agentPoolProfiles.count, agentPoolProfiles.vmSize, agentPoolProfiles.distro, linuxProfile.adminUsername, linuxProfile.ssh.publicKeys.keyData (SSH public key for enabling a secure SSH connection), and servicePrincipal.clientId & servicePrincipal.secret. We typically use the vi editor available on Ubuntu for modifying files, and these commands (“i” to enter insert mode, “:wq” to write & quit, “:q” to quit) allow us to do so effectively. The settings you will typically need to look at while deploying the cluster via the ‘aks-engine deploy’ command are: azure-env (AzureStackCloud), location, resource-group (you may pre-create the resource group in the Azure Stack Hub portal ahead of time and specify it here), api-model (the JSON file you modified on the client VM), output-directory, client-id & client-secret (App registration (SP) settings in Azure Cloud if using AAD for identity; it may also make sense to assign this Service Principal (SP) permissions in the scope of the Azure Stack Hub Subscription, say, a Contributor role), and subscription-id (the Azure Stack Hub Subscription). More information about creating Service Principals on Azure Stack Hub can be found here.
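Putting the deployment settings above together, a minimal sketch of the command might look like the following (all values are illustrative placeholders, not taken from an actual environment):

```shell
# Sketch of an AKS Engine deployment against Azure Stack Hub.
# Substitute your own region, resource group, API model file and
# Service Principal credentials.
aks-engine deploy \
  --azure-env AzureStackCloud \
  --location "<region>" \
  --resource-group "<pre-created-resource-group>" \
  --api-model ./kubernetes-azurestack.json \
  --output-directory ./_output \
  --client-id "<sp-app-id>" \
  --client-secret "<sp-secret>" \
  --subscription-id "<azure-stack-hub-subscription-id>"
```

The output directory will contain the generated artifacts, including the kube config for connecting to the new cluster.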

There are certain important prerequisites before we can successfully deploy a Kubernetes cluster on Azure Stack Hub using AKS-E, which can be summarized as follows:

1. Operator downloads the following to Azure Stack Hub:

  • Ubuntu base image (for example, 16.04 LTS)
  • AKS Ubuntu images (for example, 16.04 LTS)
  • Custom Script for Linux extension (for example, 2.0)

2. User verifies the compatibility/mapping between the AKS Engine version and the Azure Stack version, and the mapping for AKS Engine images, as described here:

  • AKS Engine and Azure Stack version mapping (here): for example, for the 2008 version of Azure Stack Hub one of the supported versions of AKS-E is 0.60.1 (and this is what we have on our system at the moment).
  • AKS Engine and corresponding image mapping (here): for example, for v0.60.1 of AKS-E one of the supported images is AKS Base Ubuntu 16.04 LTS and one of the supported versions of Kubernetes is 1.17.17 (and this is what we have on our system at the moment).

3. Operator sets up the appropriate security and networking to properly secure Azure Stack Hub:

  • Say, Azure Stack Hub is available on VPN and connectivity to resources is allowed only for certain IP ranges.

Noteworthy: While it is obviously recommended and supported to use AKS Base images for cluster deployments (for example, “distro: aks-ubuntu-16.04“), it may also be possible to use the “default” images as well (for example, “distro: ubuntu”) if needed.

Depending on how you generated your SSH keys (public and private) in the first place, you may need to convert an old PEM format to the newer PPK format when SSHing into VM(s), and PuTTYgen can be used for that (“Load” the PEM and “Save private key” as PPK).
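On Linux the same conversion can be done from the command line; a sketch assuming the putty-tools package is available (file names are illustrative):

```shell
# Convert an existing PEM private key to PuTTY's PPK format.
sudo apt-get install -y putty-tools
puttygen my-key.pem -o my-key.ppk
```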

Once the cluster is deployed, the kube config will reside on the master VM(s), and docker and kubectl will be installed there as well. You may also install helm there if necessary, or leverage the kube config and establish additional VM(s) on the same virtual network with kubectl + helm installed, say, to be able to sideload your apps into the cluster if necessary. More information about helm installation can be found here.
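A sketch of setting up such an admin VM, assuming the kube config has already been copied over to ~/.kube/config (the Helm install script URL follows the official Helm documentation):

```shell
# Install Helm 3 via the official install script, then verify
# connectivity to the cluster with the copied kube config.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
kubectl get nodes
```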

Noteworthy: AKS base image(s) must be downloaded onto your Azure Stack Hub from the Marketplace. These images will not be available for manual selection in the portal, though, as they are part of the corresponding solution (AKS-E).

Azure DevOps Pipelines agent on Azure Stack Hub

While it is possible to set up direct connectivity from Azure DevOps to Azure Stack Hub via a service connection, this would mean exposing the Kubernetes API server (for inbound connectivity), which is inappropriate for many use cases from a security standpoint, as described here. More information about accessing Kubernetes clusters using the Kubernetes API can be found here.

Connecting to Kubernetes API server

Instead, you may consider installing an Azure DevOps Pipelines agent inside of Azure Stack Hub (with only outbound connectivity) and configuring an Azure DevOps agent pool to use this custom agent.

Azure DevOps Pipelines Agent

To deploy an Azure DevOps Pipelines agent on an existing Kubernetes cluster we can follow the guidance here, and in a nutshell the procedure will consist of the following steps:

  • Create an Agent Pool in Azure DevOps project
  • Generate a PAT (Personal Access Token) in Azure DevOps project in User Settings > Personal access tokens as described here
  • Create a secret Kubernetes object in the cluster for PAT
  • Deploy Azure DevOps Pipelines agent as a Deployment/ReplicaSet into the cluster by using the DockerHub image here with the general guidance described here
  • Once running, the agent will show up in the list of available/connected agents in Agent Pool in Azure DevOps project
  • Now you can deploy Kubernetes objects into your cluster via Azure DevOps pipelines by the newly created agent (say, you create a new pipeline which deploys nginx for testing using the YAML from here)
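The PAT secret and agent Deployment from the steps above can be sketched as follows. The agent image name and the AZP_* environment variables are assumptions based on Microsoft's general self-hosted Docker agent guidance; all names are illustrative:

```shell
# Store the Azure DevOps PAT as a Kubernetes secret (value is a placeholder).
kubectl create secret generic azdevops-pat --from-literal=AZP_TOKEN='<your-pat>'

# Deploy the agent as a Deployment/ReplicaSet referencing that secret.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: agent
        image: <your-agent-image>   # e.g. an image built per the guidance above
        env:
        - name: AZP_URL
          value: https://dev.azure.com/<your-org>
        - name: AZP_POOL
          value: <your-agent-pool>
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops-pat
              key: AZP_TOKEN
EOF
```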

Noteworthy: As described here it’s not recommended to use master VM(s) as jump-boxes for administrative tasks, instead you can copy the kube config file to a dedicated admin machine (VM) with the connectivity to Kubernetes cluster and perform administrative tasks from there.

Azure DevOps Pipelines configuration

Now you will be able to execute your DevOps flows from Azure DevOps Pipelines, deploying your app(s) to Azure Stack Hub. For the purposes of this article we used the Kubectl Azure DevOps task(s). More information about kubectl flags can be found here.

Azure DevOps Pipeline(s)

For the sake of simplicity in the illustration above we used the following kubectl flags:

  • -s (--server): Kubernetes API server URL, for example, https://xxx.cloudapp.azurestack.yyy/v1/api
  • token: pre-generated authentication token as described here; please automate this step appropriately!
  • insecure-skip-tls-verify: to avoid ‘x509: certificate signed by unknown authority’ error when accessing Kubernetes API server; please secure your setup/communication appropriately with valid certificate(s)!
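An illustrative kubectl invocation combining the flags above (server URL, token and namespace are placeholders; as noted, prefer valid certificates over skipping TLS verification):

```shell
# Query pods against a remote Kubernetes API server using the flags above.
kubectl get pods \
  -s https://<cluster-dns>.cloudapp.<region-fqdn> \
  --token "<pre-generated-token>" \
  --insecure-skip-tls-verify \
  -n <your-namespace>
```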

Also, to avoid the ‘Error from server (Forbidden): abc is forbidden: User “system:serviceaccount:default:default” cannot list resource “abc” in API group “” in the namespace “xyz”’ permission error, you may want to grant the default service account appropriate permissions in the Kubernetes cluster via, for example, RoleBinding object(s), following the principle of least privilege.
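A minimal sketch of such a RoleBinding, granting the default service account read-only access in a single namespace (names are illustrative; for least privilege, prefer a narrowly scoped custom Role over the built-in view ClusterRole):

```shell
# Bind the built-in "view" ClusterRole to the default service account,
# scoped to one namespace via a RoleBinding.
kubectl create rolebinding default-view \
  --clusterrole=view \
  --serviceaccount=default:default \
  --namespace=<your-namespace>
```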

If you don’t have an existing cluster yet, another interesting option might be to deploy the cluster from Azure DevOps Pipeline(s) in the first place by using ‘aks-engine deploy’ command, securely store the kube config in Azure DevOps as an artifact, and then subsequently deploy Kubernetes objects into the cluster from Azure DevOps Pipeline(s) as necessary.

Noteworthy: In practice, it is worth considering a separate VM in your Azure Stack Hub Resource Group on the same Virtual Network that runs the Azure DevOps Pipelines agent with the kube config file copied over, or running the agent on one of the Kubernetes VMs as a container on Docker, or inside of your Kubernetes cluster as a Deployment/ReplicaSet.


Once everything is deployed via the Azure DevOps Pipelines agent, you will be able to see the deployed pods in the corresponding namespace(s) in the Kubernetes cluster on Azure Stack Hub via kubectl and start using your app as necessary. In terms of duration, it took us about 10 minutes to deploy a cluster with 3 masters and 3 workers using the unmodified API model file.


If you would like to see an example of a sophisticated Artificial Intelligence (AI) app also deployed on Azure Stack using a DevOps or GitOps flow, you are welcome to watch a Channel 9 video here. The Enriched Search Experience Project sample Kubernetes app on Azure Stack Edge (and Azure Stack Hub) leverages circa 30 different container images to implement a knowledge mining workload in the hybrid cloud and on the edge (or in concert for, say, a Hub-and-Spoke solution architecture leveraging an event streaming platform such as Kafka). Examples of helm charts for deploying Kafka on Kubernetes can be found, for instance, here (deprecated in November 2020) and here.

Azure Arc for Kubernetes agent on Azure Stack Hub

The general guidance on how to connect an existing Kubernetes cluster to Azure Arc is described here, and it applies well to Kubernetes clusters deployed on Azure Stack Hub via AKS-E. The summary of the steps required to enable GitOps flow(s) on an Azure Stack Hub Kubernetes cluster (with the Azure Arc for Kubernetes agent installed from a VM inside the same virtual network as the cluster) is as follows:

  • Install Azure CLI (for example, Azure CLI for Linux instructions are here)
  • Install Helm 3
  • Install Azure CLI extensions for Azure Arc enabled Kubernetes (connectedk8s, k8s-configuration)
  • Make sure the https (443) and git (9418) ports are enabled for outbound connectivity
  • Execute the ‘az connectedk8s connect’ command, which will create an Azure Arc enabled Kubernetes resource in Azure Cloud and connect the Kubernetes cluster to it
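The steps above can be sketched with the following Azure CLI commands (resource group and cluster names are illustrative placeholders):

```shell
# Install the Azure CLI extensions for Azure Arc enabled Kubernetes,
# then connect the current-context cluster to Azure Arc.
az extension add --name connectedk8s
az extension add --name k8s-configuration

az login
az connectedk8s connect \
  --name my-arc-cluster \
  --resource-group my-arc-rg
```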

The process of installation and configuration of the Azure Arc enabled Kubernetes agent on Azure Stack Hub is illustrated below.

Azure Arc enabled Kubernetes GitOps configuration (reading steps top-down, left-to-right)

In this example we leveraged a public GitHub repo with a sample stateless app from here. Please don’t forget to create a namespace in the Kubernetes cluster for your app, matching the code (YAML), ahead of time before enabling the GitOps flow.
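A sketch of pre-creating the namespace and then enabling the GitOps flow via the k8s-configuration extension (configuration name, cluster name, resource group and repo URL are illustrative placeholders):

```shell
# Create the application namespace ahead of time, matching the YAML in the repo.
kubectl create namespace <your-app-namespace>

# Create a GitOps (Flux) configuration on the Arc-connected cluster.
az k8s-configuration create \
  --name my-gitops-config \
  --cluster-name my-arc-cluster \
  --resource-group my-arc-rg \
  --cluster-type connectedClusters \
  --operator-instance-name flux \
  --operator-namespace flux \
  --repository-url https://github.com/<org>/<repo> \
  --scope namespace
```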

Noteworthy: An example of setting up a private GitHub repo or a private GitLab project for Azure Arc enabled Kubernetes can also be found here. Please note that the same guidance for GitOps setup also applies to Kubernetes clusters deployed on Azure Stack HCI.

Building Kubernetes application on Azure Stack Hub

While this topic may deserve a separate article (just like this one for Azure Stack Edge), here we’ll just mention a few aspects important for building Kubernetes applications on Azure Stack Hub.

From the storage perspective, by default the following Kubernetes storage classes are available on a Kubernetes cluster newly provisioned by AKS-E: default, managed-premium and managed-standard. You can get additional insight into the composition of masters and workers by running the ‘kubectl describe node’ command; in the case of the standard API model with 3 masters and 3 workers it may show, for example, the following: 8 attachable-volumes-azure-disk, 2 cpu, 7 GB memory, etc. for masters and workers. The Azure Stack Hub Storage overview is provided here, and the Kubernetes types of persistent volumes are described here, with notable options like azureDisk (AzureDisk), azureFile (AzureFile), csi (Container Storage Interface) and hostPath (Host Path volume). When deploying stateful workloads you may want to create PersistentVolumes (PVs) first, then create PersistentVolumeClaims (PVCs), and finally create, say, Deployment objects referencing the necessary volume(s) in Kubernetes as usual. You may also take advantage of Storage accounts on Azure Stack Hub as described here.

From the compute perspective, the list of available services will depend upon what’s been enabled on Azure Stack Hub; we’ll just highlight the App Service Resource Provider (more information here) and the ability to deploy Web Apps, API Apps and Function Apps on Azure Stack Hub. We’ll also mention that when exposing your workloads via a LoadBalancer service, the IP address used will be that of the Load Balancer resource (Public IP address) in the corresponding resource group on Azure Stack Hub. A capacity planning overview for Azure Stack Hub covering storage, compute and more is provided here.
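As a minimal sketch of the stateful pattern above, a PersistentVolumeClaim against one of the built-in storage classes might look like the following (names and size are illustrative; with dynamic provisioning the PV is created for you by the storage class):

```yaml
# Claim dynamically provisioned disk storage via a built-in storage class;
# a Deployment can then reference this claim as a volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-standard
  resources:
    requests:
      storage: 5Gi
```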

Finally, you may also be interested in checking out the Dapr (Distributed Application Runtime) project here, which aims to simplify cloud-native application development, including hybrid cloud use cases, and is well suited to Kubernetes.


Opinions expressed are solely of the author and do not express the views and opinions of author’s current employer, Microsoft.




Alex Anikiev
Engineering & Data Science Leader