Prerequisite Infrastructure
The following components are required before setting up the infrastructure needed by e6data. These are commonly present in most cloud environments, but if any are not present, please follow the linked guides below to create them.
Create VNet, Subnets, and NAT Gateway
AKS Cluster
1. Prerequisites
Ensure that you have the Azure CLI installed on your system. If it is not installed, follow the official How to install the Azure CLI guide.
Once installed, log in to your Azure account using the following command:
az login
2. Create Resource Group
In Azure, a resource group acts as a logical container that holds related resources for your solution. It allows you to manage, deploy, and organize resources conveniently. To create a resource group, use the following Azure CLI command:
az group create \
--name <resource-group-name> \
--location <region>
For example
az group create \
--name e6data-app-rg \
--location "EastUS"
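To confirm the resource group was created successfully, you can query its provisioning state (a quick optional check, using the example name above):

```shell
# Check the resource group's provisioning state; it is typically "Succeeded"
az group show --name e6data-app-rg --query "properties.provisioningState" -o tsv
```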
3. Create a Virtual Network
After creating a resource group, the next step is to create a Virtual Network (VNet) within that group. A VNet is an essential part of Azure networking and allows you to manage your network resources efficiently.
az network vnet create \
--name <prefix>-network \
--resource-group <resource-group-name> \
--address-prefix <cidr-block> \
--location <region>
For example
az network vnet create \
--name e6data-app-network \
--resource-group e6data-app-rg \
--address-prefix 10.0.0.0/16 \
--location "EastUS"
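To verify the VNet and its address space, you can run the following (using the example values above):

```shell
# List the address prefixes assigned to the new VNet
az network vnet show \
  --resource-group e6data-app-rg \
  --name e6data-app-network \
  --query "addressSpace.addressPrefixes" -o tsv
```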
4. Create Subnets
Create AKS Subnet
To create a subnet specifically for Azure Kubernetes Service (AKS), use the following command:
az network vnet subnet create \
--name <prefix>-subnet-aks \
--resource-group <resource-group-name> \
--vnet-name <prefix>-network \
--address-prefixes <aks-subnet-cidr>
For example
az network vnet subnet create \
--name e6data-subnet-aks \
--resource-group e6data-app-rg \
--vnet-name e6data-app-network \
--address-prefixes 10.0.1.0/24
Create ACI Subnet
To create a subnet specifically for Azure Container Instances (ACI), use the following command:
az network vnet subnet create \
--name <prefix>-subnet-aci \
--resource-group <resource-group-name> \
--vnet-name <prefix>-network \
--address-prefixes <aci-subnet-cidr>
For example
az network vnet subnet create \
--name e6data-subnet-aci \
--resource-group e6data-app-rg \
--vnet-name e6data-app-network \
--address-prefixes 10.0.2.0/24
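Both subnet CIDRs must fall within the VNet's 10.0.0.0/16 address space and must not overlap with each other. A quick way to review them side by side:

```shell
# List all subnets in the VNet with their CIDR ranges
az network vnet subnet list \
  --resource-group e6data-app-rg \
  --vnet-name e6data-app-network \
  --query "[].{name:name, prefix:addressPrefix}" -o table
```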
5. Delegate ACI Subnet
Update ACI Subnet Delegation
To update an existing subnet and delegate it for use by Azure Container Instances (ACI), use the following command:
az network vnet subnet update \
--name <prefix>-subnet-aci \
--resource-group <resource-group-name> \
--vnet-name <prefix>-network \
--delegations Microsoft.ContainerInstance/containerGroups
For example
az network vnet subnet update \
--name e6data-subnet-aci \
--resource-group e6data-app-rg \
--vnet-name e6data-app-network \
--delegations Microsoft.ContainerInstance/containerGroups
Note:
Delegation is required to allow Azure Container Instances to use the subnet.
Ensure the subnet is properly configured and does not conflict with other network configurations.
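You can confirm the delegation took effect by inspecting the subnet (using the example names above):

```shell
# The output should include "Microsoft.ContainerInstance/containerGroups"
az network vnet subnet show \
  --resource-group e6data-app-rg \
  --vnet-name e6data-app-network \
  --name e6data-subnet-aci \
  --query "delegations[].serviceName" -o tsv
```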
6. Create a Public IP Address
To configure a NAT gateway, you need to create a static public IP address. Follow these steps to create the public IP address required for the NAT gateway:
az network public-ip create \
--resource-group <resource-group-name> \
--name <prefix>-PIP \
--sku Standard \
--location <region> \
--allocation-method Static
For example
az network public-ip create \
--resource-group e6data-app-rg \
--name e6data-app-pip \
--sku Standard \
--location "EastUS" \
--allocation-method Static
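Once created, you can retrieve the allocated IP address. This is useful when adding the NAT gateway's egress IP to external firewall allow-lists:

```shell
# Print the static public IP assigned by Azure
az network public-ip show \
  --resource-group e6data-app-rg \
  --name e6data-app-pip \
  --query "ipAddress" -o tsv
```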
7. Create a NAT Gateway
To set up network address translation (NAT) for outbound traffic, you need to create a NAT gateway and associate it with a public IP address. Follow these steps:
az network nat gateway create \
--resource-group <resource-group-name> \
--name <prefix>-nat \
--public-ip-addresses <prefix>-PIP \
--idle-timeout 30 \
--location <region>
For example
az network nat gateway create \
--resource-group e6data-app-rg \
--name e6data-app-nat \
--public-ip-addresses e6data-app-pip \
--idle-timeout 30 \
--location "EastUS"
8. Associate the NAT Gateway with the AKS Subnet
To enable outbound connectivity through the NAT gateway for your AKS subnet, follow these steps:
az network vnet subnet update \
--resource-group <resource-group-name> \
--vnet-name <prefix>-network \
--name <prefix>-subnet-aks \
--nat-gateway <prefix>-nat
For example
az network vnet subnet update \
--resource-group e6data-app-rg \
--vnet-name e6data-app-network \
--name e6data-subnet-aks \
--nat-gateway e6data-app-nat
Note:
Associating the NAT gateway with the AKS subnet ensures that all outbound traffic from the AKS cluster is routed through the NAT gateway, providing a single, stable IP address for outbound traffic.
Verify that the NAT gateway and subnet configurations are correctly set up to avoid connectivity issues.
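To verify the association, check that the subnet now references the NAT gateway (using the example names above):

```shell
# A non-empty resource ID confirms the NAT gateway is attached to the subnet
az network vnet subnet show \
  --resource-group e6data-app-rg \
  --vnet-name e6data-app-network \
  --name e6data-subnet-aks \
  --query "natGateway.id" -o tsv
```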
9. Create a Key Vault
Create an Azure Key Vault to securely store certificates used for TLS connectivity. This vault will provide centralized, secure management of certificates, ensuring encrypted communication within your services or applications, such as in an AKS cluster.
az keyvault create \
--name <vault-name> \
--resource-group <aks-resource-group-name> \
--location <region> \
--sku standard \
--enable-rbac-authorization true
For example
az keyvault create \
--name e6data-app-vault \
--resource-group e6data-app-rg \
--location "EastUS" \
--sku standard \
--enable-rbac-authorization true
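Because the vault is created with RBAC authorization enabled, data-plane access (for example, uploading or reading certificates) requires an Azure RBAC role assignment. As a sketch, the following grants the signed-in user the built-in Key Vault Certificates Officer role on the example vault; the role and assignee shown are illustrative, so adjust them to your own requirements:

```shell
# Grant the signed-in user certificate-management rights on the vault.
# Example role and scope; scope the assignment as narrowly as possible.
az role assignment create \
  --role "Key Vault Certificates Officer" \
  --assignee "$(az ad signed-in-user show --query id -o tsv)" \
  --scope "$(az keyvault show --name e6data-app-vault --query id -o tsv)"
```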
10. Creating a New Azure AKS Cluster
Follow these instructions to set up a new Azure Kubernetes Service (AKS) cluster. Ensure that the Azure CLI is installed and configured on your local machine. If you haven’t installed the Azure CLI yet, please refer to the How to install the Azure CLI guide for setup.
Open a Terminal or Command Prompt
Run the Following Command to Create a New AKS Cluster:
az aks create \
--resource-group <your-resource-group-name> \
--name <your-cluster-name> \
--location <your-region> \
--kubernetes-version <kube-version> \
--node-count <default-node-pool-node-count> \
--node-vm-size <default-node-pool-vm-size> \
--nodepool-name <default-node-pool-name> \
--node-os-upgrade-channel none \
--vnet-subnet-id <aks-subnet-id> \
--network-plugin azure \
--network-policy cilium \
--network-plugin-mode overlay \
--network-dataplane cilium \
--enable-aad \
--aad-admin-group-object-ids <admin-group-object-ids> \
--enable-managed-identity \
--enable-oidc-issuer \
--enable-workload-identity \
--generate-ssh-keys \
--aci-subnet-name <aci-subnet-name> \
--tags <your-tags>
For detailed instructions and more advanced configurations, refer to the official Azure documentation: Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI.
Example Command:
az aks create \
--resource-group e6data-app-rg \
--name e6data-app-cluster \
--location "EastUS" \
--kubernetes-version "1.30" \
--node-count 3 \
--node-vm-size Standard_DS2_v2 \
--nodepool-name e6datapool \
--node-os-upgrade-channel none \
--vnet-subnet-id $(az network vnet subnet show \
--resource-group e6data-app-rg \
--vnet-name e6data-app-network \
--name e6data-subnet-aks \
--query id -o tsv) \
--network-plugin azure \
--network-policy cilium \
--network-plugin-mode overlay \
--network-dataplane cilium \
--enable-aad \
--aad-admin-group-object-ids "abcdedftg-18b7-1234-acc4-ascgrgvvv" \
--enable-managed-identity \
--enable-oidc-issuer \
--enable-workload-identity \
--generate-ssh-keys \
--aci-subnet-name e6data-subnet-aci \
--tags "env=dev" "project=app"
Wait for the cluster creation process to complete. This may take some time.
Once the AKS cluster is created, you can retrieve the connection information by running the following command:
az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
For Example:
az aks get-credentials \
--resource-group e6data-app-rg \
--name e6data-app-cluster
Verify the connection to the AKS cluster by running the following command:
kubectl get nodes
This should display the list of nodes in your AKS cluster.
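Since the cluster was created with --enable-oidc-issuer and --enable-workload-identity, you may later need the cluster's OIDC issuer URL when configuring federated credentials for workload identity. It can be retrieved with:

```shell
# Print the OIDC issuer URL for workload identity federation
az aks show \
  --resource-group e6data-app-rg \
  --name e6data-app-cluster \
  --query "oidcIssuerProfile.issuerUrl" -o tsv
```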
Set up Karpenter
To set up Karpenter for your AKS cluster, refer to the official documentation in the Azure/karpenter-provider-azure GitHub repository (AKS Karpenter Provider).
Karpenter has two main components:
AKSNodeClass
NodePool
AKSNodeClass
NodeClasses in Karpenter act as specialized templates for worker nodes, customized for specific cloud platforms like AKSNodeClasses for Azure. These templates specify essential node configurations, including the operating system image, network security settings, subnet placement, and access permissions.
A. Create an e6data AKSNodeClass
apiVersion: karpenter.azure.com/v1alpha2
kind: AKSNodeClass
metadata:
  name: <NODECLASS_NAME>
  labels:
    app: e6data
    e6data-workspace-name: <WORKSPACE_NAME>
spec:
  imageFamily: AzureLinux
  tags: <TAGS>
NodePool
A single Karpenter NodePool in Azure AKS manages diverse pods, streamlining node management by eliminating the need for multiple node groups. The consolidation policy set to WhenEmpty optimizes costs by removing nodes when they become empty.
B. Create an e6data NodePool
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: ${nodepool_name}
  labels:
    app: e6data
    e6data-workspace-name: ${workspace_name}
spec:
  template:
    metadata:
      labels:
        app: e6data
        e6data-workspace-name: ${workspace_name}
    spec:
      requirements:
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.azure.com/sku-family
          operator: In
          values: ${sku_family}
      nodeClassRef:
        name: ${nodeclass_name}
      taints:
        - key: "e6data-workspace-name"
          value: ${workspace_name}
          effect: NoSchedule
  limits:
    cpu: ${nodepool_cpu_limits}
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 30s
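After filling in the placeholder values, the two manifests above can be applied and verified with kubectl (the file names here are illustrative):

```shell
# Apply the AKSNodeClass and NodePool manifests (example file names)
kubectl apply -f aksnodeclass.yaml
kubectl apply -f nodepool.yaml

# Confirm Karpenter registered the resources
kubectl get aksnodeclasses
kubectl get nodepools
```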
Set up Nginx Ingress Controller
An ingress controller is required in the AKS cluster to manage external access to services, particularly for connectivity between the e6data Console and e6data Cluster, as well as for providing connectivity between querying/BI tools and the e6data Query Engine.
To install the NGINX Ingress Controller in your Azure Kubernetes Service (AKS) cluster, follow these steps:
Add the NGINX Ingress Controller Helm repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Install the NGINX Ingress Controller using Helm:
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace kube-system \
--create-namespace \
--set controller.service.annotations."service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.service.externalTrafficPolicy=Local
Wait for the NGINX Ingress Controller to be fully deployed. Since the controller was installed into the kube-system namespace above, run:
kubectl wait --namespace kube-system \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
Create a dummy Ingress resource to ensure the controller is working:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dummy-ingress
  namespace: <your-namespace>
spec:
  ingressClassName: nginx
  rules:
    - host: dummy.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dummy-service
                port:
                  number: 80
EOF
Replace <your-namespace> with the namespace where you want to create the dummy Ingress.
Verify the Ingress resource was created:
kubectl get ingress -n <your-namespace>
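To reach the ingress controller from outside the cluster, you will need the external IP of its load-balancer service. Assuming the default service name created by the Helm release above (ingress-nginx-controller), it can be retrieved with:

```shell
# Print the external IP assigned to the ingress controller's service
kubectl get service ingress-nginx-controller \
  --namespace kube-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```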
Deploying Azure Key Vault to Kubernetes (akv2k8s) using Helm
The akv2k8s tool is essential for e6data's secure operation in AKS. It provides a seamless and secure method to access Azure Key Vault resources within the Kubernetes environment. Specifically for e6data:
TLS Connectivity: akv2k8s allows e6data to retrieve TLS certificates stored in Azure Key Vault, ensuring secure communications.
Gateway Connectivity: It facilitates the acquisition of domain certificates from Azure Key Vault, necessary for establishing gateway connections to the e6data cluster.
The following section provides a step-by-step guide to deploying the akv2k8s (Azure Key Vault to Kubernetes) Helm chart into your Azure Kubernetes Service (AKS) cluster. This deployment allows seamless integration between Azure Key Vault and Kubernetes, enabling your workloads to securely fetch secrets directly from Azure Key Vault.
Prerequisites
Before starting the deployment, ensure the following prerequisites are met:
Helm Installed: Helm should be installed on your local machine. You can verify this by running helm version.
Kubeconfig Access: Ensure you have access to your Kubernetes cluster via your kubeconfig file, typically located at ~/.kube/config.
Kubernetes Access: You need kubectl installed and configured to interact with your AKS cluster.
Step-by-Step Deployment Instructions
Step 1: Add the Helm Repository
Start by adding the Helm repository containing the akv2k8s chart:
helm repo add spv-charts http://charts.spvapi.no
This command adds the spv-charts repository to Helm, where the akv2k8s chart is hosted.
Step 2: Update Helm Repositories
Next, update your Helm repositories to ensure you have access to the latest charts:
helm repo update
Step 3: Install the akv2k8s Chart
Install the akv2k8s chart into the kube-system namespace of your AKS cluster:
helm install akv2k8s spv-charts/akv2k8s --namespace kube-system
akv2k8s: The release name for this deployment.
spv-charts/akv2k8s: Specifies the repository and chart name.
--namespace kube-system: Deploys the release into the kube-system namespace.
Step 4: Verify the Installation
To confirm the successful installation of the chart, list the Helm releases in the kube-system namespace:
helm list --namespace kube-system
You should see the akv2k8s release listed among your installed Helm charts.
Step 5: Monitor the Pods
Check the status of the pods created by the akv2k8s deployment:
kubectl get pods -n kube-system
This command shows the running status of the akv2k8s pods in your cluster.
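Once akv2k8s is running, certificates are synced by creating AzureKeyVaultSecret resources. The sketch below syncs a certificate from the vault created earlier into a Kubernetes TLS secret; the resource, certificate, and secret names are hypothetical, and the schema follows the akv2k8s documentation:

```shell
kubectl apply -f - <<EOF
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: e6data-tls-cert          # hypothetical resource name
  namespace: kube-system
spec:
  vault:
    name: e6data-app-vault       # the Key Vault created earlier
    object:
      name: my-tls-cert          # hypothetical certificate name in the vault
      type: certificate
  output:
    secret:
      name: e6data-tls-secret    # Kubernetes secret that akv2k8s will create
      type: kubernetes.io/tls
EOF
```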
Summary of Commands
helm repo add spv-charts http://charts.spvapi.no
helm repo update
helm install akv2k8s spv-charts/akv2k8s --namespace kube-system
helm list --namespace kube-system
kubectl get pods -n kube-system