Prerequisite Infrastructure
The following components are required before setting up the infrastructure needed by e6data. These are commonly present in most cloud environments, but if any are not present, please follow the linked guides below to create them.
Create VNet, Subnets, and NAT Gateway
AKS Cluster
1. Prerequisites
Ensure that you have the Azure CLI installed on your system. If it is not installed, follow the official How to install the Azure CLI guide.
Once installed, log in to your Azure account using the following command:

```shell
az login
```

2. Create Resource Group
In Azure, a resource group acts as a logical container that holds related resources for your solution. It allows you to manage, deploy, and organize resources conveniently. To create a resource group, use the following Azure CLI command:
```shell
az group create \
  --name <resource-group-name> \
  --location <region>
```

Command Breakdown
--name <resource-group-name>: The name of the resource group you want to create. This name should be relevant to your project or environment.
--location <region>: The Azure region where your resource group will be created. The region determines the physical location of the resources in the group. Example regions include eastus, westeurope, or southeastasia.
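For example, using hypothetical values (the resource group name e6data-rg and region eastus are placeholders; substitute your own):

```shell
# Placeholder values — replace with your own resource group name and region
az group create \
  --name e6data-rg \
  --location eastus
```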
3. Create a Virtual Network
After creating a resource group, the next step is to create a Virtual Network (VNet) within that group. A VNet is an essential part of Azure networking and allows you to manage your network resources efficiently.
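For example, a sketch using placeholder names and an illustrative 10.0.0.0/16 address space (choose a range that does not overlap with your existing networks):

```shell
# Placeholder names and CIDR — adjust for your environment
az network vnet create \
  --resource-group e6data-rg \
  --name e6data-vnet \
  --address-prefixes 10.0.0.0/16 \
  --location eastus
```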
4. Create Subnets
Create AKS Subnet
To create a subnet specifically for Azure Kubernetes Service (AKS), use the following command:
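For example, with placeholder names and an illustrative CIDR carved out of the VNet address space:

```shell
# Placeholder names and CIDR; the subnet must fall within the VNet address space
az network vnet subnet create \
  --resource-group e6data-rg \
  --vnet-name e6data-vnet \
  --name aks-subnet \
  --address-prefixes 10.0.0.0/20
```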
Create ACI Subnet
To create a subnet specifically for Azure Container Instances (ACI), use the following command:
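For example, with placeholder names and a CIDR that does not overlap the AKS subnet:

```shell
# Placeholder names and CIDR; must not overlap other subnets in the VNet
az network vnet subnet create \
  --resource-group e6data-rg \
  --vnet-name e6data-vnet \
  --name aci-subnet \
  --address-prefixes 10.0.16.0/20
```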
5. Delegate ACI Subnet
Update ACI Subnet Delegation
To update an existing subnet and delegate it for use by Azure Container Instances (ACI), use the following command:
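For example, delegating the placeholder aci-subnet to the Azure Container Instances service:

```shell
# Placeholder names — the delegation value is the ACI service identifier
az network vnet subnet update \
  --resource-group e6data-rg \
  --vnet-name e6data-vnet \
  --name aci-subnet \
  --delegations Microsoft.ContainerInstance/containerGroups
```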
Note:
Delegation is required to allow Azure Container Instances to use the subnet.
Ensure the subnet is properly configured and does not conflict with other network configurations.
6. Create a Public IP Address
To configure a NAT gateway, you need to create a static public IP address. Follow these steps to create the public IP address required for the NAT gateway:
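For example, with a placeholder name (NAT gateways require a Standard SKU, statically allocated public IP):

```shell
# Placeholder name — NAT gateway requires Standard SKU and static allocation
az network public-ip create \
  --resource-group e6data-rg \
  --name e6data-nat-ip \
  --sku Standard \
  --allocation-method Static
```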
7. Create a NAT Gateway
To set up network address translation (NAT) for outbound traffic, you need to create a NAT gateway and associate it with a public IP address. Follow these steps:
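For example, creating a NAT gateway (placeholder names) associated with the public IP created above:

```shell
# Placeholder names; idle timeout is in minutes
az network nat gateway create \
  --resource-group e6data-rg \
  --name e6data-nat \
  --public-ip-addresses e6data-nat-ip \
  --idle-timeout 10
```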
8. Associate the NAT Gateway with the AKS Subnet
To enable outbound connectivity through the NAT gateway for your AKS subnet, follow these steps:
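For example, attaching the NAT gateway to the placeholder aks-subnet:

```shell
# Placeholder names — routes the subnet's outbound traffic through the NAT gateway
az network vnet subnet update \
  --resource-group e6data-rg \
  --vnet-name e6data-vnet \
  --name aks-subnet \
  --nat-gateway e6data-nat
```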
Note:
Associating the NAT gateway with the AKS subnet ensures that all outbound traffic from the AKS cluster is routed through the NAT gateway, providing a single, stable IP address for outbound traffic.
Verify that the NAT gateway and subnet configurations are correctly set up to avoid connectivity issues.
9. Create a Key Vault
Create an Azure Key Vault to securely store certificates used for TLS connectivity. This vault will provide centralized, secure management of certificates, ensuring encrypted communication within your services or applications, such as in an AKS cluster.
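For example, with a placeholder vault name (Key Vault names must be globally unique across Azure):

```shell
# Placeholder name — Key Vault names must be globally unique
az keyvault create \
  --resource-group e6data-rg \
  --name e6data-kv \
  --location eastus
```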
10. Creating a New Azure AKS Cluster
Follow these instructions to set up a new Azure Kubernetes Service (AKS) cluster. Ensure that the Azure CLI is installed and configured on your local machine. If you haven’t installed the Azure CLI yet, please refer to the How to install the Azure CLI guide for setup.
Open a Terminal or Command Prompt
Run the Following Command to Create a New AKS Cluster:
For detailed instructions and more advanced configurations, refer to the official Azure documentation: Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI.
Example Command:
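The command below is a hedged sketch that reflects the configuration notes in this section (Azure CNI Overlay with Cilium, a Microsoft Entra admin group, and the node OS upgrade channel disabled). All names, CIDRs, and IDs are placeholders; verify the flags against the current az aks create reference before running:

```shell
# All values below are placeholders — substitute your own names, IDs, and CIDRs
az aks create \
  --resource-group e6data-rg \
  --name e6data-aks \
  --location eastus \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --network-dataplane cilium \
  --pod-cidr 192.168.0.0/16 \
  --service-cidr 172.16.0.0/16 \
  --dns-service-ip 172.16.0.10 \
  --vnet-subnet-id <aks-subnet-resource-id> \
  --enable-aad \
  --aad-admin-group-object-ids <entra-group-object-id> \
  --node-os-upgrade-channel None \
  --generate-ssh-keys
```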
If you haven't already configured Azure AD groups for AKS RBAC, you can refer to the following link for instructions: Configuring groups for Azure AKS with Azure AD RBAC. This will guide you in setting up and managing Azure AD groups for role-based access control within your AKS cluster.
Azure CNI Overlay networking is a prerequisite for using Karpenter in AKS. This networking mode is essential because:
It assigns pod IPs from a separate private CIDR, distinct from the VNet.
It prevents VNet IP exhaustion, which is crucial for Karpenter's dynamic node scaling.
Network Configuration: The cluster is configured with the Azure CNI and Cilium for network policy enforcement and data plane management.
Service and DNS IPs: The service CIDR and DNS service IP should be configured to avoid overlaps with your existing network.
This configuration requires you to have a Microsoft Entra group for your cluster. This group is registered as an admin group on the cluster to grant admin permissions. If you don't have an existing Microsoft Entra group, you can create one using the az ad group create command.
Important Note:
Here, we are disabling the node OS upgrade channel by setting it to none. This prevents automatic OS upgrades that would restart the nodes in the default node pool, which could result in bootstrap token rotation. The bootstrap token is used in the environment variables for Karpenter.
If a manual upgrade is initiated, which causes the nodes to restart, it is critical to update the bootstrap token in the Karpenter environment variables to ensure smooth operation and prevent any potential disruptions in scaling.
Wait for the cluster creation process to complete. This may take some time.
Once the AKS cluster is created, you can retrieve the connection information by running the following command:
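For example, with the placeholder resource group and cluster names used above:

```shell
# Merges the cluster's credentials into your kubeconfig (typically ~/.kube/config)
az aks get-credentials \
  --resource-group e6data-rg \
  --name e6data-aks
```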
Verify the connection to the AKS cluster by running the following command:
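```shell
kubectl get nodes
```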
This should display the list of nodes in your AKS cluster.
Set up Karpenter
To set up Karpenter for your AKS cluster, refer to the official Karpenter documentation in the GitHub repository Azure/karpenter-provider-azure (AKS Karpenter Provider).
Karpenter has two main components:
AKSNodeClass
NodePool
AKSNodeClass
NodeClasses in Karpenter act as specialized templates for worker nodes, customized for specific cloud platforms like AKSNodeClasses for Azure. These templates specify essential node configurations, including the operating system image, network security settings, subnet placement, and access permissions.
A. Create an e6data AKS Node Class
NodePool
A single Karpenter NodePool in Azure AKS manages diverse pods, streamlining node management by eliminating the need for multiple node groups. The consolidation policy set to WhenEmpty optimizes costs by removing nodes when they become empty.
B. Create an e6data NodePool
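A minimal sketch of both resources, applied as a single manifest, is shown below. The API versions, kinds, and field names are assumptions based on the karpenter-provider-azure project and may differ between releases; the resource names (e6data-nodeclass, e6data-nodepool) are placeholders. Confirm the schema against the provider repository before applying:

```shell
# Sketch only — verify API versions and field names against karpenter-provider-azure docs
kubectl apply -f - <<EOF
apiVersion: karpenter.azure.com/v1alpha2
kind: AKSNodeClass
metadata:
  name: e6data-nodeclass
spec:
  imageFamily: Ubuntu2204
---
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: e6data-nodepool
spec:
  template:
    spec:
      nodeClassRef:
        name: e6data-nodeclass
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 30s
EOF
```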
Set up Nginx Ingress Controller
An ingress controller is required in the AKS cluster to manage external access to services, particularly for connectivity between the e6data Console and e6data Cluster, as well as for providing connectivity between querying/BI tools and the e6data Query Engine.
To install the NGINX Ingress Controller in your Azure Kubernetes Service (AKS) cluster, follow these steps:
Add the NGINX Ingress Controller Helm repository:
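```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```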
Install the NGINX Ingress Controller using Helm:
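For example (the release name nginx-ingress is a placeholder):

```shell
# Release name is a placeholder; --create-namespace creates the namespace if absent
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace <nginx-ingress-namespace> \
  --create-namespace
```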
Replace <nginx-ingress-namespace> with your desired namespace.
Wait for the NGINX Ingress Controller to be fully deployed:
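```shell
# Blocks until the controller pod reports Ready, or times out
kubectl wait --namespace <nginx-ingress-namespace> \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```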
Create a dummy Ingress resource to ensure the controller is working:
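A sketch of a minimal Ingress is shown below; the names dummy-ingress and dummy-service are placeholders, and the backend service does not need to exist for the Ingress object itself to be created:

```shell
# Placeholder names — the backend service need not exist for this check
kubectl apply -n <your-namespace> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dummy-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /dummy
        pathType: Prefix
        backend:
          service:
            name: dummy-service
            port:
              number: 80
EOF
```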
Replace <your-namespace> with the namespace where you want to create the dummy Ingress.
Verify the Ingress resource was created:
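```shell
kubectl get ingress -n <your-namespace>
```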
Deploying Azure Key Vault to Kubernetes (akv2k8s) using Helm
The akv2k8s tool is essential for e6data's secure operation in AKS. It provides a seamless and secure method to access Azure Key Vault resources within the Kubernetes environment. Specifically for e6data:
TLS Connectivity: akv2k8s allows e6data to retrieve TLS certificates stored in Azure Key Vault, ensuring secure communications.
Gateway Connectivity: It facilitates the acquisition of domain certificates from Azure Key Vault, necessary for establishing gateway connections to the e6data cluster.
The following section provides a step-by-step guide to deploying the akv2k8s (Azure Key Vault to Kubernetes) Helm chart into your Azure Kubernetes Service (AKS) cluster. This deployment allows seamless integration between Azure Key Vault and Kubernetes, enabling your workloads to securely fetch secrets directly from Azure Key Vault.
Prerequisites
Before starting the deployment, ensure the following prerequisites are met:
Helm Installed: Helm should be installed on your local machine. You can verify this by running helm version.
Kubeconfig Access: Ensure you have access to your Kubernetes cluster via your kubeconfig file, typically located at ~/.kube/config.
Kubernetes Access: You need kubectl installed and configured to interact with your AKS cluster.
Install Tools
Step-by-Step Deployment Instructions
Step 1: Add the Helm Repository
Start by adding the Helm repository containing the akv2k8s chart:
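A sketch is shown below; the repository URL is the one published in the akv2k8s project documentation, so verify it against the current docs before use:

```shell
# Repository URL per the akv2k8s docs — verify against the current documentation
helm repo add spv-charts https://charts.spvapi.no
```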
This command adds the spv-charts repository to Helm, where the akv2k8s chart is hosted.
Step 2: Update Helm Repositories
Next, update your Helm repositories to ensure you have access to the latest charts:
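```shell
helm repo update
```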
Step 3: Install the akv2k8s Chart
Install the akv2k8s chart into the kube-system namespace of your AKS cluster:
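```shell
helm install akv2k8s spv-charts/akv2k8s --namespace kube-system
```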
akv2k8s: The release name for this deployment.
spv-charts/akv2k8s: Specifies the repository and chart name.
--namespace kube-system: Deploys the release into the kube-system namespace.
Step 4: Verify the Installation
To confirm the successful installation of the chart, list the Helm releases in the kube-system namespace:
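```shell
helm list --namespace kube-system
```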
You should see the akv2k8s release listed among your installed Helm charts.
Step 5: Monitor the Pods
Check the status of the pods created by the akv2k8s deployment:
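The chart does not document a guaranteed pod label here, so a simple name filter is used as an assumption:

```shell
# Filter by pod name rather than label, as an assumption
kubectl get pods -n kube-system | grep akv2k8s
```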
This command will show the running status of the pods related to akv2k8s in your cluster.
Summary of Commands
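For convenience, the steps above can be collected into one sequence (same assumptions as before; verify the chart repository URL against the akv2k8s docs):

```shell
helm repo add spv-charts https://charts.spvapi.no
helm repo update
helm install akv2k8s spv-charts/akv2k8s --namespace kube-system
helm list --namespace kube-system
kubectl get pods -n kube-system | grep akv2k8s
```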
If a Key Vault is not already present, you can follow the official Microsoft documentation to create one via the Azure portal: Quickstart: Create a Key Vault using the Azure portal.