Setup using Terraform in AWS

If you prefer to set up all the necessary infrastructure for e6data automatically, Terraform scripts and instructions to use them are provided below.

Prerequisites

Before you begin, ensure that you have the following prerequisites in place:

  1. An AWS account with appropriate permissions to create and manage resources.

  2. A local development environment with Terraform installed.

Create the e6data Workspace

  1. Log in to the e6data Console.

  2. Navigate to Workspaces in the left-side navigation bar or click Create Workspace.

  3. Select AWS as the Cloud Provider.

  4. Proceed to the next section to deploy the Workspace.

Setup e6data

Using the Terraform script, the e6data Workspace will be deployed inside an Amazon EKS Cluster. The subsequent sections provide instructions to edit the two Terraform files required for the deployment: provider.tf and terraform.tfvars.

If an Amazon EKS cluster is not available, please follow the instructions in the Creating an Amazon EKS Cluster (optional) section below.

If Terraform is not installed, please follow the instructions in the Installing Terraform Locally (optional) section below.

Download e6data Terraform Scripts

Please download/clone the e6x-labs/terraform repo from GitHub.

Configure provider.tf

The Amazon Web Services (AWS) provider in Terraform allows you to manage AWS resources efficiently. However, before utilizing the provider, it's crucial to configure it with the appropriate credentials.

Extract the scripts downloaded in the previous step and navigate to the aws_workspace folder.

Edit the provider.tf file according to your requirements. Please refer to the official Terraform documentation to find instructions to use the authentication method most appropriate to your environment.
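
For example, authentication can be supplied through a named profile from your local AWS credentials file. A minimal sketch, assuming a profile called e6data-deploy (an illustrative name, not part of the e6data scripts):

```hcl
# Hypothetical variant of the provider block using a named profile
# from ~/.aws/credentials instead of environment variables.
provider "aws" {
  region  = "us-east-1"     # replace with your region
  profile = "e6data-deploy" # assumed profile name; replace with yours
}
```

Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) or an instance profile work equally well; the Terraform AWS provider documentation lists all supported methods.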

Sample provider.tf
provider "aws" {
  region  = var.aws_region
  default_tags {
    tags = var.cost_tags ### default_tags accepts only a tags argument; app = "e6data" is carried in var.cost_tags
  }
}

terraform {
  backend "s3" {
    bucket = "<BUCKET_TO_STORE_TF_STATE>"
    key    = "terraform/state.tfstate"
    region = "<AWS_REGION>" ### Backend blocks cannot reference variables; set the region literally.
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.22.0"
    }
  }
}

Specifying AWS S3 Bucket for Terraform State file

Specifying an AWS S3 bucket for Terraform state files enhances security, provides durability, and supports collaborative infrastructure management. It isolates the state from resources, ensures high availability, and facilitates teamwork, offering versioning for easy rollback and auditability.

To specify an AWS S3 bucket for storing the Terraform state when using AWS as the provider, add the backend configuration shown in the sample above to the Terraform script, replacing <BUCKET_TO_STORE_TF_STATE> with the name of the target S3 bucket.

The key parameter specifies the name of the state file within the bucket. It is set to "terraform/state.tfstate", but it can be edited as required.

  • Make sure that the IAM user or role associated with the provided access credentials has the necessary permissions to read from and write to the specified S3 bucket.

  • For more information and to explore additional backend options, refer to the Terraform Backend Configuration documentation.
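
Note that the state bucket itself must exist before terraform init runs, so it is typically created manually or in a separate, earlier configuration. A sketch of what such a bucket with versioning enabled might look like (the bucket name is a placeholder):

```hcl
# Illustrative only: create the state bucket outside the main configuration,
# since `terraform init` needs it to exist before the backend can be used.
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-e6data-tf-state" # placeholder; bucket names are globally unique
}

# Versioning allows rolling back to earlier state file revisions.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}
```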

Configuration Variables in terraform.tfvars File

The terraform.tfvars file contains the following variables that need to be configured before executing the Terraform script.

Sample terraform.tfvars
# AWS Variables
aws_region                      = "us-east-1" ### AWS region of the EKS cluster.

# e6data Workspace Variables
workspace_name                  = "workspace" ### Name of the e6data workspace to be created.
# Note: The variable workspace_name should meet the following criteria:
# a) Accepts only lowercase alphanumeric characters.
# b) Must have a minimum of 3 characters.

helm_chart_version              = "2.0.4" ### e6data workspace Helm chart version to be used.

# Kubernetes Variables
kube_version                    = "1.28" ### The Kubernetes cluster version. Version 1.24 or higher is required.
eks_disk_size                   = 100 ### Disk size for the disks in the node group. A minimum of 100 GB is required.
nodepool_instance_family        = ["c7g", "c7gd", "c6g", "c6gd", "r6g", "r6gd", "r7g", "r7gd", "i3"]

# Network Variables
cidr_block                      = "10.200.0.0/16"
excluded_az                     = ["us-east-1e"]

# EKS Cluster Variables
cluster_name                    = "ekscluster"            ### The name of the Kubernetes cluster to be created for the e6data workspace.
cluster_log_types               = ["scheduler", "controllerManager","authenticator", "audit"] ### List of the desired control plane logging to enable.

public_access_cidrs             = ["0.0.0.0/0"]
### Indicates which CIDR blocks can access the Amazon EKS public API server endpoint when enabled. The default value is set to the CIDR of e6data (i.e., 44.194.151.209/32).
### Please include the IP address of the EC2 instance or the CIDR range of the local network from which Terraform is being executed. This allows the Terraform scripts to access Kubernetes components (service accounts, configmaps, etc.).

# Data Bucket names
bucket_names                    = ["*"] ### List of bucket names that the e6data engine queries and therefore requires read access to. The default ["*"] grants access to all buckets; it is advisable to restrict this.

# Kubernetes Namespace
kubernetes_namespace            = "namespace" ### Value of the Kubernetes namespace to deploy the e6data workspace.

# Cost Tags
cost_tags = {
  app = "e6data"
}

# AWS Command Line Variable
aws_command_line_path           = "aws"  ### Path to the AWS Command Line Interface executable. Run "which aws" to get the exact path.

# ALB Ingress Controller Variables
alb_ingress_controller_namespace = "kube-system"
alb_ingress_controller_service_account_name = "alb-ingress-controller"
alb_controller_helm_chart_version = "1.6.1"

# Karpenter Variables
karpenter_namespace            = "kube-system" ### Namespace to deploy Karpenter
karpenter_service_account_name = "karpenter"   ### Service account name for Karpenter
karpenter_release_version      = "0.36.0"      ### Version of the Karpenter Helm chart

#### Additional ingress/egress rules for the EKS Security Group
# additional_ingress_rules = [
#   {
#     from_port   = 22
#     to_port     = 22
#     protocol    = "tcp"
#     self        = false
#     cidr_blocks = ["0.0.0.0/0"]
#   }
# ]

## Additional egress rule to allow the Hive Metastore port
additional_egress_rules = [
  {
    from_port   = 9083
    to_port     = 9083
    protocol    = "tcp"
    self        = false
    cidr_blocks = ["0.0.0.0/0"]
  }
]
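
The workspace_name criteria noted in the sample (lowercase alphanumeric, minimum 3 characters) can be expressed as a Terraform variable validation. This is a sketch only; the downloaded scripts may already enforce the constraint differently:

```hcl
variable "workspace_name" {
  type        = string
  description = "Name of the e6data workspace to be created."

  validation {
    # Accepts only lowercase alphanumeric characters, minimum length 3.
    condition     = can(regex("^[a-z0-9]{3,}$", var.workspace_name))
    error_message = "workspace_name must be lowercase alphanumeric and at least 3 characters long."
  }
}
```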

Please update the values of these variables in the terraform.tfvars file to match the specific configuration details for your environment:

aws_region

AWS region of the EKS cluster.

workspace_name

Name of the e6data workspace to be created.

helm_chart_version

Version of the e6data Helm chart to install (1.0.7 or above).

kube_version

The Kubernetes cluster version. Version 1.24 or higher is required.

eks_disk_size

Disk size for the disks in the node group. A minimum of 100 GB is required.

nodepool_instance_family

The instance types to include in the Karpenter NodePool.

cidr_block

The IPv4 CIDR block for the VPC in which to deploy the EKS cluster.

excluded_az

List of Availability Zone IDs to exclude. (We recommend excluding "us-east-1e" because the availability of Spot instances is limited in this Availability Zone.)

cluster_name

The name of the Kubernetes cluster to be created for the e6data workspace.

cluster_log_types

List of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging.

public_access_cidrs

Indicates which CIDR blocks can access the Amazon EKS public API server endpoint when enabled. The default value is set to the CIDR of e6data (i.e., 44.194.151.209/32).

bucket_names

List of bucket names that the e6data engine queries and therefore requires read access to.

kubernetes_namespace

Value of the Kubernetes namespace to deploy the e6data workspace.

cost_tags

The tags that will be applied to all the resources created by this Terraform script.

aws_command_line_path

The path to the AWS Command Line Interface executable. Run "which aws" to get the exact path.

alb_ingress_controller_namespace

The Kubernetes namespace in which to deploy the ALB Ingress Controller.

alb_ingress_controller_service_account_name

Service account name for the ALB Ingress controller.

alb_controller_helm_chart_version

The version of the ALB Ingress controller Helm chart.

karpenter_namespace

The namespace in which to deploy Karpenter.

karpenter_service_account_name

Service account name for Karpenter.

karpenter_release_version

Version of the Karpenter Helm chart.

additional_ingress_rules

Specify any extra ports to be opened in the EKS security group for inbound traffic.

additional_egress_rules

Specify any extra ports to be opened in the EKS security group for outbound traffic.

When using an existing EKS Cluster with Karpenter, please ensure the Karpenter controller role includes the following permissions:

  1. The Karpenter controller should have access to the e6data NodePool and EC2NodeClass.

  2. The iam:PassRole permission must be assigned to the e6data Karpenter node role.

The identifiers for the e6data NodePool, EC2NodeClass, and Karpenter node role will be available in the Terraform outputs.
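
As an illustration, the iam:PassRole requirement could take a shape like the following policy fragment. The account ID and role name are placeholders; use the actual Karpenter node role ARN from the Terraform outputs:

```hcl
# Hypothetical policy fragment attached to the Karpenter controller role,
# allowing it to pass the e6data Karpenter node role to instances it launches.
data "aws_iam_policy_document" "karpenter_pass_node_role" {
  statement {
    effect    = "Allow"
    actions   = ["iam:PassRole"]
    resources = ["arn:aws:iam::111122223333:role/e6data-karpenter-node-role"] # placeholder ARN
  }
}
```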

Execution Commands

Once you have configured the necessary variables in the provider.tf & terraform.tfvars files, you can proceed with the deployment of the e6data workspace. Follow the steps below to initiate the deployment:

  1. Navigate to the directory containing the Terraform files. It is essential to be in the correct directory for the Terraform commands to execute successfully.

  2. Initialize Terraform:

terraform init

  3. Generate a Terraform plan and save it to a file (e.g., e6.plan):

terraform plan -var-file="terraform.tfvars" --out="e6.plan"

The -var-file flag specifies the input variable file (terraform.tfvars) that contains the necessary configuration values for the deployment.

  4. Review the generated plan.

  5. Apply the changes using the generated plan file:

terraform apply "e6.plan"

This command applies the changes specified in the plan file (e6.plan) to deploy the e6data workspace in your environment.

  6. Make note of the values returned by the script.

  7. Return to the e6data Console and enter the values returned in the previous step.

Deployment Overview and Resource Provisioning

This section provides a comprehensive overview of the resources deployed using the Terraform script for the e6data workspace deployment.

  • Only the e6data engine, residing within the customer account, has access to data stores.

  • The cross-account role does not have access to data stores; therefore, access to data stores from the e6data platform is not possible.

  • All roles and role-bindings are created only within the e6data namespace.

Permissions

Engine Permissions

The e6data Engine, which is deployed inside the customer boundary, requires the following permissions:

glueReadOnlyAccess to read Glue catalogs:

actions = [
      "glue:GetDatabase*",
      "glue:GetTable*",
      "glue:GetPartitions"
    ]

ListBucket:

actions = ["s3:ListBucket"]

ReadE6dataBucket:

actions = [
      "s3:GetObject",
      "s3:GetObjectTagging",
      "s3:GetObjectVersion",
      "s3:ListObjects"
    ]
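
Taken together, the engine permissions above correspond to an IAM policy document along these lines. This is a sketch; the resource ARNs are placeholders and should be scoped to your actual Glue catalogs and S3 buckets:

```hcl
data "aws_iam_policy_document" "e6data_engine" {
  statement {
    sid       = "GlueReadOnlyAccess"
    actions   = ["glue:GetDatabase*", "glue:GetTable*", "glue:GetPartitions"]
    resources = ["*"] # narrow to specific Glue catalog ARNs in production
  }

  statement {
    sid       = "ListBucket"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::my-data-bucket"] # placeholder bucket ARN
  }

  statement {
    sid       = "ReadE6dataBucket"
    actions   = ["s3:GetObject", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:ListObjects"]
    resources = ["arn:aws:s3:::my-data-bucket/*"] # placeholder object ARN
  }
}
```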

Cross-account Role Permissions

  • ListBucket:

    • actions = ["s3:ListBucket"]
  • ReadWriteE6dataBucket:

    • actions = [
            "s3:PutObject",
            "s3:GetObject",
            "s3:GetObjectTagging",
            "s3:GetObjectVersion",
            "s3:PutObjectTagging",
            "s3:DeleteObjectVersion",
            "s3:DeleteObject",
            "s3:DeleteObjectTagging",
            "s3:ListObjects"
          ]
  • describeEKSCluster:

    • actions = ["eks:DescribeCluster", "eks:DescribeNodegroup"]
  • Permissions to create endpoint related resources

Monitoring Permissions

The e6data microservices used for monitoring the cluster and reporting metrics require these permissions.

Resources Created

  1. VPC and Subnets

    • The network module creates a Virtual Private Cloud (VPC) within the specified CIDR range provided by the customer, establishing public and private subnets across all available Availability Zones (excluding those mentioned in the "excluded_az" input). In this setup, a single Internet Gateway (IG) and a single Network Address Translation (NAT) Gateway are deployed, serving all the Availability Zones. Public subnets are designed for resources requiring direct internet access, while private subnets use the shared NAT Gateway to enable secure outbound connectivity. This resource design ensures efficient networking while minimizing costs by centralizing the IG and NAT Gateway resources across the VPC's Availability Zones, promoting security and scalability for the customer's AWS environment.

  2. EKS Cluster

    • A new EKS cluster will be created within the previously created VPC. This EKS cluster will have OIDC (OpenID Connect) configured to enhance security for user and service authentication within the cluster. Additionally, two IAM roles will be generated to manage permissions within the cluster:

      • "iam_eks_cluster_role": This role permits the EKS cluster to assume its identity, allowing it to interact securely with various AWS services and resources.

      • "iam_eks_node_role": This role is utilized by the EKS worker nodes to gain similar permissions, allowing them to assume this role for executing tasks and operations within the cluster.

  3. Karpenter

    • Along with creating the EKS cluster, we will also configure Karpenter.

    • Karpenter is an open-source Kubernetes autoscaler that automatically provisions the right-sized compute resources in response to changing application loads within a Kubernetes cluster.

    • Auto-scaling is necessary to automatically increase the number of executors when the e6data engine identifies heavy query loads.

    • The "scale-out" autoscaling strategy is effective for managing concurrent workloads. Karpenter is generally deployed in your cluster using a Helm chart or YAML manifest.

    • For Karpenter to function effectively, it needs permissions to inspect and modify EC2 instances and manage node groups within the cluster. This requires providing Karpenter with the appropriate AWS Identity and Access Management (IAM) roles and policies.

    • An SQS queue is configured to manage node interruption messages for Karpenter within an EKS cluster. The setup includes IAM policies that permit AWS services to send messages to this queue. CloudWatch Event Rules are used to capture a variety of AWS events, such as Spot Instance Interruption Warnings, AWS Health Events, EC2 Instance Rebalance Recommendations, and EC2 Instance State-change Notifications. These events are directed to the SQS queue, allowing Karpenter to proactively handle node interruptions, ensuring high availability and minimizing workload disruptions.

  4. ALB Ingress controller

    • AWS ALB Ingress Controller will be deployed in the EKS cluster using a Helm release resource in Terraform.

    • The AWS ALB Ingress Controller is an open-source project that simplifies the management of AWS Elastic Load Balancers (ALBs and NLBs) within Kubernetes clusters. It provisions AWS load balancers automatically based on Kubernetes Ingress and Service resources, allowing users to define routing and load balancing configurations effortlessly.

  5. AWS S3 Bucket

    • The Terraform script sets up an AWS S3 bucket with enhanced security and access controls to store the query results. It enables versioning, server-side encryption, and logging for the bucket. Public access is blocked, and the bucket ACL is set to private to ensure secure access to the bucket contents.
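
The Karpenter interruption handling described in item 3 above can be pictured as the following Terraform sketch. The resource names are assumptions; the actual definitions live in the downloaded scripts:

```hcl
# Illustrative wiring: Spot interruption warnings are routed to an SQS
# queue that Karpenter polls, so nodes can be drained before termination.
resource "aws_sqs_queue" "karpenter_interruption" {
  name                      = "karpenter-interruption-queue" # placeholder name
  message_retention_seconds = 300
}

resource "aws_cloudwatch_event_rule" "spot_interruption" {
  name = "karpenter-spot-interruption" # placeholder name
  event_pattern = jsonencode({
    source        = ["aws.ec2"]
    "detail-type" = ["EC2 Spot Instance Interruption Warning"]
  })
}

resource "aws_cloudwatch_event_target" "spot_to_queue" {
  rule = aws_cloudwatch_event_rule.spot_interruption.name
  arn  = aws_sqs_queue.karpenter_interruption.arn
}
```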

Roles & Permissions Created

  1. Configuring IAM Policies and Roles for e6data Engine with Federated Credentials:

    • The Terraform script defines policies for AWS Glue and S3 access and creates an IAM role with federated credentials. This enables the e6data engine to securely interact with Glue and read from designated S3 buckets in AWS.

  2. Cross Account Role for e6data Control Plane:

    • It also sets up a cross-account IAM role that grants read/write access to the workspace S3 bucket and permission to describe the EKS cluster. This role enables secure interaction and allows the e6data control plane to assume the role based on the AWS OIDC role.

  3. Helm Chart Deployment:

    • The Helm chart deployed using the Terraform script creates roles and role bindings in the EKS cluster for the e6data control plane user.

    • These roles and role bindings define the permissions and access levels for the control plane user within the cluster, allowing it to perform specific actions and interact with resources as required by the e6data workspace.

    • The defined permissions include the ability to manage various resources such as pods, nodes, services, ingresses, configmaps, secrets, jobs, deployments, daemonsets, statefulsets, and replicasets.

    • By deploying these cluster roles through the Helm release, the e6data control plane user is equipped with the necessary permissions to effectively manage and interact with the resources within the cluster, enabling seamless operation and configuration of the e6data platform.

To ensure proper communication between the Terraform script and the EKS cluster, it is essential to retrieve the existing "aws-auth" ConfigMap in the "kube-system" namespace using the data "kubernetes_config_map_v1" "aws_auth_read" block. This allows the script to access and read the configuration map for further usage.

However, please note that if you encounter an "unauthorized" error while retrieving the ConfigMap, it may indicate that the IAM role or user executing the Terraform script does not have sufficient privileges to access and modify the ConfigMap.

  • When an Amazon EKS cluster is created, the IAM principal that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration.

  • To grant additional IAM principals the ability to interact with a cluster, you need to edit the aws-auth ConfigMap within Kubernetes and create a Kubernetes rolebinding or clusterrolebinding with the name of a group that you specify in the aws-auth ConfigMap. You can find the detailed steps and further information in the AWS documentation: Add User or IAM Role to a Cluster.

    • By granting the appropriate permissions to access and modify the "aws-auth" ConfigMap, you will resolve the "unauthorized" error and ensure seamless communication between the Terraform script and the EKS cluster during the deployment of the e6data workspace.
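
For reference, the read described above uses a standard Kubernetes provider data source; a minimal sketch consistent with the block name the scripts use:

```hcl
# Reads the existing aws-auth ConfigMap so the scripts can inspect (and
# later extend) role mappings without overwriting entries managed by EKS.
data "kubernetes_config_map_v1" "aws_auth_read" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
}
```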

Creating an Amazon EKS Cluster (optional)

To create an Amazon EKS cluster, please follow the instructions provided in the AWS documentation: Creating an Amazon EKS cluster - Amazon EKS.

By following the steps in the AWS Documentation to create an Amazon EKS cluster, the below components will be created:

  1. Amazon Virtual Private Cloud (VPC)

    • A VPC will be created with both public and private subnets. The VPC provides the networking infrastructure for the EKS cluster.

  2. Security Groups

    • Security groups will be created to control inbound and outbound traffic to the EKS cluster. These security groups ensure that the cluster is secure and accessible only to authorized entities.

  3. IAM Roles

    • IAM roles will be created for the EKS cluster and its associated resources. These roles define the permissions and access control policies for managing the cluster.

  4. Elastic Load Balancer (ELB)

    • An ELB will be created to distribute incoming traffic to the worker nodes in the EKS cluster. This load balancer enables high availability and scalability for the application.

  5. Auto-Scaling Group

    • An auto-scaling group will be set up to automatically adjust the number of worker nodes based on the demand of your applications. This ensures optimal resource utilization and availability.

  6. Amazon EKS Cluster

    • The EKS cluster itself will be created, serving as the control plane for the Kubernetes environment. The cluster manages the lifecycle of applications and provides the necessary resources for running containers.

Installing Terraform Locally (optional)

To install Terraform on your local machine, follow these steps, adapted from the official HashiCorp Terraform documentation:

  1. Visit the official Terraform website at Terraform by HashiCorp.

  2. Navigate to the "Downloads" page.

  3. Download the appropriate package for your operating system (Windows, macOS, Linux).

  4. Extract the downloaded package to a directory of your choice.

  5. Add the Terraform executable to your system's PATH environment variable.

    • For Windows:

      1. Open the Start menu and search for "Environment Variables."

      2. Select "Edit the system environment variables."

      3. Click the "Environment Variables" button.

      4. Under "System variables," find the "Path" variable and click "Edit."

      5. Add the path to the directory where you extracted the Terraform executable (e.g., C:\terraform) to the list of paths.

      6. Click "OK" to save the changes.

    • For macOS and Linux:

      1. Open a terminal.

      2. Run the following command, replacing <path_to_extracted_binary> with the path to the directory where you extracted the Terraform executable: export PATH=$PATH:<path_to_extracted_binary>

      3. Optionally, you can add this command to your shell's profile file (e.g., ~/.bash_profile, ~/.bashrc, ~/.zshrc) to make it persistent across terminal sessions.

  6. Verify the installation by opening a new terminal window and running the following command: terraform version. If Terraform is installed correctly, you should see the version number displayed.
