Enable versioning for the bucket:
In the AWS Management Console, go to the bucket's properties.
Navigate to "Versioning" and enable it.
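If you prefer the AWS CLI, versioning can also be enabled with a single call; <BUCKET_NAME> is a placeholder for your bucket name:
aws s3api put-bucket-versioning --bucket <BUCKET_NAME> --versioning-configuration Status=Enabled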
Enable server-side encryption for the bucket:
In the AWS Management Console, go to the bucket's properties
Navigate to "Default encryption"
Select the encryption algorithm "AES-256"
Make sure the "Bucket Key" option is set to Enabled.
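The same default-encryption settings can be applied via the AWS CLI, again with <BUCKET_NAME> as a placeholder:
aws s3api put-bucket-encryption --bucket <BUCKET_NAME> --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"},"BucketKeyEnabled":true}]}'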
Block public access to the bucket:
In the AWS Management Console, go to the bucket's properties
Navigate to "Permissions"
Click "Block public access (bucket settings)".
Turn on all four Block Public Access settings and save the changes.
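A CLI equivalent that blocks all four categories of public access:
aws s3api put-public-access-block --bucket <BUCKET_NAME> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true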
Enable logging for the bucket:
In the AWS Management Console, go to the bucket's properties
Navigate to "Server access logging".
Enable it, then choose a target bucket to store the logs and a prefix for the logs.
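The equivalent CLI call, where <LOG_BUCKET> and <LOG_PREFIX> are placeholders for your target bucket and prefix:
aws s3api put-bucket-logging --bucket <BUCKET_NAME> --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"<LOG_BUCKET>","TargetPrefix":"<LOG_PREFIX>"}}'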
Configure the bucket's ACL to be private:
In the AWS Management Console, go to the bucket's properties.
Navigate to "Permissions"
Select "Access Control List (ACL)"
Set the bucket's ACL to private.
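Via the CLI, the canned private ACL can be applied with:
aws s3api put-bucket-acl --bucket <BUCKET_NAME> --acl private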
Configure ownership controls for the bucket:
In the AWS Management Console, go to the bucket's properties
Navigate to "Object Ownership".
Select "Bucket owner preferred" as the object ownership.
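The CLI equivalent:
aws s3api put-bucket-ownership-controls --bucket <BUCKET_NAME> --ownership-controls '{"Rules":[{"ObjectOwnership":"BucketOwnerPreferred"}]}'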
Please make note of the S3 Bucket Name; it will be required when creating the Workspace in the e6data Console.
Create an OIDC IAM Role for e6data Query Engine
The e6data Query Engine requires access to the S3 buckets containing the target data for querying. To provision the required access, we create an IAM Role and associate it with a Kubernetes service account.
This configuration allows us to establish a secure connection between the Kubernetes environment and AWS. Once this IAM Role is associated with the service account, any Pods within the e6data clusters that are configured to use this service account will inherit the permissions defined in the IAM Role.
Retrieve the OIDC Provider Suffix
First retrieve the OIDC Provider Suffix, which is required to create the IAM Role:
Open a Terminal
Open a terminal or command prompt where you can run AWS CLI commands.
Run the Command
Execute the following command to retrieve the OIDC provider suffix for your EKS cluster. Replace <EKS_CLUSTER_NAME> with the actual name of your EKS cluster, and <AWS_REGION> with the AWS region where your cluster is located.
aws eks describe-cluster --name <EKS_CLUSTER_NAME> --region <AWS_REGION> --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///"
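The command prints the cluster's OIDC issuer URL with the https:// scheme stripped; the output should look similar to oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B7 (the exact ID is unique to your cluster). This value is the <OIDC_PROVIDER_SUFFIX> used in the next step.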
Create an IAM Role for the e6data Query Engine
Create the AssumeRole policy for the e6data Query Engine using the template provided below. Replace <OIDC_PROVIDER_SUFFIX> with the value retrieved in the previous step:
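As a minimal sketch of what such an OIDC trust policy typically looks like; <AWS_ACCOUNT_ID>, the role name, and the service-account pattern are placeholder assumptions, so defer to the actual template for the exact values:
cat > e6data-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER_SUFFIX>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "<OIDC_PROVIDER_SUFFIX>:sub": "system:serviceaccount:*:e6data*",
          "<OIDC_PROVIDER_SUFFIX>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
# <CROSS_ACCOUNT_ROLE_NAME> is a hypothetical name; use your own naming convention
aws iam create-role --role-name <CROSS_ACCOUNT_ROLE_NAME> --assume-role-policy-document file://e6data-trust-policy.json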
Please make note of the created CrossAccountRole ARN; it will be required later.
Cross-Account IAM Role to use Unload Operator
To grant the e6data engine access to the S3 bucket where query results are stored via the unload operator, specific permissions must be configured. Attach the following IAM policy to the engine role that was created during the prerequisite setup:
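A sketch of the policy, reconstructed from the actions described below; <UNLOAD_BUCKET> stands for the bucket's ARN, as noted after the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "<UNLOAD_BUCKET>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:GetObjectVersion",
        "s3:PutObjectTagging",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectTagging"
      ],
      "Resource": "<UNLOAD_BUCKET>/*"
    }
  ]
}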
This policy grants the e6data engine role permission to list the contents of the S3 bucket (s3:ListBucket) and to perform read/write operations on objects within it (s3:PutObject, s3:GetObject, s3:GetObjectTagging, s3:GetObjectVersion, s3:PutObjectTagging, s3:DeleteObjectVersion, s3:DeleteObject, s3:DeleteObjectTagging). Note that s3:ListBucket applies to the bucket itself, while the object actions apply to the objects under it.
Be sure to replace <UNLOAD_BUCKET> with the actual ARN of your S3 bucket.
Update ConfigMap in the EKS Cluster
Open a terminal or command prompt and connect to your EKS cluster by updating the context.
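For example, the kubeconfig context can be updated with:
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region <AWS_REGION>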
Use the kubectl command-line tool to view the current ConfigMap "aws-auth" in the "kube-system" namespace by running the following command:
kubectl get configmap aws-auth -n kube-system -o yaml
This will display the current configuration of the "aws-auth" ConfigMap, including its YAML representation.
Modify the ConfigMap and add mapRoles entries for the two roles described below; a sketch of the resulting ConfigMap follows the list.
The Role ARN of the previously created e6data cross-account role, with the username e6data-<WORKSPACE_NAME>-user.
The Role ARN of the previously created Karpenter node role, with the username "system:node:{{EC2PrivateDNSName}}" and the groups ["system:bootstrappers", "system:nodes"].
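A sketch of the resulting ConfigMap; <AWS_ACCOUNT_ID> and the role names are placeholders for the values from your account:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # e6data cross-account role created earlier
    - rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<E6DATA_CROSS_ACCOUNT_ROLE_NAME>
      username: e6data-<WORKSPACE_NAME>-user
    # Karpenter node role created earlier
    - rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<KARPENTER_NODE_ROLE_NAME>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes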
Be cautious when modifying the "aws-auth" ConfigMap, as it controls the authentication and authorization of your Amazon EKS worker nodes. Incorrect changes can lead to issues with the cluster's functionality. Always verify your changes before applying them to the cluster and ensure you have the necessary permissions to make updates.