Usage and Cost Management

Usage and Cost Management tracks resource usage and helps you optimize costs for efficient operations.



e6data offers a powerful framework to help you monitor, manage, and optimize your resource usage and associated costs. The Usage and Cost pane provides detailed insights into the performance and financial aspects of your e6data clusters, driven by ecores consumption.

What are ecores?

ecores are the virtual CPUs (vCPUs) that power your e6data clusters. They enable efficient data processing and query handling. Each ecore corresponds to a thread on a physical CPU core, defining the computational capacity of your cluster.

The Usage and Cost Management feature allows you to:

  • Track Resource Utilization: View comprehensive cluster usage data, including ecores consumed over specific timeframes.

  • Analyze Cost Trends: Understand the financial implications of your cluster usage with detailed cost breakdowns, enabling better budgeting and cost control.

  • Make Informed Decisions: Use the provided statistics and analytics to fine-tune your operations, ensuring efficient resource allocation and minimizing unnecessary expenditure.

With this feature, e6data empowers you to maintain transparency, manage cluster efficiency, and optimize costs effectively for a streamlined operational experience.

Cluster Usage and Cost Breakdown in the UI

The Cluster Usage and Cost Breakdown feature in the user interface (UI) provides a comprehensive overview of your cluster’s ecores consumption and associated costs. This overview section displays detailed data on a daily or hourly basis, offering a clear picture of how your clusters are performing over a specified time range.

Data Points Displayed in the Overview Section

The overview section highlights the following key metrics:

  • Total Cost for ecores Usage: The cumulative cost incurred from ecores usage during the selected period, helping you track the financial expenditure related to your clusters' operation.

  • Total ecores Consumed: The total number of ecores consumed by your clusters, providing insight into how much computational capacity has been utilized over time.

  • Average Daily Cost of Used ecores: The average daily cost of the ecores used during the specified period, indicating the typical daily expenditure of running the clusters.

  • Average Daily ecores Consumed: The average number of ecores consumed per day, helping you monitor and manage your resource usage over time.
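These overview metrics can also be reproduced from raw hourly usage data. The sketch below is a minimal illustration in Python; the record structure (hour start, ecores consumed) and the cost factor are assumptions for illustration only, since the actual data format and pricing depend on your e6data plan.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly usage records: (UTC hour start, ecores consumed in that hour).
records = [
    (datetime(2024, 9, 1, 9), 38),
    (datetime(2024, 9, 1, 10), 38),
    (datetime(2024, 9, 2, 9), 76),   # e.g. two clusters running during this hour
]
COST_FACTOR = 0.05  # assumed cost per ecore-hour; substitute your actual rate

total_ecores = sum(ecores for _, ecores in records)
total_cost = total_ecores * COST_FACTOR

# Group by calendar day to derive the daily averages shown in the overview.
per_day = defaultdict(int)
for hour_start, ecores in records:
    per_day[hour_start.date()] += ecores

days = len(per_day)
avg_daily_ecores = total_ecores / days
avg_daily_cost = avg_daily_ecores * COST_FACTOR

print(f"Total ecores consumed: {total_ecores}")
print(f"Total cost: {total_cost:.2f}")
print(f"Average daily ecores: {avg_daily_ecores:.1f}")
print(f"Average daily cost: {avg_daily_cost:.2f}")
```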

Exporting Data as a CSV File

  1. Navigate to the Usage and Cost section.

  2. Select the desired date range.

  3. Click the Export CSV button.

  4. Download the CSV file containing the data.

  5. Use the exported file for offline analysis, reporting, or sharing with others; a minimal loading sketch follows this list.

  6. Combine the overview and export features to monitor usage, costs, and resource allocation over time.
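As a starting point for offline analysis, the sketch below reads an exported file with Python's standard csv module. The file name and column names (cluster, ecores_consumed, cost) are assumptions for illustration; check the header row of your actual export and adjust accordingly.

```python
import csv
from collections import defaultdict

# Hypothetical path and column names; adjust to match your exported file.
EXPORT_PATH = "e6data_usage_export.csv"

ecores_per_cluster = defaultdict(float)
cost_per_cluster = defaultdict(float)

with open(EXPORT_PATH, newline="") as f:
    for row in csv.DictReader(f):
        cluster = row["cluster"]                                       # assumed column
        ecores_per_cluster[cluster] += float(row["ecores_consumed"])   # assumed column
        cost_per_cluster[cluster] += float(row["cost"])                # assumed column

for cluster in sorted(cost_per_cluster):
    print(f"{cluster}: {ecores_per_cluster[cluster]:.0f} ecores, "
          f"cost {cost_per_cluster[cluster]:.2f}")
```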

Usage

The Usage tab displays a chart and lists that provide detailed information about the usage of the selected cluster(s).

By default, usage statistics for all workspaces and clusters from the last 24 hours are displayed. You can narrow the results by changing the date range in the Date drop-down menu and/or selecting a specific workspace and cluster from the corresponding drop-down menus.

Note: Access to the Usage and Cost pane requires the Billing Manager role.

Step-by-Step Guide

  1. Navigate to Usage and Cost in the left navigation panel.

  2. Under Usage, choose the desired date range (default: last 24 hours).

  3. Select the workspace from the drop-down menu.

  4. Choose the cluster(s) within the selected workspace.

  5. Select Day for daily usage or Hour for hourly usage from the drop-down menu.

The bar graph provides a detailed representation of ecores usage for each cluster, breaking down the consumption into two key components: planner and executor. This visualization allows users to analyze and compare resource utilization across clusters, offering valuable insights into the specific functions driving ecores consumption.
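To reproduce this planner/executor breakdown outside the UI, you could aggregate usage records by component, as in the minimal sketch below. The record structure (cluster, component, ecores) is hypothetical and only illustrates the aggregation; the UI derives the same split from e6data's internal metrics.

```python
from collections import defaultdict

# Hypothetical per-cluster usage split by component: (cluster, component, ecores).
usage = [
    ("analytics-cluster", "planner", 6),
    ("analytics-cluster", "executor", 32),
    ("reporting-cluster", "planner", 4),
    ("reporting-cluster", "executor", 24),
]

totals = defaultdict(int)
by_component = defaultdict(int)
for cluster, component, ecores in usage:
    totals[cluster] += ecores
    by_component[(cluster, component)] += ecores

# Report each component's share of its cluster's total ecores,
# mirroring the stacked bars in the Usage chart.
for (cluster, component), ecores in sorted(by_component.items()):
    share = 100.0 * ecores / totals[cluster]
    print(f"{cluster:20s} {component:8s} {ecores:3d} ecores ({share:.0f}%)")
```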

Cost

The Cost tab displays a chart and lists that provide detailed information about the cost of the selected cluster(s) based on their ecores usage.

By default, cost statistics for all workspaces and clusters from the last 24 hours are displayed. You can narrow the results by changing the date range in the Date drop-down menu and/or selecting a specific workspace and cluster from the corresponding drop-down menus.

Note: Access to the Usage and Cost pane requires the Billing Manager role.

Step-by-Step Guide

  1. Navigate to Usage and Cost from the left navigation panel.

  2. Under Cost, choose the desired date range (default: last 24 hours).

  3. Select the workspace from the drop-down menu.

  4. Choose the cluster(s) within the selected workspace.

  5. Select Day for daily billing or Hour for hourly billing from the drop-down menu.

How Are Costs Incurred?

The total cost of using e6data is determined by the cumulative usage of ecores.

Example: Total Cost for One Week

Scenario:

An organization uses 3 e6data clusters 5 days a week, each running for 8 hours a day (9 am to 5 pm).

  • Total ecores per Cluster: 38

Note: Cluster configurations can vary based on individual customer requirements.

Cost Calculation:

Total Cost = Number of clusters * ecores per cluster * Hours per day * Days per week * Cost factor

Example calculation: Total Cost = 3 (clusters) * 38 (ecores per cluster) * 8 (hours per day) * 5 (days per week) * Cost factor = 4,560 * Cost factor
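The worked example above translates directly into code. The sketch below is a minimal illustration; the cost factor is a placeholder, since the actual rate depends on your e6data agreement.

```python
def weekly_cost(clusters: int, ecores_per_cluster: int,
                hours_per_day: int, days_per_week: int,
                cost_factor: float) -> float:
    """Total weekly cost = cumulative ecore-hours * cost factor."""
    ecore_hours = clusters * ecores_per_cluster * hours_per_day * days_per_week
    return ecore_hours * cost_factor

# Scenario from the example: 3 clusters of 38 ecores each, 8 hours/day, 5 days/week.
COST_FACTOR = 0.05  # placeholder rate per ecore-hour
print(weekly_cost(3, 38, 8, 5, COST_FACTOR))  # 4560 ecore-hours * 0.05 = 228.0
```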

Data is updated and recorded at the start of each hour (e.g., 1:00 AM, 2:00 AM, etc.) in Coordinated Universal Time (UTC), ensuring consistent and accurate hourly tracking.
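If you correlate your own metrics with e6data's hourly usage records, it can help to align timestamps to the same hourly UTC boundaries. The helper below is a small sketch of that alignment, not part of any e6data API.

```python
from datetime import datetime, timezone

def hour_bucket_utc(ts: datetime) -> datetime:
    """Truncate a timezone-aware timestamp to the start of its hour in UTC."""
    return ts.astimezone(timezone.utc).replace(minute=0, second=0, microsecond=0)

# A query observed at 14:37:22 UTC falls into the 14:00 UTC usage bucket.
observed = datetime(2024, 9, 6, 14, 37, 22, tzinfo=timezone.utc)
print(hour_bucket_utc(observed))  # 2024-09-06 14:00:00+00:00
```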

The clusters in this example are size XS; see the Cluster Size page for details on cluster sizing.