For Data Subject Access Requests (DSARs) or any privacy-related inquiries, please reach out to us at privacy@firebolt.io
Switching production workloads to Firebolt typically involves updating configuration to point to Firebolt endpoints. If all validation is complete and data is already present, this process is straightforward.
Firebolt recommends using aggregating indexes where possible for regularly queried granularities (e.g., daily or weekly), and employing pre-joined or pre-aggregated tables to simplify and speed up dashboard queries. Ensure indexes align closely with filter criteria to optimize query performance across various granularities.
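For example, if dashboards typically aggregate a fact table at a daily grain, an aggregating index along these lines can serve those queries directly (the table and column names are illustrative, not from your schema):
-- illustrative table and column names
CREATE AGGREGATING INDEX daily_events_agg ON events (
  event_date,
  tenant_id,
  SUM(revenue),
  COUNT(*)
);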
When a tenant comprises a large percentage of data (e.g., 20-25% of all data), avoid subqueries or joins that initially select large volumes of data and subsequently discard most rows. Instead, optimize queries and table structures to filter data as early and narrowly as possible, potentially using aggregated or pre-joined tables.
Firebolt supports both using views and pre-joined tables. However, if most of the query execution time is spent on joins rather than aggregations, pre-joining tables (i.e., creating wider, denormalized tables during data ingestion) is often more performant. Views are effective for reusable SQL but may become slower with complex joins at scale. Aggregating indexes, which can pre-materialize aggregation results for fast query responses, work best on single tables without cross-table joins.
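As a sketch of the pre-joined approach, a denormalized table can be built during ingestion with CREATE TABLE AS SELECT (table and column names below are assumptions for illustration):
-- illustrative names: a wide, denormalized table built once at ingestion time
CREATE TABLE events_denorm AS
SELECT e.event_id, e.event_date, e.tenant_id, e.revenue,
       t.tenant_name, t.region
FROM events e
JOIN tenants t ON t.tenant_id = e.tenant_id;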
Yes, primary indexes significantly impact query performance in Firebolt. Ensuring correct and optimized indexes is crucial, especially during migration. Indexes should be carefully reviewed and implemented based on query patterns and use cases.
You can add more users to your Firebolt account either through the web application or with SQL commands. First, create a login using the email address of your invitee as the login_id. Next, associate the login with a user and assign them the appropriate permissions. Your invitee will automatically receive an email invitation to join your account. For more information, visit our documentation.
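A minimal SQL sketch of these steps (the email address, user name, and role below are placeholders; adjust them for your account):
-- placeholder values for the login, user name, and role
CREATE LOGIN "invitee@example.com" WITH FIRST_NAME = 'Jane' LAST_NAME = 'Doe';
CREATE USER jane WITH LOGIN = "invitee@example.com";
GRANT ROLE account_admin TO USER jane;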
Setting up Apache Superset with Firebolt involves:
- Installing Superset locally or on a server.
- Configuring the Firebolt connector with appropriate credentials and connection parameters.
- Testing queries in Superset to ensure Firebolt’s indexing structure is leveraged efficiently.
- Optimizing queries for dashboard performance by using Firebolt’s indexing features to minimize latency.
In this case, there were some challenges with reinstalling Superset, but Firebolt’s team is available to assist with setup and troubleshooting.
Primary indexes should include the most frequently used filters, such as tenant_id and date/time columns if queries consistently filter data by tenant and date ranges. A well-chosen primary index ensures queries access only relevant data partitions, maintaining fast performance even as data volumes scale significantly.
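As an illustrative sketch (the table and column names are assumptions), a fact table keyed by tenant and date could be defined like this:
-- illustrative schema; adjust the columns to match your workload
CREATE FACT TABLE events (
  tenant_id   BIGINT,
  event_date  DATE,
  event_id    BIGINT,
  revenue     DOUBLE PRECISION
) PRIMARY INDEX tenant_id, event_date;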
Query performance in high-cardinality joins is significantly impacted by data cardinality, joins resulting in large intermediate row outputs, and data shuffles across nodes. Firebolt users should leverage the EXPLAIN ANALYZE functionality to identify expensive operations such as table scans, joins, and shuffles. Reducing data volume before joins through effective indexing, semi-joins, or aggregation indexes can mitigate these impacts.
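To see where time is spent, run the query under EXPLAIN (ANALYZE) and inspect the returned plan; the query below is only a placeholder:
-- placeholder query; look for expensive scans, joins, and shuffles in the plan
EXPLAIN (ANALYZE)
SELECT t.tenant_name, COUNT(*)
FROM events e
JOIN tenants t ON t.tenant_id = e.tenant_id
GROUP BY t.tenant_name;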
Yes, semi-joins (implemented via WHERE IN clauses) can be more performant than explicit joins, as Firebolt has built-in optimizations that leverage semi-joins for better data pruning. Using semi-joins helps reduce intermediate row counts earlier in query execution, especially beneficial for high-cardinality datasets.
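For instance (with illustrative tables), filtering through an IN subquery lets the engine prune rows before the rest of the plan runs:
-- illustrative tables; the IN subquery is executed as a semi-join
SELECT event_date, COUNT(*)
FROM events
WHERE tenant_id IN (SELECT tenant_id FROM active_tenants)
GROUP BY event_date;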
First, on your S3 account, configure the permission policy found in the help center article, https://docs.firebolt.io/Guides/loading-data/configuring-aws-role-to-access-amazon-s3.html#use-aws-iam-roles-to-access-amazon-s3. With your AWS Identity and Access Management (IAM) console still open, start the process to upload data through the plus sign icon in the Develop space. After selecting an ingestion engine, you can select 'IAM Role' as your authentication method and create an IAM role in the application. Copy the trust policy here and follow the rest of the instructions in the article to apply it to your AWS account. Note that you don't actually have to upload anything to create the IAM role.
In Firebolt's query profiling, CPU time refers to the actual processing time on CPU cores, while thread time represents the total wall-clock time across all threads and nodes. When thread time is significantly higher than CPU time, it typically indicates waits due to data loading from storage (like S3) or node concurrency constraints. This distinction helps diagnose bottlenecks related to IO-bound or compute-bound workloads.
For high concurrency, use multiple clusters within your engine. Clusters help handle more simultaneous queries by distributing the load. Keep in mind that cache is shared across nodes in a cluster, but not between clusters, so the right balance depends on your workload. You can also consider using auto-scaling to dynamically adjust resources based on demand.
Firebolt proactively maintains a status page at https://firebolt.statuspage.io/ where we keep you notified about any active incidents that may cause interruption to your access or services. From this page, you can also click the 'Subscribe' button to stay informed by phone, RSS, email, or Slack.
You can label a query by setting the query_label system setting before running it:
cursor.execute("set query_label = '<label>';")
cursor.execute("your_query_here")
Here’s a full example using the Firebolt Python SDK:
from firebolt.db import connect
from firebolt.client.auth import ClientCredentials

# Service account credentials (placeholders)
id = '****'
secret = '****'

# Connect to the target database in your account
connection = connect(
    database="<db_name>",
    account_name="<account_name>",
    auth=ClientCredentials(id, secret)
)
cursor = connection.cursor()

# Make sure the engine is running and selected, then pick the database
cursor.execute("start engine <engine_name>")
cursor.execute("use engine <engine_name>")
cursor.execute("use database <database_name>")

# Label the query, run it, and print the result
cursor.execute("set query_label = '123';")
cursor.execute("select 1;")
print(cursor.fetchone())

connection.close()
If you created your database with upper-case letters in its name and without quotation marks, the saved name of your database will be all lowercase. Confirm the name of your database from the explorer, information_schema.catalogs, or SHOW CATALOGS. From other systems, such as an SDK, always use the 'official' name of your database. Within the application, you can still reference your all-lowercase database name using upper-case letters without quotation marks, since the name is transformed to lowercase behind the scenes. If you want your object names to be case sensitive, always wrap definitions in double quotes. Please note that definitions in information_schema are reconstructed and will not match exactly what was executed on creation, including the use of quotes.
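A short sketch of the difference:
-- Stored as mydatabase (lowercased because the name is unquoted)
CREATE DATABASE MyDatabase;
-- Stored as MyDatabase (case preserved because the name is quoted)
CREATE DATABASE "MyDatabase";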
Firebolt is available as a connector directly from within Tableau. At this time, when you select the Firebolt connector from within Tableau, it will install a Firebolt V1 integration. To connect to V2 you will need to download the new connector and place it in the appropriate directory locally or on your server. You will also need a version of the JDBC driver compatible with Tableau. Full instructions can be found at https://docs.firebolt.io/Guides/integrations/tableau.html#integrate-with-tableau.
At this time, COPY FROM does not support direct manipulation of S3 bucket data. However, starting with version 4.18 you can filter and transform data as you read it using the READ table-valued functions, which support full glob pattern capabilities (https://en.wikipedia.org/wiki/Glob_(programming)):
- Insert into an existing table using INSERT INTO + READ_PARQUET or READ_CSV
- Create a new table with CREATE TABLE AS + READ_PARQUET or READ_CSV
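For example, a sketch of the INSERT INTO pattern, assuming the S3 location can be passed as the first argument to READ_PARQUET (the bucket path, table, and column names are placeholders):
-- placeholder bucket and column names; rows are filtered and reshaped as they are read
INSERT INTO events
SELECT event_id, tenant_id, event_date, revenue
FROM READ_PARQUET('s3://my-bucket/events/*.parquet')
WHERE event_date >= '2024-01-01';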
Yes, we can view query history prior to the last engine restart. The support team can retrieve the query history for the customer if they provide the type of query (e.g., SELECT, INSERT), the approximate time it was executed, and which engine was used to execute it.
Monitoring the status of your Firebolt engine using the REST API is a key step to ensure smooth operations. Firebolt provides a way to programmatically check engine status by querying the system engine. This article explains how to authenticate, retrieve the system engine URL, and query the engine status using Firebolt's REST API. To begin, ensure that you have an active service account with the necessary permissions. You will need the service account credentials to generate an access token for API authentication.
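As a sketch, an access token can be requested with the service account's client ID and secret using the OAuth client-credentials flow (the credentials below are placeholders; verify the endpoint and parameters against the current API documentation):
# placeholder credentials; the JSON response contains an access_token field
curl https://id.app.firebolt.io/oauth/token \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=client_credentials' \
  --data-urlencode 'client_id=<service_account_id>' \
  --data-urlencode 'client_secret=<service_account_secret>' \
  --data-urlencode 'audience=https://api.firebolt.io'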
After obtaining an access token, use the following request to retrieve the system engine URL:
curl https://api.app.firebolt.io/web/v3/account/<account_name>/engineUrl \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <access_token>'
Once you have the system engine URL, you can query it to check the engine's status with a simple SQL query, as shown below:
curl --location 'https://<system_engine_URL>/query' \
--header 'Authorization: Bearer <access_token>' \
--data "select status from information_schema.engines where engine_name = '<your_engine_name>'
This will return the current status of your engine, helping you monitor its activity and health.
For more information, please refer to the Using the API section of our documentation.
Yes. The engine_running_queries and engine_query_history views provide insights into current workloads. For more information see our Information Schema documentation.
Use the engine_metering_history information schema view to track FBU consumption for each engine.
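A minimal sketch (the consumed_fbu column name is an assumption; check information_schema for the exact schema in your version):
-- assumed column names; verify against information_schema in your account
SELECT engine_name, SUM(consumed_fbu) AS total_fbu
FROM information_schema.engine_metering_history
GROUP BY engine_name;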
Firebolt provides three different observability views that provide insight into the performance of your engine.
1/ engine_running_queries - This view provides information about currently running queries, including whether a query is running or in the queue. For queries that are currently running, this view also provides information on how long they have been running.
2/ engine_query_history - This view provides historical information about past queries - for each query in history, this includes the execution time of the query, amount of CPU and Memory consumed and amount of time the query spent in queue, among other details.
3/ engine_metrics_history - This view provides information about the utilization of CPU, RAM, and storage for each of the engine clusters.
You can use these views to understand whether your engine resources are being utilized optimally, whether your query performance is meeting your needs, and what percentage of queries are waiting in the queue and for how long. Based on these insights, you can resize your engine accordingly.
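For example, a rough sketch of checking how long recent queries spent queued (the column names are assumptions; inspect the view definitions in your version for the exact schema):
-- assumed column names; check information_schema.engine_query_history for the exact schema
SELECT query_id, status, duration_us, time_in_queue_us
FROM information_schema.engine_query_history
ORDER BY start_time DESC
LIMIT 20;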
For more information, please refer to the Sizing Engines article in our Documentation.
If the engine has the AUTO_START option set to True, an engine in a stopped state will be automatically started when it receives a query. By default, this option is set to True. If this option is set to False, you must explicitly start the engine using the START ENGINE command. For more information, please refer to the Work with Engines Using DDL article in the Firebolt Documentation.
You can use the AUTO_STOP feature available in Firebolt engines to make sure that your engines are automatically stopped after a certain amount of idle time. Engines in a stopped state are not charged and hence do not incur any costs. As with other engine operations, this can be done via SQL or the UI. For example, while creating an engine, you can specify the idle time using AUTO_STOP, as below:
CREATE ENGINE IF NOT EXISTS MyEngine WITH
TYPE = "S" NODES = 2 CLUSTERS = 1 AUTO_STOP = 15;
The above command ensures that MyEngine will be automatically stopped after it has been idle for 15 minutes continuously. Alternatively, you can achieve the same after the engine has been created:
ALTER ENGINE MyEngine SET AUTO_STOP = 15;
For more information, please see the Engine Consumption documentation.
To start an engine:
START ENGINE MyEngine;
To stop an engine:
STOP ENGINE MyEngine;
For more information, please refer to the Work with Engines Using DDL article in the Firebolt Documentation.
Yes. By default, creating an engine results in the creation of the underlying engine clusters and starts the engine. This enables the engine to be in a running state where it is ready to start serving queries. However, you have the option to defer the creation of the underlying clusters for an engine by setting the property "INITIALLY STOPPED" to True while calling CREATE ENGINE. You can start the engine at a later point, when you are ready to start running queries on the engine. Note that you cannot modify this property after an engine has been created.
CREATE ENGINE IF NOT EXISTS MyEngine WITH
TYPE = "S" NODES = 2 CLUSTERS = 1 START_IMMEDIATELY = FALSE;
Your queries will continue to run uninterrupted during a scaling operation. When you perform horizontal or vertical scaling operations on your engine, Firebolt adds additional compute resources per your new configuration. While new queries will be directed to the new resources, the old compute resources will finish executing any queries currently running, after which they will be removed from the engine.
For more information, check out our Engine Consumption Documentation.
No. Scaling operations in Firebolt are dynamic and do not require stopping the engine, so your applications will not experience downtime.
For more information, check out our Engine Fundamentals Documentation.
In Firebolt, you can scale an engine across multiple dimensions. All scaling operations in Firebolt are dynamic, meaning you do not need to stop your engines to scale them.
Scale Up/Down: You can vertically scale an engine by using a different node type that best fits the needs of your workload.
Scaling Out/In: You can horizontally scale an engine by modifying the number of nodes per cluster in the engine. Horizontal scaling can be used when your workload can benefit from distributing your queries across multiple nodes.
Concurrency Scaling: Firebolt allows you to add or remove clusters in an engine. You can use concurrency scaling when your workload has to deal with a sudden spike in the number of users or number of queries. Note that you can scale along more than one dimension simultaneously. For example, the command below changes both the node type to "L" and the number of clusters to two.
ALTER ENGINE MyEngine SET TYPE = "L" CLUSTERS = 2;
All scaling operations can be performed via SQL using the ALTER ENGINE statement or via the UI. For more information on how to perform scaling operations in Firebolt, see the Guides section in the documentation.
All operations in Firebolt can be performed via SQL or the UI. To create an engine, you can use the "CREATE ENGINE" command (shown below), specifying a name for the engine, the number of clusters the engine will use, the number of nodes in each cluster, and the type of the nodes used in the engine. After the engine is successfully created, users will get an endpoint that they can use to submit their queries. For example, you can create an engine named MyEngine with two clusters, each with two nodes of type "M", as below:
CREATE ENGINE IF NOT EXISTS MyEngine WITH TYPE = "M" NODES = 2 CLUSTERS = 2;
This creates an engine named "MyEngine" with two clusters, each containing two nodes of type "M". For more details, see the documentation.
The typical start-up time for a Firebolt engine is 10-15 seconds, but this is not guaranteed due to potential resource constraints on AWS.
For more information, check out our Sizing Engines Documentation.
No. While there is no theoretical limit on the number of databases you can use with a given engine, note that the configuration of your engine will determine the performance of your applications. Based on the performance demands of your applications and the needs of your business, you may want to create the appropriate number of engines.
For more information, check out our Engine Permissions Documentation.
No. Engines and databases are fully decoupled in Firebolt. A given engine can be used with multiple databases, and conversely, multiple engines can be used with a given database. On Firebolt, all engines can write to the same database. No need to segregate engines as read-write and read-only.
For more information, check out our Engine Permissions Documentation.
You can use up to 10 clusters per engine.
For more information, check out our documentation.
You can use anywhere from 1-128 nodes per cluster in a given engine.
For more information, check out our documentation.
There are four node types available in Firebolt: Small, Medium, Large, and X-Large. Each node type provides a certain amount of CPU, RAM, and SSD. These resources scale linearly with the node type. For example, an "M" type node provides twice as much CPU, RAM, and SSD as an "S" type node.
For more information, check out the Engine Fundamentals article in our documentation.
An engine has three key dimensions:
Type - This refers to the type of nodes used in an engine.
Cluster - A collection of nodes of the same type.
Nodes - The number of nodes in each cluster.
An engine comprises one or more clusters. Every cluster in the engine has the same type and the same number of nodes.
In Firebolt, an “engine” refers to a virtual compute resource that provides the processing power to execute queries, load data, and perform various SQL-driven tasks. Unlike traditional cloud data warehouses, Firebolt engines can be resized, paused, and resumed in a much more granular and cost-effective way to optimize performance and cost.