You can query the INFORMATION_SCHEMA.ENGINES table to check the last known status of an engine:
SELECT engine_name, last_started, last_stopped
FROM INFORMATION_SCHEMA.ENGINES;
This will show when each engine was last started and stopped.
Yes. By default, creating an engine creates the underlying engine clusters and starts the engine, leaving it in a running state, ready to serve queries. However, you can defer the creation of the underlying clusters by setting the INITIALLY_STOPPED property to TRUE when calling CREATE ENGINE. You can then start the engine at a later point, when you are ready to run queries on it. Note that you cannot modify this property after an engine has been created.
CREATE ENGINE IF NOT EXISTS MyEngine WITH
TYPE = 'S' NODES = 2 CLUSTERS = 1 INITIALLY_STOPPED = TRUE;
We use AWS Shield, WAF, and other logical layers to protect against DDoS. Additionally, we leverage auto-scaling to maintain availability during attacks by dynamically adjusting resources such as EC2 instances, ELBs, and other global service capacity, though some scenarios may require manual intervention.
FBU consumption is reported in real time and can be used to calculate costs by multiplying the consumed FBU by the price listed on the pricing page or a custom deal rate. However, the Billing and Consumption page updates daily, and AWS storage costs have a ~48-hour delay. For more information, check our documentation.
To start an engine:
START ENGINE MyEngine;
To stop an engine:
STOP ENGINE MyEngine;
For more information, please refer to the Work with Engines Using DDL article in the Firebolt Documentation.
Firebolt's Python SDK provides detailed error message handling for SQL queries. When an error occurs, the SDK generates helpful error messages, allowing users to quickly diagnose and fix issues such as syntax problems or missing credentials. The SDK also offers robust logging and debugging capabilities, making it easier for developers to troubleshoot errors in their applications. For more information, refer to the Firebolt Python SDK documentation or visit the GitHub repository for examples.
Firebolt employs a comprehensive security strategy that includes network security policies, encryption practices, tenant isolation, and governance controls. We are committed to safeguarding your data through state-of-the-art security systems, policies, and practices.
You can use the AUTO_STOP feature available in Firebolt engines to ensure that your engines are automatically stopped after a specified amount of idle time. Engines in a stopped state are not charged, so they do not incur any costs. As with other engine operations, this can be done via SQL or the UI. For example, while creating an engine, you can specify the idle time using AUTO_STOP, as below:
CREATE ENGINE IF NOT EXISTS MyEngine WITH
TYPE = 'S' NODES = 2 CLUSTERS = 1 AUTO_STOP = 15;
The above command ensures that MyEngine is automatically stopped once it has been idle for 15 minutes continuously. Alternatively, you can set this after an engine has been created:
ALTER ENGINE MyEngine SET AUTO_STOP = 15;
For more information, please see the Engine Consumption Documentation.
Multistage distributed execution allows complex ELT queries to utilize all cluster resources by splitting stages across nodes. This parallelization optimizes resource utilization, speeding up ELT processes.
When using service accounts in Firebolt, access errors can occur due to incorrect credentials, missing permissions, or expired tokens. To troubleshoot these errors, follow these steps:
- Check your credentials: Ensure that the service account credentials (client ID and secret) are correct and active.
- Verify permissions: Make sure the service account has the appropriate role or permissions to access the resources you are trying to interact with, such as S3 buckets or Firebolt tables.
- Refresh authentication tokens: If you're using tokens, ensure they have not expired. Tokens generated for service accounts typically have a limited lifespan.
- Logs and error details: Review the error message logs for more specific information about the access failure.
- Review documentation: Follow the Firebolt documentation on service accounts for proper setup, permissions, and token management.
If the engine has the AUTO_START option set to TRUE, an engine in a stopped state will be automatically started when it receives a query. By default, this option is set to TRUE. If this option is set to FALSE, you must explicitly start the engine using the START ENGINE command. For more information, please refer to the Work with Engines Using DDL article in the Firebolt Documentation.
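For example, assuming AUTO_START can be set with the same ALTER ENGINE SET syntax used for other engine properties in this document (a sketch, not confirmed syntax):
ALTER ENGINE MyEngine SET AUTO_START = FALSE;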
To work with authentication tokens in Firebolt:
Generate a token via Firebolt’s authentication endpoint using your client ID and secret.
Example:
curl -X POST https://id.app.firebolt.io/oauth/token \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id=YOUR_CLIENT_ID' \
--data-urlencode 'client_secret=YOUR_CLIENT_SECRET'
Use the token in API requests by including it in the authorization header:
--header 'Authorization: Bearer YOUR_ACCESS_TOKEN'
Refresh tokens regularly, as they expire. Keep tokens secure using environment variables or secret managers.
For more details, refer to the Firebolt API documentation.
Firebolt provides three different observability views that provide insight into the performance of your engine.
1/ engine_running_queries - This view provides information about currently running queries, including whether a query is running or waiting in the queue. For queries that are currently running, it also shows how long they have been running.
2/ engine_query_history - This view provides historical information about past queries. For each query in history, this includes its execution time, the amount of CPU and memory consumed, and the amount of time it spent in the queue, among other details.
3/ engine_metrics_history - This view provides information about CPU, RAM, and storage utilization for each of the engine clusters.
You can use these views to understand whether your engine resources are being utilized optimally, whether your query performance is meeting your needs, and what percentage of queries are waiting in the queue and for how long. Based on these insights, you can resize your engine accordingly.
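For example, to review recent cluster utilization, you might run something like the following sketch (the column names event_time, cpu_used, and memory_used are illustrative assumptions; check the information schema reference for exact names):
SELECT event_time, cpu_used, memory_used
FROM information_schema.engine_metrics_history
ORDER BY event_time DESC
LIMIT 10;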
For more information, please refer to the Sizing Engines article in our Documentation.
Use the engine_metering_history information schema view to track FBU consumption for each engine.
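For example, a minimal sketch (assuming an engine_name column; verify the exact schema in the documentation):
SELECT *
FROM information_schema.engine_metering_history
WHERE engine_name = 'MyEngine';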
Yes, ELT processes can be automated using:
- Firebolt Python SDK for programmatic database operations.
- Apache Airflow for scheduling and automating complex workflows.
- dbt (Data Build Tool) for managing data transformations in a version-controlled environment.
Yes. The engine_running_queries and engine_query_history views provide insights into current workloads. For more information, see our Information Schema Documentation.
Firebolt enables running ELT jobs on a separate engine isolated from the customer-facing engine. This prevents disruptions and allows scaling ELT engines dynamically with auto-start and auto-stop features to reduce costs.
Monitoring the status of your Firebolt engine using the REST API is a key step to ensure smooth operations. Firebolt provides a way to programmatically check engine status by querying the system engine. This article explains how to authenticate, retrieve the system engine URL, and query the engine status using Firebolt's REST API. To begin, ensure that you have an active service account with the necessary permissions. You will need the service account credentials to generate an access token for API authentication.
After obtaining an access token, use the following request to retrieve the system engine URL:
curl https://api.app.firebolt.io/web/v3/account/<account_name>/engineUrl \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <access_token>'
Once you have the system engine URL, you can query it to check the engine's status with a simple SQL query, as shown below:
curl --location 'https://<system_engine_URL>/query' \
--header 'Authorization: Bearer <access_token>' \
--data "select status from information_schema.engines where engine_name = '<your_engine_name>'
This will return the current status of your engine, helping you monitor its activity and health.
For more information, please refer to the Using the API Documentation.
No, Firebolt doesn't support adding new columns without rebuilding the table. However, you can create a new table with the updated schema or use views to simulate schema changes.
This ensures that the schema remains optimized for performance, which is critical in high-performance analytical databases like Firebolt.
Alternatively, Firebolt offers a flexible approach where you can create views to simulate changes like renaming or restructuring tables without needing to rebuild or re-ingest data. For instance, you can create a view that selects all columns from the original table, effectively simulating the addition of new columns:
Example Usage: To simulate renaming a table or altering its structure, create a view:
CREATE VIEW IF NOT EXISTS new_games AS SELECT * FROM games;
This approach allows you to redirect queries to the new view (new_games), making it function like a table with updated schema without altering the original table.
Firebolt uses Firebolt Units (FBU) to track engine consumption. For example, an engine of type 'S' (8 FBU per hour per node) with 2 nodes and 1 cluster running for 30 minutes would consume 8 FBUs:
FBU per Hour = 8 * 2 * 1 = 16 FBUs
Consumption = (16 / 3600) * 1800 seconds = 8 FBUs
The Firebolt database itself inherently reduces the risk of SQL injection by minimizing the use of certain vulnerable constructs. Customers are still encouraged to implement additional controls at the application level, such as:
- Ensure all user inputs are strictly validated before being processed.
- Escape potentially dangerous characters that could be used in unexpected ways.
- Include SQL injection tests in your regular security testing and code review processes.
Use the AUTO_STOP feature to automatically stop engines after a certain amount of idle time. Example:
ALTER ENGINE MyEngine SET AUTO_STOP = 15;
For more information, read more about Engine Consumption in our Documentation.
CSV file ingestion into an external table may fail with errors such as:
Cannot parse input: expected '|' but found '<CARRIAGE RETURN>' instead.
This usually means the file delimiter in the CSV doesn't match the table definition or the number of columns differs.
To troubleshoot, first check the delimiter: ensure FIELD_DELIMITER matches the CSV file's delimiter. Next, compare the file to the table definition column by column. Finally, if the file is large, create a temporary external table to view the entire row as a single string:
CREATE EXTERNAL TABLE ext_tmp (blob text) URL = 's3://some_bucket/somefolder/' TYPE = (CSV FIELD_DELIMITER=' ');
Example Query:
SELECT * FROM ext_tmp LIMIT 2;
This helps inspect rows and verify column consistency.
You can use the engine_metering_history information schema view for detailed tracking of FBU consumption.
When casting columns to data types like DATE or NUMERIC in Firebolt, empty strings in the source data can cause errors, because empty strings cannot be directly cast to other data types. Use the NULLIF function to convert empty strings to NULL, which can then be cast to the appropriate data type without causing errors.
Example:
INSERT INTO tournaments_nullif_example_fact
SELECT
NULLIF(dt, '')::date
FROM tournaments_nullif_example;
In this example, NULLIF(dt, '') converts empty strings in the dt column to NULL, allowing the data to be safely cast to a DATE type. This method ensures smooth casting of columns with empty strings in Firebolt.
Firebolt provides Role-based Access Control (RBAC) to help customers control which users can perform what operations on a given engine. For example, you can provide users with only the ability to use or operate existing engines but not allow them to create new engines. In addition, you can also prevent users from starting or stopping engines, allowing them to only run queries on engines that are already running. These fine-grained controls help ensure that customers do not end up with runaway costs resulting from multiple users in an organization creating and running new engines.
Customer data is stored in S3 buckets with high availability and durability. Our recovery objectives are:
- RTO (Recovery Time Objective): 12 hours
- RPO (Recovery Point Objective): 1 hour
- SLA (Service Level Agreement): 99.9%
To investigate query timeouts or delays, you can start by using Firebolt’s Query History and Query Profile tools, which provide detailed insights into query performance, including execution time, memory usage, and any potential bottlenecks. You can also check engine logs and metrics using Firebolt’s Engine Metrics History to identify issues like memory limitations, network latency, or resource constraints.
For troubleshooting steps, check the Firebolt documentation on query analysis.
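For example, to surface the slowest recent queries, a sketch along these lines can help (the query_text and duration_us column names are assumptions; check the engine_query_history reference):
SELECT query_text, status, duration_us
FROM information_schema.engine_query_history
ORDER BY duration_us DESC
LIMIT 20;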
Spilling happens when a query requires more memory than allocated, causing intermediate query results to be stored on disk (SSD) instead of in-memory. While this ensures the query completes, it may affect performance.
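To check whether a query spilled, you could look at the engine metrics, roughly as follows (the event_time and spilled_bytes column names are assumptions, so verify them against the documentation):
SELECT event_time, spilled_bytes
FROM information_schema.engine_metrics_history
WHERE spilled_bytes > 0
ORDER BY event_time DESC;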
For more information, check the Firebolt Documentation on Engine Metrics History.
Set up SSO authentication: https://docs.firebolt.io/godocs/Guides/security/sso/sso.html
Configure your IdP: https://docs.firebolt.io/godocs/Guides/security/sso/configuring-idp-for-sso.html#custom.
Stopping an engine in Firebolt results in the eviction of the local cache. This leads to a "cold start" upon restarting the engine, as queries initially must fetch data directly from storage, slowing down performance until the cache is replenished with frequently accessed data. To minimize performance degradation, consider pre-warming the engine with essential queries or data after it is restarted. For more information please check our documentation article Work with Engines using DDL.
What is the system engine, and how is it used for metadata-related queries?
The system engine in Firebolt is a lightweight, always-available engine specifically designed for metadata-related queries and administrative tasks. It supports various commands:
Access Control Commands: Manage roles, permissions, and users.
Metadata Commands: Execute queries on information schema views, such as information_schema.tables and information_schema.engines.
Non-Data Queries: Perform operations like SELECT CURRENT_TIMESTAMP() that do not involve table data.
Typical Use Cases:
Retrieve information about databases, tables, indexes, and engines.
Manage system configurations or user permissions.
Execute DDL operations like creating tables and views, and managing all engine-related operations (start, stop, drop, alter).
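For example, a typical metadata query to run on the system engine, using the information_schema.tables view mentioned above:
SELECT table_name, table_type
FROM information_schema.tables;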
Yes, Firebolt allows for dynamic resizing of engines during operation. You can adjust the number of nodes or the node type without stopping the engine, which lets workloads continue with minimal disruption. Use the ALTER ENGINE command to resize an engine. Newly started clusters post-resize will initially perform slower until they are warmed up.
When an engine is resized dynamically, queries in execution will continue under the engine's original configurations until completion or until a timeout of 24 hours, after which they will be dropped if still running. The changes in engine size or type will only affect new queries submitted post-resize. Please check our documentation for more information.
Scaling with More Clusters:
This approach is ideal when you need to improve query concurrency—i.e., the ability to handle multiple queries simultaneously without significant performance degradation.
Scaling with a Higher Number of Nodes:
This is suitable when you find that the CPU utilization is consistently high, and queries are CPU-intensive. Adding more nodes spreads the workload across more computing units, thus alleviating CPU bottlenecks.
Scaling with Bigger Nodes:
This method is effective when the workload requires more memory or higher disk I/O capacity than what is currently available.
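Assuming each dimension can be adjusted with the ALTER ENGINE SET syntax shown elsewhere in this document, the three options might look like the following sketch (verify parameter names against the engine DDL reference):
-- Scale out with more clusters for higher concurrency
ALTER ENGINE MyEngine SET CLUSTERS = 2;
-- Add nodes to relieve CPU bottlenecks
ALTER ENGINE MyEngine SET NODES = 4;
-- Move to bigger nodes for more memory and disk I/O
ALTER ENGINE MyEngine SET TYPE = 'M';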
If it's a new account, you'll need to set up a new user within that account, although this user can be linked to the existing service account.
If it's a new organization, you'll need to establish both a new service account and a new user within that organization and any associated accounts.
Firebolt’s AI-related features, such as vector search, are included within our standard pricing model. While these capabilities do utilize compute resources, there are no separate licensing fees or AI-specific upcharges. You only pay for the compute and storage you use, ensuring cost efficiency without hidden AI-related costs.
For a detailed breakdown of how AI workloads impact pricing, reach out to our team for a tailored estimate based on your use case.
Firebolt is purpose-built for AI applications that require low-latency analytics. Unlike traditional cloud data warehouses, Firebolt delivers sub-second query performance for AI-driven workloads while supporting vector search and AI-driven optimizations. It enables faster and more efficient AI-powered analytics without the high costs and performance bottlenecks of legacy solutions.
Firebolt is optimized for low-latency, high-performance queries, but it is not a real-time processing platform. It excels in fast analytics on large-scale data but is not designed for event-driven streaming workloads. If your AI use case requires sub-second query execution, Firebolt is a great fit.
Firebolt supports vector search but does not generate embeddings. Unlike dedicated vector databases, which specialize in unstructured data, Firebolt integrates vector search within a high-performance analytical data warehouse. This allows you to run hybrid queries (structured + unstructured) efficiently without managing separate systems.
If you already have embeddings generated from models like OpenAI, Hugging Face, or your own ML pipeline, Firebolt can store and query them at high speed and low latency, enabling AI-powered search and recommendations within your existing analytics environment.
Firebolt is built on advanced indexing, vectorized query execution, and efficient storage optimizations, ensuring sub-second query performance even on large datasets. While it is not a real-time platform, Firebolt is ideal for AI-driven analytics, interactive dashboards, and personalized AI applications that demand ultra-fast queries on your data.
Users can verify the correct setup by ensuring they have obtained the API token, created the service account, and set up the appropriate automation steps. They can test ingestion by running queries to check if data has been successfully loaded into Firebolt.
Firebolt users should define primary indexes based on frequently filtered columns, such as event dates and brand identifiers. By including relevant dimensions in the index, query performance can be significantly improved, as seen in the session where adding "brand" as a primary index reduced query execution time from minutes to seconds.
Firebolt does not natively save queries across different user accounts. Users need to manually copy queries and store them externally, such as in Slack, Google Docs, or a shared repository, to ensure accessibility across their team.
Users should avoid scanning large datasets unnecessarily by leveraging filtering on indexed columns. For example, filtering on both "event date" and "brand" significantly improves performance. Additionally, aggregating indexes can be used to precompute and store frequently used aggregations, reducing query execution time.
Aggregating indexes in Firebolt store precomputed aggregations for faster query performance. They update automatically upon new data ingestion, reducing query execution time significantly. The trade-off is a slightly increased ingestion time since the indexes must be maintained.
Yes, Firebolt provides a query execution plan. Users can view a visual representation after query execution or generate a text-based plan using the EXPLAIN command. This helps users compare performance against other engines like Athena.
In the discussion, the Firebolt team recommended creating new, pre-joined (or otherwise streamlined) tables rather than performing large, multi-table joins at query time. This approach, sometimes called "join elimination," can significantly reduce query overhead. In addition, the Firebolt team highlighted the importance of setting appropriate primary indexes on these new tables to further optimize performance.
Firebolt does not strictly require more tables; however, to achieve high-performance queries on large datasets, many teams choose to create specialized tables with carefully designed primary indexes. Although Firebolt can perform joins directly in SQL, the discussion emphasized that pre-joining or restructuring certain data tables often yields better performance. This approach leverages Firebolt’s indexing and reduces the run-time cost of large joins.
Firebolt advises creating a brand-new organization under the desired domain or AWS account via the standard sign-up process. Once the new organization is set up, copy over any needed configurations, tables, or data from the old organization. Because a new organization starts as a new trial, you also receive fresh Firebolt usage credits. After verifying that everything works correctly in the new organization, the old one can be retired or deleted.
Batch inserts are generally recommended for Firebolt. Inserting rows one at a time creates excessive overhead on the engine, leading to performance issues (especially on smaller engines). By sending records in small to moderate batches (e.g., once per second or at some reasonable time interval), the engine processes data more efficiently without overloading resources.
Use derived date columns (e.g., day-level granularity) to lower cardinality. For instance, store closed_at_day by truncating a TIMESTAMP to DATE. Incorporate the derived column (e.g., closed_at_day) into the primary index alongside other frequently used filters (e.g., tenant_id), allowing Firebolt to skip irrelevant data segments. Because raw timestamp columns can be extremely granular, indexing them directly often leads to poor selectivity. Restructuring the schema to include day- or hour-level columns can significantly improve performance. Leverage Firebolt’s caching features (result cache, sub-result cache) for repeated queries.
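A sketch of this pattern, using illustrative table and column names (work_items and work_items_staging are hypothetical):
CREATE TABLE work_items (
tenant_id integer,
closed_at timestampntz,
closed_at_day date
) PRIMARY INDEX tenant_id, closed_at_day;
INSERT INTO work_items
SELECT tenant_id, closed_at, closed_at::date AS closed_at_day
FROM work_items_staging;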
Firebolt has introduced an auto-vacuum feature that runs on the write engine. It triggers after a set number of transactions and reclaims space without blocking ingestion. Manual vacuuming is largely unnecessary in most use cases now. A dedicated engine is not typically required; auto-vacuum operates seamlessly on the existing write engine.
Query performance primarily depends on the amount of data scanned. If queries remain selective (e.g., filtering by tenant ID and a truncated date range), Firebolt only scans relevant slices, keeping query times stable as data grows. Broad queries (e.g., SELECT * over wide date ranges) will naturally slow as more data must be scanned. Best practices include indexing on commonly used filters, leveraging caching, and avoiding unbounded queries to maintain good performance over time.
Firebolt offers multiple engine sizes (S, M, L, etc.). Smaller engines can handle many queries per minute if those queries are well-optimized. Heavier workloads or larger datasets may require a bigger engine or multiple concurrent engines. Firebolt charges by actual runtime (hourly or per second). Costs can be reduced by auto-stopping engines when not in use.
A viable workaround is to export query results to a file in S3, then read and process that file in smaller chunks. While it adds complexity (you must manage file paths, permissions, and cleanup), it avoids buffering the entire result set in application memory until real streaming is available in the Firebolt Node.js SDK.
Firebolt can ingest data at terabytes-per-hour scale, supported by internal benchmarks (e.g., half a terabyte in ~800 seconds on four S-sized engines). Actual throughput depends on factors such as file format, table schema, partitioning, and engine size. Organizations can scale up (larger engines or more engines) to accelerate big batch loads and scale down for smaller, more frequent delta loads.
Order columns by frequency of use in queries. For example, if tenant_id and closed_at_day appear in most filters, list them first. Within equally common columns, order from lowest cardinality to highest cardinality (fewest unique values to most). This approach ensures that Firebolt’s indexing effectively prunes unnecessary data scans for highly repetitive or frequently queried columns.
Firebolt’s Customer Success and Support teams provide complementary query optimization guidance—including index design, join performance tuning, and ingestion configuration—at no extra cost. Users can reach out via Slack or support tickets for best practices and troubleshooting. In many cases, optimization is a collaborative, ongoing process. If you notice a slower query, you can share it with Support; sometimes the solution is a schema or indexing change, other times a product fix may be required.
In most cases, no. The sub-result cache benefits queries that overlap in the underlying data scanned or join results. If the tenant ID changes and there is little or no data overlap, the previous sub-results become irrelevant, so the cache will not offer a speed-up.
Queries with highly selective filters (e.g., smaller date ranges or high-selectivity columns) scan less data and often run in sub-second time. Queries that must scan large portions of the dataset (e.g., SELECT * over a broad date range) naturally take longer, especially on first (cold) runs when data must be read from storage. Firebolt’s sub-result caching reduces execution time for repeated or similar queries by caching portions of join results and aggregations. Proper indexing on commonly used filter columns can also significantly reduce the amount of data scanned, improving performance.
Firebolt is designed for a “decoupled compute” architecture where you can spin up separate engines for different workloads. A dedicated write engine handles ingestion, while one or more read engines handle queries. This ensures that write operations do not slow down queries and vice versa. You can also configure auto-start/auto-stop so that engines only run (and incur costs) when needed.
Firebolt’s ingestion engine can be turned off when not actively loading data. Many teams schedule ingestion windows (e.g., hourly or daily) and then auto-stop the engine to save on costs. Billing is based on actual runtime, so you are not charged for idle ingestion clusters.
Separate Accounts: Each account cleanly isolates its data and can map to separate AWS buckets or IAM roles.
Single Account with Multiple Databases: Environments share the same account, so you must carefully permission each database.
Most teams that maintain separate AWS resources (e.g., dev vs. staging vs. production buckets and roles) find it more straightforward to mirror that approach with separate Firebolt accounts.
Before auto vacuum, you would typically schedule vacuum after a certain number of inserts or on a time-based schedule (e.g., nightly). Firebolt’s auto vacuum feature (released around early 2025) automatically triggers a non-blocking vacuum every few hundred transactions in the background, substantially reducing or eliminating the need for manual vacuum scheduling. This occurs with minimal overhead and typically does not require an engine size increase.
Firebolt support engineers have the ability to access customer accounts for troubleshooting via Okta—only if they have the specific permissions. While it is generally recommended to keep support access open for fast incident resolution, you can request to block or limit their access if you have strict security requirements.
One approach is to restructure the table by setting the primary index on event_time to better leverage Firebolt's indexing capabilities. Additionally, an aggregating index on event_time can be beneficial. However, if queries still take longer than expected (e.g., 15 seconds for 30 days of data), it may help to review:
- The structure of the primary index, ensuring it aligns with the query's filtering.
- Whether unnecessary dimensions are included in the dataset, increasing granularity unnecessarily.
- Whether joins or aggregations can be optimized, possibly through pre-aggregated tables.
Firebolt's architecture is designed to improve query efficiency by avoiding costly full scans and optimizing indexing structures.
Yes, Firebolt can support a single table design that includes multiple reporting dimensions, such as unique counts, event times, and injected data. This consolidation can improve performance by reducing the need for complex joins and maintaining a single source of truth for analytics. However, when merging different data use cases into a single table, it is important to:
- Optimize indexing to balance performance across different query patterns.
- Consider partitioning or using aggregating indexes to precompute frequent aggregations.
- Evaluate whether all reporting needs can be met within a single table without sacrificing efficiency.
The “1.8 TB” figure refers to the SSD cache associated with a particular engine, not the total limit on data Firebolt can handle. Firebolt stores your full data in S3 for effectively unlimited capacity. Only the segments (tablets) relevant to a query are pulled into the SSD cache for faster processing. If your dataset exceeds 1.8 TB, Firebolt will still process it by cycling portions of data into and out of the SSD cache (first-in, first-out).
By subscribing through AWS Marketplace, you can consolidate Firebolt billing under your existing AWS billing arrangements. You will be directed to complete a few additional steps (“more clicks”) to finalize the purchase. Once completed, charges for your Firebolt usage appear in your AWS bill, simplifying vendor management if you prefer a single billing channel.
Firebolt has “GenAI” initiatives on its product roadmap. While exact capabilities may evolve, the published information highlights plans for: AI-Assisted Querying (e.g., query recommendations, natural language querying), Auto-Tuning & Optimization powered by machine learning, and Improved Developer Experience leveraging AI-based insights. For a deeper discussion of upcoming features, Firebolt can arrange a roadmap review session with its product team.
It is not clearly documented whether you can rename an existing organization URL. The typical workaround is to contact Firebolt support to see if they can rename it. If that is not feasible, you might need to recreate the organization under a new domain (e.g., using an email address at “velo”) and then migrate data or user setups.
Concurrency & Overlapping Updates: If two CDC operations try to update the same row simultaneously, one transaction may fail. Implement a retry mechanism if you anticipate this scenario. Vacuum Operations: Frequent small inserts create multiple “tablets.” Vacuum consolidates and optimizes these for better query performance. Firebolt’s new “auto vacuum” (rolling out in early 2025) will greatly reduce the need for manual vacuum scheduling by automatically running a non-blocking vacuum in the background after a set number of transactions.
Organization Level (Authentication): You manage logins (email addresses) at the organization/workspace level.
Account Level (Authorization): Each account defines its own users, roles, and permissions. A single login can exist in multiple accounts with different roles.
Support Access: Firebolt support engineers can access accounts via Okta (with appropriate permissions), but you can opt to block this access if desired (not recommended).
Yes, Firebolt provides monitoring capabilities through its information schema and metadata. Users are encouraged to implement custom monitoring and alerting processes on their side, although Firebolt also monitors performance and proactively alerts users to critical issues.
For users, it’s mainly about governance and logical isolation. Separate databases allow for different owners and permissions. Since custom schemas aren’t available yet, databases are the main way to group tables and views (this will change once schemas are supported). On the backend, metadata caching happens per database, so a single large database could add slight load. However, this is unlikely to have a practical impact except in very large or complex cases.
Aggregating indexes in Firebolt pre-compute aggregated values to significantly speed up aggregation queries. They perform best when aggregations occur on a single fact table. They are less effective or infeasible when aggregation queries require multiple table joins because an aggregating index must be built on a single table only.
Firebolt recommends an incremental ingestion approach using S3 as a staging area. Data from PostgreSQL can be segmented (e.g., by ID range or time interval), pushed to S3, and loaded into Firebolt using the Firebolt SDK. This method ensures manageable load times and easy scaling by controlling the volume of data incrementally loaded.
While Firebolt adheres to NIST SP 800-53, NIST 800-171, and NIST CSF guidelines, we are not currently FedRAMP compliant.
Firebolt is not PCI-DSS compliant and does not permit credit card data storage on its platform.
A separate BAA with Firebolt is required since our service includes proprietary technology and other sub-processors not covered under the standard AWS HIPAA Eligible Services.
Yes. As a business associate under HIPAA, we support business associate agreements (BAAs) to ensure healthcare data protection. Our SOC 2 Type-2 + HIPAA report is available subject to a Non-Disclosure Agreement (NDA).
Firebolt is certified for ISO 27001 and ISO 27018. Certification reports are available here.
Yes, our SOC 2 Type-2 + HIPAA report is available subject to a Non-Disclosure Agreement (NDA).
Yes, more details in our End User License Agreement (EULA) and Data Processing Addendum (DPA).
Yes, during our POC process, Firebolt's team will provide you with fast and accurate cost estimates based on real usage data. During the POC, our team will closely support you, analyzing engine usage, query patterns, and resource consumption to deliver a precise cost breakdown. With our efficient benchmarking and expert guidance, you’ll quickly understand your projected costs, ensuring transparency and confidence in scaling with Firebolt.
Firebolt provides engine consumption and spend information in the Web UI. Additionally, granular engine-level consumption can be found via the information_schema.engine_metering_history view that details the hourly consumption of all the engines within an account. Users can also drill down into how the topology of their engines (node type, number of nodes and number of clusters) was modified over time, providing visibility into the FBU consumption of their engines.
Firebolt provides multidimensional scaling to help right-size workloads. Autostop and Autostart are features that help reduce costs by eliminating idle time. Firebolt also provides global visibility of consumption and costs through built-in organizational governance and account-level consumption breakdown.
Yes, commitment-based discounts are available. Contact our sales team for more information.
US East (Virginia), US West (Oregon), and EU (Frankfurt).
Consumption starts when the engine endpoint is available for querying.
Each node type consumes a specified number of FBUs per hour, and compute consumption is billed in one-second increments. For example, a type 'M' node consumes 16 FBUs per hour, so the same node running for one minute will consume: Consumed FBU = (FBU per hour / 3600) x 60 seconds = (16 / 3600) x 60 ≈ 0.27 FBUs.
While FBUs measure consumption, the performance profile of a workload depends on the engine topology.
Firebolt Unit is a normalized measurement of consumption. FBU normalizes consumption management irrespective of node type, number of nodes, number of clusters, duration of consumption, etc. Thanks to Firebolt’s multidimensional scaling, per-second billing, and auto-stop/start capabilities, compute consumption can be a fraction of a minute. FBU eliminates the need to keep track of individual node types, nodes, and the number of clusters. There’s no binding to specific instance types, so you are free to use pre-paid credits on any node type.
Firebolt's billing is generally sent monthly, aligning with the AWS billing cycle. The bill email provides a breakdown of engine usage and storage consumption, giving you visibility into your total cost. Because Firebolt runs on AWS infrastructure, its billing is influenced by the resources consumed in AWS, and the timing of Firebolt’s billing is closely aligned with AWS bills for the same period.
Firebolt provides comprehensive billing views that break down both compute (engine) consumption and storage usage. You can access detailed information on engine usage through the information_schema.engines_billing view and storage usage through the information_schema.storage_billing view. These views and the UI offer granular insights into usage by specific engines, storage by table, and usage patterns, allowing for better cost tracking and resource optimization. The billing details can be viewed by hour, day, or month in the Firebolt UI, helping users stay informed about their resource consumption.
The considerations for splitting into separate databases include governance, logical isolation, and performance aspects related to metadata caching. Here are the key points:
Governance and Isolation: Different databases can have different owners and permissions, allowing for better governance. This is particularly important when different teams or departments manage their own data.
Logical Grouping: Currently, without support for custom schemas, databases serve as the primary mechanism to logically group tables and views. This will change when custom schemas are introduced.
Performance on Metadata Caching: The packdb caches metadata per database. A single large database with all tables may complicate this caching process, although the practical impact is likely minimal except in specific scenarios.
Cross Database Queries: At present, cross-database queries are not supported, making it impractical to have a separate database for each table if joins are required. When cross-database queries are supported, they may incur some performance degradation compared to querying within the same database due to metadata storage methods.
Security: From a security perspective, Role-Based Access Control (RBAC) can be applied at the table level to restrict access to specific users, enhancing data security.
In summary, while there are some advantages to splitting databases, such as improved governance and security, the current limitations regarding cross-database queries and potential performance issues should be carefully considered before making a decision.
There is a soft limit of 100 databases per account, which can be increased if needed.
Firebolt does not yet support automatic cross-region replication. If you need to replicate data across regions, you will need to handle the data replication process manually using external tools or services like AWS DataSync or S3 cross-region replication.
On Firebolt, data is stored in Amazon S3, which inherently offers durability and availability by keeping copies of data in 3 Availability Zones per Region. However, Firebolt does not natively provide cross-region disaster recovery (DR) at this time, so manual processes would need to be in place for cross-region DR setups. Compute high availability across Availability Zones is a roadmap item.
Firebolt supports deployment in multiple AWS regions, allowing you to choose the most appropriate region for your data and workloads. However, Firebolt does not currently offer seamless, cross-region deployments within a single account. To deploy across multiple regions, you need to create separate accounts in each region.
Firebolt is built natively on AWS and currently does not support running directly on Google Cloud Platform (GCP) or MS Azure. You would need to use AWS as the backend for Firebolt, but you can still ingest data from other cloud platforms through various data ingestion tools and connectors, or by loading data from those platforms into S3.
Yes, transferring data between different AWS regions incurs cross-region data transfer costs according to AWS pricing. Firebolt itself does not add additional fees for cross-regional data transfers, but users should consider AWS network charges when moving data across regions.
Firebolt does not natively support Presto. However, Firebolt provides its own high-performance SQL engine and you can integrate it with your existing data infrastructure via ODBC, JDBC, and REST API.
While this is on our roadmap, Firebolt currently does not have a native integration with Kafka. However, you can ingest Kafka data into Firebolt using intermediate storage systems like S3.
Firebolt provides a custom Airflow connector that allows you to orchestrate and automate your Firebolt data workflows directly from Airflow. This integration helps in managing ETL processes, scheduling queries, and handling data pipelines efficiently.
Yes, Firebolt supports data migration from Redshift through standard ETL tools. You can move data from Redshift to Firebolt by exporting Redshift data to S3 and then using Firebolt’s COPY FROM command to ingest data into Firebolt tables.
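A minimal sketch of the final load step (the table name and bucket path are illustrative, and the exact option syntax may differ; see the COPY FROM reference):
COPY INTO my_table
FROM 's3://my-bucket/redshift-export/'
WITH TYPE = CSV;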
Firebolt is continuously expanding its integration ecosystem to support a wide range of data sources and connectors. If your preferred connector isn't listed in the current documentation, don’t worry! Firebolt’s development team is actively working on adding new integrations, and you can expect ongoing enhancements to its capabilities.
In the meantime, you can reach out to Firebolt support to inquire about upcoming connectors or even request a specific integration. Firebolt also supports custom connectors through its API and can integrate with many systems using standard protocols like JDBC and ODBC, giving you the flexibility to connect to external sources in various ways.
While this is on our roadmap, Firebolt does not natively integrate with Delta Lake or Databricks. However, you can use data transfer solutions to migrate data between Firebolt and Databricks or Delta Lake via standard ETL tools, enabling the two platforms to coexist in a broader data architecture.
System settings in Firebolt allow you to control query execution behavior and performance, providing flexibility when needed. This is particularly useful when you want to override default settings for specific queries via the REST API.
To adjust settings such as the time_zone, you can embed them directly in the URL of your API call. For example, if you need to set the time_zone to UTC, include the parameter in the API call URL.
Example API call:
curl --location 'https://<user engine URL>?engine=<engine_name>&database=<database_name>&time_zone=UTC' \
--header 'Authorization: Bearer <authentication_token>' \
--data "SELECT TIMESTAMPTZ '1996-09-03 11:19:33.123456 Europe/Berlin'"
This query sets the time_zone system setting to UTC for the duration of the query. Each new API call requires you to include the necessary system settings again if you want to apply specific overrides.
Firebolt can be integrated with Coralogix through OpenTelemetry. Firebolt’s OTel Exporter allows you to export Firebolt engine metrics, query logs, and other telemetry data to any OpenTelemetry-compatible platform, including Coralogix. This integration enables real-time monitoring and troubleshooting, giving you better insights into engine performance, query execution, and resource usage. You can refer to Firebolt's GitHub repository for additional setup details and code samples.
If you encounter errors due to missing credentials when accessing AWS S3 from Firebolt, ensure that you have the correct IAM roles and policies assigned. Alternatively, you can provide AWS keys directly within your external table definition using the CREDENTIALS parameter. Check your AWS permissions and Firebolt’s documentation for troubleshooting credential errors.
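For example, a sketch of an external table definition that supplies keys via CREDENTIALS (the option names AWS_KEY_ID and AWS_SECRET_KEY are assumptions; verify against the CREATE EXTERNAL TABLE reference):
CREATE EXTERNAL TABLE my_ext_table (c1 text)
URL = 's3://my-bucket/data/'
CREDENTIALS = (AWS_KEY_ID = '<key_id>' AWS_SECRET_KEY = '<secret_key>')
TYPE = (CSV);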
While this is coming soon, Firebolt does not natively support geospatial data types or queries. However, you can still store and manage geospatial data using standard data types like strings and numeric values, and process geospatial information via external tools or data pipelines integrated with Firebolt.
Yes, Firebolt integrates with a wide range of popular BI and data tools, including Looker, Tableau, and Power BI, among others. These integrations allow users to leverage Firebolt’s performance while visualizing and analyzing data in their preferred tools. Additionally, Firebolt offers JDBC and ODBC drivers to facilitate connectivity with other tools.
At present, Firebolt does not have a direct API connection to external data sources like Google Sheets. However, you can leverage third-party tools or custom ETL pipelines to load data from sources like Google Sheets into Firebolt for analysis.
Yes, Firebolt integrates seamlessly with dbt (data build tool). Firebolt’s dbt adapter allows you to model, transform, and manage your data workflows using dbt. This integration combines dbt’s transformation capabilities with Firebolt’s high-performance query engine, enabling ELT workflows. You can also define models in dbt to run directly on Firebolt, helping you process large volumes of data more efficiently. For more details, visit Firebolt's blog on ELT with dbt.
When using a NOT IN filter, rows where the column value is NULL are excluded from the results, even though NULL is not in the list of values. This is because SQL treats comparisons with NULL as UNKNOWN, which prevents those rows from being returned.
How to include NULL in NOT IN results:
To include rows with NULL values, add an explicit condition checking for NULL using OR column IS NULL.
Example:
SELECT *
FROM players
WHERE playerid NOT IN (1, 2, 3) OR playerid IS NULL;
This query will include rows where playerid is either NOT IN the list or is NULL, ensuring that NULL values are part of the result set.
This error occurs when Firebolt cannot convert data from a text format (e.g., CSV or TSV) to the expected column data type defined in the external table schema.
Common Scenarios:
Mismatched Data Types: If a column contains a value that doesn’t match the expected type (e.g., a string in a numeric column).
Example: A file contains the value "abc" in a column defined as LONG, which leads to the error.
Header Rows in Files: If a CSV file includes a header row and it's not excluded, Firebolt tries to interpret the header text as data.
Solution: Use SKIP_HEADER_ROWS in the TYPE parameter of the CREATE EXTERNAL TABLE DDL.
Troubleshooting Tip: Use a text editor to inspect the first few rows of the file for mismatches. If the issue isn’t obvious, use SELECT...LIMIT and OFFSET to locate problematic rows and identify the file using the SOURCE_FILE_NAME column.
Example query:
SELECT SOURCE_FILE_NAME, COUNT(*)
FROM (SELECT *, SOURCE_FILE_NAME FROM my_external_table LIMIT 10000 OFFSET 0)
GROUP BY SOURCE_FILE_NAME;
To implement LEFT() and RIGHT() string functions in Firebolt, you can use the SUBSTR() function, as Firebolt does not natively support these functions.
LEFT() Alternative
To replicate the LEFT() function, use SUBSTR() to extract characters from the left side of a string. For example:
SELECT SUBSTR(nickname, 1, 6) FROM players WHERE nickname = 'murrayrebecca';
-- This returns "murray"
This extracts the first 6 characters from the string.
RIGHT() Alternative
For the RIGHT() function, combine SUBSTR() with LENGTH() to extract characters from the right side of the string. For example:
SELECT SUBSTR(nickname, LENGTH(nickname) - 6) FROM players WHERE nickname = 'murrayrebecca';
-- This returns "rebecca"
This extracts the last 7 characters; in general, to take the last N characters of a string, start the substring at LENGTH(string) - N + 1 (here, LENGTH('murrayrebecca') - 6 = 7).
These methods allow you to achieve the same functionality as LEFT() and RIGHT() using SUBSTR() in Firebolt.
Use PARTITION BY when you need to split the table into distinct data segments for better data management or to prune large amounts of data quickly. Partitioning allows for efficient data removal (e.g., ALTER TABLE...DROP PARTITION).
Use the Primary Index when you want to organize the order of data for optimal query performance. The primary index helps Firebolt efficiently prune data during queries based on filter conditions.
Example:
If you often query by playerid but also need to manage data by tournamentid, you could use playerid in the primary index and tournamentid in PARTITION BY. This would allow you to both optimize query performance and manage large data segments.
CREATE TABLE playstats_partition (
playerid integer,
tournamentid integer,
stattime timestampntz
) PRIMARY INDEX playerid
PARTITION BY tournamentid;
In Firebolt's UI, numeric values are automatically displayed with commas for readability (e.g., 123,456,789). However, this may be undesirable for fields like IDs or other values where commas aren’t needed.
Solution:
To remove commas from numbers in the UI, CAST the numeric field to TEXT using ::TEXT. This ensures that the number is displayed as a plain text string, without commas.
Example:
SELECT
playerid AS playerid_default,
playerid::text AS playerid_text,
nickname,
email
FROM players
LIMIT 10;
In this example, playerid_default will display with commas, while playerid_text will display the number without commas.
This method only affects how numbers are displayed in the Firebolt UI and does not alter the underlying data or its formatting in external tools.
When deciding between a fact or dimension table in Firebolt, it's important to consider how the data will be used and queried, as this choice impacts performance and how data is handled in multi-node engines.
Fact tables are typically large and contain measurable events, like sales or sensor readings. They usually hold foreign keys to dimension tables and measures that are aggregated (e.g., sums or averages). Fact tables benefit from aggregate indexes, which optimize heavy aggregations.
Dimension tables describe the entities in fact tables, such as product details or customer information. Dimension tables are usually smaller, updated more frequently, and replicated across nodes for faster lookups. Join indexes can be applied to dimensions to speed up lookup queries.
In general, choose a fact table when you need to aggregate large volumes of data, and a dimension table for smaller, descriptive datasets primarily used for lookups. For multi-node engines, keep in mind that fact tables are sharded, while dimension tables are replicated.
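For example, a sketch using FACT/DIMENSION table DDL (the table and column names are illustrative, following the gaming schema used elsewhere in this document):
CREATE FACT TABLE playstats_events (
playerid integer,
currentscore integer,
stattime timestampntz
) PRIMARY INDEX playerid;
CREATE DIMENSION TABLE players_dim (
playerid integer,
nickname text
);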
Yes, Firebolt is designed for ease of use, leveraging SQL simplicity and PostgreSQL compliance. It allows data professionals to manage, process, and query data effortlessly using familiar SQL commands.
Everything in Firebolt is done through SQL. Firebolt’s SQL dialect is compliant with Postgres’s SQL dialect and supports running SQL queries directly on structured and semi-structured data without compromising speed. Firebolt also has multiple extensions in its SQL dialect to better serve modern data applications.
To optimize query performance in Firebolt, follow these guidelines for selecting a primary index:
Frequently Queried Columns: Choose columns often used in WHERE clauses or joins for faster data retrieval.
Range Queries: Include columns used in range filters, like dates, to improve performance in range-based queries.
Data Distribution: Pick columns with many unique values (high cardinality) to ensure even data distribution.
Sorting: Select columns based on how data is typically sorted in queries to minimize the amount of scanned data.
For more detailed information, check out Firebolt’s comprehensive guide on primary indexes.
These steps ensure efficient data pruning and faster query execution.
Firebolt's aggregating index pre-calculates and stores aggregate function results for improved query performance, similar to a materialized view that works with Firebolt's F3 storage format. Firebolt selects the best aggregating indexes to optimize queries at runtime, avoiding full table scans. These indexes are automatically updated with new or modified data to remain consistent with the underlying table data. In multi-node engines, Firebolt shards aggregating indexes across nodes, similar to the sharding of the underlying tables.
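For example, an aggregating index that pre-computes a common aggregation over a playstats fact table (a sketch; the column names follow the warm-up examples later in this document, so adjust them to your schema):
CREATE AGGREGATING INDEX playstats_agg_idx ON playstats (
PlayerID,
SUM(CurrentScore),
COUNT(*)
);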
The Shuffle operation is the key ingredient to executing queries at scale in distributed systems like Firebolt. Firebolt leverages close to all available network bandwidth and streams intermediate results from one execution stage to the next whenever possible. By overlapping the execution of different stages, Firebolt reduces overall query latency.
Firebolt scales to manage hundreds of terabytes of data without performance bottlenecks. Its distributed architecture allows it to leverage all available network bandwidth and execute queries at scale with efficient cross-node data transfer using streaming data shuffle.
Warming up an aggregating index preloads the data into the cache, improving query performance. Use the CHECKSUM function on a query matching the index definition to warm up the index, leading to faster execution when it is utilized.
Solution:
Use the CHECKSUM function to preload specific data into the cache. Focus on frequently accessed columns or data ranges to optimize performance and minimize cache usage.
Example:
-- Warm up the entire table
SELECT CHECKSUM(*) FROM playstats;
-- Warm up specific columns
SELECT CHECKSUM(GameID, PlayerID, CurrentScore) FROM playstats;
-- Warm up a specific data range
SELECT CHECKSUM(*) FROM playstats WHERE CurrentLevel BETWEEN 1 AND 5;
Warming up tables using CHECKSUM ensures data is stored in the cache, improving performance for large tables or frequently queried datasets. Use filters or column selection to target relevant data efficiently.
Warming up a table can improve query performance by preloading data into the cache. Running warm-up queries after an engine starts ensures faster execution of subsequent queries.
Querying cold data, or data not yet cached in Firebolt's local SSD storage, may result in slightly slower performance compared to querying hot (cached) data. However, Firebolt's efficient caching mechanisms ensure that even cold data is accessed quickly, minimizing the performance impact.
Firebolt offers observability views through information_schema, allowing you to access real-time engine metrics. These insights help you size your engines for optimal performance and cost efficiency. Read more in the Information Schema Views documentation: https://docs.firebolt.io/general-reference/information-schema/views.html
Firebolt engines can scale up and out to handle high-concurrency workloads. Firebolt supports adding up to 10 clusters within a single engine to manage spikes in concurrent queries. These clusters can be dynamically added on-demand, ensuring optimal performance even during peak loads.
Sub-plan result caching allows Firebolt to reuse intermediate query artifacts, such as hash tables computed during previous requests, when serving new requests, significantly reducing query processing times. It includes built-in automatic cache eviction for efficient memory utilization while maintaining real-time, fully transactional results.
Firebolt’s engine uses vectorized execution, which processes batches of thousands of rows at a time, leveraging modern CPUs for maximum efficiency. Combined with multi-threading, this approach allows queries to scale across all CPU cores, optimizing performance.*
Boncz, Peter A., Marcin Zukowski, and Niels Nes. "MonetDB/X100: Hyper-Pipelining Query Execution." CIDR. Vol. 5. 2005.
* Idreos, Stratos, Fabian Groffen, Niels Nes, Stefan Manegold, Sjoerd Mullender, and Martin Kersten. "MonetDB: Two decades of research in column-oriented database architectures." Data Engineering 40 (2012).
Data pruning in Firebolt involves using sparse indexes to minimize the amount of data scanned during queries. By reducing I/O, this enables response times in the tens of milliseconds and makes your queries highly performant.
Firebolt uses advanced query processing techniques such as granular range-level data pruning with sparse indexes, incrementally updated aggregating indexes, vectorized multi-threaded execution, and tiered caching, including sub-plan result caching. These techniques both minimize the data being scanned and reduce CPU time by reusing precomputed results, enabling query processing times in the tens of milliseconds on hundreds of terabytes of data.
Firebolt has been benchmarked against several major data warehouses, including Snowflake, BigQuery, and Redshift. These benchmarks highlight Firebolt's superior performance in low-latency, high-concurrency queries, especially for fast aggregations and real-time analytics. See further details in our benchmark GitHub repo and our benchmark articles about handling concurrency, high-volume ingestion, and DML operations.
Firebolt is engineered to handle hundreds of analytical queries per second without compromising speed. It offers unparalleled cost efficiency with industry-leading price-to-performance ratios and scales seamlessly to handle terabytes of data with minimal performance impact.
Drop the associated indexes manually or use the CASCADE option to automatically remove all dependencies when dropping the table.
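For example (illustrative table name):
-- Removes the table together with any dependent indexes and views:
DROP TABLE my_table CASCADE;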
Use the NULLIF function to convert empty strings to NULL, which can then be cast to the appropriate data type.
When casting columns to data types like DATE or NUMERIC in Firebolt, empty strings in the source data can cause errors. This occurs because empty strings cannot be directly cast to other data types.
Use the NULLIF function to convert empty strings to NULL, which can then be cast to the appropriate data type without causing errors.
Example:
INSERT INTO tournaments_nullif_example_fact
SELECT
NULLIF(dt, '')::date
FROM tournaments_nullif_example;
In this example, NULLIF(dt, '') converts empty strings in the dt column to NULL, allowing the data to be safely cast to a DATE type. This method ensures smooth casting of columns with empty strings in Firebolt.
CSV file ingestion into an external table may fail with errors such as:
Cannot parse input: expected '|' but found '<CARRIAGE RETURN>' instead.
This usually means the file delimiter in the CSV doesn't match the table definition or the number of columns differs.
To troubleshoot, first check the delimiter: ensure FIELD_DELIMITER matches the CSV file's delimiter. Next, compare the file to the table definition column by column. Finally, if the file is large, create a temporary external table to view each entire row as a single string:
CREATE EXTERNAL TABLE ext_tmp (blob text) URL = 's3://some_bucket/somefolder/' TYPE = (CSV FIELD_DELIMITER=' ');
Example Query:
SELECT * FROM ext_tmp LIMIT 2;
This helps inspect rows and verify column consistency.
Firebolt distributes data across nodes and uses spilling to local SSDs when the working set exceeds available memory, allowing the system to scale even with limited resources.
No, Firebolt doesn't support adding new columns without rebuilding the table. However, you can create a new table with the updated schema or use views to simulate schema changes.
This ensures that the schema remains optimized for performance, which is critical in high-performance analytical databases like Firebolt.
Alternatively, Firebolt offers a flexible approach where you can create views to simulate changes like renaming or restructuring tables without needing to rebuild or re-ingest data. For instance, you can create a view that selects all columns from the original table, effectively simulating the addition of new columns:
Example Usage: To simulate renaming a table or altering its structure, create a view:
CREATE VIEW IF NOT EXISTS new_games AS SELECT * FROM games;
This approach allows you to redirect queries to the new view (new_games), making it function like a table with updated schema without altering the original table.
Firebolt enables running ELT jobs on a separate engine isolated from the customer-facing engine. This prevents disruptions and allows scaling ELT engines dynamically with auto-start and auto-stop features to reduce costs.
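A minimal sketch, following the engine DDL conventions used elsewhere in this document (engine name and sizing are illustrative):
CREATE ENGINE IF NOT EXISTS ELTEngine WITH
TYPE = "S" NODES = 2 CLUSTERS = 1 AUTO_STOP = 15;
-- Run ELT jobs on ELTEngine, isolated from the customer-facing engine;
-- it stops automatically after 15 idle minutes to reduce costs.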
Yes, ELT processes can be automated using:
Firebolt Python SDK for programmatic database operations.
Apache Airflow for scheduling and automating complex workflows.
dbt (Data Build Tool) for managing data transformations in a version-controlled environment.
CSV, TSV, JSON, and Parquet formats are supported for exporting data.
Firebolt supports the ARRAY data type for mapping arrays from Parquet files and provides functions like ARRAY_AGG and UNNEST for working with arrays.
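For example (hypothetical tables; verify the exact UNNEST form supported by your version in the documentation):
-- Build an array of scores per player
SELECT PlayerID, ARRAY_AGG(CurrentScore) AS scores
FROM playstats
GROUP BY PlayerID;
-- Expand an array column back into one row per element
SELECT PlayerID, score
FROM player_scores, UNNEST(scores) AS s(score);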
Yes, Firebolt supports semi-structured data types like JSON. JSON data can be ingested as text columns or parsed into individual columns for flexible schema-on-read or flattened structures.
Data transformations can be applied directly within INSERT INTO SELECT statements during ingestion. Standard SQL functions can be used to manipulate data types, perform calculations, and format strings.
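For instance (illustrative schema), values can be cast, computed, and formatted directly in the SELECT that feeds the target table:
INSERT INTO games_clean
SELECT
    UPPER(title),            -- format strings
    CAST(score AS BIGINT),   -- convert data types
    price * quantity         -- perform calculations
FROM ext_games;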
Optimize file sizes (500MB–1GB), use efficient formats (Parquet), and relocate files after ingestion to avoid reprocessing.
Firebolt uses transactional semantics and ACID guarantees. Ingestion operations are fully isolated from ongoing reads or queries, ensuring consistency. There are no partial inserts or copies to clean up.
Firebolt boosts data ingestion performance through parallel processing, multi-node scaling as the engine grows, and pipelined execution for efficient resource use. Using COPY FROM enables linear scaling with the number of nodes, accelerating ingestion speed with larger engines—ideal for latency-sensitive ELT scenarios.
Start with a small node type (CREATE ENGINE ingest_engine TYPE=S NODES=1) and monitor CPU and RAM utilization via information_schema.engine_metrics_history. Scale out the engine (e.g., ALTER ENGINE ingest_engine SET NODES=4) as needed to increase throughput. As a general rule of thumb, most ingestion workloads benefit from parallelism, especially when importing multiple files. In addition, ingestion is most efficient when the files are roughly equal in size.
While streaming ingestion is on the roadmap, Firebolt currently does not have a native streaming capability. However, Firebolt can run highly performant micro-batching: persist data to S3 in Parquet or Avro format for near real-time ingestion.
Firebolt allows filtering on file-level information such as name, modified time, and size using metadata fields like $source_file_timestamp, $source_file_name, and $source_file_size.
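For example, to ingest only recently added, reasonably sized files (external table name and cutoff values are illustrative):
INSERT INTO games
SELECT *
FROM ext_games
WHERE $source_file_timestamp > '2024-01-01'
  AND $source_file_size < 1000000000;  -- skip files of roughly 1 GB or larger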
Use the CREATE EXTERNAL TABLE command to reference data stored outside Firebolt, like in an S3 bucket, while specifying the file format and schema.
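A minimal sketch (bucket, object pattern, and columns are illustrative):
CREATE EXTERNAL TABLE IF NOT EXISTS ext_games (
    GameID INT,
    Title TEXT
)
URL = 's3://my_bucket/games/'
OBJECT_PATTERN = '*.parquet'
TYPE = (PARQUET);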
Yes, Firebolt’s COPY FROM command can automatically create the destination table using AUTO_CREATE = TRUE, which maps columns and creates the table when it doesn't exist.
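For example (names are illustrative; see the COPY FROM documentation for the full option list):
COPY INTO games_auto
FROM 's3://my_bucket/games/'
WITH TYPE = PARQUET AUTO_CREATE = TRUE;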
Firebolt currently supports Parquet and CSV formats. For AVRO, JSON, or ORC, use the external table option.
Firebolt offers multiple data import options, including:
COPY FROM SQL command for importing data from S3 buckets with built-in schema inference and automatic table creation.
The 'Load data' wizard in the WebUI to explore, set options, infer schema, and load data into Firebolt tables.
Direct read for CSV and Parquet files from S3 using read_csv and read_parquet table-valued functions.
External tables for data stored in Amazon S3 buckets, supporting formats such as CSV, Parquet, AVRO, ORC, and JSON.
Firebolt is ACID compliant and treats every operation as a transaction. For example, data from a COPY FROM operation is visible only after the entire operation succeeds. This eliminates partial updates and ensures data integrity at all times.
Spilling happens when a query requires more memory than allocated, causing intermediate query results to be stored on disk (SSD) instead of in-memory. While this ensures the query completes, it may affect performance.
For more information, check the Firebolt Documentation on Engine Metrics History.
To investigate query timeouts or delays, you can start by using Firebolt’s Query History and Query Profile tools, which provide detailed insights into query performance, including execution time, memory usage, and any potential bottlenecks. You can also check engine logs and metrics using Firebolt’s Engine Metrics History to identify issues like memory limitations, network latency, or resource constraints.
For troubleshooting steps, check the Firebolt documentation on query analysis.
If AWS instance availability is low:
Change the engine instance type.
Retry after some time.
Contact Firebolt support if the issue persists.
Firebolt provides Role-based Access Control (RBAC) to help customers control which users can perform what operations on a given engine. For example, you can provide users with only the ability to use or operate existing engines but not allow them to create new engines. In addition, you can also prevent users from starting or stopping engines, allowing them to only run queries on engines that are already running. These fine-grained controls help ensure that customers do not end up with runaway costs resulting from multiple users in an organization creating and running new engines.
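As an illustration (role, engine, and user names are hypothetical; consult the RBAC documentation for the exact privilege names):
CREATE ROLE analyst_role;
-- Allow running queries on an existing engine...
GRANT USAGE ON ENGINE MyEngine TO analyst_role;
-- ...and optionally allow starting and stopping it:
GRANT OPERATE ON ENGINE MyEngine TO analyst_role;
GRANT ROLE analyst_role TO USER alice;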
You can use the engine_metering_history information schema view for detailed tracking of FBU consumption.
Use the AUTO_STOP feature to automatically stop engines after a certain amount of idle time. Example:
ALTER ENGINE MyEngine SET AUTO_STOP = 15;
For more information, read more about Engine Consumption in our Documentation.
Firebolt uses Firebolt Units (FBU) to track engine consumption. For example, an engine with Type "S" (8 FBU per node per hour), 2 nodes, and 1 cluster running for 30 minutes would consume 8 FBU:
FBU per hour = 8 * 2 nodes * 1 cluster = 16 FBU
Consumption = (16 / 3600) * 1800 seconds = 8 FBU
Monitoring the status of your Firebolt engine using the REST API is a key step to ensure smooth operations. Firebolt provides a way to programmatically check engine status by querying the system engine. This article explains how to authenticate, retrieve the system engine URL, and query the engine status using Firebolt's REST API. To begin, ensure that you have an active service account with the necessary permissions. You will need the service account credentials to generate an access token for API authentication.
After obtaining an access token, use the following request to retrieve the system engine URL:
curl https://api.app.firebolt.io/web/v3/account/<account_name>/engineUrl \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <access_token>'
Once you have the system engine URL, you can query it to check the engine's status with a simple SQL query, as shown below:
curl --location 'https://<system_engine_URL>/query' \
--header 'Authorization: Bearer <access_token>' \
--data "select status from information_schema.engines where engine_name = '<your_engine_name>'
This will return the current status of your engine, helping you monitor its activity and health.
For more information please refer to the Using the API Documentation.
Yes. The engine_running_queries and engine_query_history views provide insights into current workloads. For more information, see our Information Schema Documentation.
Use the engine_metering_history information schema view to track FBU consumption for each engine.
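For example (the engine_name filter is indicative; check the view's schema in the documentation):
SELECT *
FROM information_schema.engine_metering_history
WHERE engine_name = 'MyEngine';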
Firebolt provides three observability views that offer insight into the performance of your engine.
1/ engine_running_queries - This view provides information about currently running queries, including whether a query is running or queued. For queries that are currently running, it also shows how long they have been running.
2/ engine_query_history - This view provides historical information about past queries. For each query, this includes its execution time, the amount of CPU and memory consumed, and the time it spent in the queue, among other details.
3/ engine_metrics_history - This view provides information about CPU, RAM, and storage utilization for each of the engine clusters.
You can use these views to understand whether your engine resources are being utilized optimally, whether your query performance meets your needs, and what percentage of queries wait in the queue and for how long. Based on these insights, you can resize your engine accordingly.
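For a quick look at each view:
-- Currently running or queued queries
SELECT * FROM information_schema.engine_running_queries;
-- Resource utilization per engine cluster
SELECT * FROM information_schema.engine_metrics_history;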
For more information, please refer to the Sizing Engines article in our Documentation.
If the engine has the AUTO_START option set to True, an engine in a stopped state will be automatically started when it receives a query. By default, this option is set to True. If this option is set to False, you must explicitly start the engine using the START ENGINE command. For more information, please refer to the Work with Engines Using DDL article in the Firebolt Documentation.