Sub-plan result caching allows Firebolt to reuse intermediate query artifacts, such as hash tables computed during previous requests, when serving new requests, significantly reducing query processing times. It includes built-in automatic cache eviction for efficient memory utilization while maintaining real-time, fully transactional results.
Firebolt is engineered to handle hundreds of analytical queries per second without compromising speed. It offers industry-leading price-to-performance and scales seamlessly to handle terabytes of data with minimal performance impact.
To optimize query performance in Firebolt, follow these guidelines for selecting a primary index:
Frequently Queried Columns: Choose columns often used in WHERE clauses or joins for faster data retrieval.
Range Queries: Include columns used in range filters, like dates, to improve performance in range-based queries.
Data Distribution: Pick columns with many unique values (high cardinality) to ensure even data distribution.
Sorting: Select columns based on how data is typically sorted in queries to minimize the amount of scanned data.
For more detailed information, check out Firebolt’s comprehensive guide on primary indexes.
These steps ensure efficient data pruning and faster query execution.
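For illustration, here is a minimal sketch of a table definition that follows these guidelines (the table and column names are hypothetical):
CREATE TABLE playstats (
  playerid INTEGER,       -- high cardinality, frequently used in WHERE clauses and joins
  stattime TIMESTAMPNTZ,  -- used in date-range filters
  currentscore BIGINT
) PRIMARY INDEX playerid, stattime;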
Firebolt's aggregating index pre-calculates and stores aggregate function results for improved query performance, similar to a materialized view that works with Firebolt's F3 storage format. Firebolt selects the best aggregating indexes to optimize queries at runtime, avoiding full table scans. These indexes are automatically updated with new or modified data to remain consistent with the underlying table data. In multi-node engines, Firebolt shards aggregating indexes across nodes, similar to the sharding of the underlying tables.
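For illustration, a hedged sketch of defining such an index (the index, table, and column names are hypothetical):
CREATE AGGREGATING INDEX playstats_agg ON playstats (
  gameid,
  COUNT(*),
  SUM(currentscore)
);
-- Queries that group by gameid and aggregate currentscore can now be answered
-- from the index instead of scanning the full table.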
Warming up an aggregating index preloads the data into the cache, improving query performance. Use the CHECKSUM function on a query matching the index definition to warm up the index, leading to faster execution when it is utilized.
Solution:
Use the CHECKSUM function to preload specific data into the cache. Focus on frequently accessed columns or data ranges to optimize performance and minimize cache usage.
Example:
-- Warm up the entire table
SELECT CHECKSUM(*) FROM playstats;
-- Warm up specific columns
SELECT CHECKSUM(GameID, PlayerID, CurrentScore) FROM playstats;
-- Warm up a specific data range
SELECT CHECKSUM(*) FROM playstats WHERE CurrentLevel BETWEEN 1 AND 5;
Warming up tables using CHECKSUM ensures data is stored in the cache, improving performance for large tables or frequently queried datasets. Use filters or column selection to target relevant data efficiently.
Querying cold data, or data not yet cached in Firebolt's local SSD storage, may result in slightly slower performance compared to querying hot (cached) data. However, Firebolt's efficient caching mechanisms ensure that even cold data is accessed quickly, minimizing the performance impact.
The Shuffle operation is the key ingredient for executing queries at scale in distributed systems like Firebolt. Firebolt leverages close to all available network bandwidth and streams intermediate results from one execution stage to the next whenever possible. By overlapping the execution of different stages, Firebolt reduces overall query latency.
Firebolt’s engine uses vectorized execution, which processes batches of thousands of rows at a time, leveraging modern CPUs for maximum efficiency. Combined with multi-threading, this approach allows queries to scale across all CPU cores, optimizing performance.*
* Boncz, Peter A., Marcin Zukowski, and Niels Nes. "MonetDB/X100: Hyper-Pipelining Query Execution." CIDR. Vol. 5. 2005.
* Idreos, Stratos, Fabian Groffen, Niels Nes, Stefan Manegold, Sjoerd Mullender, and Martin Kersten. "MonetDB: Two decades of research in column-oriented database architectures." Data Engineering 40 (2012).
Firebolt uses advanced query processing techniques such as granular range-level data pruning with sparse indexes, incrementally updated aggregating indexes, vectorized multi-threaded execution, and tiered caching, including sub-plan result caching. These techniques both minimize the data being scanned and reduce CPU time by reusing precomputed results, enabling query latencies in the tens of milliseconds on hundreds of terabytes of data.
Firebolt scales to manage hundreds of terabytes of data without performance bottlenecks. Its distributed architecture allows it to leverage all available network bandwidth and execute queries at scale with efficient cross-node data transfer using streaming data shuffle.
Firebolt engines can scale up and out to handle high-concurrency workloads. Firebolt supports adding up to 10 clusters within a single engine to manage spikes in concurrent queries. These clusters can be dynamically added on-demand, ensuring optimal performance even during peak loads.
Firebolt offers observability views through information_schema, allowing you to access real-time engine metrics. These insights help you size your engines for optimal performance and cost efficiency. Read more at https://docs.firebolt.io/general-reference/information-schema/views.html
Firebolt has been benchmarked against several major data warehouses, including Snowflake, BigQuery, and Redshift. These benchmarks highlight Firebolt's superior performance in low-latency, high-concurrency queries, especially for fast aggregations and real-time analytics. See further details in our benchmark GitHub repo and our benchmark articles about handling concurrency, high-volume ingestion, and DML operations.
In Firebolt, an "engine" refers to a virtual compute resource that provides the processing power to execute queries, load data, and perform various SQL-driven tasks. Unlike traditional cloud data warehouses, Firebolt engines can be resized, paused, and resumed in a much more granular and cost-effective way to optimize performance and cost.
Firebolt is ACID compliant and treats every operation as a transaction. For example, data from a COPY FROM operation is visible only after the entire operation succeeds. This eliminates partial updates and ensures data integrity at all times.
Everything in Firebolt is done through SQL. Firebolt’s SQL dialect is compliant with Postgres’s SQL dialect and supports running SQL queries directly on structured and semi-structured data without compromising speed. Firebolt also has multiple extensions in its SQL dialect to better serve modern data applications.
Yes, Firebolt integrates seamlessly with dbt (data build tool). Firebolt’s dbt adapter allows you to model, transform, and manage your data workflows using dbt. This integration combines dbt’s transformation capabilities with Firebolt’s high-performance query engine, enabling ELT workflows. You can also define models in dbt to run directly on Firebolt, helping you process large volumes of data more efficiently. For more details, visit Firebolt's blog on ELT with dbt.
Yes, transferring data between different AWS regions incurs cross-region data transfer costs according to AWS pricing. Firebolt itself does not add additional fees for cross-regional data transfers, but users should consider AWS network charges when moving data across regions.
Firebolt provides comprehensive billing views that break down both compute (engine) consumption and storage usage. You can access detailed information on engine usage through the information_schema.engines_billing table and storage usage through the information_schema.storage_billing table. These tables, together with the UI views, offer granular insights into usage by specific engines, storage by table, and usage patterns, allowing for better cost tracking and resource optimization. The billing details can be viewed by hour, day, or month in the Firebolt UI, helping users stay informed about their resource consumption.
Firebolt stores unsaved scripts in your browser’s local storage, which has a limit of around 5 MB. If multiple websites use local storage, it can get full, causing unsaved scripts in the Firebolt SQL editor to be erased.
To avoid this:
Save your scripts regularly.
Clear your browser cache/cookies after saving your work to free up local storage and prevent future data loss.
Remember, clearing your cache will also remove other saved data, so use this solution carefully.
Firebolt values transparency and customer feedback when planning its roadmap. To view the current roadmap or see open feature requests, reach out to Firebolt’s support or your customer success manager. Additionally, Firebolt’s team actively gathers feedback from users and considers feature requests as part of ongoing development efforts. Regular updates are communicated through newsletters and user forums. Stay connected to get insights into upcoming releases and features tailored to your needs.
An engine has three key dimensions:
Type - This refers to the type of nodes used in an engine.
Cluster - A collection of nodes of the same type.
Nodes - The number of nodes in each cluster.
An engine comprises one or more clusters. Every cluster in the engine has the same type and the same number of nodes.
Firebolt offers multiple data import options, including:
COPY FROM: a SQL command for importing data from S3 buckets, with built-in schema inference and automatic table creation.
'Load data' wizard: a WebUI flow to explore files, set options, infer schema, and load data into Firebolt tables.
Direct reads: query CSV and Parquet files in S3 using the read_csv and read_parquet table-valued functions.
External tables: for data stored in Amazon S3 buckets, supporting formats such as CSV, Parquet, Avro, ORC, and JSON.
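For example, a hedged sketch of the COPY FROM and direct-read options (bucket paths and table names are placeholders; verify option spellings against the documentation):
-- Load Parquet files into a table
COPY INTO playstats FROM 's3://my-bucket/playstats/' WITH PATTERN = '*.parquet' TYPE = PARQUET;
-- Inspect a single file directly with a table-valued function
SELECT * FROM read_parquet(url => 's3://my-bucket/playstats/part-00000.parquet') LIMIT 10;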
At present, Firebolt does not have a direct API connection to external data sources like Google Sheets. However, you can leverage third-party tools or custom ETL pipelines to load data from sources like Google Sheets into Firebolt for analysis.
Firebolt is built natively on AWS and currently does not support running directly on Google Cloud Platform (GCP) or Microsoft Azure. You need to use AWS as the backend for Firebolt, but you can still ingest data from other cloud platforms through various data ingestion tools and connectors, or by loading data from those platforms into S3.
Firebolt's billing is generally sent monthly, aligning with the AWS billing cycle. The bill email provides a breakdown of engine usage and storage consumption, giving you visibility into your total cost. Because Firebolt runs on AWS infrastructure, its billing is influenced by the resources consumed in AWS, and the timing of Firebolt’s billing is closely aligned with AWS bills for the same period.
Firebolt has recognized the demand for table cloning and time travel capabilities, which are important features for various use cases, such as data versioning and simplified testing environments. While these features are not currently available, Firebolt's product team is actively evaluating them as part of its long-term roadmap. Stay tuned for updates, and feel free to check in with Firebolt support or your customer success manager for the latest developments.
There are four node types available in Firebolt: Small (S), Medium (M), Large (L), and X-Large (XL). Each node type provides a certain amount of CPU, RAM, and SSD, and these resources scale linearly with the node type. For example, an "M" type node provides twice as much CPU, RAM, and SSD as an "S" type node.
When deciding between a fact or dimension table in Firebolt, it's important to consider how the data will be used and queried, as this choice impacts performance and how data is handled in multi-node engines.
Fact tables are typically large and contain measurable events, like sales or sensor readings. They usually hold foreign keys to dimension tables and measures that are aggregated (e.g., sums or averages). Fact tables benefit from aggregating indexes, which optimize heavy aggregations.
Dimension tables describe the entities in fact tables, such as product details or customer information. Dimension tables are usually smaller, updated more frequently, and replicated across nodes for faster lookups. Join indexes can be applied to dimensions to speed up lookup queries.
In general, choose a fact table when you need to aggregate large volumes of data, and a dimension table for smaller, descriptive datasets primarily used for lookups. For multi-node engines, keep in mind that fact tables are sharded, while dimension tables are replicated.
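A minimal sketch of the two table kinds (the table and column names are hypothetical):
-- Large, sharded table holding measurable events
CREATE FACT TABLE sales (
  saleid BIGINT,
  productid INTEGER,
  amount DOUBLE PRECISION
) PRIMARY INDEX productid;
-- Small, replicated table describing entities
CREATE DIMENSION TABLE products (
  productid INTEGER,
  productname TEXT
);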
Yes, Firebolt integrates with a wide range of popular BI and data tools, including Looker, Tableau, and Power BI, among others. These integrations allow users to leverage Firebolt’s performance while visualizing and analyzing data in their preferred tools. Additionally, Firebolt offers JDBC and ODBC drivers to facilitate connectivity with other tools.
Firebolt supports deployment in multiple AWS regions, allowing you to choose the most appropriate region for your data and workloads. However, Firebolt does not currently offer seamless, cross-region deployments within a single account. To deploy across multiple regions, you need to create separate accounts in each region.
A Firebolt Unit (FBU) is a normalized measure of compute consumption. FBUs normalize consumption irrespective of node type, number of nodes, number of clusters, duration of consumption, and so on. Thanks to Firebolt's multidimensional scaling, per-second billing, and auto-stop/start capabilities, compute consumption can be a fraction of a minute. FBUs eliminate the need to keep track of individual node types, node counts, and cluster counts. There's no binding to specific instance types, so you are free to use pre-paid credits on any node type.
In Firebolt's UI, numeric values are automatically displayed with commas for readability (e.g., 123,456,789). However, this may be undesirable for fields like IDs or other values where commas aren’t needed.
Solution:
To remove commas from numbers in the UI, CAST the numeric field to TEXT using ::TEXT. This ensures that the number is displayed as a plain text string, without commas.
Example:
SELECT
playerid AS playerid_default,
playerid::text AS playerid_text,
nickname,
email
FROM players
LIMIT 10;
In this example, playerid_default will display with commas, while playerid_text will display the number without commas.
This method only affects how numbers are displayed in the Firebolt UI and does not alter the underlying data or its formatting in external tools.
While geospatial support is coming soon, Firebolt does not yet natively support geospatial data types or queries. However, you can still store and manage geospatial data using standard data types like strings and numeric values, and process geospatial information via external tools or data pipelines integrated with Firebolt.
In Firebolt, data is stored in Amazon S3, which inherently provides durability and availability by keeping copies of data in three Availability Zones per Region. However, Firebolt does not natively provide cross-region disaster recovery (DR) at this time, so manual processes would need to be in place for cross-region DR setups. Compute high availability across Availability Zones is a roadmap item.
Use PARTITION BY when you need to split the table into distinct data segments for better data management or to prune large amounts of data quickly. Partitioning allows for efficient data removal (e.g., ALTER TABLE...DROP PARTITION).
Use the Primary Index when you want to organize the order of data for optimal query performance. The primary index helps Firebolt efficiently prune data during queries based on filter conditions.
Example:
If you often query by playerid but also need to manage data by tournamentid, you could use playerid in the primary index and tournamentid in PARTITION BY. This would allow you to both optimize query performance and manage large data segments.
CREATE TABLE playstats_partition (
playerid integer,
tournamentid integer,
stattime timestampntz
) PRIMARY INDEX playerid
PARTITION BY tournamentid;
If you encounter errors due to missing credentials when accessing AWS S3 from Firebolt, ensure that you have the correct IAM roles and policies assigned. Alternatively, you can provide AWS keys directly within your external table definition using the CREDENTIALS parameter. Check your AWS permissions and Firebolt’s documentation for troubleshooting credential errors.
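For example, a hedged sketch of providing credentials inline in an external table definition (the bucket and role ARN are placeholders; key-based access uses AWS_KEY_ID and AWS_SECRET_KEY instead):
CREATE EXTERNAL TABLE my_external_table (
  playerid INTEGER,
  currentscore BIGINT
)
URL = 's3://my-bucket/data/'
OBJECT_PATTERN = '*.parquet'
TYPE = (PARQUET)
CREDENTIALS = (AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/firebolt-access');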
Firebolt does not yet support automatic cross-region replication. If you need to replicate data across regions, you will need to handle the data replication process manually using external tools or services like AWS DataSync or S3 cross-region replication.
Each node type consumes a specified number of FBUs per hour, and compute consumption is billed in one-second increments. For example, a type 'M' node consumes 16 FBUs per hour, so running it for one minute consumes: Consumed FBU = (FBU per hour / 3600 seconds) x 60 seconds = (16 / 3600) x 60 ≈ 0.27 FBUs.
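You can sanity-check the arithmetic directly in SQL:
SELECT (16.0 / 3600) * 60 AS consumed_fbu;  -- returns roughly 0.27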
No. Engines and databases are fully decoupled in Firebolt. A given engine can be used with multiple databases, and conversely, multiple engines can be used with a given database. On Firebolt, all engines can write to the same database. No need to segregate engines as read-write and read-only.
To implement LEFT() and RIGHT() string functions in Firebolt, you can use the SUBSTR() function, as Firebolt does not natively support these functions.
LEFT() Alternative
To replicate the LEFT() function, use SUBSTR() to extract characters from the left side of a string. For example:
SELECT SUBSTR(nickname, 1, 6) FROM players WHERE nickname = 'murrayrebecca';
-- This returns "murray"
This extracts the first 6 characters from the string.
RIGHT() Alternative
For the RIGHT() function, combine SUBSTR() with LENGTH() to extract characters from the right side of the string. For example:
SELECT SUBSTR(nickname, LENGTH(nickname) - 6) FROM players WHERE nickname = 'murrayrebecca';
-- This returns "rebecca"
This extracts the last seven characters: the start position is the string length minus six, i.e., length - (N - 1) when extracting the last N characters.
These methods allow you to achieve the same functionality as LEFT() and RIGHT() using SUBSTR() in Firebolt.
Firebolt can be integrated with Coralogix through OpenTelemetry. Firebolt’s OTel Exporter allows you to export Firebolt engine metrics, query logs, and other telemetry data to any OpenTelemetry-compatible platform, including Coralogix. This integration enables real-time monitoring and troubleshooting, giving you better insights into engine performance, query execution, and resource usage. You can refer to Firebolt's GitHub repository for additional setup details and code samples.
No. While there is no theoretical limit on the number of databases you can use with a given engine, note that the configuration of your engine will determine the performance of your applications. Based on the performance demands of your applications and the needs of your business, you may want to create the appropriate number of engines.
While streaming ingestion is on the roadmap, Firebolt currently does not have a native streaming capability. However, Firebolt can run highly performant micro-batching that persists data to S3 in Parquet or Avro format for near real-time ingestion.
This error occurs when Firebolt cannot convert data from a text format (e.g., CSV or TSV) to the expected column data type defined in the external table schema.
Common Scenarios:
Mismatched Data Types: If a column contains a value that doesn’t match the expected type (e.g., a string in a numeric column).
Example: A file contains the value "abc" in a column defined as LONG, which leads to the error.
Header Rows in Files: If a CSV file includes a header row and it's not excluded, Firebolt tries to interpret the header text as data.
Solution: Use SKIP_HEADER_ROWS in the TYPE parameter of the CREATE EXTERNAL TABLE DDL (see the sketch after the example query below).
Troubleshooting Tip: Use a text editor to inspect the first few rows of the file for mismatches. If the issue isn’t obvious, use SELECT...LIMIT and OFFSET to locate problematic rows and identify the file using the SOURCE_FILE_NAME column.
Example query:
SELECT SOURCE_FILE_NAME, COUNT(*)
FROM (SELECT *, SOURCE_FILE_NAME FROM my_external_table LIMIT 10000 OFFSET 0) AS sample
GROUP BY SOURCE_FILE_NAME;
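As referenced above, here is a hedged sketch of excluding a header row in the external table DDL (the table definition is hypothetical; verify the exact option spelling for your Firebolt version):
CREATE EXTERNAL TABLE my_external_table (
  playerid INTEGER,
  currentscore BIGINT
)
URL = 's3://my-bucket/data/'
OBJECT_PATTERN = '*.csv'
TYPE = (CSV SKIP_HEADER_ROWS = 1);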
System settings in Firebolt allow you to control query execution behavior and performance, providing flexibility when needed. This is particularly useful when you want to override default settings for specific queries via the REST API.
To adjust settings such as the time_zone, you can embed them directly in the URL of your API call. For example, if you need to set the time_zone to UTC, include the parameter in the API call URL.
Example API call:
curl --location 'https://<user engine URL>?engine=<engine_name>&database=<database_name>&time_zone=UTC' \
--header 'Authorization: Bearer <authentication_token>' \
--data "SELECT TIMESTAMPTZ '1996-09-03 11:19:33.123456 Europe/Berlin'"
This query sets the time_zone system setting to UTC for the duration of the query. Each new API call requires you to include the necessary system settings again if you want to apply specific overrides.
The considerations for splitting into separate databases include governance, logical isolation, and performance aspects related to metadata caching. Here are the key points:
Governance and Isolation: Different databases can have different owners and permissions, allowing for better governance. This is particularly important when different teams or departments manage their own data.
Logical Grouping: Currently, without support for custom schemas, databases serve as the primary mechanism to logically group tables and views. This will change when custom schemas are introduced.
Performance on Metadata Caching: The packdb caches metadata per database. A single large database with all tables may complicate this caching process, although the practical impact is likely minimal except in specific scenarios.
Cross Database Queries: At present, cross-database queries are not supported, making it impractical to have a separate database for each table if joins are required. When cross-database queries are supported, they may incur some performance degradation compared to querying within the same database due to metadata storage methods.
Security: From a security perspective, Role-Based Access Control (RBAC) can be applied at the table level to restrict access to specific users, enhancing data security.
In summary, while there are some advantages to splitting databases, such as improved governance and security, the current limitations regarding cross-database queries and potential performance issues should be carefully considered before making a decision.
Start with a small node type (CREATE ENGINE ingest_engine TYPE=S NODES=1) and monitor CPU and RAM utilization via information_schema.engine_metrics_history. Scale out the engine (e.g., ALTER ENGINE ingest_engine SET NODES=4) as needed to increase throughput. As a general rule of thumb, most ingestion workloads benefit from parallelism, especially when importing multiple files. In addition, ingestion is even more efficient when the files are roughly equal in size.
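For example, a minimal monitoring sketch (assuming the view exposes an event_time column; adjust to the documented schema):
SELECT *
FROM information_schema.engine_metrics_history
ORDER BY event_time DESC
LIMIT 10;
-- Shows the most recent CPU/RAM utilization samples for the engine.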
When using a NOT IN filter, rows where the column value is NULL are excluded from the results, even though NULL is not in the list of values. This is because SQL treats comparisons with NULL as UNKNOWN, which prevents those rows from being returned.
How to include NULL in NOT IN results:
To include rows with NULL values, add an explicit condition checking for NULL using OR column IS NULL.
Example:
SELECT *
FROM players
WHERE playerid NOT IN (1, 2, 3) OR playerid IS NULL;
This query will include rows where playerid is either NOT IN the list or is NULL, ensuring that NULL values are part of the result set.
While this is on our roadmap, Firebolt does not natively integrate with Delta Lake or Databricks. However, you can use data transfer solutions to migrate data between Firebolt and Databricks or Delta Lake via standard ETL tools, enabling the two platforms to coexist in a broader data architecture.
All operations in Firebolt can be performed via SQL or the UI. To create an engine, use the CREATE ENGINE command, specifying a name for the engine, the number of clusters the engine will use, the number of nodes in each cluster, and the type of the nodes used in the engine. After the engine is successfully created, users get an endpoint that they can use to submit their queries. For example, you can create an engine named MyEngine with two clusters, each with two nodes of type "M", as follows:
CREATE ENGINE IF NOT EXISTS MyEngine WITH TYPE = "M" NODES = 2 CLUSTERS = 2;
This creates an engine named "MyEngine" with two clusters, each containing two nodes of type "M". For more details, see the documentation.
Firebolt boosts data ingestion performance through parallel processing, multi-node scaling as the engine grows, and pipelined execution for efficient resource use. Using COPY FROM enables linear scaling with the number of nodes, accelerating ingestion speed with larger engines—ideal for latency-sensitive ELT scenarios.
Firebolt is continuously expanding its integration ecosystem to support a wide range of data sources and connectors. If your preferred connector isn't listed in the current documentation, don’t worry! Firebolt’s development team is actively working on adding new integrations, and you can expect ongoing enhancements to its capabilities.
In the meantime, you can reach out to Firebolt support to inquire about upcoming connectors or even request a specific integration. Firebolt also supports custom connectors through its API and can integrate with many systems using standard protocols like JDBC and ODBC, giving you the flexibility to connect to external sources in various ways.
Firebolt provides multidimensional scaling to help right-size workloads. Autostop and Autostart are features that help reduce costs by eliminating idle time. Firebolt also provides global visibility of consumption and costs through built-in organizational governance and account-level consumption breakdown.
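For example, a hedged sketch of configuring auto-stop via SQL (the AUTO_STOP parameter is taken from Firebolt's ALTER ENGINE options; verify against the documentation):
ALTER ENGINE MyEngine SET AUTO_STOP = 15;
-- The engine stops automatically after 15 minutes of inactivity, eliminating idle cost.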
In Firebolt, you can scale an engine across multiple dimensions. All scaling operations in Firebolt are dynamic, meaning you do not need to stop your engines to scale them.
Scale Up/Down: You can vertically scale an engine by switching to a different node type that best fits the needs of your workload.
Scale Out/In: You can horizontally scale an engine by modifying the number of nodes per cluster. Horizontal scaling helps when your workload benefits from distributing queries across multiple nodes.
Concurrency Scaling: Firebolt allows you to add or remove clusters in an engine. Use concurrency scaling when your workload faces a sudden spike in the number of users or queries. Note that you can scale along more than one dimension simultaneously. For example, the command below changes both the node type to "L" and the number of clusters to two.
ALTER ENGINE MyEngine SET TYPE = "L" CLUSTERS = 2;
All scaling operations can be performed via SQL using the ALTER ENGINE statement or via the UI. For more information on how to perform scaling operations in Firebolt, see the Guides section in the documentation.
Yes, Firebolt supports data migration from Redshift through standard ETL tools. You can move data from Redshift to Firebolt by exporting Redshift data to S3 and then using Firebolt’s COPY FROM command to ingest data into Firebolt tables.
Firebolt provides engine consumption and spend information in the Web UI. Additionally, granular engine-level consumption can be found via the information_schema.engine_metering_history view that details the hourly consumption of all the engines within an account. Users can also drill down into how the topology of their engines (node type, number of nodes and number of clusters) was modified over time, providing visibility into the FBU consumption of their engines.
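For example, a hedged sketch of pulling hourly consumption records (the exact column names should be checked against the documented view):
SELECT *
FROM information_schema.engine_metering_history
LIMIT 24;
-- Each row details an engine's hourly FBU consumption within the account.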
Firebolt provides a custom Airflow connector that allows you to orchestrate and automate your Firebolt data workflows directly from Airflow. This integration helps in managing ETL processes, scheduling queries, and handling data pipelines efficiently.
Yes, during our POC process, Firebolt's team will provide you with fast and accurate cost estimates based on real usage data. During the POC, our team will closely support you, analyzing engine usage, query patterns, and resource consumption to deliver a precise cost breakdown. With our efficient benchmarking and expert guidance, you’ll quickly understand your projected costs, ensuring transparency and confidence in scaling with Firebolt.
Yes, customer access is managed via Auth0, while organizational access is controlled using Okta. All accesses are logged and monitored, and alerts are in place for any unauthorized configuration changes across our systems.
Your queries will continue to run uninterrupted during a scaling operation. When you perform horizontal or vertical scaling operations on your engine, Firebolt adds additional compute resources per your new configuration. While new queries will be directed to the new resources, the old compute resources will finish executing any queries currently running, after which they will be removed from the engine.
While this is on our roadmap, Firebolt currently does not have a native integration with Kafka. However, you can ingest Kafka data into Firebolt using intermediate storage systems like S3.
We use tools such as SCA and SAST for code analysis, along with practices such as fuzzing, scanning for pipeline weaknesses (like the use of unverified external sources), and secret scanning, as part of our secure software development lifecycle.
Yes. By default, creating an engine creates the underlying engine clusters and starts the engine, leaving it in a running state, ready to serve queries. However, you have the option to defer the creation of the underlying clusters by setting the property INITIALLY_STOPPED to TRUE when calling CREATE ENGINE. You can start the engine at a later point, when you are ready to run queries on it. Note that you cannot modify this property after an engine has been created.
CREATE ENGINE IF NOT EXISTS MyEngine WITH
TYPE = "S" NODES = 2 CLUSTERS = 1 INITIALLY_STOPPED = TRUE;
We use AWS Shield, WAF, and other logical layers to protect against DDoS. Additionally, we leverage auto-scaling to maintain availability during attacks by dynamically adjusting resources like EC2 instances, ELBs, and other global services capacity. (Though some scenarios may require manual intervention).
Firebolt's Python SDK provides detailed error message handling for SQL queries. When an error occurs, the SDK generates helpful error messages, allowing users to quickly diagnose and fix issues such as syntax problems or missing credentials. The SDK also offers robust logging and debugging capabilities, making it easier for developers to troubleshoot errors in their applications. For more information, refer to the Firebolt Python SDK documentation or visit the GitHub repository for examples.
Firebolt employs a comprehensive security strategy that includes network security policies, encryption practices, tenant isolation, and governance controls. We are committed to safeguarding your data through state-of-the-art security systems, policies, and practices.