BigQuery was one of the first decoupled storage and compute architectures. It is a unique piece of engineering, and not a typical data warehouse, in part because it started as an on-demand serverless query engine. It runs multi-tenant on shared resources, allocated as “slots”: a slot is a unit of virtual CPU that executes SQL, logically equivalent to roughly 0.5 vCPU and 0.5GB of RAM. BigQuery determines how many slots a query requires, with no way for the user to control the allocation. There are multiple pricing models: on-demand, billed per TB scanned, or flat-rate, through slot reservations.
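Under on-demand pricing, you can estimate cost before running a query with a dry run, which asks BigQuery to plan the query and report the bytes it would scan without executing it. A minimal sketch using the google-cloud-bigquery Python client; the table name is hypothetical and the $/TiB rate is an assumption (check current GCP pricing):

```python
from google.cloud import bigquery

# Assumed on-demand rate in USD per TiB scanned; verify against current GCP pricing.
PRICE_PER_TIB = 5.00

client = bigquery.Client()

# A dry run plans the query without executing it and reports
# how many bytes it would scan.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT page, COUNT(*) FROM `my_project.analytics.events` GROUP BY page",
    job_config=job_config,
)

tib_scanned = job.total_bytes_processed / 2**40
print(f"Estimated scan: {tib_scanned:.4f} TiB, "
      f"~${tib_scanned * PRICE_PER_TIB:.2f} on-demand")
```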
ClickHouse was originally developed at Yandex, the Russian search engine, as an OLAP engine for low-latency analytics. It was built as an on-premise solution with coupled storage and compute, and a large variety of tuning options in the form of indexes and merge trees. ClickHouse’s architecture is famous for its focus on performance and low-latency queries. The tradeoff is that it is considered very difficult to work with: SQL support is limited, and tuning and running it require significant engineering resources.
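To make the merge-tree point concrete, here is a minimal sketch of a MergeTree table created through the clickhouse-driver Python client, assuming a ClickHouse server on localhost; the table and column names are hypothetical. The ORDER BY clause doubles as the sparse primary index, which is where much of the tuning effort goes:

```python
from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="localhost")  # assumes a local ClickHouse server

# MergeTree is ClickHouse's core table engine family: data is stored in
# sorted, compressed parts that background threads merge over time.
# ORDER BY defines the sparse primary index, so query speed depends
# heavily on choosing it to match your filter patterns.
client.execute("""
    CREATE TABLE IF NOT EXISTS events (
        event_date Date,
        user_id    UInt64,
        page       String
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (event_date, user_id)
""")
```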
BigQuery scales very well to large data volumes, and automatically assigns more compute resources when needed behind the scenes, in the form of “slots”. It works either in an on-demand pricing model, where slot assignment is entirely up to BigQuery and the state of the shared resource pool, or in a flat-rate pricing model, where slots are reserved in advance. Reserved slots give more control over compute resources, making scaling more predictable. Concurrency is limited to 100 concurrent queries per project by default.
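Since slot assignment is opaque, the practical way to see how many slots BigQuery chose for your queries is after the fact, through INFORMATION_SCHEMA. A sketch, assuming the region-us qualifier applies to your project:

```python
from google.cloud import bigquery

client = bigquery.Client()

# total_slot_ms shows how many slot-milliseconds BigQuery decided to
# spend on each job; the region qualifier ("region-us") is an assumption.
rows = client.query("""
    SELECT job_id,
           total_slot_ms,
           TIMESTAMP_DIFF(end_time, start_time, MILLISECOND) AS elapsed_ms
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
      AND state = 'DONE'
    ORDER BY total_slot_ms DESC
    LIMIT 10
""").result()

for row in rows:
    # Average concurrent slots ≈ total_slot_ms / elapsed_ms
    avg_slots = row.total_slot_ms / max(row.elapsed_ms, 1)
    print(row.job_id, row.total_slot_ms, f"~{avg_slots:.0f} avg slots")
```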
ClickHouse doesn’t offer any dedicated scaling features or mechanisms. While it can deliver linearly scalable performance for some types of queries, scaling has to be done manually: hardware is self-managed, so growing beyond a single machine means provisioning a new cluster, configuring sharding and replication, and migrating the data yourself.
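As a sketch of what that manual scaling involves, the usual pattern is a local MergeTree table on every shard, fronted by a Distributed table that fans queries out across the cluster. The cluster name, sharding key, and schema below are hypothetical, and the cluster topology itself has to be defined beforehand in the server configuration:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

# "my_cluster" must already be declared in the server's remote_servers
# configuration; ON CLUSTER propagates the DDL to every node.
client.execute("""
    CREATE TABLE events_local ON CLUSTER my_cluster (
        event_date Date,
        user_id    UInt64,
        page       String
    )
    ENGINE = MergeTree
    ORDER BY (event_date, user_id)
""")

# The Distributed table holds no data itself; it routes writes by the
# sharding key (rand() here) and fans reads out to all shards.
client.execute("""
    CREATE TABLE events_all ON CLUSTER my_cluster
    AS events_local
    ENGINE = Distributed(my_cluster, currentDatabase(), events_local, rand())
""")
```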
In benchmarks, BigQuery lands in the same ballpark as other cloud data warehouses, though it consistently comes in last on most queries. Beyond implementing best practices, there is little you can do to accelerate BigQuery performance, since it decides the amount of resources (slots) a query needs for you. BigQuery can be paired with BigQuery BI Engine for lower-latency analytics, but BI Engine runs in memory and is therefore limited in scale: its maximum capacity is 100GB.
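Those best practices mostly amount to reducing the bytes a query has to scan, and partitioning and clustering are the main levers you control. A sketch with hypothetical table names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Partitioning prunes whole date partitions from a scan; clustering
# sorts data within partitions so filters on user_id read fewer blocks.
# Fewer bytes scanned means lower on-demand cost and usually lower latency.
client.query("""
    CREATE TABLE `my_project.analytics.events_partitioned`
    PARTITION BY DATE(event_ts)
    CLUSTER BY user_id
    AS SELECT * FROM `my_project.analytics.events`
""").result()
```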
ClickHouse is famous for being one of the fastest local runtimes ever built for OLAP workloads. Its columnar storage, compression, and indexing capabilities make it a consistent leader in benchmarks. Its limited support for standard SQL and lack of a query optimizer mean that it is less suitable for traditional BI workloads, and more suitable for engineering-managed workloads. While fast, it requires a lot of tuning and optimization.
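As an illustration of that tuning, ClickHouse exposes per-column compression codecs and data-skipping indexes directly in the table definition; a sketch with hypothetical names:

```python
from clickhouse_driver import Client

client = Client(host="localhost")

# Delta+ZSTD suits monotonically increasing timestamps, Gorilla suits
# floating-point gauges, and the minmax skipping index lets the engine
# skip granules whose value range cannot match the query's filter.
client.execute("""
    CREATE TABLE IF NOT EXISTS metrics (
        ts    DateTime CODEC(Delta, ZSTD),
        name  LowCardinality(String),
        value Float64  CODEC(Gorilla, ZSTD),
        INDEX value_minmax value TYPE minmax GRANULARITY 4
    )
    ENGINE = MergeTree
    ORDER BY (name, ts)
""")
```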
BigQuery is a mature general-purpose data warehouse, which lends itself well to internal BI and reporting. The fact that it is serverless and tightly integrated with GCP makes it very convenient for Ad-Hoc analytics and ML use cases on GCP. On the other hand, because BigQuery makes resource allocation decisions for you, it is not always the best fit for operational use cases and Data Apps, where performance needs to be consistent and predictable.
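On the ML side, BigQuery ML lets you train models in plain SQL, which is part of what makes ad-hoc ML on GCP convenient; a minimal sketch with hypothetical dataset and column names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# BigQuery ML trains the model inside the warehouse: no data movement,
# just a CREATE MODEL statement over an existing table.
client.query("""
    CREATE OR REPLACE MODEL `my_project.analytics.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned'])
    AS
    SELECT user_tenure_days, sessions_last_30d, churned
    FROM `my_project.analytics.user_features`
""").result()
```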
ClickHouse was not designed to be a data warehouse, but rather a low-latency query execution runtime, and managing it typically requires significant engineering overhead. Hence, it is a good fit for engineering-managed operational use cases and customer-facing data apps, where low latency matters. It is not a good fit for a general-purpose data warehouse, nor for Ad-Hoc analytics or ELT.