We recently ran a series of benchmarking tests to evaluate how KDB-X performs compared to several open-source databases on standard analytical workloads, including aggregation, filtering, and group-by queries.
Each system ingested the same dataset and executed identical query definitions from the TSBS DevOps workload. All tests were run on the same hardware under identical conditions.
KDB-X was configured using its community setup — a single q process limited to 16GB of memory and four execution threads — while the other databases ran with their default open-source configurations and had access to the full hardware resources. In this benchmark, a single client issued hundreds of queries sequentially. We’ll explore KDB-X’s architecture for enterprise-grade workloads, including multi-client and parallel query scenarios, in a follow-up article.
The full configuration details, dataset generator, and query scripts are available in our public GitHub repository. You can reproduce every test run, review the parameters, and adapt the setup to suit your own environment.
A brief history of TSBS
TSBS was initially developed by engineers at InfluxDB to benchmark various time-series databases, including their own. However, their benchmarks were often criticized for favoring InfluxDB’s architecture.
The project was later forked and significantly improved by TimescaleDB engineers, who made it more flexible, extensible, and vendor-agnostic. Their goal was to create a standardized framework adaptable to various use cases (e.g., DevOps, IoT). TSBS eventually became the de facto community standard for benchmarking time-series databases.
Unfortunately, pull requests to the TimescaleDB repository are no longer being merged, leaving valuable contributions pending. QuestDB also forked the TimescaleDB repo, applied fixes, and uses TSBS in their performance-related blog posts. We followed a similar approach: forking the TimescaleDB repo and integrating QuestDB’s changes to ensure a fair comparison. For full transparency and reproducibility, we will make the repository publicly available in an upcoming announcement.
Databases tested
QuestDB, ClickHouse, TimescaleDB, and InfluxDB are well-known open-source time-series databases. Most of them use a columnar storage model and vectorized query execution, optimized for online analytical processing (OLAP).
KDB-X is a modern data and analytics platform built on top of the latest generation of kdb+. kdb+ is a proprietary, high-performance, column-oriented time series database system with a 30+ year pedigree in handling any volume of streaming, real-time, and historical data quickly and efficiently. It is widely used in high-frequency trading and financial services due to its in-memory processing capabilities and integration with the q programming language, which supports fast and concise data analysis. Tested version: 0.1.0 community edition (release date: 2025.06.25).
QuestDB supports ingestion via the InfluxDB Line Protocol and extends standard SQL with time-series-specific constructs such as LATEST ON and SAMPLE BY. Its architecture is built around time-ordered records, with a focus on fast ingestion and queries over such datasets. The system was configured according to QuestDB’s recommendations, including setting the Linux virtual memory areas limit to 8388608 and the maximum number of open files to 1048576. Tested version: 9.0.0 (release date: 2025.07.11).
InfluxDB is a purpose-built time-series database intended to handle large volumes of time-stamped data. It is widely used for monitoring, IoT, and real-time analytics. InfluxDB’s architecture includes a specialized Time-Series Index (TSI) for data retrieval and an open-source query language called Flux, the purpose of which is to offer a user-friendly experience for time-series data. Tested version: 2.7.11 (release date: 2024.12.02).
TimescaleDB is engineered as an extension on top of PostgreSQL, which allows it to integrate with the broader ecosystem of tools and libraries associated with that database. A central aspect of its design is the automatic partitioning of data into segments called “chunks” based on time and, optionally, other key identifiers. This approach aims to manage the growth of time-series data, which often accumulates steadily over time. We ran timescaledb-tune after installation for optimal settings. Tested version: 2.20.2 (release date: 2025.06.02) on PostgreSQL 17.5 (release date: 2025.05.08).
ClickHouse is another open-source, column-oriented database management system designed for online analytical processing (OLAP). The system is structured to work on multiple servers in a shared-nothing cluster configuration, allowing it to scale across hardware resources. It employs a dialect of SQL as its primary query language, enabling users to perform aggregations and generate reports on large datasets. Tested version: 25.6.5.41 (release date: 2025.06.26).
We used a server with 256 cores and 2.2 TB of physical memory. The free version of KDB-X could use only a fraction of the available resources. For example:
- Only 4 threads (1.5% of the total) were used
- Only 16 GB of memory (less than 1% of the total) was permitted
All other solutions had full access to the hardware.
Tests
We ran the benchmark with three datasets. Since the focus of this testing was queries, we generated only the cpu table; the other tables are used solely to test ingestion speed. The scale and interval values below are parameters that determine the size of the input data used to populate the database. For convenience, we have included the file size and row counts; the short sketch after the table shows how they follow from these parameters.
| Test Scenario | Scale | Interval | Influx Input (gz) Size | Total Row Count | Daily Row Count |
|---|---|---|---|---|---|
| 3 days, medium rate | 4000 | 10s | 3.6GB | 103,680,000 | 34,560,000 |
| 1 year, low rate | 800 | 10s | 86GB | 2,529,792,000 | 6,912,000 |
| 7 days, high rate | 800 | 1s | 17GB | 483,840,000 | 69,120,000 |
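As a quick sanity check on these parameters, here is a minimal Python sketch (not part of the benchmark harness) that derives the row counts from scale and interval, assuming, as the TSBS DevOps generator does, that each simulated host (the scale parameter) emits one cpu row per log interval. The 1-year figures in the table imply 366 simulated days.

```python
SECONDS_PER_DAY = 86_400

def cpu_rows(scale: int, interval_s: int, days: int) -> tuple[int, int]:
    """Daily and total row counts for the TSBS DevOps cpu table.

    `scale` is the number of simulated hosts; each host contributes one
    cpu row per log interval.
    """
    daily = scale * SECONDS_PER_DAY // interval_s
    return daily, daily * days

print(cpu_rows(4000, 10, 3))    # (34560000, 103680000)  - 3 days, medium rate
print(cpu_rows(800, 10, 366))   # (6912000, 2529792000)  - 1 year, low rate
print(cpu_rows(800, 1, 7))      # (69120000, 483840000)  - 7 days, high rate
```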
For all combinations of DBMS and scenario, we used a single client to send queries to simplify compliance with the terms of the free KDB-X community edition. The commercial version does not impose such resource limits. Stay tuned for future results using multiple clients.
Results
TSBS calculates the mean response time for each query. We compute the **ratio** relative to KDB-X’s mean response time. For example, a value of 2 means KDB-X executed the query twice as fast as the other product on average. The geometric mean of these ratios for each solution is shown below:
All competitors were significantly slower than KDB-X on average. The closest was QuestDB, with an average slowdown factor of 3.36.
It’s important to note that KDB-X achieved these results using only a fraction of the available resources.
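To make the aggregation concrete, here is a minimal Python sketch of the calculation described above. The query names and timings are illustrative placeholders, not measured values; the real per-query means come from the TSBS result files in our repository.

```python
from math import prod

def slowdown_ratios(kdbx_ms: dict[str, float], other_ms: dict[str, float]) -> dict[str, float]:
    """Per-query slowdown relative to KDB-X (2.0 = KDB-X was twice as fast)."""
    return {q: other_ms[q] / kdbx_ms[q] for q in kdbx_ms}

def geo_mean(values) -> float:
    """Geometric mean of the per-query ratios: the headline number per system."""
    vals = list(values)
    return prod(vals) ** (1.0 / len(vals))

# Illustrative numbers only (mean response time in ms).
kdbx  = {"cpu-max-all-1": 1.2, "lastpoint": 0.8, "high-cpu-1": 2.0}
other = {"cpu-max-all-1": 4.8, "lastpoint": 1.6, "high-cpu-1": 6.0}

ratios = slowdown_ratios(kdbx, other)
print(ratios)                      # {'cpu-max-all-1': 4.0, 'lastpoint': 2.0, 'high-cpu-1': 3.0}
print(geo_mean(ratios.values()))   # ~2.88
```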
The graph below shows how much slower each product was for the query it performed worst in compared to KDB-X:
In some cases, users would wait 20x longer with QuestDB or 1100x longer with ClickHouse compared to KDB-X.
We further broke down the results by test scenario:
Detailed Results
Below, we focus on the 1 year, low rate performance ratios broken down by query type. This test includes the longest time span of data, closely resembling typical client datasets that often cover multiple years. Due to the extended time range, queries are more likely to access disk storage rather than benefit from page cache. To ensure consistency, we flushed the page cache prior to running the test. Each query was executed with 500 distinct parameter sets. You can find a brief description of each test in the main TSBS documentation.
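For reference, flushing the page cache relies on the standard Linux mechanism of syncing dirty pages and writing to /proc/sys/vm/drop_caches. The sketch below shows the idea (it requires root); the exact step we used is in the scripts in our public repository.

```python
import subprocess

def drop_page_cache() -> None:
    """Flush dirty pages to disk, then drop the Linux page cache.

    Writing "3" drops the page cache plus dentries and inodes, so the
    next query run reads from storage rather than memory. Requires root.
    """
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")
```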
| Query | QuestDB | InfluxDB | TimescaleDB | ClickHouse |
|---|---|---|---|---|
| cpu-max-all-1 | 14.1 | 23.3 | 127.2 | 425.9 |
| cpu-max-all-32-24 | 5.8 | 28.5 | 16.8 | 32.3 |
| cpu-max-all-8 | 5.7 | 31.9 | 42.7 | 139.5 |
| double-groupby-1 | 0.7 | 11.9 | 4.9 | 21.0 |
| double-groupby-5 | 0.6 | 31.9 | 3.4 | 5.1 |
| double-groupby-all | 0.6 | 42.2 | 2.7 | 5.6 |
| groupby-orderby-limit | 6.7 | CRASH | 0.3 | 19.3 |
| high-cpu-1 | 8.3 | 2.4 | 519.3 | 443.7 |
| high-cpu-all | 0.9 | 13.5 | 5.2 | 4.0 |
| lastpoint | 0.8 | 7069.8 | 17.4 | 112.3 |
| single-groupby-1-1-1 | 16.2 | 48.1 | 119.9 | 9791.9 |
| single-groupby-1-1-12 | 25.9 | 39.7 | 528.2 | 5741.8 |
| single-groupby-1-8-1 | 7.1 | 61.1 | 41.0 | 3243.4 |
| single-groupby-5-1-1 | 10.0 | 27.1 | 46.4 | 1198.0 |
| single-groupby-5-1-12 | 17.6 | 35.9 | 244.8 | 819.8 |
| single-groupby-5-8-1 | 4.1 | 44.7 | 16.8 | 267.2 |
| GEO MEAN RATIO | 4.2 | 53.1 | 25.5 | 161.3 |
| MAX RATIO | 25.9 | 7069.8 | 528.2 | 9791.9 |
Background coloring indicates performance: darker green = greater slowdown compared to KDB-X.
Notable observations
- Across 64 benchmark scenarios, KDB-X outperformed the competition in 58 cases, demonstrating consistent superiority in query performance.
- InfluxDB **crashed** for query `groupby-orderby-limit`.
- ClickHouse was nearly four orders of magnitude slower on average than KDB-X for `single-groupby-1-1-1`.
- TimescaleDB outperformed all others for `groupby-orderby-limit` but was more than 100x slower than KDB-X for a few queries.
- QuestDB excelled in `double-groupby-*` and `lastpoint` queries. Considering all queries, it is 4.2 times slower on average compared to KDB-X.
Hardware testbed
We’re grateful to AMD for generously providing access to the hardware used in this benchmarking.
- CPU: AMD EPYC 9755 (Turin), 2 sockets, 128 cores/socket, 2 threads/core, 512 MB L3 cache, SMT off
- Memory: 2.2 TB, DDR5 @ 6400 MT/s, 12 channels/socket
- Disk: SAMSUNG MZWLO3T8HCLS-00A07, 3.84 TB, PCIe 5.0 x4, 14,000 MB/s sequential read, 2.5M IOPS random read (4 KB)
- OS: RHEL 9.5, Kernel 5.14.0-503.11.1.el9_5.x86_64
Further reading
As mentioned before, all configuration files, dataset generators, and query scripts used in this benchmark are available in our public GitHub repository. You can reproduce the tests, modify parameters such as dataset size or query mix, and run comparisons on your own hardware.
If you extend or adapt the benchmark — for example, by testing new workloads, ingestion profiles, or additional databases — we encourage you to share your findings through pull requests or issues in the repository. Contributions and replication studies help validate results and keep the dataset and tooling useful for the wider community.
For discussion and feedback, join the conversation on the KX Developer Community forum, our community Slack channel, or open a thread in the repository’s Discussions tab.
To explore KDB-X hands-on, visit our Developer Center to start with the KDB-X Community Edition.