Another year passes. I was hoping to write more articles instead of just these end-of-the-year screeds, but I almost died in the spring semester, and it sucked up my time. Nevertheless, I will go through what I think are the major trends and happenings in databases over the last year.
There were many exciting and unprecedented developments in the world of databases. Vibe coding entered the vernacular. The Wu-Tang Clan announced their time capsule project. And rather than raising one massive funding round this year, Databricks raised two massive rounds instead of going public.
Meanwhile, other events were expected and less surprising. Redis Ltd. switched their license back one year after their rugpull (I called this shot last year). SurrealDB reported great benchmark numbers because they weren’t flushing writes to disk and lost data. And Coldplay can break up your marriage. Astronomer did make some pretty good lemonade on that last one though.
Before I begin, I want to address the question I get every year in the comments about these articles. People always ask why I don't mention system X, talk about database Y, or include company Z in my analysis. The reason is that I can only write about so many things, and unless something interesting/notable happened in the past year, there is nothing to really discuss. But not all notable database events are appropriate for me to opine about. For example, the recent attempt to unmask the AvgDatabase CEO is fair game, but the MongoDB suicide lawsuit is decidedly not.
With that out of the way, let’s do this. These articles are getting longer each year, so I apologize in advance.
Previous entries:
- Databases in 2024: A Year in Review
- Databases in 2023: A Year in Review
- Databases in 2022: A Year in Review
- Databases in 2021: A Year in Review
The Dominance of PostgreSQL Continues
I first wrote about how PostgreSQL was eating the database world in 2021. That trend continues unabated, as most of the interesting developments in the database world are once again happening with PostgreSQL. The DBMS's latest version (v18) dropped in September 2025. The most prominent feature is the new asynchronous I/O storage subsystem, which will finally put PostgreSQL on the path to dropping its reliance on the OS page cache. It also added support for skip scans: queries can still use multi-key B+Tree indexes even if they are missing the leading key(s) (i.e., the prefix). There are some additional improvements to the query optimizer (e.g., removing superfluous self-joins).
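To make the skip-scan addition concrete, here is a minimal sketch (assuming a local PostgreSQL 18 instance and the psycopg 3 driver; the table and index names are made up) that asks the planner to explain a query missing the index's leading key:

```python
# Hypothetical demo: with enough data and fresh statistics, the v18
# planner can answer this with a skip scan over the (region, item)
# index instead of falling back to a sequential scan.
import psycopg

with psycopg.connect("dbname=test") as conn:  # placeholder connection string
    conn.execute("CREATE TABLE IF NOT EXISTS orders (region int, item int, qty int)")
    conn.execute("CREATE INDEX IF NOT EXISTS orders_idx ON orders (region, item)")

    # The predicate only touches 'item', the non-leading index key.
    for (line,) in conn.execute("EXPLAIN SELECT * FROM orders WHERE item = 42"):
        print(line)  # look for an index scan that skips over 'region'
```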
Savvy database connoisseurs will be quick to point out that these are not groundbreaking features and that other DBMSs have had them for years. PostgreSQL is the only major DBMS still relying on the OS page cache. And Oracle has supported skip scans since 2002 (v9i)! You may wonder, therefore, why I am claiming that the hottest action in databases for 2025 happened with PostgreSQL.
Acquisitions + Releases:
The reason is that most of the database energy and activity is going into PostgreSQL companies, offerings, projects, and derivative systems. In the last year, the hottest data start-up (Databricks) paid $1b for a PostgreSQL DBaaS company (Neon). Next, one of the biggest database companies in the world (Snowflake) paid $250m for another PostgreSQL DBaaS company (CrunchyData). Then, one of the biggest tech companies on the planet (Microsoft) launched a new PostgreSQL DBaaS (HorizonDB). Neon and HorizonDB follow Amazon Aurora's original high-level architecture from the 2010s, with a single primary node separating compute and storage. For now, Snowflake's PostgreSQL DBaaS uses the same core architecture as standard PostgreSQL because they built on Crunchy Bridge.
Distributed PostgreSQL:
All of the services I listed above are single-primary node architectures. That is, applications send writes to a primary node, which then sends those changes to secondary replicas. But in 2025, there were two announcements of new projects to create scale-out (i.e., horizontal partitioning) services for PostgreSQL. In June 2025, Supabase announced that it had hired Sugu, the Vitess co-creator and former PlanetScale co-founder/CTO, to lead the Multigres project to create sharding middleware for PostgreSQL, similar to how Vitess shards MySQL. Sugu left PlanetScale in 2023 and had to lie back in the cut for two years. He is now likely clear of any legal issues and can make things happen at Supabase. You know it is a big deal when a database engineer joins a company and the announcement focuses more on the person than on the system. The co-founder/CTO of SingleStore joined Microsoft in 2024 to lead HorizonDB, but Microsoft (incorrectly) did not make a big deal about it. Sugu joining Supabase is like Ol' Dirty Bastard (RIP) getting out on parole after two years and then announcing a new record deal on the first day of his release.
One month after the Multigres news dropped, PlanetScale announced its own Vitess-for-PostgreSQL project, Neki. PlanetScale launched its initial PostgreSQL DBaaS in March 2025, but the core architecture is stock PostgreSQL with pgBouncer.
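For intuition on what this class of middleware does, here is a toy sketch of its core job: routing each statement to one shard by hashing a sharding key. Everything here (the shard list, the routing function) is invented for illustration; real systems like Vitess, Multigres, and Neki also handle query rewriting, scatter-gather reads, and cross-shard transactions.

```python
# Toy Vitess-style router: the application talks to the middleware,
# which hashes the sharding key and forwards the query to one backend.
import hashlib

SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2", "pg-shard-3"]  # hypothetical backends

def shard_for(user_id: int) -> str:
    # Hash the key so rows spread evenly and routing stays deterministic.
    digest = hashlib.sha256(str(user_id).encode()).digest()
    return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

print(shard_for(12345))  # always the same shard for the same key
print(shard_for(12346))  # likely a different shard
```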
Commercial Landscape:
With Microsoft’s introduction of HorizonDB in 2025, all major cloud vendors now have serious projects for their own enhanced PostgreSQL offerings. Amazon has offered RDS PostgreSQL since 2013 and Aurora PostgreSQL since 2017. Google put out AlloyDB in 2022. Even the old flip-phone IBM has had its cloud version of PostgreSQL since 2018. Oracle released its PostgreSQL service in 2023, though there is a rumor that its in-house PostgreSQL team was collateral damage in its MySQL OCI layoffs in September 2025. ServiceNow launched its RaptorDB service in 2024, based on its 2021 acquisition of Swarm64.
Yes, I know Microsoft bought Citus in 2019. Citus was rebranded as Azure Database for PostgreSQL Hyperscale in 2019 and was then renamed to Azure Cosmos DB for PostgreSQL in 2022. But then there is Azure Database for PostgreSQL with Elastic Clusters that also uses Citus, but it is not the same as the Citus-powered Azure Cosmos DB for PostgreSQL. Wait, I might be wrong about this. Microsoft discontinued Azure PostgreSQL Single Server in 2023, but kept Azure PostgreSQL Flexible Server. It is sort of like how Amazon could not resist adding "Aurora" to the DSQL’s name. Either way, at least Microsoft was smart enough to keep the name for their new system to just "Azure HorizonDB" (for now).
There are still a few independent (ISV) PostgreSQL DBaaS companies. Supabase is likely the largest of these by the number of instances. Others include YugabyteDB, TigerData (née Timescale), PlanetScale, Xata, PgEdge, and Nile. Other systems provide a Postgres-compatible front-end, but the back-end systems are not derived from PostgreSQL (e.g., CockroachDB, CedarDB, Spanner). Xata built its original architecture on Amazon Aurora, but this year, it announced it is switching to its own infrastructure. Tembo dropped its hosted PostgreSQL offering in 2025 to pivot to a coding agent that can do some database tuning. ParadeDB has yet to announce its hosted service. Hydra and PostgresML went bust in 2025 (see below), so they're out of the game. There are also hosting companies that offer PostgreSQL DBaaS alongside other systems, such as Aiven and Tessell.
Andy’s Take:
It is not clear who the next major buyer will be after Databricks and Snowflake bought PostgreSQL companies. Again, every major tech company already has a Postgres offering. EnterpriseDB is the oldest PostgreSQL ISV, but missed out on the two most significant PostgreSQL acquisitions in the last five years. But they can ride along with Bain Capital’s jock for a while, I guess, or hope that HPE buys them, even though that partnership is from eight years ago. This M&A landscape is reminiscent of OLAP acquisitions in the late 2000s, when Vertica was the last one waiting at the bus stop after AsterData, Greenplum, and DATAllegro were acquired.
The development of the two competing distributed PostgreSQL projects (Multigres, Neki) is welcome news. These projects are not the first time somebody has attempted this. Of course, Greenplum, ParAccel, and Citus have been around for two decades for OLAP workloads. Yes, Citus supports OLTP workloads, but they started in 2010 with a focus on OLAP. For OLTP, 15 years ago, the NTT RiTaDB project joined forces with GridSQL to create Postgres-XC. Developers from Postgres-XC founded StormDB, which Translattice later acquired in 2013. Postgres-X2 was an attempt to modernize XC, but the developers abandoned that effort. Translattice open-sourced StormDB as Postgres-XL, but the project has been dormant since 2018. YugabyteDB came out in 2016 and is probably the most widely deployed sharded PostgreSQL system (and remains open-source!), but it is a hard fork, so it is only compatible with PostgreSQL v15. Amazon announced its own sharded PostgreSQL (Aurora Limitless) in 2024, but it is closed source.
The PlanetScale squad has no love for the other side and throws hands at Neon and Timescale. Database companies popping off at each other is nothing new (see Yugabyte vs. CockroachDB). I suspect we will see more of this in the future as the PostgreSQL wars heat up. I suggest that these smaller companies call out the big cloud vendors and not fight with each other.
MCP For Every Database!
If 2023 was the year every DBMS added a vector index, then 2025 was the year that every DBMS added support for Anthropic's Model Context Protocol (MCP). MCP is a standardized client-server JSON-RPC interface that lets LLMs interact with external tools and data sources without requiring custom glue code. An MCP server acts as middleware in front of a DBMS and exposes a listing of tools, data, and actions it provides. An MCP client (e.g., an LLM host such as Claude or ChatGPT) discovers and uses these tools to extend its models' capabilities by sending requests to the server. In the case of databases, the MCP server converts these requests into the appropriate database query (e.g., SQL) or administrative command. In other words, MCP is the middleman who keeps the bricks counted and the cream straight, so the database and LLMs trust each other enough to do business.
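To make that flow concrete, here is a minimal sketch of the JSON-RPC exchange, with a made-up `run_query` tool that passes SQL straight through; real MCP servers also implement the initialization handshake and tool discovery (`tools/list`), and their tool names and schemas vary by vendor.

```python
# Toy MCP-style dispatcher: translate a 'tools/call' request into a
# database query. The execution step is stubbed out.
import json

def handle(request: dict) -> dict:
    assert request["method"] == "tools/call"
    sql = request["params"]["arguments"]["sql"]
    result = f"(would execute against the DBMS: {sql})"  # stub
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": result}]}}

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "run_query",
                      "arguments": {"sql": "SELECT count(*) FROM users"}}}
print(json.dumps(handle(request), indent=2))
```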
Anthropic announced MCP in November 2024, but it really took off in March 2025 when OpenAI announced it would support MCP in its ecosystem. Over the next few months, every DBMS vendor released MCP servers for all system categories: OLAP (e.g., ClickHouse, Snowflake, Firebolt, Yellowbrick), SQL (e.g., YugabyteDB, Oracle, PlanetScale), and NoSQL (e.g., MongoDB, Neo4j, Redis). Since there is no official Postgres MCP server, every Postgres DBaaS has released its own (e.g., Timescale, Supabase, Xata). The cloud vendors released multi-database MCP servers that can talk to any of their managed database services (e.g., Amazon, Microsoft, Google). Allowing a single gateway to talk to heterogeneous databases is almost, but not quite, a holy-grail federated database. As far as I know, each request in these MCP servers targets only a single database at a time, so the application is responsible for performing joins across sources.
Beyond the official vendor MCP implementations, there are hundreds of rando MCP server implementations for nearly every DBMS. Some of them attempt to support multiple systems (e.g., DBHub, DB MCP Server). DBHub put out a good overview of PostgreSQL MCP servers.
An interesting feature that has proven helpful for agents is database branching. Although not specific to MCP servers, branching allows agents to test database changes quickly without affecting production applications. Neon reported in July 2025 that agents create 80% of their databases. Neon was designed from the beginning to support branching (Nikita showed me an early demo when the system was still called “Zenith”), whereas other systems have added branching support later. See Xata’s recent comparison article on database branching.
Andy’s Take:
On one hand, I’m happy that there is now a standard for exposing databases to more applications. But nobody should trust an application with unfettered database access, whether it is via MCP or the system’s regular API. And it remains good practice only to grant minimal privileges to accounts. Restricting accounts is especially important with unmonitored agents that may start going wild all up in your database. This means that lazy practices like giving admin privileges to every account or using the same account for every service are going to get wrecked when the LLM starts popping off. Of course, if your company leaves its database open to the world while you cause the stock of the wealthiest companies to drop by $600b, then rogue MCP requests are not your top concern.
From my cursory examination of a few MCP server implementations, they are simple proxies that translate the MCP JSON requests into database queries. There is no deep introspection to understand what the request aims to do and whether it is appropriate. Somebody is going to order 18,000 water cups in your application, and you need to make sure it doesn’t crash your database. Some MCP servers have basic protection mechanisms (e.g., ClickHouse only allows read-only queries). DBHub provides a few additional protections, such as capping the number of returned records per request and implementing query timeouts. Supabase’s documentation offers best-practice guidelines for MCP agents, but they rely on humans to follow them. And of course, if you rely on humans to do the right thing, bad things will happen.
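To sketch what even basic automated protection could look like inside one of these proxies (the read-only filter, row cap, and timeout values below are invented, not any vendor's actual policy, and a prefix check is a deliberately naive filter):

```python
MAX_ROWS = 1000  # cap how much a single agent request can pull back

def guard(sql: str) -> str:
    stmt = sql.strip().rstrip(";")
    # Naive read-only filter: reject anything that is not a SELECT.
    if not stmt.upper().startswith("SELECT"):
        raise PermissionError(f"rejected non-read-only statement: {stmt[:40]}")
    # Bound the damage: wrap the query to cap rows and set a server-side
    # timeout (assuming the proxy runs each request in its own transaction).
    return (f"SET LOCAL statement_timeout = '5s'; "
            f"SELECT * FROM ({stmt}) AS sub LIMIT {MAX_ROWS};")

print(guard("SELECT * FROM orders WHERE status = 'open'"))
# guard("DROP TABLE orders")  # raises PermissionError
```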
Enterprise DBMSs already have automated guardrails and other safety mechanisms that open-source systems lack, and thus, they are better prepared for an agentic ecosystem. For example, IBM Guardium and Oracle Database Firewall identify and block anomalous queries. I am not trying to shill for these big tech companies. I know we will see more examples in the future of agents ruining lives, like accidentally dropping databases. Combining MCP servers with proxies (e.g., connection pooling) is an excellent opportunity to introduce automated protection mechanisms.
MongoDB, Inc. v. FerretDB Inc.
MongoDB has been the NoSQL stalwart for nearly two decades now. FerretDB was launched in 2021 by Percona's top brass to provide a middleware proxy that converts MongoDB queries into SQL for a PostgreSQL backend. This proxy allows MongoDB applications to switch over to PostgreSQL without rewriting queries.
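As a rough illustration of the kind of rewriting such a proxy performs (this toy mapping onto a JSONB column is my own sketch, not FerretDB's actual implementation):

```python
# Translate a MongoDB-style filter document into SQL over a table that
# stores each document in a JSONB column named 'doc'.
import json

def mongo_find_to_sql(collection: str, filter_doc: dict) -> str:
    preds = []
    for field, value in filter_doc.items():
        frag = json.dumps({field: value})
        preds.append(f"doc @> '{frag}'")  # JSONB containment predicate
    where = " AND ".join(preds) if preds else "true"
    return f"SELECT doc FROM {collection} WHERE {where}"

# db.users.find({"status": "active", "age": 30}) becomes roughly:
print(mongo_find_to_sql("users", {"status": "active", "age": 30}))
# SELECT doc FROM users WHERE doc @> '{"status": "active"}' AND doc @> '{"age": 30}'
```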
They coexisted for a few years before MongoDB sent FerretDB a cease-and-desist letter in 2023, alleging that FerretDB infringes MongoDB's patents, copyrights, and trademarks, and that it violates MongoDB's license for its documentation and wire protocol specification. This letter became public in May 2025 when MongoDB went nuclear on FerretDB by filing a federal lawsuit over these issues. Part of their beef is that FerretDB is out on the street, claiming they have a "drop-in replacement" for MongoDB without authorization. MongoDB's court filing has all the standard complaints about (1) misleading developers, (2) diluting trademarks, and (3) damaging their reputation.
The story is further complicated by Microsoft’s announcement that it donated its MongoDB-compatible DocumentDB to the Linux Foundation. The project website mentions that DocumentDB is compatible with the MongoDB drivers and that it aims to "build a MongoDB compatible open source document database". Other major database vendors, such as Amazon and Yugabyte, are also involved in the project. From a cursory glance, this language seems similar to what MongoDB is accusing FerretDB of doing.
Andy’s Take:
I could not find an example of a database company suing another one for replicating their API. The closest is Oracle suing Google for using a clean-room copy of the Java API in Android. The Supreme Court ultimately ruled in favor of Google on fair use grounds, and the case affected how re-implementation is treated legally.
I don't know how the lawsuit will play out if it ever goes to trial. A jury of random people off the street may not comprehend the specifics of MongoDB's wire protocol, but they are definitely going to understand that the original name of FerretDB was MangoDB. It is going to be challenging to convince a jury that you were not trying to divert customers when you changed one letter in the other company's name. Never mind that it is not even an original name: there is already another parody DBMS called MangoDB that writes everything to /dev/null.
And while we are on the topic of database system naming, Microsoft’s choice of “DocumentDB” is unfortunate. There are already Amazon DocumentDB (which, by the way, is also compatible with MongoDB, but Amazon probably pays for that), InterSystems DocDB, and Yugabyte DocDB. Microsoft’s original name for “Cosmos DB” was also DocumentDB back in 2016.
Lastly, MongoDB’s court filing claims they “...pioneered the development of ‘non-relational’ databases”. This statement is incorrect. The first general-purpose DBMSs were non-relational because the relational model had not yet been invented. General Electric’s Integrated Data Store (1964) used a network data model, and IBM’s Information Management System (1966) used a hierarchical data model. MongoDB is also not the first document DBMS. That title goes to the object-oriented DBMSs from the late 1980s (e.g., Versant) or the XML DBMSs from the 2000s (e.g., MarkLogic). MongoDB is the most successful of these approaches by a massive margin (except maybe IMS).
File Format Battleground
File formats are an area of data systems that has been mostly dormant for the last decade. In 2011, Meta released a column-oriented format for Hadoop called RCFile. Two years later, Meta refined RCFile and announced the PAX-based ORC (Optimized Row Columnar) format. A month after ORC's release, Twitter and Cloudera released the first version of Parquet. Nearly 15 years later, Parquet is the dominant open-source file format.
In 2025, five new open-source file formats were released vying to dethrone Parquet: Vortex, Amudai, FastLanes, F3, and AnyBlox.
These new formats joined Nimble (Meta) and Lance (LanceDB), which were released in 2024.
SpiralDB made the biggest splash this year with their announcement of donating Vortex to the Linux Foundation and establishing a multi-organization steering committee. Microsoft quietly killed off Amudai (or at least closed-sourced it) sometime at the end of 2025. The other projects (FastLanes, F3, AnyBlox) are academic prototypes. AnyBlox won the VLDB Best Paper award this year.
This fresh competition has lit a fire in the Parquet developer community to modernize its features. See the in-depth technical analysis of the columnar file format landscape by Parquet PMC Chair Julien Le Dem.
Andy’s Take:
The main problem with Parquet is not inherent in the format itself. The specification can and has evolved. Nobody expects organizations to rewrite petabytes of legacy files to update them to the latest Parquet version. The problem is that there are so many implementations of reader/writer libraries in different languages, each supporting a distinct subset of the specification. Our analysis of Parquet files in the wild found that 94% of them use only v1 features from 2013, even though their creation timestamps are after 2020. This lowest common denominator means that if someone creates a Parquet file using v2 features, it is unclear whether a given system's library will support the features needed to read it.
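You can see the writer side of this problem directly (a small illustration assuming the pyarrow library): the writer picks the format version, and a reader either supports those features or it does not.

```python
# Write the same data pinned to the old v1 feature set and to a newer
# one, then read back what each file claims to be.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
pq.write_table(table, "legacy.parquet", version="1.0")
pq.write_table(table, "modern.parquet", version="2.6")

for path in ("legacy.parquet", "modern.parquet"):
    print(path, "->", pq.ParquetFile(path).metadata.format_version)
```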
I worked on the F3 file format with brilliant people at Tsinghua (Xinyu Zeng, Huanchen Zhang), CMU (Martin Prammer, Jignesh Patel), and Wes McKinney. Our focus is on solving this interoperability problem by providing both native decoders as shared objects (Rust crates) and embedded WASM versions of those decoders in the file. If somebody creates a new encoding and the DBMS does not have a native implementation, it can still read data using the WASM version by passing Arrow buffers. Each decoder targets a single column, allowing a DBMS to use a mix of native and WASM decoders for a single file. AnyBlox takes a different approach, generating a single WASM program to decode the entire file.
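Structurally, the fallback idea is simple. Here is an illustrative sketch in which every name is invented and the embedded WASM module is stubbed out with a plain function; the real F3 design exchanges Arrow buffers with sandboxed WASM decoders.

```python
# Per-column decoder dispatch: prefer a native decoder when the DBMS
# ships one for this encoding, otherwise fall back to the decoder that
# the file itself carries.
NATIVE_DECODERS = {
    "delta_bitpack": lambda buf: f"native-decoded ({len(buf)} bytes)",
}

def decode_column(encoding: str, buf: bytes, embedded_decoder) -> str:
    native = NATIVE_DECODERS.get(encoding)
    if native is not None:
        return native(buf)          # fast path: shared-object decoder
    return embedded_decoder(buf)    # portable path: WASM from the file

wasm_stub = lambda buf: f"wasm-decoded ({len(buf)} bytes)"  # stand-in for a WASM module
print(decode_column("delta_bitpack", b"\x01\x02\x03", wasm_stub))
print(decode_column("brand_new_encoding", b"\x01\x02\x03", wasm_stub))
```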
I don’t know who will win the file format war. The next battle is likely to be over GPU support. SpiralDB is making the right moves, but Parquet’s ubiquity will be challenging to overcome. I also didn’t even discuss how DuckLake seeks to upend Iceberg...
Of course, when this topic comes up, somebody always posts this xkcd comic on competing standards. I’ve seen it before. You don’t need to email it to me again.
Random Happenings
Databases are big money. Let’s go through them all!
Acquisitions:
Lots of movement on the block. Pinecone replaced its CEO in September to prepare for an acquisition, but I have not heard anything else about it. Here are the ones that did happen:
The Cassandra stalwart DataStax got picked up by IBM at the beginning of the year for an estimated $3b.
Quickwit, the leading company behind Tantivy, a full-text search engine built as a Lucene replacement, was acquired by Datadog at the beginning of the year. The good news is that Tantivy development continues unabated.
This acquisition of SDF Labs was a solid pick-up for dbt as part of their Fusion announcement this year. It allows them to perform more rigorous SQL analysis in their DAGs.
Mongo picked up an early-stage AI company to expand its RAG capabilities in its cloud offering. One of my best students joined Voyage one week before the announcement. He thought he was going against the "family" by not signing with a database company, only to end up at one.
Apparently, there was a bidding war for this PostgreSQL company, but Databricks paid a mouthwatering $1b for it. Neon still exists today as a standalone service, but Databricks quickly rebranded it in its ecosystem as Lakebase.
You know Snowflake could not let Databricks get all the excitement during the summer, so they paid $250m for the 13-year-old PostgreSQL company CrunchyData. Crunchy had picked up top ex-Citus talent in recent years and was expanding its DBaaS offering before Snowflake wrote them a check. Snowflake announced the public preview of its Postgres service in December 2025.
The 1990s old-school ETL company Informatica got picked up by Salesforce for $8b. This is after they went public in 1999, reverted to PE in 2015, and went public again in 2021.
To be honest, I never understood how Couchbase went public in 2021. I guess they were riding on MongoDB’s coattails? Couchbase did interesting work a few years ago by incorporating components from the AsterixDB project at UC Irvine.
Tecton provides Databricks with additional tooling to build agents. Another one of my former students was the
This team (Tobiko Data) is behind two useful tools: SQLMesh and SQLGlot. The former is the only viable open-source contender to dbt (see below for dbt's pending merger with Fivetran). SQLGlot is a handy SQL parser/deparser that also includes a heuristic-based query optimizer. The combination of Tobiko with Fivetran and SDF with dbt makes for an interesting technology play in this space in the coming years.
The PE firm buying SingleStore (Vector Capital) has prior experience in managing a database company. They previously purchased the XML database company MarkLogic in 2020 and flipped it to Progress in 2023.
After getting bought by PE in 2024, the MariaDB Corporation went on a buying spree this year. First up is Codership, the company behind the Galera Cluster scale-out middleware for MariaDB. See my 2023 overview of the MariaDB dumpster fire.
And then we have the second MariaDB acquisition. Just so everyone is clear, the original commercial company backing MariaDB was called "SkySQL Corporation" in 2010, but it changed its name to "MariaDB Corporation" in 2014. Then in 2020, the MariaDB Corporation released a MariaDB DBaaS called SkySQL. But because they were hemorrhaging cash, the MariaDB Corporation spun SkySQL Inc. out as an independent company in 2023. And now, in 2025, MariaDB Corporation has come full circle by buying back SkySQL Inc. I did not have this move on my database bingo card this year.
The automated database optimization tool company CrystalDB heads off to Temporal to automatically optimize their databases! I'm happy to hear that CrystalDB's founder and Berkeley database group alumnus Johann Schleier-Smith is doing well there.
This system (formerly OmniSci, formerly MapD) was one of the first GPU-accelerated databases, launched in 2013. I couldn’t find an official announcement of their closing, aside from an M&A firm listing the successful deal. And then we had a meeting with Nvidia to discuss potential database research collaborations, and some HeavyDB friends showed up.
Dgraph was previously acquired by Hypermode in 2023. It looks like Istari just bought Dgraph and not the rest of Hypermode (or they ditched it). I still haven’t met anybody who is actively using Dgraph.
DataChat was one of the first "chat with your database" companies, spun out of the University of Wisconsin by now-CMU-DB professor Jignesh Patel. But they were bought by a European hotel management SaaS. Take that to mean what you think it means.
Datometry has been working on the perilous problem of automatically converting legacy SQL dialects (e.g., Teradata) to newer OLAP systems for several years. Snowflake picked them up to expand their migration tooling. See Datometry’s 2020 CMU-DB tech talk for more info.
Like Snowflake buying Datometry, ClickHouse’s acquisition here is a good example of improving the developer experience for high-performance commodity OLAP engines.
After buying Neon, Databricks bought Mooncake to enable PostgreSQL to read/write to Apache Iceberg data. See their November 2025 CMU-DB talk for more info.
This is the archetype of how to make a company out of a grassroots open-source project. Kafka was originally developed at LinkedIn in 2011. Confluent was then spun out as a separate startup in 2014. They went public seven years later in 2021. Then IBM wrote a big check to take it over. Like with DataStax, it remains to be seen whether IBM will do to Confluent what IBM normally does with acquired companies, or whether they will be able to remain autonomous like Red Hat.
The embedded graph DBMS out of the University of Waterloo was acquired by an unnamed company in 2025. The KuzuDB company then announced it was abandoning the open-source project. The LadybugDB project is an attempt at maintaining a fork of the Kuzu code.
Mergers:
Unexpected news dropped in October 2025 when Fivetran and dbt Labs announced they were merging to form a single company.
The last merger I can think of in the database space was the 2019 merger between Cloudera and Hortonworks. But that deal was just weak keys getting stepped on in a kitchen: two companies that were struggling to find market relevance with Hadoop merged into a single company to try to find it (spoiler: they did not). The MariaDB Corporation merger with Angel Pond Holdings Corporation in 2022 via a SPAC technically counts too, but that deal was so MariaDB could backdoor their way to IPO. And it didn’t end well for investors. The Fivetran + dbt merger is different (and better) than these two. They are two complementary technology companies combining to become an ETL juggernaut, preparing for a legit IPO in the near future.
Funding:
Unless I missed them or they weren't announced, there were not as many early-stage funding rounds for database startups this year. The buzz around vector databases has died down, and VCs are only writing checks for LLM companies.
- Databricks - $4b Series L
- Databricks - $1b Series K
- ClickHouse - $350m Series C
- Supabase - $200m Series D
- Astronomer - $93m Series D
- Timescale - $110m Series C
- Tessell - $60m Series B
- ParadeDB - $12m Series A
- SpiralDB - $22m Series A
- CedarDB - $5.9m Seed
- TopK - $5.5m Seed
- Columnar - $4m Seed
- SereneDB - $2.1m Pre-Seed
- Starburst - Undisclosed?
Name Changes:
A new category in my yearly write-up is database companies changing their names.
The JSON database company HarperDB dropped the "DB" suffix from its name to emphasize its positioning as a platform for database-backed applications, similar to Convex and Heroku. I like the Harper people. Their 2021 CMU-DB tech talk presented the worst DBMS idea I have ever heard. Thankfully, they ditched that once they realized how bad it was and switched to LMDB.
Renaming EdgeDB was a smart move because the name "Edge" conveys that it is a database for edge devices or services (e.g., Fly.io). But I'm not sure "Gel" conveys the project's higher-level goals. See the 2025 talk on Gel's query language (still called EdgeQL) from CMU alums.
This is a rare occurrence of a database company renaming itself to distinguish the company from its main database product: Timescale is now TigerData, but the TimescaleDB product keeps its name. It is usually companies renaming themselves to be the name of the database (e.g., "Relational Software, Inc." to "Oracle Systems Corporation", "10gen, Inc." to "MongoDB, Inc."). But it makes sense for the company to try to shed the perception of being a specialized time-series DBMS instead of an improved version of PostgreSQL for general applications, since the former is a much smaller market segment than the latter.