Apache Kafka is one of the most commonly used stream-processing applications. Its high performance, low latency and open source license are just some of the reasons 80% of Fortune 100 enterprises use Kafka to power event-driven applications and deliver resilient data pipelines.
But when you start working with massive amounts of high-velocity data, you quickly run into problems with cost, operations and complexity. For cloud native businesses, the cost of replicating data across multiple cloud availability zones when running Kafka is a particularly serious issue.
Recently, the Kafka community has offered three Kafka Improvement Proposals designed specifically to address this problem. One of them, KIP-1150: Diskless Topics, “proposes a new class of topics in Apache Kafka that delegates replication to object storage,” wrote Filip Yonov, head of streaming services at Aiven, which developed KIP-1150, in a blog post.
“Rather than eliminating disks altogether, Diskless abstracts them away — leveraging object storage (like S3) to keep costs low and flexibility high,” Yonov said.
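To make that concrete, here is a minimal, hypothetical sketch of what creating a diskless topic could look like from Kafka’s Java AdminClient. It assumes a topic-level configuration key such as `diskless.enable`; that name and the exact API surface are assumptions, since KIP-1150 is still a proposal and the final details may differ.

```java
// Minimal sketch: creating a topic that delegates replication to object storage.
// Assumes a hypothetical topic-level config key ("diskless.enable") per KIP-1150;
// the final config name may change if and when the proposal is adopted upstream.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class DisklessTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical: flag the topic as diskless so brokers write batches
            // to object storage (e.g., S3) rather than replicating across local disks.
            NewTopic topic = new NewTopic("clickstream-events", 6, (short) 1)
                    .configs(Map.of("diskless.enable", "true"));

            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

Producers and consumers would keep using the standard Kafka client APIs; the change is in how the brokers persist and replicate the topic’s data behind the scenes.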
How To Improve Your Kafka Architecture
If running Kafka at scale is giving you headaches, join us on Nov. 20 at 8 a.m. PT | 11 a.m. ET for a special online event: Kafka at Scale: Smarter Architectures for Real-Time Business Impact.
During this free webinar, Greg Harris, open source software engineer at Aiven (and lead author of KIP-1150), David Esposito, streaming architect at Aiven, and Chris Pirillo, TNS host, will explore how leading companies are adapting their Kafka architecture to manage larger, faster data streams and deliver real business impact today.
In addition to discussing the pitfalls to avoid when scaling Kafka in the cloud, they’ll also share an exclusive sneak peek at Diskless Topics for Apache Kafka, a breakthrough that shifts replication from local disks to object storage, making Kafka up to 80% leaner, significantly reducing total cost of ownership (TCO) while simplifying operations and unlocking new ways to scale in the cloud.
Register for This Free Webinar Today!
If you can’t join us live, register anyway, and we’ll send you a recording following the webinar.
What You’ll Learn
By attending this special online event, you’ll leave with best practices, real-world examples and actionable tips, including:
- The role of Kafka in driving growth, agility and customer experience for digital-native businesses.
- Common challenges when running Kafka in the cloud, including cost, rebalancing and cross-AZ replication.
- How to measure business impact from real-time streaming.
- Introduction to Diskless Topics (KIP-1150) and what it means for the future of Kafka.
- Real-world use cases: blending diskless and classic topics in a single cluster.
Register for this free webinar today!