
Kafka to ClickHouse in 6 Minutes

Learn how to stream data from Kafka into ClickHouse using Aiven’s integration wizard. In this demo, we show you how to generate sample logistics data in Kafka, configure the integration to map Avro-formatted fields, and connect to ClickHouse to view and query the ingested data. We also demonstrate how to create a materialised view in ClickHouse to store and query streamed data efficiently, making real-time analytics fast and easy.
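As a rough sketch of the two steps the demo walks through, the Python snippet below generates a logistics-style event of the kind the wizard produces to Kafka, and holds the shape of a ClickHouse materialised view that copies streamed rows into a queryable table. The schema, field names, and table names are invented for illustration; they are not taken from the demo.

```python
import json
import random
import time

# Hypothetical logistics event, mirroring the kind of Avro-formatted
# record the integration wizard maps into ClickHouse columns.
def make_logistics_event() -> dict:
    return {
        "shipment_id": f"SHP-{random.randint(10000, 99999)}",
        "warehouse": random.choice(["HEL", "BER", "NYC"]),
        "status": random.choice(["picked", "packed", "in_transit", "delivered"]),
        "weight_kg": round(random.uniform(0.5, 120.0), 2),
        "event_time": int(time.time() * 1000),  # epoch millis, as an Avro long
    }

# Sketch of the ClickHouse side (assumed names): a materialized view that
# moves rows from a Kafka engine table into a MergeTree table for querying.
MATERIALIZED_VIEW_DDL = """
CREATE MATERIALIZED VIEW logistics_mv TO logistics_store AS
SELECT shipment_id, warehouse, status, weight_kg, event_time
FROM logistics_kafka_queue
"""

event = make_logistics_event()
print(json.dumps(event))
```

Once the materialised view is in place, ClickHouse ingests each Kafka message as it arrives, so queries against the target table reflect the stream in near real time.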

Get Kafka-Nated Ep 10: From MSK to Diskless Kafka w/ Kyle McCullough

Get Kafka-Nated Ep. 10
Wednesday, November 5th, 2025
Guest: Kyle McCullough, Co-Founder & CTO at OpsHelm; former Head of Infrastructure Engineering at ProdPerfect and Lead Engineer at Vivid Seats
Kyle McCullough joins host Hugh Evans to explore what it takes to build real-time, multi-cloud streaming infrastructure at scale. As Co-Founder and CTO of OpsHelm, Kyle shares how his team processes hundreds of terabytes of cloud events daily, maintaining sub-second visibility while reducing streaming costs by 78% after migrating from MSK and NATS to Aiven Diskless Kafka.

Get Kafka-Nated Episode 9: Live from Current New Orleans

Get Kafka-Nated: Live from Current (New Orleans)
Thursday, October 30th, 2025, 10:00–11:00 AM CDT
Broadcasting live from the Aiven booth at Current 2025
We're shaking things up for a special live edition of Get Kafka-Nated! Join host Hugh Evans for a high-energy hour of rapid-fire conversations with experts shaping the future of streaming data. Broadcasting directly from Current 2025 in New Orleans, this episode packs six lightning interviews into one hour, mixing deep technical insights with a few fun surprises along the way.

Get Kafka-Nated Ep 8: Realistic Synthetic Streaming Data w/ Michael Drogalis

Get Kafka-Nated Ep. 8
Wednesday, October 15th, 2025
Guest: Michael Drogalis, Founder of ShadowTraffic; former Confluent stream-processing lead (Kafka Streams, ksqlDB) and creator of the Onyx Platform
Michael Drogalis joins host Hugh Evans to unpack one of the toughest challenges in stream processing: creating realistic synthetic test data for Kafka. Michael founded ShadowTraffic after leading Kafka Streams and ksqlDB at Confluent and building open-source stream systems like Onyx.

Building Real-Time Data Pipelines with Kafka, Telegraf, and InfluxDB 3

When milliseconds matter and data never stops flowing, you need a pipeline that can handle high-velocity streaming data with reliability and scale. The modern streaming stack of Kafka, Telegraf, and InfluxDB 3 Core delivers exactly that. To make it concrete, this blog works through a fictitious use case: "Papa Giuseppe's Pizzeria," where every oven, prep station, and order generates data. The workflow: Kafka ingests the events, Telegraf consumes and transforms them, and InfluxDB 3 Core stores them for real-time querying.
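To ground the pizzeria scenario, here is a sketch of the kind of telemetry that would flow into the first stage of the pipeline: one JSON message per oven reading, in a shape that Telegraf's `kafka_consumer` input could parse with `data_format = "json"`. The measurement name, field names, and value ranges are invented for illustration, not taken from the post.

```python
import json
import random
import time

# Hypothetical oven telemetry for "Papa Giuseppe's Pizzeria". Each reading
# becomes one Kafka message that Telegraf would consume and forward to InfluxDB.
def oven_reading(oven_id: int) -> dict:
    return {
        "measurement": "oven_temp",
        "oven_id": oven_id,
        "temp_c": round(random.uniform(250.0, 450.0), 1),  # illustrative range
        "door_open": random.random() < 0.1,
        "ts": int(time.time_ns()),  # nanosecond timestamp, InfluxDB's native precision
    }

# A small batch of readings, one per oven, serialized as a producer might send it.
batch = [oven_reading(i) for i in range(1, 4)]
print(json.dumps(batch))
```

In a real deployment a Kafka producer would publish each reading to a topic, and Telegraf would tag, batch, and write them to InfluxDB 3 Core on its flush interval.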

Grafana & Friends Stockholm meetup at 0+X

In this talk, we’ll introduce the Kafka Data Source plugin we developed for Grafana, which enables users to query and visualise Kafka topic data directly in their dashboards—without the need for intermediate storage or external services. We'll share how the idea came about, how we collaborated with the Grafana community and developers to bring it to life, and the challenges we faced along the way. We'll also discuss our vision for the plugin’s future and its role in the evolving observability landscape.

Diskless 2.0: Unified, Zero-Copy Apache Kafka

We’ve added Tiered Storage to Diskless Kafka: using plain old KIP-405 as the read optimizer, Diskless Kafka materializes fast-to-read segments, unifying Tiered and Diskless into a single path. This leverages the production-grade Tiered Storage plugin, removes the need for bespoke components, and simplifies the community discussion. We’ve also upgraded KIP-1150 and KIP-1163 to address the community’s most pressing questions, such as support for transactions and queues.