
OpsHelm goes multi-cloud with Aiven Diskless BYOC, cuts costs by 78% over MSK

In under a month, OpsHelm - the continuous, enriched changelog for cloud infrastructure - migrated its streaming backbone from MSK and NATS to Aiven Diskless Kafka (BYOC on AWS). The switch eliminated cross-cloud networking fees, collapsed multiple storage layers into one, and cut total streaming costs fivefold (from more than $50,000/year to under $10,000/year), while giving the team a single logical event bus that stretches across multiple regions and accounts.

The Open-Source BigQuery Sink Connector Saga

The BigQuery Sink connector is a critical piece of Kafka infrastructure that lets you offload your Kafka topic data into BigQuery in real time. It is the third most-used connector among Kafka users (after the Google Cloud Managed Service for Apache Kafka and the original WePay sink connector), but it's not without its fair share of plot twists. Here's the story of how this connector changed hands three times, and how we ultimately ended up helping to rebuild it.
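For readers unfamiliar with the connector, a minimal Kafka Connect configuration sketch is shown below. The connector class and property names follow the WePay-lineage BigQuery sink; the connector name, topic, project, dataset, and keyfile path are placeholders, and you should check them against the version of the connector you deploy:

```json
{
  "name": "bigquery-sink",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "topics": "events",
    "project": "my-gcp-project",
    "defaultDataset": "kafka_sink",
    "keyfile": "/secrets/gcp-service-account.json",
    "autoCreateTables": "true"
  }
}
```

Posting a payload like this to the Kafka Connect REST API starts a sink task that streams records from the `events` topic into BigQuery tables in the configured dataset.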

Get Kafka-Nated Ep 7: Redpanda vs Kafka with Tristan Stevens

Get Kafka-Nated Ep. 7 airs Thursday, September 25th, 2025. Tristan Stevens, VP of Global Customer Success at Redpanda Data and former Cloudera streaming expert, joins our host Hugh Evans. Tristan brings a unique perspective, having led customer success for Hadoop ecosystems at Cloudera and now shaping next-generation streaming platforms at Redpanda.

Welcome PostgreSQL 18: A New Era of Performance on Aiven

Aiven is proud to launch the newest version of PostgreSQL, version 18, alongside the open source community, as the first managed PostgreSQL provider to support the latest version. This year, three Aiveners contributed to this major release, a trend we hope will only grow. Congrats to Patrick Stählin, Ronan Dunklau, and Thomas Krennwallner for their contributions to the codebase.

Seamlessly Migrating 15k Redis servers to Valkey

For many years, Redis has been the default for caching, message queues, and fast data storage. However, recent changes to its licensing mean that companies that want truly open-source tools need to make a switch. This is why Valkey was created. We are leading the way in offering a smooth path to migrate your current Redis setups to a fully open-source managed Valkey service.

Kafka UI: Connect to Kafka Brokers, Produce and View Messages

As part of our mission to help developers "Start with Aiven", we built a new plugin to make working with Apache Kafka easier. The Kafka UI Connect plugin allows you to interact with your Kafka clusters directly from any JetBrains IDE (e.g., IntelliJ, PyCharm). No more juggling multiple applications!

Diskless 2.0: Unified, Zero-Copy Apache Kafka

We’ve added Tiered Storage to Diskless Kafka: using plain old KIP-405 as the read optimizer, Diskless Kafka materializes fast-to-read segments, unifying Tiered and Diskless into a single path. This leverages the production-grade Tiered Storage plugin, removes the need for bespoke components, and simplifies the community discussion. We’ve also updated KIP-1150 and KIP-1163 to address the community’s most pressing questions, such as support for transactions and queues.

Getting Started with Iceberg Topics for Apache Kafka: A Beginner's Guide

Understand how Kafka integrates with Apache Iceberg, and experiment locally with Docker and Spark. The streaming data landscape is evolving rapidly, and one of the most exciting developments is the integration between Apache Kafka and Apache Iceberg. While Kafka excels at real-time data streaming, organizations often struggle with the complexity of moving streaming data into analytical systems.
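As a starting point for the local experimentation mentioned above, one way to point `spark-sql` at a local Iceberg catalog is sketched below. This follows the Iceberg quickstart pattern; the runtime package coordinates and version are assumptions and must be matched to your Spark and Scala versions:

```shell
# Launch spark-sql with the Iceberg runtime and a local Hadoop-style catalog.
# The package version is an assumption; align it with your Spark/Scala install.
spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.local.type=hadoop \
  --conf spark.sql.catalog.local.warehouse=/tmp/iceberg-warehouse
```

With the catalog configured, tables created under `local.*` are stored as Iceberg tables in the warehouse directory, where you can inspect the data and metadata files Kafka-to-Iceberg pipelines produce.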

Using PostgreSQL Anonymizer to Generate Synthetic Data

Set aside the data masking features of PostgreSQL Anonymizer for a moment. This extension can also save the day during development by simplifying your workflow and generating schema-accurate, privacy-compliant test data. In a previous post, we discussed using static and dynamic masking to anonymize data. I spent the last two weeks trying to write follow-ups to the anonymization posts. It's time to confess... I'm an over-engineerer.