
Kafka

Secure, Segregated Multi-tenant Analytics in PostgreSQL using Aiven for Apache Kafka, Debezium, and Aiven for ClickHouse

Enabling secure and performant multi-tenant analytics for your PostgreSQL® deployments on Aiven's data platform. In the realm of multi-tenant Software-as-a-Service (SaaS) applications, managing a centralized PostgreSQL® database for multiple customers can present challenges in maintaining secure segregation of their data. While a single database offers infrastructure efficiency, it becomes crucial to ensure each organization has isolated access and control over their information.
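One building block for that segregation at the source is PostgreSQL row-level security, which the downstream Kafka-to-ClickHouse pipeline can then preserve. Here is a minimal sketch, assuming a hypothetical events table with a text tenant_id column and placeholder connection details (psycopg2):

```python
import psycopg2

# Hypothetical connection string; replace with your Aiven for PostgreSQL URI.
conn = psycopg2.connect("postgresql://avnadmin:secret@pg.example.com:5432/defaultdb")
conn.autocommit = True

with conn.cursor() as cur:
    # Assumed schema: an "events" table carrying a tenant_id discriminator column.
    cur.execute("ALTER TABLE events ENABLE ROW LEVEL SECURITY")
    # Each session sees only rows whose tenant_id matches a session setting.
    # Note: the table owner bypasses RLS unless FORCE ROW LEVEL SECURITY is set.
    cur.execute("""
        CREATE POLICY tenant_isolation ON events
        USING (tenant_id = current_setting('app.tenant_id'))
    """)

# A tenant-scoped session: set the tenant before querying.
with conn.cursor() as cur:
    cur.execute("SET app.tenant_id = 'acme'")  # 'acme' is an illustrative tenant
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone())
```

A policy like this keeps tenants apart inside the shared database; the article covers how to carry that isolation through Debezium and Kafka into ClickHouse.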

Introduction to Apache Kafka

Have you heard about Apache Kafka but aren’t quite sure about its functions or applications? This webinar is tailored for you. Apache Kafka is more than just a buzzword in the tech community; it’s a critical tool for data processing and management. Join our expert-led webinar to explore the world of Apache Kafka, a powerful distributed event streaming platform. Learn how Canonical simplifies Kafka operations, offering secure, automated deployments and maintenance across various clouds.

Canonical announces the general availability of Charmed Kafka

27 February 2024: Today, Canonical announced the release of Charmed Kafka – an advanced solution for Apache Kafka® that provides everything users need to run Apache Kafka at scale. Apache Kafka is an event store that supports a range of contemporary applications including microservices architectures, streaming analytics and AI/ML use cases. Canonical Charmed Kafka simplifies deployment and operation of Kafka across public clouds and private data centres alike.

Aiven workshop: Learn Apache Kafka with Python

What's in the Workshop Recipe? Apache Kafka is the industry de facto standard for data streaming: an open-source, scalable, highly available and reliable solution for moving data across a company's departments, technologies or microservices. In this workshop you'll learn the basic components of Apache Kafka and how to get started with data streaming using Python. With the help of some prebuilt Jupyter notebooks, we'll dive deep into how to produce, consume, and have concurrent applications reading from the same source, empowering multiple use cases with the same streaming data.
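As a taste of what the notebooks cover, here is a minimal sketch with the kafka-python client. The broker address, topic name, and consumer group are placeholders, and an Aiven service would additionally need SSL/SASL settings:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Placeholder broker; an Aiven service also needs security_protocol/ssl_* options.
BOOTSTRAP = "localhost:9092"

# Produce a couple of JSON-encoded events to a hypothetical "orders" topic.
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
for order_id in (1, 2):
    producer.send("orders", {"id": order_id, "item": "coffee"})
producer.flush()

# Consume the same topic. Each consumer group gets its own copy of the stream,
# which is how concurrent applications read from one source.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=BOOTSTRAP,
    group_id="analytics",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating after 5s of silence
)
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```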

Explore Apache Kafka Tiered Storage

Get an introduction to Apache Kafka® Tiered Storage which enables more effective data management by utilizing two different storage types—local disk and remote cloud storage solutions such as Amazon S3, Google Cloud Storage, and Azure Blob Storage. This feature offers a tailored approach to data storage, allowing you to allocate frequently accessed data to high-speed local disks while offloading less critical or infrequently accessed data to more cost-effective remote storage solutions. Tiered storage enables you to indefinitely store data on specific topics without running out of space. Once enabled, it is configured per topic, giving you granular control over data storage needs.
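To make the per-topic granularity concrete, here is a sketch using kafka-python's admin client. It assumes a cluster (Apache Kafka 3.6+, or a managed service) with tiered storage already enabled broker-side; the topic name, partition counts, and retention values are illustrative:

```python
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")  # placeholder broker

topic = NewTopic(
    name="clickstream",  # hypothetical topic
    num_partitions=6,
    replication_factor=3,
    topic_configs={
        "remote.storage.enable": "true",   # opt this topic into tiered storage
        "retention.ms": "-1",              # total retention: keep data indefinitely
        "local.retention.ms": "86400000",  # keep only the last 24h on local disk
    },
)
admin.create_topics([topic])
```

Segments older than local.retention.ms are offloaded to the remote tier (Amazon S3, Google Cloud Storage, or Azure Blob Storage), while reads of recent data stay on fast local disk.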

Lessons Learned from Managing Kafka Costs

You have probably seen ads where someone claims that their app can save you money by finding subscriptions you forgot about. I have a hard time imagining someone with hundreds of dollars of expenses they forgot about, but I have had the occasional one slip through. The problem is that people are inefficient when it comes to managing “stuff”. That is why there are so many places to store “stuff”.

Enabling change data capture from MySQL to Apache Kafka with Debezium

Change Data Capture (CDC) is the process of tracking the changes happening in one or more database tables in a source system and propagating them to a target technology. The objective of this video is to create a CDC flow from a source table in a MySQL database to Apache Kafka® via the Kafka Connect Debezium Source connector. Check out these resources to learn more.
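The video walks through the connector setup; for reference, registering a Debezium MySQL source against a Kafka Connect REST endpoint looks roughly like this. The hostnames, credentials, and the shop.orders table are placeholders, and the property names follow Debezium 2.x:

```python
import requests

# Placeholder Debezium MySQL source configuration (Debezium 2.x property names).
connector = {
    "name": "mysql-orders-source",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.example.com",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "secret",
        "database.server.id": "184054",       # unique ID for the binlog client
        "topic.prefix": "shop",               # topics are named shop.<db>.<table>
        "table.include.list": "shop.orders",  # capture changes from this table only
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes.shop",
    },
}

# Register the connector with the Kafka Connect REST API (default port 8083).
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())
```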

Configuring Elastic Agent's new output to Kafka

Introducing Elastic Agent's new feature: native output to Kafka. With this latest addition, Elastic’s users can now effortlessly route their data to Kafka clusters, unlocking unparalleled scalability and flexibility in data streaming and processing. In this video, we'll guide you through a step-by-step configuration with Fleet and Confluent Cloud.
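The configuration itself happens in the Fleet UI; once the Kafka output is saved, one quick way to confirm events are flowing is to tail the configured topic from any Kafka client. A sketch, assuming a hypothetical topic name and that events arrive as JSON documents with standard ECS fields:

```python
import json
from kafka import KafkaConsumer

# Placeholder broker and topic; match whatever you configured in the Fleet output.
consumer = KafkaConsumer(
    "elastic-agent-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="latest",
    consumer_timeout_ms=10000,  # give up after 10s with no traffic
)
for record in consumer:
    event = json.loads(record.value)
    print(event.get("@timestamp"), event.get("message"))
```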