
Kafka

Identifying and Resolving a Kafka Issue With AppSignal

Last week, we had an issue with one of our Kafka brokers. Don’t worry, it didn’t impact any customers. When you monitor things closely, you can often fix problems before they ever reach a customer ;-). In today’s post, I’ll show you how we dogfood AppSignal on our own issues. I’ll go through how we monitor the non-Ruby part of our stack and how we used AppSignal to detect and resolve the issue.

How We Use Quarkus With Kafka in Our Multi-Tenant SaaS Architecture

At LogicMonitor, we deal primarily with large quantities of time series data. Our backend infrastructure processes billions of metrics, events, and configurations daily. In previous blogs, we discussed our transition from monolith to microservices and explained why we chose Quarkus as the framework for our Java-based microservices. In this blog, we will cover how we use Quarkus with Kafka in our multi-tenant SaaS architecture.
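For a flavour of what the Quarkus-plus-Kafka combination looks like in code, here is a minimal sketch of a bean consuming a Kafka-backed channel through SmallRye Reactive Messaging. The channel name, payload type, and processing logic are illustrative assumptions, not LogicMonitor's actual implementation.

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class MetricsIngestor {

    // Invoked once per record arriving on the "raw-metrics" channel
    // (a hypothetical channel name used here for illustration).
    @Incoming("raw-metrics")
    public void ingest(String payload) {
        // Hand the raw record off to the time-series pipeline (application-specific).
        System.out.println("received: " + payload);
    }
}

The channel is bound to a Kafka topic in application.properties (for example, mp.messaging.incoming.raw-metrics.connector=smallrye-kafka), so the bean itself stays free of broker-specific code.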

How AppSignal Monitors Their Own Kafka Brokers

Today, we dip our toes into collecting custom metrics with a standalone agent. We’ll take our own Kafka brokers and use the StatsD protocol to get their metrics into AppSignal. This post is for those with some experience in using monitoring tools who want to take monitoring to every corner of their architecture, or who want to add their own metrics to their monitoring setup.
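To make the idea concrete, here is a minimal sketch of what a custom metric looks like on the wire: a single gauge pushed over UDP in the StatsD line format. The metric name, host, and port (8125 is the conventional StatsD default) are assumptions for illustration, not AppSignal-specific settings.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class StatsDGaugeSketch {
    public static void main(String[] args) throws Exception {
        // "<name>:<value>|g" is the StatsD line format for a gauge.
        String line = "kafka.broker.messages_in:42|g";
        byte[] payload = line.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("127.0.0.1"), 8125));
        }
    }
}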

What is Apache Kafka and will it transform your cloud?

Everyone hates waiting in a queue. On the other hand, when you’re moving gigabytes of data around a cloud environment, message queues are your best friend. Enter Apache Kafka. Apache Kafka enables organisations to create message queues for large volumes of data. That’s about it: it handles one simple but critical piece of cloud-native strategy, and it does it really well.
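If you have never touched Kafka, the publish-subscribe model is easiest to see in code. The sketch below publishes one message and then reads it back with the plain Java client; the broker address, topic, and group id are assumptions for illustration.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QueueDemo {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Publish one message to the "events" topic.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("events", "order-42", "created"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-consumers");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Subscribe to the same topic and read whatever has been published so far.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("events"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}

Producers and consumers never talk to each other directly; the broker sits in between and persists the messages, which is what makes the queue useful at gigabyte scale.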

From Monolith to Microservices

Today, monolithic applications often grow too large to manage because all of their functionality is placed in a single unit, and many enterprises are tasked with breaking them down into a microservices architecture. At LogicMonitor, we have a few legacy monolithic services. As the business grew rapidly, we had to scale these services up, since scaling out was not an option.

Datadog on Kafka

As a company, Datadog ingests trillions of data points per day. Kafka is the messaging persistence layer underlying many of our high-traffic services. Consequently, our Kafka usage is quite high: double-digit gigabytes per second of bandwidth and a need for petabytes of high-performance storage, even for relatively short retention windows. In this episode, we’ll speak with two engineers responsible for scaling the Kafka infrastructure within Datadog, Balthazar Rouberol and Jamie Alquiza. They'll share their strategy for scaling Kafka, how it’s been deployed on Kubernetes, and introduce kafka-kit, our open-source toolkit for scaling Kafka clusters. You'll leave with lessons learned while scaling persistent storage on modern orchestrated infrastructure, and actionable insights you can apply at your organization.

Kafka monitoring: Metrics that matter

Kafka is a distributed streaming platform that acts as a publish-subscribe messaging queue, receiving data from various source systems and making it available to other systems and applications in real time. Key advantages of using Kafka are that it provides durable storage, meaning the data stored within it cannot be easily tampered with, and that it is highly scalable, so it can handle large increases in users, workloads, and transactions when necessary.
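One metric that matters in practice is consumer-group lag: how far a group's committed offsets trail the latest offsets on each partition. Below is a minimal sketch that computes it with the Kafka AdminClient; the broker address and group id are assumptions for illustration, and a real monitoring setup would poll this continuously rather than print it once.

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLagSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Offsets the group has committed, per partition.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("example-group")
                         .partitionsToOffsetAndMetadata().get();

            // Latest (end) offsets for the same partitions.
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(committed.keySet().stream()
                                 .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest())))
                         .all().get();

            // Lag = latest offset - committed offset, reported per partition.
            committed.forEach((tp, offset) ->
                    System.out.printf("%s lag=%d%n",
                            tp, latest.get(tp).offset() - offset.offset()));
        }
    }
}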

Monitor Confluent Platform with Datadog

Confluent Platform is an event streaming platform built on Apache Kafka. If you’re using Kafka as a data pipeline between microservices, Confluent Platform makes it easy to copy data into and out of Kafka, validate the data, and replicate entire Kafka topics. We’ve partnered with Confluent to create a new Confluent Platform integration.