Kafka

appsignal

How AppSignal Monitors Their Own Kafka Brokers

Today, we dip our toes into collecting custom metrics with a standalone agent. We’ll take our own Kafka brokers and use the StatsD protocol to get their metrics into AppSignal. This post is for those with some experience using monitoring tools who want to take monitoring to every corner of their architecture, or to add their own metrics to their monitoring setup.
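For readers who have not pushed metrics over StatsD before, the protocol boils down to sending small plain-text datagrams over UDP to a local agent. The snippet below is a minimal sketch, not AppSignal’s actual setup: the metric name, the value, the tag syntax, and the agent address (127.0.0.1:8125 is a common StatsD default) are all assumptions for illustration.

    import socket

    # Assumed agent address; 8125 is the conventional StatsD UDP port.
    STATSD_ADDR = ("127.0.0.1", 8125)

    def send_gauge(name, value, tags=None):
        # StatsD line format for a gauge: <metric>:<value>|g
        line = f"{name}:{value}|g"
        if tags:
            # Tag syntax differs between StatsD flavours; the "|#key:value"
            # suffix used here is an assumption, not AppSignal-specific.
            line += "|#" + ",".join(f"{k}:{v}" for k, v in tags.items())
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(line.encode("utf-8"), STATSD_ADDR)

    # Example: report a (made-up) broker metric for one Kafka broker.
    send_gauge("kafka.under_replicated_partitions", 0, {"broker": "kafka-1"})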

qlik

A Message To You Kafka - The Advantages of Real-time Data Streaming

In these uncertain times of the COVID-19 crisis, one thing is certain: data is key to decision making, now more than ever. And the need for speed in getting access to data as it changes has only accelerated. It’s no wonder, then, that organisations are looking to technologies that help solve the problem of streaming data continuously, so they can run their businesses in real time.

canonical

What is Apache Kafka and will it transform your cloud?

Everyone hates waiting in a queue. On the other hand, when you’re moving gigabytes of data around a cloud environment, message queues are your best friend. Enter Apache Kafka. Apache Kafka enables organisations to create message queues for large volumes of data. That’s about it: it handles one simple but critical element of cloud-native strategies really well.
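To make the message-queue framing concrete, here is a minimal sketch of one process publishing to a topic and another reading from it, using the kafka-python client; the broker address, topic name, and payload are assumptions for illustration.

    from kafka import KafkaProducer, KafkaConsumer

    BOOTSTRAP = "localhost:9092"   # assumed broker address
    TOPIC = "orders"               # assumed topic name

    # Producer side: append messages to the topic.
    producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
    producer.send(TOPIC, b'{"order_id": 42, "status": "created"}')
    producer.flush()

    # Consumer side: read messages from the beginning of the topic.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BOOTSTRAP,
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,   # stop iterating if no new messages arrive
    )
    for record in consumer:
        print(record.partition, record.offset, record.value)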

logicmonitor

From Monolith to Microservices

Today, monolithic applications often grow too large to deal with, because all of their functionality is placed in a single unit. Many enterprises are tasked with breaking them down into a microservices architecture. At LogicMonitor we have a few legacy monolithic services. As the business grew rapidly we had to scale up these services, since scaling out was not an option.

datadog

Datadog on Kafka

As a company, Datadog ingests trillions of data points per day. Kafka is the messaging persistence layer underlying many of our high-traffic services. Consequently, our Kafka usage is quite high: double-digit gigabytes per second of bandwidth and the need for petabytes of high-performance storage, even for relatively short retention windows. In this episode, we’ll speak with two engineers responsible for scaling the Kafka infrastructure within Datadog, Balthazar Rouberol and Jamie Alquiza. They’ll share their strategy for scaling Kafka, how it’s been deployed on Kubernetes, and introduce kafka-kit, our open-source toolkit for scaling Kafka clusters. You’ll leave with lessons learned while scaling persistent storage on modern orchestrated infrastructure, and actionable insights you can apply at your organization.

manageengine

Kafka monitoring: Metrics that matter

Kafka is a distributed streaming platform that acts as a publish-subscribe messaging queue, receiving data from various source systems and making it available to other systems and applications in real time. Key advantages of using Kafka are that it provides durable storage, meaning data stored within it is persisted and cannot easily be lost or tampered with, and that it is highly scalable, so it can handle a large increase in users, workloads, and transactions when necessary.
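Consumer lag is usually first on the list of metrics that matter: it measures how far behind a consumer group is on each partition. The sketch below is a rough illustration using the kafka-python admin and consumer clients; the broker address and consumer group name are assumptions.

    from kafka import KafkaAdminClient, KafkaConsumer

    BOOTSTRAP = "localhost:9092"   # assumed broker address
    GROUP = "billing-consumers"    # hypothetical consumer group

    admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
    # Offsets the group has committed, keyed by TopicPartition.
    committed = admin.list_consumer_group_offsets(GROUP)

    # A group-less consumer is used only to look up the latest offsets.
    probe = KafkaConsumer(bootstrap_servers=BOOTSTRAP)
    end_offsets = probe.end_offsets(list(committed.keys()))

    for tp, meta in committed.items():
        lag = end_offsets[tp] - meta.offset
        print(f"{tp.topic}[{tp.partition}] lag={lag}")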

instana

Optimizing Cross AZ Data Transfer Cost with Apache Kafka

When using a cloud provider like Amazon Web Services (AWS), Google Cloud (GCP) or Microsoft Azure, one is subject to very complicated on-demand pricing models. An even more complicated aspect of these pricing models is traffic, also referred to as data transfer. While it is obvious that traffic between Europe and Australia is billed, it is far less intuitive that traffic from eu-west-1a to eu-west-1b is billed as well.
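One common Kafka-side lever for this problem is KIP-392 (“fetch from follower”): brokers are labelled with broker.rack and configured with replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector, and each consumer advertises its zone via client.rack so reads can be served from a replica in the same availability zone. The consumer-side sketch below uses the confluent-kafka Python client; the broker address, group, topic, and zone name are assumptions for illustration.

    from confluent_kafka import Consumer

    # Assumed values; the brokers must also set broker.rack and
    # replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector.
    consumer = Consumer({
        "bootstrap.servers": "kafka.internal:9092",
        "group.id": "invoice-readers",
        "client.rack": "eu-west-1a",   # the AZ this consumer runs in
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["invoices"])

    msg = consumer.poll(5.0)
    if msg is not None and msg.error() is None:
        print(msg.topic(), msg.partition(), msg.value())
    consumer.close()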

datadog

Monitor Confluent Platform with Datadog

Confluent Platform is an event streaming platform built on Apache Kafka. If you’re using Kafka as a data pipeline between microservices, Confluent Platform makes it easy to copy data into and out of Kafka, validate the data, and replicate entire Kafka topics. We’ve partnered with Confluent to create a new Confluent Platform integration.
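The “copy data into and out of Kafka” part of Confluent Platform is Kafka Connect. Purely as a hedged illustration of what that looks like in practice (the connector name, file path, topic, and Connect URL below are assumptions, not part of the Datadog integration), a file-source connector can be registered with a single REST call:

    import json
    import urllib.request

    CONNECT_URL = "http://localhost:8083/connectors"   # default Kafka Connect REST port

    # Hypothetical file-source connector that tails a log file into a topic.
    connector = {
        "name": "demo-file-source",
        "config": {
            "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
            "tasks.max": "1",
            "file": "/var/log/app/events.log",
            "topic": "app-events",
        },
    }

    req = urllib.request.Request(
        CONNECT_URL,
        data=json.dumps(connector).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())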

jelastic

How to Build Custom Software Stack Container Image (Kafka) and Add Template to Jelastic Private PaaS

Jelastic Platform-as-a-Service provides certified support of various stacks (application servers, databases, load balancers, cache servers and others), and this list can be extended with custom Docker-based templates. In this video, you’ll see the steps to build a software stack as a container image (using Apache Kafka as a sample) and make it available as a custom template within the dedicated Jelastic platform installed on-premises or on top of your preferred cloud infrastructure (DigitalOcean in our case).
jelastic

How to Build Custom Software Stack Container Image and Add Template to Jelastic Private PaaS

Jelastic Platform-as-a-Service provides certified support of various stacks (application servers, databases, load balancers, cache servers and others), and this list can be extended with custom Docker-based templates. In this article, we’ll cover the steps to build a software stack as a container image (using Apache Kafka as a sample) and make it available as a custom template within the dedicated platform installed on-premises or on top of your preferred cloud infrastructure.
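As a rough sketch of the “build a software stack as a container image” step (the base image, Kafka version, download URL, and tag below are assumptions, not Jelastic’s certified stack), a Dockerfile can be written out and built with the Docker SDK for Python; registering the resulting image as a Jelastic template is a separate step inside the platform.

    import pathlib
    import docker  # Docker SDK for Python (pip install docker)

    # Hypothetical single-broker Kafka image; version, base image, and URL are
    # illustrative only (older releases move to archive.apache.org).
    DOCKERFILE = """\
    FROM eclipse-temurin:17-jre
    ARG KAFKA_VERSION=3.7.0
    RUN apt-get update && apt-get install -y curl && \\
        curl -fsSL https://downloads.apache.org/kafka/${KAFKA_VERSION}/kafka_2.13-${KAFKA_VERSION}.tgz \\
        | tar -xz -C /opt && ln -s /opt/kafka_2.13-${KAFKA_VERSION} /opt/kafka
    EXPOSE 9092
    CMD ["/opt/kafka/bin/kafka-server-start.sh", "/opt/kafka/config/server.properties"]
    """

    build_dir = pathlib.Path("kafka-image")
    build_dir.mkdir(exist_ok=True)
    (build_dir / "Dockerfile").write_text(DOCKERFILE)

    client = docker.from_env()
    image, _ = client.images.build(path=str(build_dir), tag="my-org/kafka-stack:demo")
    print(image.tags)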