Operations | Monitoring | ITSM | DevOps | Cloud

A Developer's Framework for Selecting the Right Tracing Vendor

Distributed tracing tracks requests as they flow through microservices, revealing bottlenecks, failures, and performance patterns. Without proper tracing, debugging production issues becomes guesswork—especially in complex architectures with dozens of services. Modern applications generate millions of traces daily. The right vendor helps you extract actionable insights without drowning in data or breaking your budget.

Monitor OpenTelemetry-native metrics with Datadog

OpenTelemetry (OTel) is emerging as the industry standard for collecting and transmitting observability data. Datadog supports several ways to send and accept OTel-native data, while also continuing to support its own native telemetry format. To provide a consistent monitoring experience, Datadog now supports using OTel-native metrics alongside Datadog-native metrics across dashboards, queries, and core visualizations in the Datadog platform.

Your Collector, Your Rules: Introducing BYOC and the OpenTelemetry Distribution Builder

OpenTelemetry’s superpower has always been choice. Yet most observability vendors still insist you run their collector. Today we’re removing that last point of friction. With Bring Your Own Collector (BYOC), Bindplane now accepts any upstream-compatible build, recognizes exactly which receivers, processors, and exporters it contains, and adapts the UI and configuration workflow on the fly.

How to Set Up Tracing for Elixir Apps Using AppSignal

Over time, web applications have evolved from simple request/response-based systems into complex, distributed ones with lots of moving parts. If something goes wrong (and you can be sure it will), finding the cause can be nearly impossible. But this need not be the case: enter tracing. Tracing refers to the process of collecting detailed information about the execution of requests within an application, including function calls, execution time, and other relevant data.
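The idea behind tracing can be sketched with a toy tracer in plain Python. Every name here is hypothetical and is not AppSignal's or OpenTelemetry's API; it only illustrates the core concept of a span: a named operation with a duration and a link to its parent.

```python
import time
import uuid
from contextlib import contextmanager

# Hypothetical minimal tracer: each traced operation becomes a "span"
# recording its name, duration, and a parent/child link.
spans = []

@contextmanager
def span(name, parent_id=None):
    span_id = uuid.uuid4().hex[:16]
    start = time.perf_counter()
    try:
        yield span_id
    finally:
        spans.append({
            "name": name,
            "span_id": span_id,
            "parent_id": parent_id,
            "duration_ms": (time.perf_counter() - start) * 1000,
        })

# Trace a request that fans out into a nested database call.
with span("GET /orders") as request_id:
    with span("db.query", parent_id=request_id):
        time.sleep(0.01)  # stand-in for real work

for s in spans:
    print(s["name"], round(s["duration_ms"], 1), "ms")
```

A real tracing library adds context propagation, sampling, and an exporter on top of this, but the span tree it builds is the same shape.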

Jaeger vs Zipkin: Which is Right for Your Distributed Tracing

When requests slow down across your microservices, tracing helps you understand where time is spent. Jaeger and Zipkin are two popular tools for distributed tracing, built to answer a simple question: where did the request go? If you're choosing between them or just exploring options, this guide breaks down the differences and when each one might be a better fit.

Traceparent: How OpenTelemetry Connects Your Microservices

In a microservices setup, tracking a single request across services quickly gets complex. One service calls another, then a third, and your logs don’t line up. The traceparent header carries context between services, so all parts of a request connect back to the start. For example, when a frontend sends a request to an API, which then calls a database service, traceparent links those calls into a single trace. Without it, you’re left guessing how requests flow.
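The header's shape comes from the W3C Trace Context spec: four dash-separated hex fields, `version-traceid-parentid-flags`. A minimal sketch of minting and parsing it (the helper names are assumptions, not a library API):

```python
import re
import secrets

def make_traceparent(trace_id=None, parent_id=None, sampled=True):
    """Build a W3C traceparent header: version-traceid-parentid-flags."""
    trace_id = trace_id or secrets.token_hex(16)   # 32 hex chars
    parent_id = parent_id or secrets.token_hex(8)  # 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{parent_id}-{flags}"

def parse_traceparent(header):
    """Split a traceparent header into its fields, or None if malformed."""
    m = re.fullmatch(
        r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        return None
    version, trace_id, parent_id, flags = m.groups()
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "sampled": flags == "01"}

# The frontend mints a header; the API service parses it, keeps the
# trace_id, and forwards a fresh parent_id to the database service.
header = make_traceparent()
ctx = parse_traceparent(header)
child = make_traceparent(trace_id=ctx["trace_id"])
```

The key property is that `trace_id` survives every hop while `parent_id` changes per call, which is exactly what lets a backend stitch the calls into one tree.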

Shedding Light on Kafka's Black Box Problem (with OpenTelemetry)

"All language is but a poor translation." — Franz Kafka. This quote reminds me of the time when I used to stare at metrics from Apache Kafka topics, trying to figure out what was causing huge consumer lags, and manually deleting messages in certain partitions to get rid of polluted ones. Yep, pretty lost in translation. I wasn’t aware of the power of observability for a Kafka producer-topic-consumer system.

Easy Way to Convert Wavefront Metrics Using OpenTelemetry

Once upon a time in the world of metrics, Wavefront was a pioneer. Before Prometheus took over and tools like OpenTelemetry unified tracing and metrics, Wavefront brought something novel to the table: human-readable metrics with real-time querying and tag-based dimensionality. In enterprise environments running VMware or early microservices, it offered a scalable way to understand a system's behavior. But as the telemetry landscape evolved, many systems that spoke Wavefront were left behind.
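As a rough illustration of what such a conversion involves, here is a sketch that parses one line of Wavefront's wire format (`<metric> <value> [<timestamp>] source=<host> [pointTags]`) into an OTel-style data point. The function name and output shape are assumptions for this sketch; a real pipeline would use an OpenTelemetry Collector component rather than hand-rolled parsing.

```python
import shlex

def wavefront_to_otel(line):
    """Parse one Wavefront metric line into an OTel-style data point dict.

    Illustrative only: real converters handle deltas, histograms, quoting
    edge cases, and malformed input.
    """
    tokens = shlex.split(line)  # honors Wavefront's quoted names/values
    name, value = tokens[0], float(tokens[1])
    rest = tokens[2:]
    timestamp = None
    if rest and "=" not in rest[0]:      # optional epoch-seconds timestamp
        timestamp = int(rest.pop(0))
    # source= plus point tags become OTel attributes
    attrs = dict(t.split("=", 1) for t in rest)
    return {"name": name, "value": value,
            "time_unix": timestamp, "attributes": attrs}

point = wavefront_to_otel('request.latency 42.5 1700000000 source=web01 env="prod"')
```

The interesting part is the mapping, not the parsing: Wavefront's `source` and point tags collapse naturally into OTel attributes, which is why tag-based systems translate fairly cleanly.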

Using the OpenTelemetry Operator to boost your observability

If you’ve ever wrangled sidecars or sprinkled instrumentation code just to get basic trace data, you know the setup overhead isn’t always worth the payoff. But what if it was… just easier? That’s where the OpenTelemetry Operator for Kubernetes steps in… and it plays great with Coralogix out of the box!