
From raw data to flame graphs: A deep dive into how the OpenTelemetry eBPF profiler symbolizes Go

Imagine you're troubleshooting a production issue: your application is slow, the CPU is spiking, and users are complaining. You turn to your profiler for answers—after all, this is exactly what it's built for. The profiler runs, collecting thousands of stack samples. eBPF profilers, including the OpenTelemetry eBPF profiler, operate at the kernel level, so they capture raw program counters: memory addresses pointing into your binary.
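
To make "raw program counters" concrete, here is a minimal sketch of offline Go symbolization: mapping a sampled address back to a function and source line via the binary's .gopclntab section. This illustrates the general technique rather than the profiler's actual code; the binary path and the sampled address below are hypothetical.

```go
package main

import (
	"debug/elf"
	"debug/gosym"
	"fmt"
	"log"
)

func main() {
	// Hypothetical path to the profiled Go binary.
	f, err := elf.Open("/path/to/your-go-binary")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	pclnSec := f.Section(".gopclntab")
	textSec := f.Section(".text")
	if pclnSec == nil || textSec == nil {
		log.Fatal("binary is missing .gopclntab or .text")
	}
	pclntab, err := pclnSec.Data()
	if err != nil {
		log.Fatal(err)
	}

	// .gosymtab is empty in modern Go binaries; gosym tolerates nil.
	table, err := gosym.NewTable(nil, gosym.NewLineTable(pclntab, textSec.Addr))
	if err != nil {
		log.Fatal(err)
	}

	pc := uint64(0x4a1b20) // hypothetical sampled program counter
	file, line, fn := table.PCToLine(pc)
	if fn != nil {
		fmt.Printf("%#x -> %s (%s:%d)\n", pc, fn.Name, file, line)
	}
}
```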

Explore Kubernetes with native OpenTelemetry data

Kubernetes environments generate a constant stream of signals across clusters, nodes, pods, and workloads. For teams that have standardized on OpenTelemetry (OTel), maintaining ownership of that data is critical. But in practice, many observability platforms require translation into vendor-specific data formats, leading to fragmented product experiences, blank dashboards, and uncertainty about data integrity.

Annotate traces to improve LLM quality with Datadog LLM Observability

LLM applications rarely crash. They degrade quietly. Once these applications are shipped to production, subtle quality failures become harder to catch with traditional signals. Tone shifts, hallucinated details, off-topic responses, and incomplete reasoning can emerge while latency and token usage look stable.

Building a dry-run mode for the OpenTelemetry Collector

Teams continuously deploy programmable telemetry pipelines to production without access to a dry-run mode. At the same time, most organizations lack staging environments that resemble production, especially with regard to observability and other platform-level services.

OpAMP for OpenTelemetry: Managing Collector Fleets and Introducing the New OpAMP Gateway Extension

Today, Bindplane is launching the OpAMP Gateway Extension in alpha — a new component that extends OpAMP fleet management into network-segmented and firewalled environments where direct agent-to-server connectivity is not possible. It also addresses fleet scaling by fanning many agent connections into a small upstream pool, reducing connection load on the OpAMP server. We also hope to donate the OpAMP Gateway Extension upstream to the OpenTelemetry project and welcome community contributions.
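
The fan-in idea is generic enough to sketch without the extension itself: many agent connections on one side, a small, bounded upstream pool on the other. The sketch below uses only the Go standard library and is not the OpAMP Gateway Extension's API; the upstream URL and listener port are placeholders, and it models OpAMP's plain-HTTP transport, where standard connection pooling applies.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Placeholder upstream OpAMP server address.
	upstream, err := url.Parse("https://opamp.example.internal")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(upstream)
	// Cap the upstream pool: many agents connect to this gateway, but
	// only a small, reused set of connections reaches the server.
	proxy.Transport = &http.Transport{
		MaxConnsPerHost:     16,
		MaxIdleConnsPerHost: 16,
		IdleConnTimeout:     90 * time.Second,
	}

	// Agents inside the segmented network point at this listener
	// instead of the (unreachable) OpAMP server.
	log.Fatal(http.ListenAndServe(":4320", proxy))
}
```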

Native OpenTelemetry inside Alloy: Now you can get the best of both worlds

We're big proponents of OpenTelemetry, which has quickly become the unified standard for delivering metrics, logs, traces, and even profiles. It's an essential component of Alloy, our popular telemetry agent, but we're also aware that some users would prefer a more "vanilla" OpenTelemetry experience.

Routing OpenTelemetry logs to Sentry using OTLP

If you've already instrumented your app with OpenTelemetry, you don't have to rip it out to use Sentry. Set two environment variables and your logs start flowing into Sentry: no SDK changes, no re-instrumentation. Here's how to set it up in a sample app, and when the native Sentry SDK might be the better call.
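
As a flavor of the approach, here is a minimal Go sketch. The signal-specific pair OTEL_EXPORTER_OTLP_LOGS_ENDPOINT and OTEL_EXPORTER_OTLP_LOGS_HEADERS are standard OTel spec variables, but which two variables the post means, and the Sentry endpoint and auth header values shown, are assumptions; consult Sentry's OTLP documentation for your project's real values.

```go
package main

import (
	"context"
	"log"
	"os"

	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
	sdklog "go.opentelemetry.io/otel/sdk/log"
)

func main() {
	// Normally set in your deployment, not in code. Both values are
	// hypothetical placeholders; header values may need URL-encoding
	// per the OTel spec.
	os.Setenv("OTEL_EXPORTER_OTLP_LOGS_ENDPOINT", "https://<your-org>.ingest.sentry.io/api/<project-id>/otlp/v1/logs")
	os.Setenv("OTEL_EXPORTER_OTLP_LOGS_HEADERS", "x-sentry-auth=sentry sentry_key=<your-key>")

	ctx := context.Background()
	exporter, err := otlploghttp.New(ctx) // honors the env vars above
	if err != nil {
		log.Fatal(err)
	}
	provider := sdklog.NewLoggerProvider(
		sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
	)
	defer provider.Shutdown(ctx)

	// Wire the provider into your logging bridge (e.g., an slog bridge)
	// and existing OTel-instrumented logs flow to Sentry unchanged.
}
```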

Generating metrics from traces with cardinality control: A closer look at HyperLogLog in Tempo

While tracing is a critical component of any observability strategy, metrics — especially RED metrics (request rate, error rate, and duration) — are widely considered the gold standard for monitoring service health. Tempo, the open source, easy-to-use, and highly scalable distributed tracing backend, is well known in the OSS community for storing and querying traces. It can also, however, generate RED metrics directly from those traces using the optional metrics-generator component.
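
HyperLogLog's appeal is that it estimates distinct values in fixed memory, which is what makes cardinality control cheap. Tempo's internals aside, the core idea fits in a few lines; this sketch uses the github.com/axiomhq/hyperloglog library as a convenience, not what Tempo actually ships.

```go
package main

import (
	"fmt"

	"github.com/axiomhq/hyperloglog"
)

func main() {
	// 2^16 registers: a few KB of memory, roughly 0.4% typical error.
	sketch := hyperloglog.New16()

	// Simulate a high-cardinality label, e.g. span names.
	for i := 0; i < 100000; i++ {
		sketch.Insert([]byte(fmt.Sprintf("span-name-%d", i%25000)))
	}

	// The estimate lets a metrics generator decide whether a label
	// would blow past its cardinality budget before emitting series.
	fmt.Printf("estimated distinct values: %d\n", sketch.Estimate())
}
```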

OpenTelemetry traces for Bitbucket Pipelines via webhooks

Continuous delivery is only as good as your ability to understand what’s happening inside your pipelines. When a build is slow, flaky, or burning through capacity, you need more than a green/red status and a wall of logs — you need traces. Bitbucket Pipelines now exposes pipeline execution as OpenTelemetry (OTel) traces via webhook events. This lets you stream detailed pipeline spans into your own observability stack and correlate them with the rest of your system. This post walks through the setup.
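
As a sketch of the receiving side, a webhook handler can turn a pipeline-completed event into a span with explicit timestamps. The payload fields below are hypothetical (Bitbucket's actual event schema differs), and trace exporter setup is omitted.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// Hypothetical shape; Bitbucket's real webhook payload differs.
type pipelineEvent struct {
	Pipeline  string    `json:"pipeline"`
	State     string    `json:"state"`
	StartedAt time.Time `json:"started_at"`
	EndedAt   time.Time `json:"ended_at"`
}

func handleWebhook(w http.ResponseWriter, r *http.Request) {
	var ev pipelineEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Recreate the pipeline run as a span with its real start/end times.
	tracer := otel.Tracer("bitbucket-webhook-bridge")
	_, span := tracer.Start(r.Context(), ev.Pipeline,
		trace.WithTimestamp(ev.StartedAt),
		trace.WithAttributes(attribute.String("pipeline.state", ev.State)),
	)
	span.End(trace.WithTimestamp(ev.EndedAt))

	w.WriteHeader(http.StatusAccepted)
}

func main() {
	// A real bridge would also configure a trace exporter here.
	http.HandleFunc("/webhook", handleWebhook)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
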
Sponsored Post

SAP Application Performance Monitoring (APM): Beyond Generic Metrics

Your enterprise APM tool shows SAP is using 90% CPU. The dashboard turns red. An alert fires. Now what?

You open Dynatrace. You see the Java Virtual Machine metrics for your NetWeaver stack. You see HTTP response times for your Fiori apps. You see a spike in database calls. None of this tells you why VA01 takes 45 seconds to create a sales order. None of this tells you which custom ABAP report is consuming memory. None of this explains the short dump that crashed your pricing routine.

This is the gap between generic APM and true SAP application performance monitoring. Your enterprise tools see the symptoms.