
OpenMetrics vs OpenTelemetry - A guide on understanding these two specifications

OpenMetrics and OpenTelemetry are popular standards for instrumenting cloud-native applications. Both projects are part of the Cloud Native Computing Foundation (CNCF) and aim to simplify how we generate, collect, and monitor telemetry from services in modern cloud-native, distributed application environments. Let's have a look at how both standards aim to help solve the observability conundrum.
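
To make the contrast concrete: OpenMetrics is first and foremost a wire format for exposing metrics, while OpenTelemetry is a full instrumentation framework. A counter in the OpenMetrics text exposition format looks like this (the `http_requests` metric is a hypothetical example):

```
# TYPE http_requests counter
# HELP http_requests Total HTTP requests served.
http_requests_total{method="GET",code="200"} 1027
http_requests_total{method="GET",code="500"} 3
# EOF
```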

LLM Observability in the Wild - Why OpenTelemetry should be the Standard

A few days ago I hosted a live conversation with Pranav, co-founder of Chatwoot, about issues his team was running into with LLM observability. The short version: building, debugging, and improving AI agents in production gets messy fast. There are multiple competing standards and default libraries for LLM observability, and many such libraries, like OpenInference, that claim to be based on OpenTelemetry don't strictly adhere to its conventions.

An overview of Context Propagation in OpenTelemetry

To effectively manage modern applications, you need to understand how they work on the inside. Distributed tracing is the key to this, providing a detailed picture of a request's journey across every service. OpenTelemetry has emerged as the industry-standard framework for implementing tracing and achieving true observability in complex, distributed systems. In this article, we explore the core concept of context propagation within OpenTelemetry.
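
As a preview of the core idea, here is a minimal sketch of manual context propagation with the OpenTelemetry Python SDK; the span names and the `headers` carrier dict are illustrative, and in practice HTTP client and server instrumentations do this injection and extraction for you:

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

# Service A: start a span and inject its context into outgoing headers.
with tracer.start_as_current_span("checkout"):
    headers = {}
    inject(headers)  # adds the W3C `traceparent` header to the carrier

# Service B: extract the context from incoming headers and continue the trace.
ctx = extract(headers)
with tracer.start_as_current_span("charge", context=ctx):
    pass  # this span is recorded as a child of "checkout", same trace ID
```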

OpenTelemetry and Jaeger | Key Features & Differences [2025]

OpenTelemetry is a broader, vendor-neutral framework for generating and collecting telemetry data (logs, metrics, traces), offering flexible backend integration. Jaeger, on the other hand, is focused on distributed tracing in microservices. Earlier, Jaeger had its own SDKs based on OpenTracing APIs for instrumenting applications, but Jaeger now recommends using OpenTelemetry instrumentation and SDKs. Note: the original Jaeger client SDKs (based on OpenTracing) are archived and no longer maintained.
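
Because modern Jaeger ingests OTLP natively, an OpenTelemetry-instrumented application needs no Jaeger-specific exporter at all. A minimal Python sketch, assuming a Jaeger backend listening for OTLP gRPC on localhost:4317:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Ship spans straight to Jaeger's built-in OTLP receiver.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo")
with tracer.start_as_current_span("hello-jaeger"):
    pass  # the exported span shows up in the Jaeger UI
```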

New Relic's CCU-based pricing is creating unpredictable costs, pushing teams to sample heavily

We talked to 7 companies in August 2025 that were looking to switch from New Relic. One engineering director said they're paying $1,000 a month while ingesting only 10% of their traces. Teams are defaulting to aggressive sampling, some at 1%, others at 10%, to manage costs.

OpenTelemetry Exporters - Types and Configuration Steps

In this post, we will talk about OpenTelemetry exporters, the components that send the telemetry data collected by OpenTelemetry to a backend for storage and analysis. OpenTelemetry frees you from vendor lock-in by letting you export collected telemetry to any backend of your choice. In modern distributed systems, efficiently collecting, transmitting, and analyzing telemetry data from diverse sources poses a significant challenge.
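
The no-lock-in point is easiest to see in code: instrumentation stays the same and only the exporter changes. A minimal Python sketch, with the backend endpoint assumed to be a local OTLP/HTTP receiver:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# Print spans locally while developing...
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
# ...and ship the same spans to any OTLP-compatible backend of your choice.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)
```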

OpenTelemetry Logs - A Complete Introduction & Implementation

OpenTelemetry is a Cloud Native Computing Foundation (CNCF) incubating project aimed at standardizing the way we instrument applications for generating telemetry data (logs, metrics, and traces). It is a vendor-agnostic observability framework that provides a set of tools, APIs, and SDKs to instrument applications.
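
For a flavor of the implementation half, here is a minimal sketch that bridges Python's standard `logging` module into OpenTelemetry; the module paths are from the Python SDK while the logs signal was still marked experimental (hence the `_logs` prefixes) and may differ across versions:

```python
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, ConsoleLogExporter

# Set up a logger provider that prints OTel log records to stdout.
provider = LoggerProvider()
provider.add_log_record_processor(BatchLogRecordProcessor(ConsoleLogExporter()))
set_logger_provider(provider)

# Bridge stdlib logging into OpenTelemetry: existing log calls now emit
# OTel log records, correlated with any active trace context.
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
logging.warning("payment retry exhausted")
```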

LLM App Observability: OpenTelemetry as a Standard

LLM observability is broken. There are too many new libraries floating around, and they don't accurately follow the OpenTelemetry conventions. OTel isn't perfect for LLMs yet, but extending a proven standard beats inventing another one. Why not use the same standard (OTel) that works so well for the rest of our apps, and just work on top of it? This is what I was ranting about with Pranav Raj S, co-founder at Chatwoot, and we thought there must be other folks facing similar issues.
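
To show what "working on top of OTel" might look like, here is a hypothetical sketch of an LLM call wrapped in a span that uses the (still-evolving) gen_ai semantic convention attributes instead of library-specific names; `call_model` and its response shape are made up for illustration:

```python
from opentelemetry import trace

tracer = trace.get_tracer("llm-app")

def chat(prompt: str) -> str:
    with tracer.start_as_current_span("chat gpt-4o") as span:
        # Standard gen_ai.* attributes, not bespoke per-library keys.
        span.set_attribute("gen_ai.operation.name", "chat")
        span.set_attribute("gen_ai.request.model", "gpt-4o")
        response = call_model(prompt)  # hypothetical LLM client call
        span.set_attribute("gen_ai.usage.input_tokens", response.usage.input_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.usage.output_tokens)
        return response.text
```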

OpenTelemetry Operator Complete Guide [OTel Collector + Auto-Instrumentation Demo]

Manually deploying and managing OpenTelemetry components in a Kubernetes environment can be a complex and time-consuming task. It involves creating various Kubernetes resources, setting up configurations, and ensuring the components are properly integrated with the applications.
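
With the Operator installed, much of that boilerplate collapses into a single custom resource. A minimal sketch of an `OpenTelemetryCollector` CR (field names per recent operator versions; the `debug` exporter is just for illustration):

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: demo
spec:
  mode: deployment          # the operator can also run it as a daemonset or sidecar
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    exporters:
      debug: {}             # swap in your real backend exporter
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```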

Introducing Cost Meter - Proactive Observability Cost Control with Per-Hour Granularity

The irony isn't lost on us - observability platforms are built to be proactive about system health, yet when it comes to managing observability costs themselves, teams are forced to be reactive. Today, that changes with Cost Meter, now live in our platform. Cost Meter transforms observability spend management from a monthly billing surprise into a proactive, data-driven process with hourly aggregated metrics that give you complete visibility into your telemetry ingestion patterns.

Understanding OpenTelemetry Spans in Detail

Debugging errors in distributed systems can be a challenging task, as it involves tracing the flow of operations across numerous microservices. This complexity often leads to difficulties in pinpointing the root cause of performance issues or errors. OpenTelemetry provides instrumentation libraries in most programming languages for tracing.
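
As a quick taste, here is a minimal sketch of creating nested spans with the OpenTelemetry Python SDK; the span names and attributes are illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("orders")

# Parent span: one logical operation in the request's journey.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "A-1001")

    # Child span: nested work is recorded with its own timing.
    with tracer.start_as_current_span("charge-card") as child:
        child.add_event("retrying", {"attempt": 2})  # point-in-time annotation
```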

Breaking Free from SQLite - Why We Added PostgreSQL Support to SigNoz

"Let us support different relational databases apart from SQLite. Nobody likes to run SQLite in production." This was one of the most requested features from our community. Your requests have been heard, and we've added support for different relational databases, starting with PostgreSQL. If you're self-hosting SigNoz, you no longer need to worry about SQLite's limitations. Let's dive into what we've built and why it matters for your production deployments.

Query Builder v5 - Two Years of Technical Debt, 80 Closed Issues, and a Fundamental Rethinking

In 2022, we had three different query interfaces. Logs had a custom search syntax with no autocomplete. Traces only had predefined filters - no query builder at all. Metrics had a raw PromQL input box where you'd paste queries from somewhere else and hope they worked. Each system spoke a different language. An engineer debugging a production issue had to context-switch not just between data types, but between entirely different mental models of how to query data.

Interactive Dashboards - Click Any Panel to Start Debugging

Your dashboard shows a latency spike. To investigate it, you copy the query, open logs in a new tab, paste and modify the query, lose your dashboard filters, and repeat for traces. By the time you find the issue, you have 15 tabs open. Starting today, you can click any panel and investigate right there. All your filters and variables carry over. No more tab juggling.

Interactive Dashboards | SigNoz Launch Week 5.0 | Day 1

Interactive Dashboards eliminate the current workflow of opening new tabs and manually recreating queries every time you need to investigate a spike or anomaly. Click directly on any data point to drill down and explore. Built for developers who need to debug production issues efficiently, not juggle multiple tabs.

Monitoring Claude Code Usage with OpenTelemetry and SigNoz

In this video, we’ll walk you through how to monitor Claude Code activity using OpenTelemetry and SigNoz. You’ll learn how to instrument your usage, capture telemetry data, and visualize it with SigNoz to get better insights into your system performance. Whether you’re exploring observability for AI workloads or looking for an open-source solution to monitor your LLM activity, this guide will help you get started.
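
As a preview, Claude Code's telemetry is switched on through environment variables. A sketch of the kind of settings involved, per Anthropic's monitoring docs at the time of writing (names may change between versions; the endpoint assumes a local OpenTelemetry Collector):

```sh
# Enable Claude Code's built-in OpenTelemetry export.
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317  # assumed local Collector

claude  # run Claude Code as usual; usage telemetry now flows to the Collector
```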

Bringing Observability to Claude Code: OpenTelemetry in Action

AI coding assistants like Claude Code are becoming core parts of modern development workflows. But as with any powerful tool, the question quickly arises: how do we measure and monitor its usage? Without proper visibility, it’s hard to understand adoption, performance, and the real value Claude brings to engineering teams. For leaders and platform engineers, that lack of observability can mean flying blind when it comes to understanding ROI, productivity gains, or system reliability.

kubectl logs: How to View & Tail Kubernetes Pod Logs

When debugging containerized applications in Kubernetes, kubectl logs serves as your primary command-line tool for accessing container logs directly. Understanding how to effectively retrieve, filter, and analyze logs becomes essential for maintaining application health and resolving issues quickly, especially in multi-container environments where correlation across services can make or break your troubleshooting efforts.
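
The most common invocations, with pod and label names as placeholders:

```sh
# Tail a pod's logs, following new output as it arrives.
kubectl logs -f my-pod

# Target one container in a multi-container pod.
kubectl logs my-pod -c sidecar

# Read the previous container instance, useful after a crash/restart.
kubectl logs my-pod --previous

# Aggregate across pods by label, limited to recent output.
kubectl logs -l app=web --all-containers --since=1h --tail=100
```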