  |  By Dhruv Ahuja
As application systems grow more complex, it becomes ever more important to understand how services interact across distributed systems. Observability sheds light on the behavior of instrumented applications and the infrastructure they run on, enabling engineering teams to track system health and prevent critical failures. OpenTelemetry (OTel) has standardized how we generate and transmit telemetry, and the OpenTelemetry Collector is the engine that processes and exports this data.
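To make that concrete, here is a minimal Go sketch of the application side of that pipeline: spans are exported over OTLP/gRPC to a Collector assumed to be listening on localhost:4317 (the default OTLP gRPC port); the service and span names are placeholders.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC to a Collector assumed to be
	// running on localhost:4317 (the default OTLP gRPC port).
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("failed to create OTLP exporter: %v", err)
	}

	// Batch spans before export and register the provider globally.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Any tracer obtained from otel.Tracer(...) now ships spans
	// through the Collector's pipeline.
	_, span := otel.Tracer("example").Start(ctx, "hello")
	span.End()
}
```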
  |  By Dhruv Ahuja
If you have worked with observability tools in the last decade, you have likely managed, and been burnt by, a fragmented collection of tools and libraries. Each observability signal required its own tool, data formats were incompatible, and there was little or no correlation between signals. For example, log records would not link to traces, so you had to guess which traces led to which events. The OpenTelemetry Protocol (OTLP) solves this by decoupling how telemetry is generated from where it is analyzed.
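That correlation comes down to shared identifiers: once each log record carries the trace and span IDs of the request that produced it, the backend can join logs to traces. A minimal Go sketch, where logWithTrace is a hypothetical helper (with a real tracer provider registered, the IDs are non-zero):

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/trace"
)

// logWithTrace is a hypothetical helper: it stamps a log line with the
// trace and span IDs from the active span, so the record can later be
// joined to its trace in the backend.
func logWithTrace(ctx context.Context, msg string) {
	sc := trace.SpanContextFromContext(ctx)
	log.Printf("trace_id=%s span_id=%s msg=%q",
		sc.TraceID(), sc.SpanID(), msg)
}

func main() {
	ctx, span := otel.Tracer("checkout").Start(context.Background(), "charge-card")
	defer span.End()

	logWithTrace(ctx, "payment authorized")
}
```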
  |  By Elizabeth Mathew
When I was building applications, I always relied on my web browser's DevTools console to examine frontend logs. But UI log messages are accessible only within the browser; they are not forwarded to a file somewhere, as is the common pattern with backend services. Losing visibility into this resource when triaging user issues was a real dilemma.
  |  By Dhruv Ahuja
If you search for “OpenTelemetry Agent”, you will likely encounter two completely different definitions. This ambiguity often leads to confusion between infrastructure teams and application developers. SREs and DevOps engineers would describe it as a component deployed as a sidecar, whereas application developers would understand it as a language-specific library. Let’s break it down in the next section.
  |  By Elizabeth Mathew
Everyone knows that debugging is twice as hard as writing a program in the first place. So, if you’re as clever as you can be when you write it, how will you ever debug it? — Brian W. Kernighan and P. J. Plauger, The Elements of Programming Style, 2nd ed. Maybe you can let SigNoz do some of the heavy lifting for you!
  |  By Elizabeth Mathew
Picture this: your observability tool already nails the basics like request rates, latency, and memory usage, but you need more insight. Think user churn rates, engagement spikes, or even how many carts get abandoned mid-checkout. That’s where OpenTelemetry steps in, providing a way to track those critical custom metrics with ease.
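As a sketch of what such a custom metric might look like in Go: a cart-abandonment counter. The carts.abandoned name and the step attribute are made up for illustration, and a meter provider is assumed to be configured already (otherwise the calls are no-ops).

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func main() {
	ctx := context.Background()

	// Assumes a meter provider has already been registered
	// (e.g. one exporting over OTLP).
	meter := otel.Meter("checkout")

	// A custom business metric: carts abandoned mid-checkout.
	abandoned, err := meter.Int64Counter("carts.abandoned",
		metric.WithDescription("Carts abandoned before payment"))
	if err != nil {
		log.Fatal(err)
	}

	// Record one abandonment, tagged with the checkout step.
	abandoned.Add(ctx, 1, metric.WithAttributes(
		attribute.String("step", "payment"),
	))
}
```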
  |  By Anushka Karmakar
AI agents are fundamentally different beasts to monitor than traditional applications. A single user request can trigger a cascade of 10+ internal operations (sub-agent transfers, tool executions, LLM calls, API requests), each with unpredictable latency and failure modes. When something goes wrong (and with LLMs, things go wrong in creative ways), you need to see the entire execution flow to debug effectively.
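One trace with a child span per internal operation is what makes that execution flow visible. A hedged Go sketch (the span names, such as llm.call, are illustrative, not a fixed convention):

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/trace"
)

// handleRequest sketches how one user request fans out into child
// spans, one per internal operation, so the whole cascade shows up
// as a single trace. All names here are illustrative.
func handleRequest(ctx context.Context, tracer trace.Tracer) {
	ctx, root := tracer.Start(ctx, "agent.request")
	defer root.End()

	// Each sub-operation becomes a child span under the request.
	for _, op := range []string{"llm.call", "tool.search", "agent.transfer"} {
		_, span := tracer.Start(ctx, op)
		// ... do the work, recording failures with span.RecordError(err) ...
		span.End()
	}
}

func main() {
	handleRequest(context.Background(), otel.Tracer("agent"))
}
```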
  |  By Elizabeth Mathew
Customer log data is always messy. Being (and building!) an observability platform, we get to see all the beautiful, creative ways it can be messy, every single day. And yet our customers expect, quite fairly, I might add, perfect query results and peak performance. SigNoz is an open-source observability platform that can be your one-stop solution for logs, metrics, and traces.
  |  By Elizabeth Mathew
So, you've embraced OpenTelemetry, and it's been great. Pat, Pat. That single, vendor-neutral pipeline for your traces, metrics, and logs felt like the future. But now, the future is getting bigger. That simple OTel Collector configuration that worked perfectly for a few services is starting to show its limits as you scale. The data volume is climbing, reliability is becoming a concern, and you're wondering if that single collector instance is now a bottleneck waiting to happen.
  |  By Aayush Sharma
Golang (Go) applications are known for their high performance, concurrency model, and efficient resource use, making Go an easy choice for building modern distributed systems. But just because your Go application is built for speed doesn't mean it's running perfectly in production. When things go wrong, just checking if your service is "UP" isn't enough.
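Going beyond an "UP" check can start as small as wrapping your HTTP handlers so every request is traced. A minimal sketch using the otelhttp contrib package; the /checkout route and port are placeholders, and a tracer provider is assumed to be configured to export somewhere (e.g. SigNoz).

```go
package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/checkout", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Wrapping the mux records a span (and request metrics) per call,
	// so you see latency and error rates per endpoint, not just liveness.
	handler := otelhttp.NewHandler(mux, "server")

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```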
Learn how to monitor your n8n Cloud workflow executions using OpenTelemetry by capturing traces and sending them directly to SigNoz for real-time visibility into performance, errors, and execution flow.
Learn how to implement end-to-end monitoring and observability for Agno-based AI systems using OpenTelemetry and SigNoz. In this video, we walk through instrumenting your Agno workflows, collecting traces, metrics, and logs, and visualizing everything in SigNoz to gain real-time visibility into performance, failures, and bottlenecks. You'll see how to move from basic logging to production-grade observability—so you can debug faster, optimize latency, and confidently run AI systems at scale.
Learn how to implement monitoring and observability for OpenClaw systems using OpenTelemetry and SigNoz. In this video, we cover how to instrument OpenClaw, collect traces, metrics, and logs, and visualize everything in SigNoz for real-time insights into performance and reliability. You’ll see how to quickly identify bottlenecks, debug issues, and improve system stability in production.
Learn how to implement monitoring and observability for the Claude Agent SDK using OpenTelemetry and SigNoz. In this video, we walk through instrumenting your Claude-based agents, capturing traces, metrics, and logs, and visualizing everything in SigNoz for real-time insights. You’ll learn how to debug agent behavior, identify latency bottlenecks, and monitor performance in production environments.
SigNoz uses distributed tracing to gain visibility into your software stack. If you need any clarification or find something missing, feel free to raise a GitHub issue with the label documentation or reach out to us in the community Slack channel.
AI agents are powerful, but debugging them in production is hard. Non-deterministic behavior, LLM latency, and token costs create observability challenges that traditional monitoring tools don't address. In this webinar, engineers from Inkeep and SigNoz walk through how Inkeep monitors its AI agent framework in production using OpenTelemetry-native observability.
Using SigNoz MCP Server & Claude to find root cause of Alerts.
SigNoz Demo Video - Interactive Dashboards and Correlation.

Open-source observability platform. Understand issues in your deployed applications & solve them quickly.

Why SigNoz?

  • Your data in your boundary: No need to worry about GDPR and other data protection laws. All your tracing and monitoring data is now in YOUR infra.
  • Forget HUGE SaaS bills: No abrupt pricing changes. No unexpected month-end bills. Get transparent usage data.
  • Take Control: No need to spend weeks in vendor Slack for that one small feature. Extend SigNoz to suit your needs.

A single pane of glass for metrics and traces, with no need to shift between different systems.