
Interactive Dashboards | SigNoz Launch Week 5.0 | Day 1

Interactive Dashboards eliminate the workflow of opening new tabs and manually recreating queries every time you need to investigate a spike or anomaly. Click directly on any data point to drill down and explore. Built for developers who need to debug production issues efficiently, not juggle multiple tabs.

Bringing Observability to Claude Code: OpenTelemetry in Action

AI coding assistants like Claude Code are becoming core parts of modern development workflows. But as with any powerful tool, the question quickly arises: how do we measure and monitor its usage? Without proper visibility, it’s hard to understand adoption, performance, and the real value Claude brings to engineering teams. For leaders and platform engineers, that lack of observability can mean flying blind when it comes to understanding ROI, productivity gains, or system reliability.

kubectl logs: How to View & Tail Kubernetes Pod Logs

When debugging containerized applications in Kubernetes, kubectl logs serves as your primary command-line tool for accessing container logs directly. Understanding how to effectively retrieve, filter, and analyze logs becomes essential for maintaining application health and resolving issues quickly, especially in multi-container environments where correlation across services can make or break your troubleshooting efforts.

Full-Circle Observability: Using SigNoz to monitor a LangChain agent that queries SigNoz MCP

In Part 1 of this series, we explored how to instrument a LangChain trip planner agent with OpenTelemetry and send telemetry data to SigNoz. By tracing each step of the planning process (LLM reasoning; tool calls for flights, hotels, weather, and activities; and the final itinerary response), we saw how observability turns a black-box agent workflow into a transparent, debuggable system.

LangChain Observability: How to Monitor LLM Apps with OpenTelemetry (With Demo App)

LangChain has become one of the most popular frameworks for building LLM-powered applications, making it easier to create agents that can reason, plan, and take actions. But like any production-grade AI app, LangChain agents can run into performance bottlenecks, hallucinations, or tool call failures. And without proper LangChain observability, it’s hard to know where things break down.

How our engineers use AI for coding (and where they refuse to)

Okay, picture this: if you drew a Venn diagram of folks in tech right now, you'd probably find yourself in one of its circles. I'm guilty of falling into the intersection! Because let's be real, the 'will AI replace developers by 20xx?' debate is everywhere: Reddit, Hacker News, team Slack, and even your local cafe. Well, we decided to go straight to the source.

Observing LlamaIndex Apps with OpenTelemetry + SigNoz

LlamaIndex has become a popular choice for building Retrieval-Augmented Generation (RAG) applications, helping developers seamlessly connect large language models with private or domain-specific data. But RAG workflows can be complex: slow retrieval times, irrelevant or inconsistent responses, and silent failures in the data pipeline can all degrade the user experience. That's why observability is essential.

How We Think About "Developer Marketing" at SigNoz

“Developers hate marketing.” Do they, really? I often hear this thrown around on podcasts about DevTools marketing, and while it’s true that developers don’t respond to the same old marketing tactics, they do respond to genuine communication. The reason developers are hard to “market” to is that they are also the builders of the stuff you want to sell.