Debugging AI Agents in Production Without Losing Your Mind

🔗 SigNoz GitHub (⭐ 25K+): https://github.com/SigNoz/signoz
📚 SigNoz LLM Observability Docs: https://signoz.io/docs/llm-observability/
📚 Inkeep + SigNoz Integration Guide: https://signoz.io/docs/inkeep-monitoring/
🤖 Inkeep GitHub: https://github.com/inkeep/agents
📖 Inkeep Docs: https://docs.inkeep.com

AI agents are powerful, but debugging them in production is hard. Non-deterministic behavior, LLM latency, and token costs create observability challenges that traditional monitoring tools don't address.

In this webinar, engineers from Inkeep and SigNoz walk through how Inkeep monitors its AI agent framework in production using OpenTelemetry-native observability.

TIMESTAMPS:

0:00 - Intro: Why AI agent debugging is hard

2:00 - Shagun (Inkeep): AI agent architecture overview

9:48 - Live demo: Debugging tab showing agent execution in real-time

10:56 - Viewing errors in SigNoz traces

11:15 - Using evaluators to monitor agent quality

13:00 - Goutham (SigNoz): What is SigNoz?

15:30 - Live demo: Monitoring AI agents with traces, metrics, and dashboards

27:26 - Q&A: Observability in CI/CD pipelines and pre-production

WHAT YOU'LL LEARN:

  • How Inkeep's AI agent framework is architected (push/pull workflow, sub-agents, tools)
  • Why observability is critical for agentic systems
  • How to trace agent execution, tool calls, latency, and token usage
  • Best practices for monitoring and optimizing AI agents in production
  • Real-world debugging workflow from error detection to resolution

SPEAKERS:

Shagun, Inkeep
Goutham, SigNoz

ABOUT INKEEP:
Inkeep helps teams build and manage AI agents using a no-code visual builder or developer SDK, with full two-way sync between technical and non-technical users.

ABOUT SIGNOZ:
SigNoz is an open-source, OpenTelemetry-native observability platform for logs, metrics, and traces. It's a modern alternative to Datadog and New Relic, built for developers who want full control over their observability stack.

#AIAgents #Observability #OpenTelemetry #SigNoz #Inkeep #LLMObservability #Debugging