
Sentry + Stripe Projects: From Zero to Error Monitoring in Two Commands

No signup form. No dashboard. No copy-pasting DSNs. Sentry is now a provider on Stripe Projects, which means you can provision a fully configured Sentry project — error monitoring, tracing, and session replay — straight from the CLI in two commands. In this demo, we walk through the full workflow: initializing a project, provisioning Sentry, upgrading and downgrading plans, using magic login to jump straight into your dashboard, and letting a coding agent (Claude Code) handle it all for you.

Sentry + Claude Agents: Automatic Bug Fixes from Root Cause to PR

Seer, Sentry's AI debugger, automatically analyzes your issues and finds the root cause. Now you can hand that analysis directly to a Claude agent - a managed agent session in the Claude Console at platform.claude.com. Once the agent finishes, a link to its branch appears in Sentry so you can review and merge the PR. This video walks through how the integration works and how to set it up in under two minutes.

When agents orchestrate agents, who's watching?

You used to monitor services. Then you started monitoring AI calls inside services. Now your AI agent is spinning up other AI agents to complete tasks. Your old monitoring instincts need to evolve. This isn't hypothetical. Agentic architectures are already in production. Coding agents are calling search agents; orchestrators are spawning specialized sub-agents for retrieval, planning, and execution. Teams are shipping these systems faster than they're figuring out how to watch them.

No more monkey-patching: Better observability with tracing channels

Almost every production application uses a number of different tools and libraries, whether that’s a library to communicate with a database, a cache, or frameworks like Nest.js or Nitro. To observe what’s going on in production, application developers reach for Application Performance Monitoring (APM) tools like Sentry. But there’s an inherent problem: the performance data that APM tools need rarely comes natively from the libraries themselves.

Sentry Built AI Dashboards: Monitor Your AI Agents End-to-End

Building AI applications? There's a lot more to monitor beyond errors. With tracing enabled, Sentry's built-in AI Dashboards give you deep visibility into how your agents are actually performing. This video walks through three key dashboard views. You'll also see how to drill from a dashboard widget straight into the trace explorer to pinpoint the root cause of errors, how to duplicate and customize dashboards to fit your needs, and how to set up monitors with alert thresholds, like getting notified if your LLM calls exceed 20 seconds.

Debugging multi-agent AI: When the failure is in the space between agents

I've been building a multi-agent research system. The idea is simple: give it a controversial technical topic like "Should we rewrite our Python backend in Rust?", and three agents work on it. An Advocate argues for it, a Skeptic argues against, and a Synthesizer reads both briefs blind and produces a balanced analysis. Each agent has its own model, its own tools, its own system prompt. It worked great in testing. Then I noticed the Synthesizer kept producing analyses that leaned heavily toward one side.
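The shape of that pipeline is simple enough to sketch. In this sketch, `callModel` is a placeholder for a real LLM call, and all names are illustrative, not the post's actual code:

```javascript
// Hypothetical stand-in for an LLM call; a real system would hit a model API
// with a per-agent model, tool set, and system prompt.
async function callModel(systemPrompt, input) {
  return `[${systemPrompt}] response to: ${input}`;
}

async function researchTopic(topic) {
  // Advocate and Skeptic work independently on the same topic.
  const [forBrief, againstBrief] = await Promise.all([
    callModel('You argue FOR the proposal.', topic),
    callModel('You argue AGAINST the proposal.', topic),
  ]);

  // The Synthesizer reads both briefs "blind": it sees the arguments,
  // but not which agent wrote them or those agents' prompts.
  return callModel(
    'Produce a balanced analysis of these two briefs.',
    `Brief A: ${forBrief}\nBrief B: ${againstBrief}`
  );
}

researchTopic('Should we rewrite our Python backend in Rust?');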

Grave improvements: Native crash postmortems via Android tombstones

Native crashes on Android have always been harder to debug than they should be. The platform has its own crash reporter (debuggerd) that captures the crashing thread, every other running thread, register state, and memory maps into a file called a tombstone. Tombstones have been a part of Android for a long time; in fact, they’ve been there in one form or another since Android's first commit.