
Automated Seer in Under 2 Minutes

What if you had 5 errors, and instead of coming back to 5 issues in your feed, you got 5 pull requests fixing them? Seer is Sentry's new AI debugging agent. It stitches together all the context from your logs, stack traces, distributed tracing, codebase, and issues to figure out what broke, where, and how to fix it. Seer automation lets you automate that flow, so you end up with a PR waiting for you to merge if it looks good. Check it out!

Smarter debugging with Sentry MCP and Cursor

Debugging a production issue with Cursor? Your workflow probably looks like this: Alt-Tab to Sentry, copy error details, switch back to your IDE, paste into Cursor. By the time you’ve context-switched three times, you’ve lost your flow and you’re looking at generic suggestions that don’t show any understanding of your actual production environment or codebase.
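Sentry's MCP server closes that loop by giving Cursor direct access to your Sentry data. As a rough sketch, hooking it up is a small config change in `.cursor/mcp.json`; the hosted endpoint below is the one Sentry documents at the time of writing, so verify it against the current docs before copying:

```json
{
  "mcpServers": {
    "sentry": {
      "url": "https://mcp.sentry.dev/mcp"
    }
  }
}
```

With that in place, Cursor can pull issue details, stack traces, and trace context through MCP tools instead of you pasting them across windows.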

Introducing new issue detectors: Spot latency, overfetching, and unsafe queries early

Not everything in production is on fire. Sometimes it’s just... a little warm. A page that loads a second too slow. An API that returns way more than anyone asked for. A query that feels totally fine until someone sends something unexpected and suddenly you’ve got an incident.

Evals are just tests, so why aren't engineers writing them?

You’ve shipped an AI feature. Prompts are tuned, models wired up, everything looks solid in local testing. But in production, things fall apart—responses are inconsistent, quality drops, weird edge cases appear out of nowhere. You set up evals to improve quality and consistency. You use Langfuse, Braintrust, Promptfoo—whatever fits. You start running your evals, tracking regressions, and fixing issues, and confidence goes up. Things improve.
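If evals really are just tests, the simplest one can be written with nothing but a test runner. A minimal sketch, where `generateAnswer` is a hypothetical stand-in for your model call rather than any particular SDK:

```typescript
import assert from "node:assert";
import { test } from "node:test";

// Hypothetical stand-in for a real model call (OpenAI, Anthropic, a local model, ...).
async function generateAnswer(prompt: string): Promise<string> {
  return "The capital of France is Paris.";
}

test("model names the correct capital", async () => {
  const answer = await generateAnswer("What is the capital of France?");
  // The eval is just an assertion over model output.
  assert.match(answer, /\bParis\b/);
});
```

Real evals replace the regex with a scoring function or an LLM grader, but the shape stays the same: input, expectation, assertion.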

How Sentry could stop npm from breaking the Internet

Caching is great! When it works… When it fails, it puts a big load on your backend, resulting in a self-inflicted DoS, increased server bills, or both. This article is inspired by a real-world incident that hit npm back in 2016; in the next part, Ben recounts his personal experience responding to it while working at npm.
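To make the failure mode concrete: when the cache layer vanishes, every concurrent miss for the same key becomes its own origin request. One common mitigation is request coalescing, sketched below in TypeScript; the names are illustrative, not npm's actual code:

```typescript
// Concurrent cache misses for the same key share one in-flight origin
// request instead of each hammering the backend.
const inflight = new Map<string, Promise<string>>();

async function fetchWithCoalescing(
  key: string,
  origin: (key: string) => Promise<string>,
): Promise<string> {
  const pending = inflight.get(key);
  if (pending) return pending; // piggyback on the request already in flight

  const request = origin(key).finally(() => inflight.delete(key));
  inflight.set(key, request);
  return request;
}
```

Even with the cache down, the origin sees one request per key at a time instead of one per user.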

Introducing Sentry's Godot SDK 1.0 Alpha, with support for Godot 4.5 Beta

Debugging during development is easy. You've got a debugger, stack traces, and logs right in front of you. But once your Godot game is in the hands of players, things get trickier. Most won’t report bugs, and if they do, you’re lucky if they include anything more than “it crashed”.

Want to hear your users' complaints? There's a widget for that (now available on mobile)

A disappearing “Submit” button. A modal stuck half-offscreen. It's not a crash or a performance regression. Just broken UX. Frustrating enough to make users rage-quit or leave a 1-star review. Error and performance monitoring catch the technical stuff: crashes, bottlenecks, slow APIs. But they won’t tell you when a layout breaks, or a UI flow subtly unravels after a redesign.
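That gap is what the feedback widget covers: users report what they see, with the context attached. Enabling it in the browser is roughly a one-integration change; a minimal sketch assuming the v8+ JavaScript SDK (option names may vary by version):

```typescript
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [
    // Renders the floating feedback button and capture form.
    Sentry.feedbackIntegration({
      colorScheme: "system",
    }),
  ],
});
```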

What You Actually Need to Monitor AI Systems in Production

You did it. You added the latest AI agent to your product. Shipped it. Went to sleep. Woke up to find it returning a blank string, taking five seconds longer than yesterday, or confidently outputting lies in perfect JSON. Naturally, you check your logs. You see a prompt. You see a response. And you see nothing helpful. Surprise. Prompt in and response out is not observability. It is vibes.
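Actual observability means recording structured signals around every model call, not just the text in and out. A minimal sketch of the idea, where `callModel` and the field names are illustrative assumptions rather than any specific SDK:

```typescript
// Illustrative types; swap in your model client of choice.
interface ModelResult {
  text: string;
  inputTokens: number;
  outputTokens: number;
}

async function instrumentedCompletion(
  prompt: string,
  callModel: (prompt: string) => Promise<ModelResult>,
): Promise<ModelResult> {
  const start = Date.now();
  const result = await callModel(prompt);

  // Signals you can chart and alert on, not just read after the fact.
  const telemetry = {
    latencyMs: Date.now() - start,            // "five seconds longer than yesterday"
    inputTokens: result.inputTokens,
    outputTokens: result.outputTokens,
    emptyResponse: result.text.trim() === "", // the blank-string case
  };
  console.log(JSON.stringify(telemetry));     // in practice, send to your tracing backend

  return result;
}
```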