
Stop choosing between fast incident response and secure access

Every production system will eventually break. It's not pessimism; it's just reality. That's why engineers go on call, and why companies invest heavily in incident response tooling. But here's the problem: the moment an engineer goes on call, they typically need elevated access to production systems, databases, and sensitive customer data. And that elevated access? It's often permanent, overly broad, and a security nightmare waiting to happen.

Bloom filters: the niche trick behind a 16× faster API

This post is a deep dive into how we improved the P95 latency of an API endpoint from 5s to 0.3s using a niche little computer science trick called a Bloom filter. We’ll cover why the endpoint was slow, the options we considered to make it fast, how we decided between them, and how it all works under the hood.
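The full write-up walks through the details; as a rough illustration of the data structure itself (a sketch, not incident.io's actual implementation), a Bloom filter is a fixed-size bit array plus k hash functions: adding a key sets k bits, and if a lookup finds any of those bits unset, the key was definitely never added, so the expensive path can be skipped outright. A minimal Go sketch, with the sizes and FNV hashing chosen purely for the example:

```go
// Minimal Bloom filter sketch for illustration only; sizes, hashing, and
// names here are assumptions, not incident.io's actual code.
package main

import (
	"fmt"
	"hash/fnv"
)

type BloomFilter struct {
	bits []bool // bit array; real implementations pack bits into uint64 words
	k    uint32 // number of hash functions
}

func NewBloomFilter(m int, k uint32) *BloomFilter {
	return &BloomFilter{bits: make([]bool, m), k: k}
}

// positions derives k indices for a key by hashing it with k different seeds.
func (b *BloomFilter) positions(key string) []uint32 {
	out := make([]uint32, b.k)
	for i := uint32(0); i < b.k; i++ {
		h := fnv.New32a()
		fmt.Fprintf(h, "%d:%s", i, key) // cheap way to get k distinct hashes
		out[i] = h.Sum32() % uint32(len(b.bits))
	}
	return out
}

// Add marks a key as present by setting its k bits.
func (b *BloomFilter) Add(key string) {
	for _, p := range b.positions(key) {
		b.bits[p] = true
	}
}

// MightContain returns false only if the key was definitely never added;
// true means "possibly present" (false positives happen, false negatives don't).
func (b *BloomFilter) MightContain(key string) bool {
	for _, p := range b.positions(key) {
		if !b.bits[p] {
			return false
		}
	}
	return true
}

func main() {
	bf := NewBloomFilter(1<<16, 4)
	bf.Add("incident-123")
	fmt.Println(bf.MightContain("incident-123")) // true
	fmt.Println(bf.MightContain("incident-999")) // almost certainly false
}
```

The latency win comes from answering "definitely not here" in memory before touching the slow path; the occasional false positive simply falls back to the expensive check.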

Weaving AI into the fabric of the company

At incident.io, we’ve spent the past year shifting how we work to incorporate AI into both how we build and what we build. The result? AI has become a fundamental pillar of our company. This is the story of how we built reliable AI for reliability itself, reshaping how teams manage and resolve incidents. From early experiments to a company-wide culture of building with AI, this is how we’re redefining incident response for the future.

The one where we scaled

From 3 people in 2020 to 93 in 2025, incident.io has come a long way, and we’re just getting started. Whether you’ve been here since the early days or just joined, this is what it looks like to build something great *together*. If you're after great people, real impact (across the globe, not just in Greece), a place where growth is the default, and teammates who’ll always be there for you... we’re hiring! (And we're going to need a bigger couch…)

Service disruption on October 20, 2025

When the internet goes down, our primary job is to help everyone get back up as fast as possible. Of the almost half a million incidents we've helped our customers resolve, some stand out for both their scale and impact. One of these happened on Monday, October 20, when AWS had a widely covered major outage in their us-east-1 region, from 07:11 to 10:53 UTC. We’re hosted in multiple regions of Google Cloud, so the majority of our product was unaffected by the outage.

Recapping SEV0 San Francisco 2025

Earlier this week, we gathered in San Francisco for our second SEV0—almost a year after our very first event. SEV0 has always been about shining a light on the biggest challenges (and opportunities) in incident response. Last year, we were still talking about the fundamentals: blameless culture, strong processes, and lessons from the best in reliability. This year felt different. AI has moved from background noise to front and center in every conversation, every team, everywhere.