
Beyond Uptime: Building a Self-Healing OpenClaw Observability Stack

The allure of OpenClaw is undeniable. You deploy a highly autonomous, self-hosted AI agent, give it access to your repositories and inboxes, and watch it reason through complex workflows while you sleep. It is the dream of the ultimate 10x developer tool realized. But as any veteran DevOps engineer will tell you: running an LLM-backed Node.js agent in production is vastly different from testing it on your local machine.
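To make "self-healing" concrete before the full walkthrough, here is a minimal watchdog sketch: it polls a local health endpoint and restarts the agent after repeated failures. The endpoint path, restart command, and thresholds are illustrative assumptions, not OpenClaw's actual interface.

```typescript
// Minimal watchdog sketch (Node 18+): poll an assumed local health endpoint
// and restart the agent after repeated failures. All names are illustrative.
import { exec } from "node:child_process";

const HEALTH_URL = "http://localhost:3000/healthz"; // assumed health endpoint
const RESTART_CMD = "systemctl restart openclaw";    // assumed restart hook
const MAX_FAILURES = 3;
const INTERVAL_MS = 30_000;

let consecutiveFailures = 0;

async function checkOnce(): Promise<boolean> {
  try {
    const res = await fetch(HEALTH_URL, { signal: AbortSignal.timeout(5_000) });
    return res.ok;
  } catch {
    return false; // a timeout or connection error counts as a failed check
  }
}

setInterval(async () => {
  const healthy = await checkOnce();
  consecutiveFailures = healthy ? 0 : consecutiveFailures + 1;

  if (consecutiveFailures >= MAX_FAILURES) {
    console.error(`Agent unhealthy ${consecutiveFailures}x, restarting...`);
    exec(RESTART_CMD, (err) => {
      if (err) console.error("Restart failed:", err.message);
    });
    consecutiveFailures = 0; // give the restarted process a clean slate
  }
}, INTERVAL_MS);
```

The point is the loop, not the specifics: detect, decide, remediate, and only then page a human.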

When AWS us-east-1 Fails, Much of the Internet Fails With It

There are cloud outages, and then there are us-east-1 outages. That distinction matters because failures in AWS’s Northern Virginia region rarely feel like ordinary regional incidents. They tend instead to expose something larger and more uncomfortable: too much of the modern internet still behaves as though one place is an acceptable concentration point for infrastructure, control, recovery, and communication. When us-east-1 goes wrong, the problem is not only that workloads fail.

In the Age of AI, Operational Memory Matters Most During Incidents

Artificial intelligence is making software easier to produce. That much is already obvious. Code that once took hours to scaffold can now be drafted in minutes. Boilerplate, integration logic, tests, refactors and small internal tools can be generated with startling speed. In some cases, even substantial pieces of implementation can be assembled quickly enough to make older assumptions about software effort look dated. It is tempting, then, to conclude that the hard part of software is receding.

AI Didn't Kill the SDLC. It Made It Harder to See

Whilst AI has compressed the visible stages of software delivery, the disciplines of requirements, validation, review and release have not disappeared. They have been pushed into automation, runtime and governance. The real risk is not that the lifecycle is dead, but that organisations start acting as if accountability died with it.

When Code Becomes Cheap: The New Reliability Constraint in Software Engineering

For most of the history of software engineering, the primary constraint was production. Code was expensive, skilled engineers were scarce, and shipping features required concentrated human effort. Velocity was limited by how fast people could reason, implement, test, and deploy. That constraint shaped everything from team size and architecture to release cadence and how we thought about technical debt. When production is expensive, you optimise for output. You remove friction from shipping.

Buy vs Build in the Age of AI (Part 3)

In Part 1, we looked at how AI has reduced the cost of building monitoring tools. Then in Part 2, we explored the operational and economic burden of owning them. Now we need to talk about something deeper. Because the real shift isn’t just economic; it’s structural. AI isn’t just helping engineers write code faster. It’s accelerating the entire software ecosystem, including how monitoring tools are built, maintained, and trusted.

Buy vs Build in the Age of AI (Part 2)

In Part 1, we explored how AI has dramatically reduced the cost of building monitoring tooling. That much is clear. You can scaffold uptime checks quickly, generate alert logic in minutes, and set up dashboards faster than most teams used to schedule the kickoff meeting. So the barriers to entry have fallen. But there’s a quieter question that rarely gets asked in the excitement of building. Have you ever calculated what it would actually cost to replace your monitoring provider?

Buy vs Build in the Age of AI (Part 1)

A few months ago, I spoke to an engineering manager who proudly told me they had rebuilt their monitoring stack over a long weekend. They’d used AI to scaffold synthetic checks. They’d generated alert logic with dynamic thresholds. They’d then wired everything into Slack and PagerDuty, and built a clean internal dashboard. “It used to take us weeks to prototype something like this,” they said. “Now it’s basically instant.” They weren’t wrong.
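For a sense of scale, the kind of check they described really can be sketched in a few dozen lines. The target URL, the rolling-mean threshold, and the Slack webhook below are illustrative assumptions, not their actual implementation.

```typescript
// Illustrative synthetic check (Node 18+) with a simple dynamic threshold:
// alert when latency exceeds 3x the rolling mean. All names are assumptions.
const TARGET_URL = "https://example.com/health";
const SLACK_WEBHOOK = process.env.SLACK_WEBHOOK_URL ?? "";
const latencies: number[] = [];

async function runCheck(): Promise<void> {
  const start = Date.now();
  let ok = false;
  try {
    const res = await fetch(TARGET_URL, { signal: AbortSignal.timeout(10_000) });
    ok = res.ok;
  } catch {
    ok = false;
  }
  const latency = Date.now() - start;

  // Dynamic threshold: rolling mean of the last 50 samples, times 3.
  const mean =
    latencies.length > 0
      ? latencies.reduce((a, b) => a + b, 0) / latencies.length
      : latency;
  latencies.push(latency);
  if (latencies.length > 50) latencies.shift();

  if ((!ok || latency > mean * 3) && SLACK_WEBHOOK) {
    await fetch(SLACK_WEBHOOK, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `Check failed for ${TARGET_URL}: ok=${ok}, latency=${latency}ms (rolling mean ${Math.round(mean)}ms)`,
      }),
    });
  }
}

setInterval(runCheck, 60_000);
```

Writing this is now the easy part, which is exactly why this series exists: the hard questions start after the weekend is over.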

Alerting Is a Socio-Technical System

In the previous posts, we’ve looked at how alert noise emerges from design decisions, why notification lists fail to create accountability, and why alerts only work when they’re designed around a clear outcome. Taken together, these ideas point to a broader conclusion: alerting is not just a technical system, it’s a socio-technical one. Alerting systems encode assumptions about how people behave, how responsibility is distributed, and how decisions are made under pressure.

Designing Alerts for Action

In the first two posts of this series, we explored how alert noise emerges from design decisions, and why notification lists fail to create accountability when responsibility is unclear. There’s a deeper issue underneath both of those problems. Many alerting systems are designed without being clear about the outcome they’re meant to produce. When teams don’t explicitly decide what they want to happen as a result of a signal, they default to the loudest option available.
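One way to make that decision explicit, shown here purely as an illustrative sketch rather than a recommended schema, is to require every alert definition to declare the outcome it is meant to produce and the team accountable for it.

```typescript
// Illustrative sketch: each alert declares its intended outcome up front,
// so "notify everyone" stops being the default. All names are assumptions.
type Outcome = "page-oncall" | "create-ticket" | "log-only";

interface AlertRule {
  name: string;
  condition: (metrics: Record<string, number>) => boolean;
  outcome: Outcome; // the explicit decision, made at design time
  owner: string;    // a single accountable team, not a notification list
}

const rules: AlertRule[] = [
  {
    name: "checkout-error-rate",
    condition: (m) => m.errorRate > 0.05,
    outcome: "page-oncall",   // someone must act now
    owner: "payments-team",
  },
  {
    name: "disk-usage-trend",
    condition: (m) => m.diskUsedPct > 80,
    outcome: "create-ticket", // action needed, but not at 3am
    owner: "platform-team",
  },
];

function evaluate(metrics: Record<string, number>): void {
  for (const rule of rules) {
    if (!rule.condition(metrics)) continue;
    switch (rule.outcome) {
      case "page-oncall":
        console.log(`PAGE ${rule.owner}: ${rule.name}`);
        break;
      case "create-ticket":
        console.log(`TICKET for ${rule.owner}: ${rule.name}`);
        break;
      case "log-only":
        console.log(`LOG: ${rule.name}`);
        break;
    }
  }
}

evaluate({ errorRate: 0.07, diskUsedPct: 65 });
```

The value isn’t in the code; it’s that an undeclared outcome becomes impossible to ship.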