
Reliability lessons from the 2025 Cloudflare outage

On November 18, 2025, X, ChatGPT, Shopify, and many other major sites went offline simultaneously. Even Downdetector, Ookla’s popular outage-tracking website, briefly went down. What caused this issue? Why were so many major websites affected? And what steps can you take to reduce the impact on your own applications?

Use OpenTelemetry with Observability Pipelines for vendor-neutral log collection and cost control

Today, many DevOps and security teams operate in complex, hybrid, or multi-vendor environments. As more teams look to avoid lock-in by adopting open standards, OpenTelemetry (OTel) is quickly gaining adoption as the primary open source method for instrumenting and aggregating telemetry data. However, OTel alone may lack the advanced processing functions, native volume-control rules, and hybrid environment support that large organizations need.
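The volume-control rules mentioned above can be illustrated with a minimal sketch. This is plain Python with a hypothetical record format, not an actual OpenTelemetry processor or pipeline API; it only shows the idea of keeping high-severity telemetry while sampling the noisy rest:

```python
import random

def volume_control(records, keep_severities=("ERROR", "WARN"), sample_rate=0.1):
    """Keep all high-severity records; probabilistically sample the rest."""
    kept = []
    for rec in records:
        if rec["severity"] in keep_severities:
            kept.append(rec)          # never drop errors or warnings
        elif random.random() < sample_rate:
            kept.append(rec)          # sample low-severity volume
    return kept

# Hypothetical input: one error buried in a flood of debug logs.
logs = [{"severity": "ERROR", "body": "db timeout"}] + \
       [{"severity": "DEBUG", "body": f"cache hit {i}"} for i in range(1000)]

reduced = volume_control(logs, sample_rate=0.05)
```

In a real deployment this logic would live in a pipeline stage between OTel collection and the storage backend, so volume is cut before it hits a per-GB bill.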

AI Observability: How to Keep LLMs, RAG, and Agents Reliable in Production

AI observability closes the gap between “something’s wrong” and “here’s what to fix.” If you run AI in production, you might have felt the whiplash: yesterday your LLM answered in 300 milliseconds (ms); today p99 crawls, costs spike, and nobody’s sure whether the culprit is model behavior, data freshness, or GPUs stuck at the ceiling. Dashboards light up, but they don’t tell you which issue puts customers at risk.
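The p99 regression described above is easy to compute from a window of request latencies. A minimal sketch, where the baseline, window contents, and 3x alert threshold are illustrative assumptions:

```python
import statistics

def p99(latencies_ms):
    # quantiles(n=100) returns 99 cut points; the last one is the p99
    return statistics.quantiles(latencies_ms, n=100)[-1]

baseline_ms = 300  # assumed historical p99 for this endpoint

# Hypothetical recent window: mostly ~300 ms with a heavy slow tail
recent = [280, 310, 295, 305, 2400, 290, 300, 315, 288, 2600] * 10

current_p99 = p99(recent)
if current_p99 > 3 * baseline_ms:
    print(f"p99 regression: {current_p99:.0f} ms vs {baseline_ms} ms baseline")
```

The point of pairing this with cost and quality signals is attribution: a p99 spike alone can't distinguish a slow model from exhausted GPUs, which is exactly the gap the article describes.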

What Are AI Workloads? Everything Ops Teams Need to Know

AI workloads break every assumption you have about infrastructure management. AI is everywhere. Machine learning-based tools are answering customer service questions, accelerating incident resolution, catching fraudulent transactions, spotting defects on production lines, and powering late-night searches that delve into the random topic that pops into your head right before bedtime. Behind every prediction, response, or generated sentence is massive computing power doing serious, continuous work.

AI Monitoring, Explained: Challenges, Core Components, and Why Observability Is the Next Step

Monitoring AI systems isn’t business as usual. You can’t just track uptime or response times and call it a day. AI models evolve, data shifts, and behavior drifts over time, which means your monitoring has to evolve, too. If you’re running AI workloads in production, you already know this: your models might look healthy according to your infrastructure metrics, but they’re still making bad predictions.
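The data drift mentioned above is the kind of thing infrastructure metrics miss entirely. One simple sketch of a drift check compares a live window of a feature against its training-time baseline; the z-score threshold here is an illustrative assumption, not a standard:

```python
import statistics

def drifted(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean sits far outside the baseline distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sigma
    return z > z_threshold

# Hypothetical feature values: training baseline vs. two live windows
baseline = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50, 0.53, 0.47]
live_ok = [0.50, 0.49, 0.52, 0.51]
live_shifted = [0.80, 0.85, 0.78, 0.82]

print(drifted(baseline, live_ok), drifted(baseline, live_shifted))
# → False True
```

Production drift detectors use richer statistics (population stability index, KS tests) per feature, but the shape is the same: the host can be perfectly healthy while this check fires.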

AI Workload Infrastructure Requirements: What You Actually Need

Artificial intelligence (AI) infrastructure requires four pillars working in tandem as a system (compute, storage, networking, and orchestration), tailored to your actual workload needs, not hype. AI infrastructure isn’t just more hardware. It’s a new class of system: highly distributed, resource-intensive, and tightly coupled across compute, storage, and network layers.

AI for Good: Securing Networks in the Age of Autonomous Attacks

The rise of autonomous AI attacks operating at machine speed demands that network security evolve beyond human capacity and manual processes. Kentik AI Advisor counters this threat by using AI for good, reasoning across full network context to proactively eliminate vulnerabilities and guide immediate, confident defense.