Operations | Monitoring | ITSM | DevOps | Cloud

Arazzo vs Traditional Chatbots: What Actually Works?

What happens when you give an AI agent hundreds of API endpoints and hope it figures out the right workflow? Spoiler: it nearly gets it right... but never reliably. In this talk, Frank Kilcommins (Head of Enterprise Architecture at Jentic and co-author of the Arazzo Specification) breaks down why API documentation quality is the core knowledge problem holding agentic systems back (and how Arazzo solves it).
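For context, Arazzo sits on top of OpenAPI descriptions and declares multi-step workflows explicitly, so an agent follows a specified sequence instead of guessing one. A minimal sketch of an Arazzo 1.0 document (the API, its operation IDs, and the URL below are hypothetical):

```yaml
arazzo: 1.0.0
info:
  title: Place an order            # hypothetical example workflow
  version: 1.0.0
sourceDescriptions:
  - name: storeApi                 # points at the underlying OpenAPI doc
    url: https://example.com/openapi.yaml
    type: openapi
workflows:
  - workflowId: placeOrder
    steps:
      - stepId: findProduct
        operationId: searchProducts        # assumed to exist in storeApi
        successCriteria:
          - condition: $statusCode == 200
        outputs:
          productId: $response.body#/items/0/id
      - stepId: createOrder
        operationId: createOrder           # assumed to exist in storeApi
        parameters:
          - name: productId
            in: query
            value: $steps.findProduct.outputs.productId
        successCriteria:
          - condition: $statusCode == 201
```

The key idea is the runtime expressions: step outputs feed later steps, so the "right workflow" is encoded in the document rather than inferred by the agent.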

The Secret to 10x Faster API Testing

Stop living in the past. See how to use real production traffic to automate your API testing with zero code changes. Replay real-world patterns in your CI/CD and catch regressions before your users do. Learn more: speedscale.com.

Rob Zuber on quality, metrics, and what it means to move in the right direction at CircleCI

In this episode of Braintrust, Cortex co-founder and CTO Ganesh Datta sits down with Rob Zuber, CTO at CircleCI. Rob shares how the industry's move away from dedicated QA has cost teams more than they realize, and explains how AI is changing what good software quality actually looks like.

Feature Friday: How to Track GitHub Copilot Adoption with Cortex Scorecards

Are you getting the most out of your GitHub Copilot investment? In this week's Feature Friday, Cortex Engineer Aaron Warrick demonstrates how to turn "AI adoption" from a buzzword into a measurable metric. Using the CQL (Cortex Query Language) Query Builder, you can now pull real-time GitHub Copilot data into your service maturity scorecards. In this video, we cover how to use the new AI Tools Analysis in the CQL Query Builder.

Telegraf Enterprise Beta is Now Available: Centralized Control for Telegraf at Scale

Telegraf is incredibly good at what it does: collecting metrics, logs, and events from just about anywhere and sending them wherever you need. But once Telegraf becomes part of your production telemetry pipeline, spread across environments, teams, regions, and edge locations, the hard part isn’t installing agents; it’s operating them. Configs drift. “Temporary” overrides linger. Rolling out changes across hundreds (or thousands) of agents becomes a careful, manual process.
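To make that operational surface concrete: each Telegraf agent is driven by a TOML file pairing input and output plugins, and the drift problem above is what happens when a file like this (values illustrative, endpoint hypothetical) is hand-edited across hundreds of hosts:

```toml
# telegraf.conf — minimal sketch; plugin choices and endpoint are illustrative
[agent]
  interval = "10s"        # how often inputs are collected
  flush_interval = "10s"  # how often outputs are flushed

# Collect basic host metrics
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

# Ship everything to an InfluxDB v2 endpoint (hypothetical URL, org, bucket)
[[outputs.influxdb_v2]]
  urls = ["http://metrics.example.internal:8086"]
  token = "$INFLUX_TOKEN"
  organization = "ops"
  bucket = "telemetry"
```

One "temporary" tweak to `interval` or a forgotten extra input on a handful of hosts is exactly the config drift the centralized control plane is meant to catch.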

One CLI, Two Audiences: How We Built for Agents and Humans

Half of Checkly CLI users are already coding agents. This is not a prediction — it's what the data shows today. Since February, more and more agents have been using the CLI to manage and configure their Checkly monitoring setups. Right now, we're at 50% human and 50% agentic CLI users, and we predict that by the end of 2026 it won't be humans using the CLI; the agents will have taken over. The terminal has become the primary interface for AI agents doing real work in the Checkly ecosystem.

Checkly and the Agentic Software Layer

On November 24th, the Opus 4.5 release turned the entire tech industry around. This was the moment when agents became capable: capable enough to write solid staff-level code, capable enough to reason about alerts, investigate root causes much faster than most engineers, and stand up the reliability layer faster. For me, this feels like an iPhone moment on steroids; AI adoption is accelerating faster than any adoption curve I've seen over the past few decades.