
Inside the AI Agents Transforming Network Operations - Joby Rudolph & James Schnebly | Selector

AI agents are becoming a core part of modern network operations — but what does it actually take to build and deploy them effectively? In this session from AI for Network Leaders – Powered by Selector, Joby Rudolph and James Schnebly break down how AI agents are designed, implemented, and applied in real-world network environments. The session offers a practical look at how AI agents are moving from concept to production — and what it takes to make them work at scale.

Beyond the Dashboard: Selector's Patented Approach to Conversational Observability

For years, IT operations teams have been trapped in a frustrating paradox: the data they need to solve critical issues is right at their fingertips, yet entirely out of reach. Accessing it requires engineers to master complex, platform-specific query languages, dig through endless layers of dashboards, and hunt for the exact visualization that holds the answer. Under the intense pressures of modern speed, scale, and complexity, this rigid model is breaking down.

The Business Case for AI-Driven Observability in Network Operations

Modern network operations generate an extraordinary amount of telemetry. Metrics, logs, events, topology data, cloud signals, and service context all contribute to a richer picture of system behavior. As environments expand across cloud, data center, edge, and SaaS, the opportunity for operations teams is clear: when that telemetry is unified and understood in context, it becomes a powerful source of resilience, efficiency, and business insight.

Solving the Ticket Noise Problem: What We Learned from Our ServiceNow Webinar

On March 18th, we hosted a session focused on a challenge that continues to undermine even the most mature IT operations teams: ticket noise. It’s easy to dismiss noise as just “too many alerts.” But as we explored in the webinar, the real issue runs deeper. Ticket noise is a symptom of something more fundamental — a lack of correlation, context, and shared visibility across the stack.

Cloud Observability Is Broken - Hybrid Operations Need a New Intelligence Model

Cloud adoption was supposed to simplify operations. Infrastructure would become programmable, scalability would become elastic, and distributed architectures would enable resilience at global scale. In practice, cloud has delivered extraordinary flexibility, but it has also introduced a level of operational complexity that traditional observability approaches were never designed to handle.

Full-Stack Observability Is Becoming a Business Imperative

As enterprises accelerate digital transformation, technology performance has become inseparable from business performance. Customer experiences, revenue streams, and operational efficiency increasingly depend on the reliability of complex, distributed systems. In this environment, full-stack observability is no longer a technical aspiration — it is a strategic necessity.

AI Agents in IT Operations: From Concept to Practical Value

Artificial intelligence has been a defining theme in IT operations for nearly a decade. Early AIOps initiatives focused on predictive analytics and anomaly detection, promising to reduce operational overhead and improve system reliability. While these capabilities delivered incremental value, they often fell short of transforming how operations actually functioned.

Event Intelligence is Replacing Monitoring - Here's Why That Matters

For more than two decades, monitoring has been the foundation of IT operations. Organizations invested heavily in tools designed to collect metrics, visualize performance, and trigger alerts when thresholds were breached. This model was effective in an era when infrastructure was largely static, workloads were predictable, and system dependencies were relatively easy to trace. That environment no longer exists.

The Fragmentation Tax: What Multi-Tool Incident Response is Really Costing You

Here’s a question that sounds simple but isn’t: When something breaks in your environment, how long does it take your team to agree on what they’re looking at? Not how long it takes to fix it—that’s a different problem. I mean: how long does it take for everyone on the bridge to have the same basic understanding of what’s broken, where it started, and what it’s affecting?