
Open standards in 2026: The backbone of modern observability

Open source software and open standards are now an essential part of how organizations maintain their systems. That's not to say they haven't always been important, but the fourth annual Observability Survey, brought to you by Grafana Labs, shows just how deeply the shift to open has taken hold, with 77% of respondents saying open source and open standards are important to their observability strategy.

AI in observability in 2026: Huge potential, lingering concerns

The role of AI in observability is evolving rapidly, but the data from our fourth annual Observability Survey makes one thing abundantly clear: the potential is real, and so are the reservations. Practitioners overwhelmingly see value in using AI to help surface anomalies, forecast and spot trends, assist with root cause analysis, and get new users up to speed more quickly.

How Catalog changes the game for long-term maintenance

Every incident platform needs to know who owns what. Which team owns which service. Which backlog to send follow-ups to. Which escalation path to page when something breaks. The problem is that most platforms encode this ownership logic separately in every configuration: alert routing, workflows, ITSM ticket syncing, and more. Each one maintains its own copy of the same information, in its own format.

How to design cloud environments for AI-powered threat analysis

Cloud environments generate high volumes of security signals every day. With each one, you have to determine whether it's benign, a clear false positive, or something worth investigating. The challenge is needing to make these calls continuously, often without knowing whether any single event is part of a larger attack. Spending too much time investigating benign activity reduces your ability to detect threats elsewhere, and missing a legitimate threat has clear consequences.

Scaling Kubernetes workloads on custom metrics

The 2025 State of Containers and Serverless report found that 64% of organizations use the Kubernetes Horizontal Pod Autoscaler (HPA) to manage Kubernetes workload capacity. But only 20% of those deployments scale on custom metrics. The remaining 80% rely on resource metrics—the CPU and memory consumed by their pods—to trigger autoscaling activity.
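The custom-metrics approach the report points to can be sketched with an `autoscaling/v2` HPA manifest. The workload and metric names below are hypothetical, and scaling on a custom metric like this assumes a metrics adapter (for example, the Prometheus Adapter) is installed in the cluster to serve it through the custom metrics API:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker            # hypothetical workload
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: queue_messages_ready   # custom metric exposed via an adapter
      target:
        type: AverageValue
        averageValue: "30"           # add replicas when pods average >30 pending messages
```

Unlike CPU- or memory-based triggers, this ties the replica count to the work the service is actually doing, which is the gap the report's 20% figure highlights.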

The silent infrastructure tax: why AI agents will break your legacy cloud

For the first time in a decade, humans are the minority on the open web. In 2025, automated traffic officially crossed the Rubicon to account for 51% of all web activity, while generative AI-driven referrals to retail sites surged by a staggering 693% year-over-year. As we move through 2026, these are no longer just "bot" statistics to be handled by a WAF. They represent a fundamental shift in user behavior. The fastest-growing segment of your audience is now agentic.

AppSignal's MCP Server: Connect AI Agents to Your Monitoring Data

Your AI coding assistant already knows your codebase. Now it can know your production environment too. AppSignal's MCP server gives AI agents and AI code editors direct access to your monitoring data — errors, performance metrics, and more — so they can help you debug, investigate and resolve issues without switching context. And with our new public endpoint, getting started is simpler than ever.

Best Enterprise Asset Management Software 2026

Enterprise asset management (EAM) software is becoming essential for organizations that manage large-scale operations, multiple facilities, and diverse asset categories. Enterprises today rely on advanced systems to monitor physical assets, digital assets, infrastructure, and operational equipment across departments and locations.

What are test hooks in AI-native development?

Summary: A test hook connects a test or lint command to an event in your AI coding agent’s workflow. When the event fires, the agent runs the command automatically. If it fails, the agent’s action is blocked. You can wire your existing test commands into your agent’s lifecycle hooks to get deterministic local validation before code ever reaches CI. AI coding agents write code at a pace where stopping to manually run tests breaks your flow.
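As a concrete sketch, many agents express lifecycle hooks as a JSON settings entry that maps an event to a shell command; a non-zero exit from the command is what blocks the agent's action. The event name, matcher, file location, and test command below are illustrative and vary by tool:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test --silent" }
        ]
      }
    ]
  }
}
```

Here the agent runs the project's existing test command after every file edit or write, so a failing suite stops the change locally instead of surfacing later in CI.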