
Analyzing round trip query latency

It’s an all-too-common scenario: You get paged for queries timing out, but when you investigate, database performance looks unchanged. Something must have changed, though. If the database doesn’t look overloaded, where are these timeouts coming from? The answer often lies outside the database itself. Round trip query latency includes every hop between your application and the database, including connection pools, load balancers, and proxies.
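To see why client-side timing matters, it helps to measure the full round trip from the application's point of view rather than trusting server-side execution time alone. The sketch below is illustrative: `measure_round_trip` and `simulated_query` are hypothetical names, and the `time.sleep` calls stand in for pool checkout, proxy hops, and query execution.

```python
import time

def measure_round_trip(execute, samples=5):
    """Time the full client-side round trip of a query, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        execute()
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return {"p50": timings[len(timings) // 2], "max": timings[-1]}

def simulated_query():
    # The "database" itself is fast; the hops around it add the latency.
    time.sleep(0.002)   # connection pool checkout
    time.sleep(0.001)   # load balancer / proxy hop
    time.sleep(0.001)   # actual query execution

stats = measure_round_trip(simulated_query)
print(f"p50={stats['p50']:.1f} ms, max={stats['max']:.1f} ms")
```

Here the database accounts for only a quarter of the round trip, which is exactly the pattern that makes timeouts look mysterious when you only check database metrics.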

Observability and Security for the AI Era

Datadog has always been driven by a broader vision of helping teams understand and operate complex systems. In this session, you’ll hear from Yrieix Garnier, VP of Product, and Hugo Kaczmarek, Senior Director of Product, as they share the latest updates across the Datadog product suite and discuss how that vision continues to shape the platform’s evolution and support the next generation of AI-driven applications.

Monitor Nutanix clusters, hosts, and VMs with Datadog

Nutanix is a hyperconverged infrastructure (HCI) platform that combines compute, storage, and virtualization into a single software-defined stack. By collapsing traditional infrastructure tiers into one platform, Nutanix simplifies provisioning and operations for virtualized workloads. Clusters are managed through Prism Central, which provides visibility into health, performance, capacity, and operational activity across hosts and VMs.

Datadog achieves ISO 42001 certification for responsible AI

As AI-powered products and services become central to how organizations operate, the need for responsible AI governance has never been greater. Customers, partners, and regulators are seeking assurance that AI systems are built, managed, and monitored responsibly and effectively. Datadog is committed to the responsible use of AI, both in how we build our products and in how we help customers observe their AI workloads.

Introducing Bits AI Dev Agent for Code Security

As organizations adopt AI-assisted development and increase their release velocity, they are not only generating more code but also finding more vulnerabilities from static analysis. The traditional remediation workflow of manually triaging issues, creating tickets, and opening individual pull requests (PRs) cannot keep pace. Fixing tens of thousands of vulnerabilities one by one is not a viable remediation strategy.

Olivier Pomel and Alexis Lê-Quôc on Datadog's origin, AI, and more | This Month in Datadog

Get an insider’s view of Datadog from the people who built it. On a special episode of This Month in Datadog, co-founders Olivier Pomel and Alexis Lê-Quôc sit down for a rare, in-depth look at the challenge that inspired them to build the Datadog platform, what the company is working on today, AI, and more. This Month in Datadog brings you the latest updates on our newest product features, announcements, resources, and events.

A new Host Map for modern infrastructure

A host map is a visual representation of your infrastructure that displays hosts and related resources such as clusters, pods, and containers in a single, interactive view. We introduced the Datadog Host Map more than a decade ago to help you “know thy infrastructure” and answer critical questions: Does everything look healthy? Has anything changed? Does the shape of my environment match what I expect?

Monitor Juniper Mist in Datadog

From point-of-sale (POS) terminals to cloud-based applications and mobile devices, reliable connectivity is critical to business operations. Even brief disruptions can negatively impact user experiences, resulting in failed transactions, delayed application responses, or repeated attempts to reconnect. Juniper Mist is an AI-powered networking platform that provides insight into wireless environments, including access point performance and radio frequency health.

Monitor Oracle Fusion Cloud Applications with Datadog

Many organizations rely on Oracle Fusion Cloud Applications to run core business workflows across finance, HR, and supply chain operations. Because these SaaS-based applications run on Oracle Cloud Infrastructure (OCI), engineering teams have limited visibility into their performance. Without direct access to the underlying stack, they often lack the signals needed to detect regressions or investigate degraded user experience.

Explore Kubernetes with native OpenTelemetry data

Kubernetes environments generate a constant stream of signals across clusters, nodes, pods, and workloads. For teams that have standardized on OpenTelemetry (OTel), maintaining ownership of that data is critical. But in practice, many observability platforms require translation into vendor-specific data formats, leading to fragmented product experiences, blank dashboards, and uncertainty about data integrity.

Annotate traces to improve LLM quality with Datadog LLM Observability

LLM applications rarely crash. They degrade quietly. Once these applications are shipped to production, subtle quality failures become harder to catch with traditional signals. Tone shifts, hallucinated details, off-topic responses, and incomplete reasoning can emerge while latency and token usage look stable.

Balancing Data Locality, Data Sovereignty, and Data Replication

Modern distributed systems must simultaneously respect where data must live, where it should live for performance, and where it needs to live for resilience. Data sovereignty and residency requirements increasingly affect technical design decisions, not only in regulated industries, but in any global product that must navigate regional expectations, latency constraints, cost structures, and operational realities.

Detect data quality and pipeline issues early with Datadog Data Observability

See the latest episode of This Month in Datadog for a spotlight on Datadog Data Observability, which enables you to detect data quality and pipeline issues early and remediate them with end-to-end lineage.

Architecting Log Management for Privacy and Scale without the Headache

As companies grow, they inevitably hit a wall: observability data explodes while privacy requirements become stricter. For years, engineers have faced a painful tradeoff—either ship petabytes of sensitive data to a central cloud (incurring egress costs and compliance risks) or manage a complex self-hosted stack that is painful to scale.

Scaling Kubernetes workloads on custom metrics

The 2025 State of Containers and Serverless report found that 64% of organizations use the Kubernetes Horizontal Pod Autoscaler (HPA) to manage Kubernetes workload capacity. But only 20% of those deployments scale on custom metrics. The other four-fifths of organizations rely on resource metrics—CPU and memory utilized by their pods—to trigger autoscaling activity.
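For the minority scaling on custom metrics, the HPA spec itself is the key piece. A minimal sketch, assuming a metrics adapter (such as the Datadog Cluster Agent or prometheus-adapter) is serving the `custom.metrics.k8s.io` API; the Deployment name and the metric `worker_queue_depth` are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Pods           # scale on a per-pod custom metric, not CPU/memory
      pods:
        metric:
          name: worker_queue_depth
        target:
          type: AverageValue
          averageValue: "30"   # add replicas when average queue depth exceeds 30
```

Scaling on a workload-specific signal like queue depth tracks actual demand more closely than CPU utilization, which can stay flat while work backs up.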

How to design cloud environments for AI-powered threat analysis

Cloud environments generate high volumes of security signals every day. With each one, you have to determine if it’s benign, a clear false positive, or something worth investigating. The challenge is needing to make these calls continuously, often without knowing whether any single event is part of a larger attack. Spending too much time investigating benign activity reduces the ability to detect threats elsewhere, and missing a legitimate threat has clear consequences.

Monitor your application and network load balancer logs

Load balancers are the primary entry points to distributed applications. By strategically directing the flow of incoming web traffic to specific endpoints, load balancers help optimize throughput and ensure the horizontal scalability of applications. In modern systems, load balancers often do more than their name suggests: Beyond basic load distribution, they analyze requests and route traffic based on a wide range of variables, such as client identity.

Captur: Observability-First Mobile ML Inference for Better Customer Confidence

Captur builds a mobile SDK that brings real-time image recognition and actionable feedback directly into customers’ apps, running complex machine learning models entirely on device without cloud inference. This architecture delivers privacy and performance, but also creates unique challenges when it comes to observability and debugging, especially as crashes can originate from the host app rather than the SDK itself.

Understanding Karpenter architecture for Kubernetes autoscaling

Karpenter is a fast, flexible Kubernetes autoscaler designed to improve cluster performance and cost efficiency. When the cluster doesn’t have capacity to schedule a pod, Karpenter requests additional compute from the cloud provider, specifying a right-sized instance that matches the preferences you’ve set (for example, instance family).
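Those preferences are expressed declaratively in a NodePool. The sketch below assumes the Karpenter v1 API on AWS; the pool name, instance families, and limits are illustrative:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        # Constrain provisioning to preferred instance families
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["m5", "m6i"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"   # cap total CPU this pool may provision
```

When pods go unschedulable, Karpenter picks the cheapest instance type that satisfies these requirements and the pending pods' resource requests.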

Key metrics for monitoring Karpenter

In Part 1 of this series, we explored how Karpenter’s architecture enables just-in-time provisioning and active node consolidation. Because Karpenter is constantly making infrastructure decisions based on real-time scheduling pressure, its metrics can give you early warning of provisioning slowdowns, cloud API throttling, and misconfigurations that prevent it from scaling the way you expect.

Tools for collecting metrics and logs from Karpenter

In the first two parts of this series, we explored how Karpenter’s architecture enables just-in-time provisioning and active node consolidation, and we identified the key Karpenter metrics you should track to keep your cluster performant and cost-efficient. In this post, we’ll look at vendor-agnostic tools you can use to capture these signals.

Monitor Karpenter with Datadog

In this series, we’ve explored Karpenter’s architecture, the key metrics that reflect its health and performance, and the vendor-agnostic tools for collecting and analyzing its telemetry data. In this final post, we’ll show you how Datadog helps you monitor and alert on Karpenter alongside your Kubernetes cluster and the infrastructure that runs it.

What your product data is actually saying

As tools such as AI agents become more integrated with the instrumentation, governance, and centralization of product analytics data, product managers (PMs) still own the meaning of those events and the connected outcomes. Knowing when to trust the data, forming strong hypotheses, and acting on the insights all require an expert in the loop.

Release software with confidence using Datadog Feature Flags

In this technical product demo, see how Datadog Feature Flags helps teams release software with confidence by connecting every feature flag to real-time observability data. Configure progressive, multi-step rollouts with automated guardrails tied to APM, RUM, and Product Analytics so you can pause or roll back instantly if latency, errors, or key business metrics degrade.

Approaching your observability migration with the right mindset

This guest blog post is authored by Nick Vecellio, Principal Engineer and Co-founder of NoBS, a Premier Datadog Partner specializing in hands-on Datadog migrations and optimizations. At NoBS, we help enterprises migrate their observability stack to Datadog. Teams often come to us after a migration has technically “worked,” but the new setup requires optimization tweaks to provide the clarity, reliability, or operational benefits they’re looking for.

Four ways engineering teams use the Datadog MCP Server to power AI agents

Since the Datadog Model Context Protocol (MCP) Server first launched in Preview, Datadog has received overwhelming interest and feedback from customers. We appreciate those who requested access to test our product, provided feedback, and shared their stories of how the MCP Server helped them overcome engineering challenges.

Datadog Incident Response: One platform from alert to resolution

When incidents strike, speed and clarity are critical. Datadog Incident Response brings the full incident lifecycle into one platform so teams can move from detection to resolution with confidence. Operate from a single, unified view of your systems, coordinate across the tools your teams already use, and leverage AI that analyzes incidents in real time to surface context, guide decisions, and accelerate resolution.

Meet the new Bits AI SRE: Deeper reasoning, twice as fast

When we announced Bits AI SRE at DASH 2025, we introduced an autonomous SRE agent that investigates alerts the moment they trigger. Bits AI SRE reads the same telemetry data as your team, understands your architecture, and follows your runbooks to identify likely root causes before you even open your laptop. It’s your AI teammate that’s always on call.

Use plain English to query your multi-cloud infrastructure in Resource Catalog

Modern cloud environments include thousands of resources across providers, teams, and accounts. Organizations need the ability to quickly locate the right resources so that they can manage resource compliance and troubleshoot issues. When engineers need to answer questions such as which databases are still on extended support or which storage buckets lack encryption, they often have to switch consoles, use provider-specific query languages, and know obscure version strings or configuration flags.

Simplifying troubleshooting across the user journey with Datadog Synthetic Monitoring

Every digital experience is a chain reaction. A customer logs in to an application, an API authenticates the request, a backend call retrieves data, a page loads, and somewhere along the way, something might break. When it does, teams often chase symptoms while the root cause remains hard to find. The more distributed the system, the more difficult it becomes to see how one small failure can cascade into a visible outage.