Manage your dashboards and monitors at scale

In the early stages of building a system, a few well-placed dashboards and monitors can provide sufficient visibility into service health and performance. However, as infrastructure scales and teams grow, so does the complexity of the monitoring landscape. In organizations where individual teams manage their own services but rely on a central platform or observability team for tooling and guidance, this complexity can quickly multiply.

Identify slowdowns across your entire network with Datadog Network Path

As modern infrastructure becomes increasingly distributed across on-premises data centers, multi-cloud environments, ISPs, and remote offices, understanding how traffic flows across your network is critical to delivering reliable performance and great user experiences. But pinpointing the source of network slowdowns remains one of the most persistent challenges for operations, network, and IT teams.

Instrument your Azure Container Apps workloads with the new Datadog Agent sidecar

Modern application development is evolving rapidly, with serverless containers and microservices becoming the standard for scalable, resilient architectures. Azure Container Apps is at the forefront of this movement, enabling developers to deploy containerized applications without having to manage infrastructure.

Datadog governance 101: From chaos to consistency

As your organization scales, managing observability resources and usage becomes increasingly important. More users and teams mean more dashboards, tags, API keys, and costs to manage. The job of keeping track of these resources and ensuring that they’re compliant can quickly grow in complexity.

How we saved $1.5 million per year with Cloud Cost Management

In collecting and analyzing trillions of events each day, Datadog ingests a massive amount of data. We spend substantially to process and store this data in the cloud, and teams across the organization are committed to optimizing the return on this investment. To this end, our FinOps analysts have always tracked the costs of delivering our services and identified opportunities for savings.

How to use AI tools more effectively: Tips from Datadog Engineers

A growing number of engineering organizations have adopted or are trialing agentic AI-based coding tools and LLMs in an effort to increase their teams’ development velocity. If you’re a developer, this means you’ve likely had to try out different agentic tools and models and determine how to best incorporate them into your existing workflows.

Monitor Claude usage and cost data with Datadog Cloud Cost Management

Managing the cost of foundation models is a critical challenge as AI adoption surges, particularly for teams using powerful models like Anthropic's Claude Opus and Claude Sonnet. As teams grow, prompt volumes rise and model usage becomes more complex, making it difficult to maintain clear visibility into, accountability for, and control over cloud AI spending.

Simplify XML log collection and processing with Observability Pipelines

In Microsoft-based environments, Windows event logs capture critical security events like user logins, privilege escalations, and system changes. These logs are vital for compliance and investigations. However, they’re natively formatted in XML, a verbose and deeply nested structure that is hard to search without preprocessing and inefficient to store.
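To illustrate why preprocessing helps, here is a minimal sketch of flattening a Windows event log entry from its nested XML into a compact JSON object. The sample event and field choices are hypothetical (a trimmed logon-style event), but the schema namespace is the standard one Windows event XML uses:

```python
# Sketch: flatten a (hypothetical, trimmed) Windows event log XML entry to JSON
import json
import xml.etree.ElementTree as ET

# Standard namespace used by Windows event XML
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

sample = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Security-Auditing"/>
    <EventID>4624</EventID>
    <TimeCreated SystemTime="2024-05-01T12:00:00Z"/>
  </System>
  <EventData>
    <Data Name="TargetUserName">alice</Data>
    <Data Name="LogonType">2</Data>
  </EventData>
</Event>"""

def flatten_event(xml_text: str) -> dict:
    """Pull a few common fields out of the nested structure."""
    root = ET.fromstring(xml_text)
    out = {
        "event_id": root.findtext("e:System/e:EventID", namespaces=NS),
        "provider": root.find("e:System/e:Provider", NS).get("Name"),
        "timestamp": root.find("e:System/e:TimeCreated", NS).get("SystemTime"),
    }
    # Promote each <Data Name="..."> element to a top-level key
    for data in root.iterfind("e:EventData/e:Data", NS):
        out[data.get("Name")] = data.text
    return out

print(json.dumps(flatten_event(sample), indent=2))
```

The flattened result is both smaller to store and directly searchable by field name, which is the kind of transformation a log pipeline performs at scale before indexing.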

Build secure and scalable Azure serverless applications with the Well-Architected Framework

Serverless platforms like Azure Functions and Azure Container Apps make it easier to scale your applications without managing infrastructure. But successful serverless apps require thoughtful planning. They must be designed to account for cold starts, unpredictable scaling behavior, and ephemeral compute lifecycles, all while ensuring secure data handling and end-to-end observability across highly distributed components.

Keep an eye on remote access to your Kubernetes infrastructure with Datadog Workload Protection

To improve efficiency and reduce cloud spending, teams frequently schedule pods on Kubernetes nodes dynamically, based on available resources. However, this practice has also introduced a new security challenge: the workloads maintained by a development team are now spread across Kubernetes nodes, exposing more hosts and increasing the blast radius when user credentials are compromised.