
Latest Posts

Formalize your organization's best practices with custom Scorecards in Datadog

The Datadog Service Catalog is a centralized hub of information about the performance, reliability, security, efficiency, and ownership of your distributed services. By using the Service Catalog, teams can eliminate knowledge silos and enable seamless DevSecOps workflows.

How we manage incidents at Datadog

Incidents put systems and organizations to the test. They pose particular challenges at scale: in complex distributed environments overseen by many different teams, managing incidents requires extensive structure and planning. But incidents, by definition, break structures and foil plans. As a result, they demand carefully orchestrated yet highly flexible forms of response. This post provides a look at our entire incident management process at Datadog.

Plan new architectures and track your cloud footprint with Cloudcraft by Datadog

In a rapidly expanding, highly distributed cloud infrastructure environment, it can be difficult to make decisions about the design and management of cloud architectures. That’s because it’s hard for a single observer to see the full scope when their organization owns thousands of cloud resources distributed across hundreds of accounts. You need broad, complete visibility in order to find underutilized resources and other forms of bloat.

Use Datadog Dynamic Instrumentation to add application logs without redeploying

Modern distributed applications are composed of potentially hundreds of disparate services, all containing code from different internal development teams as well as from third-party libraries and frameworks with limited external visibility. Instrumenting your code is essential for ensuring the operational excellence of all these different services. However, keeping your instrumentation up to date can be challenging when new issues arise outside the scope of your existing logs.

Prioritize and promote service observability best practices with Service Scorecards

The Datadog Service Catalog consolidates knowledge of your organization’s services and shows you information about their performance, reliability, and ownership in a central location. The Service Catalog now includes Service Scorecards, which inform service owners, SREs, and other stakeholders throughout your organization of any gaps in observability or deviations from reliability best practices.

Stream your Google Cloud logs to Datadog with Dataflow

IT environments can produce billions of log events each day from a variety of hosts and applications. Collecting this data can be costly: processing inefficiencies add network overhead, and ingestion can become inconsistent during major system events. Google Cloud Dataflow is a serverless, fully managed framework that enables you to automate and autoscale data processing.
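
To make the idea concrete, here is a rough Java sketch of what a Dataflow (Apache Beam) pipeline that forwards Pub/Sub log events to Datadog might look like. It is an illustration only, not necessarily the mechanism the post describes: the project and subscription names, the DD_API_KEY environment variable, and posting each raw event to the v2 logs intake endpoint are all assumptions.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PubSubToDatadog {

  // Ships each Pub/Sub message to the Datadog logs intake over HTTPS.
  static class PostToDatadogFn extends DoFn<String, Void> {
    private transient HttpClient client; // transient: DoFns are serialized to workers

    @Setup
    public void setup() {
      client = HttpClient.newHttpClient();
    }

    @ProcessElement
    public void processElement(@Element String logEvent) throws Exception {
      HttpRequest request = HttpRequest.newBuilder()
          .uri(URI.create("https://http-intake.logs.datadoghq.com/api/v2/logs"))
          .header("Content-Type", "application/json")
          .header("DD-API-KEY", System.getenv("DD_API_KEY")) // assumed to be set on the workers
          .POST(HttpRequest.BodyPublishers.ofString(logEvent))
          .build();
      client.send(request, HttpResponse.BodyHandlers.discarding());
    }
  }

  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLogs", PubsubIO.readStrings()
            .fromSubscription("projects/my-project/subscriptions/export-logs-sub")) // hypothetical names
     .apply("ShipToDatadog", ParDo.of(new PostToDatadogFn()));

    p.run();
  }
}
```

A production pipeline would typically add batching, retries, and a dead-letter output for failed deliveries; this sketch only shows the basic read-and-forward shape.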

Optimize your infrastructure with CloudNatix and Datadog

CloudNatix is an infrastructure monitoring and optimization platform for VMs, containers, and other cloud resources. Customers can use CloudNatix’s Autopilot feature to automatically configure and run infrastructure optimization workflows that allocate and run their resources more efficiently. CloudNatix can take action to auto-size Kubernetes and VM workloads, defragment Kubernetes clusters, and create harvest pods from unused VMs, among other key optimizations.

Understanding Request Latency with Profiling

It can be hard to figure out why response times are high in Java applications. In my experience, when engineers investigate this type of issue, they typically use one of two methods: they either apply a process of elimination to find a recent commit that might have caused the problem, or they use profiles of the system to look for the cause of changes in the relevant metrics.
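
As an aside on what "profiles of the system" can mean in practice, here is a minimal sketch, not necessarily the tooling used in the post, of capturing a JDK Flight Recorder profile programmatically around a suspect code path; the handleRequest placeholder and the 10 ms sampling period are assumptions for illustration.

```java
import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.Recording;

public class RequestProfiler {
    public static void main(String[] args) throws Exception {
        // Start a Flight Recorder session that samples running threads every 10 ms.
        Recording recording = new Recording();
        recording.enable("jdk.ExecutionSample").withPeriod(Duration.ofMillis(10));
        recording.start();

        handleRequest(); // placeholder for the slow code path under investigation

        recording.stop();
        // Dump the profile for offline inspection, e.g. in JDK Mission Control.
        recording.dump(Path.of("request-profile.jfr"));
    }

    private static void handleRequest() throws InterruptedException {
        Thread.sleep(250); // stand-in for real request handling
    }
}
```

Comparing such profiles from before and after a latency regression is one way to attribute the change to a specific code path rather than guessing at recent commits.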

Visualize user interactions with your pages by using Scroll Maps in Datadog Heatmaps

When developing modern applications, product managers, designers, and website developers need to understand how users interact with web pages in order to guide those users through their desired journeys. For example, teams need to know whether users ever see the content near the bottom of the page, where to place calls to action (CTAs) so they land in high-traffic areas, and how different pages compare in terms of user engagement.