Operations | Monitoring | ITSM | DevOps | Cloud

Latest Posts

NIST 800-53: Understanding the Rosetta Stone of security frameworks

Across various industries, including the Department of Defense (DoD), federal service agencies, financial institutions, healthcare, and other highly regulated organizations, the National Institute of Standards and Technology (NIST) 800-53 security framework is used to describe security compliance. It is a standard catalog of security controls for protecting organizations’ operations, assets, and users from cyber threats. To be sure, that is a broad definition that requires more nuance.

Introducing CloudWatch Metric Stream Support in Lumigo

At Lumigo, we are constantly working to help you gain full visibility into your AWS environments with minimal friction. That’s why we’re excited to announce our support for CloudWatch Metric Stream. Now, AWS users can easily send their CloudWatch metrics to Lumigo to create dashboards, set alerts, and unify all their observability data—traces, logs, and metrics—into one powerful, centralized view.

Transformative Assessments in Tidal Accelerator

Cloud migrations can be complex, but they don’t have to be overwhelming. With the Tidal Accelerator platform, we’ve simplified the process, enabling organizations to not only migrate to the cloud efficiently by embracing the full spectrum of cloud migration methods, but also plan for a modernized infrastructure that supports long-term success. Here’s how Tidal Accelerator makes it possible.

Kickstart your investigations and reduce alert noise with Doctor Droid's offering in the Datadog Marketplace

Being an on-call engineer is often overwhelming, requiring you to pivot between tickets, dashboards, runbooks, and different data sources as you try to separate legitimate incidents from unnecessary noise. Not only does the process of investigating irrelevant alerts take time away from remediating important issues, but it also compounds alert fatigue.

The three pillars of observability

Do you feel you’re always playing catch-up with incidents? If so, you’re not alone. As IT environments become more complex, alerts keep piling up, and finding the root cause feels like searching for a needle in a haystack. And ITOps and incident responders are left scratching their heads and wondering: what went wrong? It can be frustrating when you don’t have end-to-end visibility into your systems. This is where observability comes in.

What's That Collector Doing?

The Collector is one of many tools that the OpenTelemetry project provides end users to use in their observability journey. It is a powerful mechanism that can help you collect telemetry in your infrastructure and it is a key component of a telemetry pipeline. The Collector helps you better understand what your systems are doing—but who watches the Collector? Let’s look at how we can understand the Collector by looking at all the signals it’s emitting.

Anatomy of an OTT Traffic Surge: Netflix Rumbles Into Wrestling

On Monday, Netflix debuted professional wrestling as its latest foray into live event streaming. The airing of WWE’s Monday Night Raw followed Netflix’s broadcasts of a heavily promoted boxing match featuring a 58-year-old Mike Tyson and two NFL games on Christmas Day. In this post, we look into the traffic statistics of how these programs were delivered.

Network Observability: Boosting NOC Performance in an AI-Driven World

In today’s digital battleground, a business’s survival depends on the robustness and reliability of its network infrastructure. Network connectivity represents the backbone of critical operations and services. Optimized network performance and experience are the lifeblood of corporate success. With the surge in cloud computing and cutting-edge technologies, networks are becoming intricate and multi-layered beasts.

Structured Logging Best Practices: Implementation Guide with Examples

In structured logging, log messages are broken down into key-value pairs, making it easier to search, filter, and analyze logs. This is in contrast to traditional logging, which usually consists of unstructured text that is difficult to parse and analyze.
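To make the contrast concrete, here is a minimal sketch of structured logging using only Python's standard library. The `JsonFormatter` class, the `checkout` logger name, and the `fields` attribute are illustrative assumptions, not part of any particular library's API; the point is simply that each log entry becomes a JSON object of key-value pairs rather than free text.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object of key-value pairs."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge any structured fields attached to the record (hypothetical
        # convention: callers pass extra={"fields": {...}} when logging).
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each field is a searchable key, so a log backend can filter on
# order_id or aggregate amount_usd without parsing free-form text.
logger.info("order placed", extra={"fields": {"order_id": "A-1042", "amount_usd": 19.99}})
```

Because every entry is valid JSON, the same query that finds one order's events can also drive dashboards and alerts, which is exactly what unstructured text makes hard.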