A Complete Guide to Linux Log File Locations and Their Usage

Linux log files are text-based records that capture system events, application activities, and user actions. They're stored primarily in the /var/log directory and provide essential information for debugging issues, monitoring system health, and maintaining security. This guide covers the most important Linux log files and a few detailed techniques for reading and analyzing them.
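As a quick illustration of the kind of analysis such a guide covers, here is a minimal sketch (not taken from the guide itself) that parses syslog-style lines, like those found in /var/log/syslog or /var/log/messages, and tallies entries per program. The sample lines and field names are assumptions for demonstration:

```python
import re
from collections import Counter

# Made-up sample lines in the classic syslog format:
#   timestamp  hostname  program[pid]: message
SAMPLE_LOG = """\
Jun 14 15:16:01 host sshd[1234]: Accepted password for alice from 10.0.0.5 port 22 ssh2
Jun 14 15:16:02 host sshd[1235]: Failed password for root from 10.0.0.9 port 22 ssh2
Jun 14 15:17:10 host CRON[1300]: (root) CMD (run-parts /etc/cron.hourly)
"""

# Capture timestamp, host, program name, optional PID, and message body.
LINE_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s+(?P<host>\S+)\s+"
    r"(?P<prog>[\w./-]+)(?:\[(?P<pid>\d+)\])?:\s(?P<msg>.*)$"
)

def count_by_program(text):
    """Count how many log lines each program emitted."""
    counts = Counter()
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            counts[m.group("prog")] += 1
    return counts

if __name__ == "__main__":
    print(count_by_program(SAMPLE_LOG))
```

The same pattern scales to real files: open a file under /var/log instead of the embedded sample, and filter on the captured `msg` group to flag events such as failed logins.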

Steve Owens of Verizon Discusses TDM Switch Decommissioning at Ribbon Insights 2024

The video discusses Verizon’s strategic approach to accelerating the decommissioning (decom) of legacy Time Division Multiplexing (TDM) switches. The speaker emphasizes the importance of TDM switch decom in reducing power consumption, cutting expenses, complying with regional climate regulations, and reclaiming valuable technical space. A key driver for Verizon is the steep costs associated with maintaining aging infrastructure under increasingly stringent local carbon emission laws, particularly in the Northeast Corridor.

Site24x7: Synthetic monitoring vs. Real user monitoring

Want to know the difference between synthetic monitoring and real user monitoring (RUM)? You're not alone. In this video, we break down both monitoring types, show how they work, and explain when to use each—so you can build a monitoring strategy that gives you full visibility into your website or application performance. Whether you're a DevOps engineer, SRE, or IT admin, this video will help you make smarter monitoring decisions.

Attack Surface Visibility: Research Uncovers Critical Security Blind Spots

You can’t fix what you don’t know is broken. Proactive attack surface management begins with total attack surface visibility, but persistent cybersecurity data blind spots leave organizations vulnerable. Ivanti’s 2025 State of Cybersecurity Report finds that siloed and inaccessible data limits visibility into threats and impedes security efforts and response times.

How to Integrate OpenTelemetry Collector with Prometheus

Pulling observability data together is rarely clean. Metrics come from everywhere, formats vary, and making sense of it takes some work. OpenTelemetry Collector and Prometheus fit perfectly here. The Collector handles ingestion and processing from different sources, while Prometheus stores and queries the data. Simple, effective, and no vendor lock-in. In this blog, we cover how to integrate the Collector with Prometheus, common pitfalls, and ways to control costs.
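As a rough sketch of what that integration can look like, here is a minimal Collector configuration that receives OTLP metrics and exposes them on an endpoint Prometheus can scrape. The endpoints and ports are placeholder assumptions, not values from the post, and the `prometheus` exporter may require a Collector distribution that bundles it (such as the contrib build):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # applications send OTLP metrics here

processors:
  batch: {}                      # batch before export to reduce overhead

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889       # Prometheus scrapes this endpoint

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

On the Prometheus side, a matching `scrape_configs` job pointed at port 8889 completes the loop: the Collector handles ingestion and processing, and Prometheus handles storage and querying.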

It's The End Of Observability As We Know It (And I Feel Fine)

In a really broad sense, the history of observability tools over the past couple of decades has been about a pretty simple concept: how do we make terabytes of heterogeneous telemetry data comprehensible to human beings? New Relic did this for the Rails revolution, Datadog did it for the rise of AWS, and Honeycomb led the way for OpenTelemetry.

Making the Case for Creating a Digital Twin of All Your Technical Spaces

Technology assets are no longer confined to the walls of a traditional data center. They now span a range of environments from core facilities and labs to distributed sites like IDF closets, manufacturing sites, and retail branches. Yet many organizations still rely on fragmented tools and manual processes to manage these distributed environments. This can result in gaps in visibility, inconsistent documentation, and higher operational risk.

Migrate historical logs from Splunk and Elasticsearch using Observability Pipelines

Migrating to a new logging platform can be a complex operation, especially when it involves both active and historical logs. Observability Pipelines offers dual-shipping capability, making it easy to route active logs to your new platform without disrupting your log management workflows. But migrating years' worth of historical logs—which are critical for investigating security incidents and demonstrating compliance with applicable laws—requires a different approach.