
Latest Posts

The Benefits of Structuring Logs in a Standardized Format

As any developer or IT professional will tell you, logs are often invaluable when systems experience issues. Implemented and leveraged effectively, log data helps DevOps teams identify problems within a system more quickly, and it helps incident responders isolate the root cause efficiently. With that being the case, maximizing the value of log data is vital.
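To make the idea concrete, here is a minimal Python sketch (our illustration, not code from the post) that emits each log record as a single JSON object. With a standardized structure like this, downstream tools can filter and aggregate on fields instead of regex-matching free text; the field names here are assumptions.

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object with a fixed set of fields."""
    def format(self, record):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")  # hypothetical service name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment authorized")
# -> {"timestamp": "...", "level": "INFO", "logger": "checkout", "message": "payment authorized"}
```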

Announcing LogDNA Agent 3.3 GA: Improved Performance for Linux Support

We’re excited to announce the general availability of LogDNA Agent 3.3, which introduces Linux and ARM64 support to our Rust Agent. This release provides improved performance and enables a few features previously available only to our Kubernetes customers, such as additional configuration options within the Agent and the ability to run as a non-root user. Additionally, we have added Prometheus metrics that provide insights into your Agent.

IoT Data With LogDNA

Consider the following question: Why do most teams face pressure to rethink traditional logging and observability approaches? Ask most engineers and the answers will likely center on the challenges posed by microservices applications. Because microservices are more complex than monoliths and involve more moving parts, they require more sophisticated, granular log collection, correlation, and analysis.

Tucker Callaway on the State of the Observability Market

Tucker Callaway is the CEO of LogDNA. He has more than 20 years of experience in enterprise software with an emphasis on developer and DevOps tools. Tucker drives innovation, experimentation, and a culture of collaboration at LogDNA, three ingredients that are essential for the type of growth that we've experienced over the last few years.

5 Examples of Metrics or Log Data That Drives Observability

Which data sources do DevOps teams need in order to achieve observability? At a high level, that’s an easy question to answer. Concepts like the “three pillars of observability”—logs, metrics, and traces—may come to mind. Or, you may think in terms of techniques like the RED Method or Google’s Golden Signals, which are other popular frameworks for defining which types of data teams should collect for monitoring and observability purposes.
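As a quick illustration of one of those frameworks, the sketch below computes the RED Method's three signals (Rate, Errors, Duration) from a batch of parsed request logs. It is a minimal sketch under stated assumptions: the record fields and the time window are hypothetical, not a LogDNA schema.

```python
from statistics import median

# Hypothetical parsed request logs; the field names are illustrative only.
requests = [
    {"status": 200, "duration_ms": 42},
    {"status": 500, "duration_ms": 310},
    {"status": 200, "duration_ms": 58},
]
window_seconds = 60  # assumption: these records span one minute

rate = len(requests) / window_seconds                    # R: requests per second
errors = sum(1 for r in requests if r["status"] >= 500)  # E: count of failed requests
duration = median(r["duration_ms"] for r in requests)    # D: latency (median here)

print(f"rate={rate:.2f} req/s, errors={errors}, median duration={duration} ms")
```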

Announcing the Control API Suite

As LogDNA has grown, many of our customers have grown too, bringing in more data sources for ingestion and expanding the use cases for their logs. To help with managing more data, we’re excited to introduce the Control API suite: four individual APIs that let companies programmatically configure how their log data is ingested and managed. Below, we’ll cover each new API in detail, as well as why they are so impactful for our customers.
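For a rough sense of what programmatic configuration of this kind usually looks like, here is a hedged sketch of an authenticated REST call. The endpoint, header, and payload below are placeholders invented for illustration, not LogDNA's documented Control API:

```python
import requests

# Hypothetical endpoint, auth header, and payload -- placeholders, not a documented API.
SERVICE_KEY = "your-service-key"
resp = requests.put(
    "https://api.example.com/v1/config/exclusion-rules",  # placeholder URL
    headers={"servicekey": SERVICE_KEY},                  # assumption: key-based auth
    json={"name": "drop-debug", "query": "level:debug", "active": True},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```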

Announcing Early Access to Variable Retention on LogDNA

The massive proliferation of log data forces teams to manage the costs of processing, routing, and storing it. Teams need access to this data to gain critical insights into their services, but for many organizations that access strains the budget. Logging can get expensive fast, which often forces teams into difficult tradeoffs between aggregating enough logging information to be useful and controlling the cost of storing all those logs.

Apache Kafka Tutorial: Use Cases and Challenges of Logging at Scale

Enterprises often run many servers, firewalls, databases, mobile devices, API endpoints, and other infrastructure that powers their IT. Because of this, organizations must dedicate resources to managing logged events across the environment. Logging is a factor in detecting and blocking cyber-attacks, and organizations use log data for auditing during post-incident investigations. Message brokers such as Apache Kafka can ingest log data in real time, then process, store, and route it.
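To make the broker-based pipeline concrete, here is a minimal sketch using the kafka-python client. It assumes a broker reachable at localhost:9092, and the topic name and event fields are illustrative; it publishes one structured log event for downstream consumers to process, store, or route:

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Assumes a broker at localhost:9092; the topic and event fields are illustrative.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

event = {"service": "firewall", "level": "warning", "message": "blocked inbound connection"}
producer.send("logs", value=event)  # downstream consumers can process, store, or route it
producer.flush()  # block until the event is actually delivered
```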