Log Wrangling: Leveraging Logs to Optimize Your System

Today, we delve into the art and science of log wrangling: corralling, organizing, and getting the maximum benefit from your logs, much as a wrangler handles unpredictable livestock. Why do we do this? Managing logs can be challenging, but with the correct approach we can transform them from a daunting chore into a valuable asset, and that approach starts with the right tool: Graylog.

Log Wrangling: Make Your Logs Work For You

Senior Sales Engineer Chris Black walks users through "Log Wrangling". Drawing on his expertise, Chris compares logs to livestock and shares strategies for managing them the way a wrangler manages a herd. Topics include how to understand and maximize the utility of logs, where log wrangling gets complicated and how to simplify it, and the significance of data normalization. He also touches on organizational policies, the importance of feedback mechanisms in resource management, and key considerations when choosing your log priorities.

Using VPC Flow Logs to Monitor AWS Virtual Private Cloud

While no man is an island, your Virtual Private Cloud (VPC) is, except it’s a digital island floating in the ocean of a public cloud offered by a cloud service provider (CSP). A VPC means that everything on your digital island is yours, and none of the CSP’s other customers can (or should be able to!) access it. You’ve likely been introduced to the shared security model, a sometimes-confusing way that organizations and their CSPs split security responsibilities.
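To make the island analogy concrete, here is a minimal sketch of what working with VPC Flow Logs can look like. It assumes the default (version 2) flow log record layout documented by AWS; the sample record, field names, and alerting condition are illustrative only.

```python
# Minimal sketch: parsing a default-format (version 2) VPC Flow Log record.
# Field order follows AWS's documented default format; adjust the list if you
# publish flow logs with a custom format.

DEFAULT_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    """Split a space-delimited flow log record into named fields."""
    return dict(zip(DEFAULT_FIELDS, line.split()))

# Illustrative record: TCP traffic from 10.0.0.5 to 10.0.0.220 that was rejected.
sample = "2 123456789010 eni-0a1b2c3d 10.0.0.5 10.0.0.220 49152 3389 6 4 216 1698000000 1698000060 REJECT OK"
record = parse_flow_log(sample)

# Rejected traffic is exactly the kind of event you would forward to Graylog for alerting.
if record["action"] == "REJECT":
    print(f"Rejected: {record['srcaddr']} -> {record['dstaddr']}:{record['dstport']}")
```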

Understanding the difference between OpenSearch and Elasticsearch

Search is a fundamental requirement for anyone working with log files. When you have terabytes, or even petabytes, of data, you need to find answers to questions fast. The search engine you choose is the cornerstone of any technology that helps you find the information needed to answer those questions. While OpenSearch and Elasticsearch may have similar beginnings, their modern iterations have significant differences.
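As a rough illustration of why the two engines feel so similar at first glance, a basic search request is essentially interchangeable between them. The sketch below assumes a cluster on localhost:9200, an index pattern of logs-*, and message/@timestamp field names; none of these come from the article.

```python
# Minimal sketch: the same simple query works against OpenSearch and
# Elasticsearch because both expose a _search endpoint with a largely shared
# query DSL. Host, index pattern, and field names below are assumptions.

import requests

query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "connection refused"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    "size": 10,
}

# Point this at either engine; only the URL changes.
resp = requests.post("http://localhost:9200/logs-*/_search", json=query, timeout=10)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_index"], hit["_source"].get("message"))
```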

Monitoring Microsoft SQL Server login audit events in Graylog

Failed and successful logon events are among the most important events you should be monitoring on your network. When most people think of authentication auditing, OS-level login events come to mind, but you should be logging all authentication events regardless of application or platform. Not only should we monitor these events across the network, but we should also normalize the data so that we can correlate events between platforms.
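A minimal sketch of that normalization idea, written in Python rather than with Graylog's own extractors or pipeline rules, might look like the following. The event shapes and field mappings are assumptions for illustration; only the Windows event IDs (4624 for success, 4625 for failure) are standard.

```python
# Minimal sketch: mapping logon events from two platforms onto shared field
# names so they can be correlated. In Graylog this is typically done with
# extractors or pipeline rules rather than external code.

def normalize_windows_logon(event: dict) -> dict:
    # Windows Security log: Event ID 4624 = successful logon, 4625 = failed logon.
    return {
        "event_source": "windows",
        "user": event.get("TargetUserName"),
        "source_ip": event.get("IpAddress"),
        "outcome": "success" if event.get("EventID") == 4624 else "failure",
    }

def normalize_mssql_logon(event: dict) -> dict:
    # Assumed shape for a SQL Server logon audit event pulled from the error log.
    return {
        "event_source": "mssql",
        "user": event.get("login_name"),
        "source_ip": event.get("client_ip"),
        "outcome": "failure" if "failed" in event.get("text", "").lower() else "success",
    }

events = [
    normalize_windows_logon({"EventID": 4625, "TargetUserName": "sa", "IpAddress": "10.0.0.7"}),
    normalize_mssql_logon({"login_name": "sa", "client_ip": "10.0.0.7",
                           "text": "Login failed for user 'sa'."}),
]

# With shared field names, one query (outcome:failure AND user:sa) now covers
# both OS-level and database-level authentication failures.
failures = [e for e in events if e["outcome"] == "failure"]
print(failures)
```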

Key Value Parser Delivers Useful Information Fast

Parsers make it easier to dig deep into your data to get every byte of useful information you need to support the business. They tell Graylog how to decode the log messages that come in from a source, which is anything in your infrastructure that generates log messages (e.g., a router, switch, web firewall, security device, Linux server, Windows server, application, telephone system, and so on).
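For a rough idea of what a key value parser does under the hood, here is a minimal Python sketch; the regular expression and the firewall-style sample message are illustrative assumptions, not Graylog's actual implementation.

```python
# Minimal sketch of key=value parsing, the kind of work Graylog's key value
# parser performs on incoming messages.

import re

KV_PATTERN = re.compile(r'(\w+)=(".*?"|\S+)')

def parse_key_values(message: str) -> dict:
    """Extract key=value pairs from a log message, stripping quotes from values."""
    return {key: value.strip('"') for key, value in KV_PATTERN.findall(message)}

msg = 'action=deny src=203.0.113.8 dst=10.0.0.12 dport=3389 msg="RDP blocked by policy"'
fields = parse_key_values(msg)
print(fields["action"], fields["src"], fields["msg"])  # deny 203.0.113.8 RDP blocked by policy
```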

Azure Monitoring: What it is and why you need it

Even before the push to the cloud, your company was a Microsoft shop. From workstations to servers, you’ve invested heavily in the Microsoft ecosystem because it gave your business all the technologies necessary for success. As part of your organization’s digital transformation strategy, Azure offered the easiest onboarding experience.

What is IT Asset Management (ITAM)?

Organizations collect technologies the way kids collect baseball cards. As a company’s IT strategy matures, it adds new technologies to supplement existing ones, just as kids add new rookie cards to their collections of classics. While kids can leave their baseball cards piled randomly in a shoebox, organizations need to carefully identify and track their IT assets so that they can appropriately manage digital performance and cybersecurity.

A Guide to Docker Adoption

Whether you’re a developer or a security analyst, you probably already know the name Docker. Developers use Docker’s open-source platform to build, package, and distribute their applications. Since the application and all its dependencies sit in the container, it runs consistently across different operating systems and environments. As with everything in technology, Docker adoption is a good news/bad news story. Good news: DevOps teams can ship applications faster.