
Latest Posts

Logging Golang Apps with ELK and Logz.io

The abundance of programming languages available today gives programmers plenty of tools with which to build applications. But whether they are built with long-established giants like Java or with newcomers like Go, applications need monitoring after deployment. In this article, you will learn how to ship Golang logs to the ELK Stack and Logz.io. It’s usually possible to get an idea of what an application is doing by looking at its logs. However, log data has a tendency to grow exponentially over time.
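
As a rough illustration of the kind of setup the article walks through, here is a minimal sketch of structured JSON logging in Go using the logrus library, writing to a file that a shipper such as Filebeat could tail and forward to Elasticsearch or the Logz.io listener. The library choice, file name, and field names are illustrative assumptions, not necessarily what the article itself uses.

    package main

    import (
        "os"

        log "github.com/sirupsen/logrus"
    )

    func main() {
        // Emit structured JSON so a shipper such as Filebeat can forward
        // each line to Elasticsearch or the Logz.io listener.
        log.SetFormatter(&log.JSONFormatter{})

        // Write to a file the shipper tails; stdout works just as well
        // for containerized apps. "app.log" is an illustrative path.
        f, err := os.OpenFile("app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        log.SetOutput(f)

        // Structured fields become top-level JSON keys, which makes them
        // easy to filter on later in Kibana.
        log.WithFields(log.Fields{
            "service": "checkout",
            "order":   1234,
        }).Info("order processed")
    }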

Can Distributed Tracing Replace Logging?

Logging has been around since programming began. We use logs to debug issues and understand how software works at the code level. Alongside logging and debuggers, profilers are a dev’s best friend when writing code, and they can even run in production with limits in place to reduce overhead. As architectures became more distributed, and systems more complex, centralized log aggregation soon became necessary. At that point, we had to analyze all of this data. Hence, log analytics technologies were born.

Using the Prune Filter in Logstash

Logstash has a number of helpful plugins. We’ve covered the mutate plugin in great detail here, as well as Logstash grok, but it’s time to go over some of the others. Here, the Logstash prune filter will get its due attention. It exists to remove fields according to blacklists or whitelists of field names and their associated values. Put more simply, it prunes the excess branches (fields) in your garden (your data).
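
To give a flavor of what that looks like, here is a minimal sketch of a prune filter inside a Logstash pipeline that keeps only a whitelist of fields. The field names are illustrative assumptions; the full set of options (blacklists, value-based pruning, and so on) is covered in the post.

    filter {
      prune {
        # Keep only these fields; every other field on the event is dropped.
        # Entries are treated as regular expressions matched against field names.
        whitelist_names => ["^message$", "^host$", "^level$", "@timestamp"]
      }
    }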

The New Technical Executive: Between CIO & CTO

Most organizations have two technical executive leadership roles: the Chief Information Officer (CIO) and the Chief Technology Officer (CTO). But the two positions are often confused, comparing the CIO vs. the CTO raises a lot of questions, and depending on the business strategy, the roles may even fuse into a single position. Their responsibilities might not be so clear even to the people who work for them.

Managing Your Log Volume Across Multiple Accounts Just Got Easier

Many organizations are adopting centralized logging tools so that they have one place for all of their data. This is generally easier than having separate log storage and analysis tools across teams. But centralized logging introduces new challenges, like how to segment those logs according to the teams or developers to whom they are most relevant, and how to manage log volume.

Introducing Multiple Shipping Tokens for Logz.io Accounts

We’re excited to share that we’ve revamped our Shipping Tokens feature! If you’re a Logz.io user, you’re familiar with the key role tokens play in shipping and protecting your data. As a form of virtual identification, tokens help us properly attribute data to the right account. They are required in a variety of use cases, such as log shipping, API access, and read access, and they are also mandatory for compliance.

Shipping Metrics from HashiCorp Consul with ELK and Logz.io

Microservices interact in so many ways. Load balancers, security authentication, and service discovery are just the tip of the iceberg. It can get confusing, if not outright messy. But why be messy when you can be meshy? This is where service meshes come into play, weaving the roles these tools play into a common ‘net’ that ties the whole architecture together. HashiCorp has produced one of the most popular of these tools: Consul Connect.

Jaeger Essentials: Jaeger Persistent Storage With Elasticsearch, Cassandra & Kafka

Running systems in production involves requirements for high availability, resilience, and recovery from failure. When running cloud-native applications, this becomes even more critical: the base assumption in such environments is that compute nodes will suffer outages, Kubernetes nodes will go down, and microservice instances are likely to fail, yet the service is expected to remain up and running.

Full Observability with Your Node.js App

JavaScript is a pretty prolific programming language, used daily by people visiting any number of websites and web applications. Node.js, its server-side runtime, is also used all over the place. You’ll find it deployed everywhere from full application stacks to functions in services like AWS Lambda, and even in IoT projects with frameworks like Johnny-Five. So when we think about observability in the context of a Node.js stack, how do we set it up and get the information flowing?