
Latest Posts

How to ensure your Kubernetes Pods and containers can restart automatically

As complex as Kubernetes is, much of it can be distilled to one simple question: how do we keep containers available for as long as possible? All of the various utilities, features, platform integrations, and observability tools surrounding Kubernetes tend to serve this one goal. Unfortunately, this also means there’s a lot of complexity and confusion surrounding this topic. After all, most people would agree that availability is important, but how exactly do you go about achieving it?
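To make the idea concrete, here is a minimal sketch (not taken from the post itself) using the official Kubernetes Python client: a Pod with restartPolicy set to Always and an HTTP liveness probe, so the kubelet restarts the container whenever it exits or its health check fails. The namespace, image, and /healthz endpoint are placeholders.

```python
# A minimal sketch, not the post's own example: create a Pod whose container
# restarts automatically via restartPolicy "Always" plus a liveness probe.
# The namespace, image, and health endpoint below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1PodSpec(
        restart_policy="Always",  # restart the container whenever it exits or fails
        containers=[
            client.V1Container(
                name="web",
                image="nginx:1.27",  # example image
                liveness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(path="/healthz", port=80),
                    initial_delay_seconds=5,
                    period_seconds=10,
                    failure_threshold=3,  # restart after three consecutive failures
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```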

How to ensure your Kubernetes cluster can tolerate lost nodes

Redundancy is a core strength of Kubernetes. Whenever a component fails, such as a Pod or deployment, Kubernetes can usually automatically detect and replace it without any human intervention. This saves DevOps teams a ton of time and lets them focus on developing and deploying applications, rather than managing infrastructure.
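As a hedged sketch of what that redundancy looks like in practice (again using the Kubernetes Python client, with placeholder names, labels, and image), a Deployment with three replicas and a topology spread constraint keyed on the node hostname means a lost node takes out at most one replica, which the controller then recreates elsewhere.

```python
# A hedged sketch, not from the post: a Deployment spread across nodes so that
# losing any single node leaves replicas running for the controller to replace.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                topology_spread_constraints=[
                    client.V1TopologySpreadConstraint(
                        max_skew=1,
                        topology_key="kubernetes.io/hostname",  # spread by node
                        when_unsatisfiable="ScheduleAnyway",
                        label_selector=client.V1LabelSelector(match_labels=labels),
                    )
                ],
                containers=[client.V1Container(name="web", image="nginx:1.27")],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```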

How to standardize resiliency on Kubernetes

There’s more pressure than ever to deliver high-availability Kubernetes systems, but a combination of organizational and technological hurdles makes this easier said than done. Technologically, Kubernetes is complex and ephemeral, with deployments that span infrastructure, cluster, node, and pod layers. And as with any complex, ephemeral system, the large number of constantly changing parts opens the possibility of sudden, unexpected failures.

Where to automate resilience testing in your SDLC

When organizations begin to deploy resilience testing or Chaos Engineering, there’s a natural question: can we integrate this with our CI/CD pipeline or release automation tools? After all, you’re likely running unit, performance, and integration tests already—is resiliency different? The short answer is yes—to both. Integration is possible, but resiliency is different, so automation is a nuanced conversation.

Resiliency is different on AWS: Here's how to manage it

There’s a common misconception about running workloads in the cloud: the cloud provider is responsible for reliability. After all, they’re hosting the infrastructure, services, and APIs. That leaves little else for their customers to manage, other than the workloads themselves…right?

Fault Injection in your release automation

One of the real successes of the Agile software development movement has been the push toward regular, frequent deployments. This has manifested as build and deployment automation and the general adoption of CI/CD. As engineers automate more of their software release lifecycle, an important question is how to automate Quality Assurance, which includes resilience testing and, more specifically, Fault Injection.
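As a rough illustration of what that automation can look like (a hypothetical sketch, not a description of any specific product's API), a post-deploy pipeline step might trigger a fault-injection experiment and gate the release on the service staying healthy while the fault is active. The URLs, token, and payload below are all assumptions.

```python
# Purely illustrative: a release-automation step that injects a fault and blocks
# the release if the service degrades. The endpoints and payload are hypothetical.
import os
import sys
import time

import requests

FAULT_API = os.environ["FAULT_API_URL"]        # hypothetical fault-injection endpoint
HEALTH_URL = os.environ["SERVICE_HEALTH_URL"]  # the service under test
TOKEN = os.environ["FAULT_API_TOKEN"]

# Ask the fault-injection tool to add latency to the service's dependencies.
experiment = {"target": "checkout-service", "fault": "latency", "ms": 300, "duration_s": 60}
resp = requests.post(FAULT_API, json=experiment, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# While the fault is active, poll the service and fail the pipeline if it stops responding.
deadline = time.time() + 60
while time.time() < deadline:
    try:
        if requests.get(HEALTH_URL, timeout=2).status_code != 200:
            sys.exit("Service degraded under fault injection; blocking the release.")
    except requests.RequestException:
        sys.exit("Service unreachable under fault injection; blocking the release.")
    time.sleep(5)

print("Service stayed healthy under fault injection; release can proceed.")
```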

How to scale your systems based on CPU utilization

CPU usage is one of the most common metrics in observability and cloud computing, and for good reason: it represents the amount of work a system is performing, and if it’s near 100% capacity, adding more work could make the system unstable. The solution is to scale: add more hosts with more CPU capacity, migrate some of your workloads to the new hosts, and split the traffic between them using a load balancer.
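As a back-of-the-napkin sketch of that trigger (assuming psutil supplies the CPU reading, and with add_host() as a stand-in for whatever actually provisions a new host and registers it with the load balancer), a scale-out decision might look like this:

```python
# A simplified sketch of a CPU-based scale-out trigger. psutil and the 80%
# threshold are assumptions; add_host() is a placeholder for real provisioning.
import time

import psutil

CPU_THRESHOLD = 80.0   # percent utilization that triggers a scale-out
SUSTAINED_CHECKS = 6   # require the threshold to be exceeded for ~1 minute

def add_host():
    # Placeholder: call your cloud provider or orchestrator here.
    print("Scaling out: provisioning a new host and adding it to the load balancer")

breaches = 0
while True:
    cpu = psutil.cpu_percent(interval=10)  # average utilization over 10 seconds
    breaches = breaches + 1 if cpu > CPU_THRESHOLD else 0
    if breaches >= SUSTAINED_CHECKS:
        add_host()
        breaches = 0
```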

Release Roundup March 2024: More ways to discover and test your services

2024 is off to a fast start here at Gremlin. Since our last release roundup, we’ve released new experiment types, new features to improve integration with cloud platforms, and improvements to our auto-detection processes. Now you can push processes to their limits, find dependencies even more easily, limit when tests can be run, and much more. We also introduced a slew of platform updates that improve efficiency, performance, and user experience in the Gremlin web application.

Introducing Process Exhaustion: How to scale your services without overwhelming your systems

We rarely think about how many processes are running on our systems. Modern CPUs are powerful enough to run thousands of processes concurrently, but at what point do our systems become oversaturated? When you’re running large-scale distributed applications, you might reach this limit sooner than you'd expect. How can you determine what that limit is, and how does that affect the number and complexity of the workloads you deploy?
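One rough way to put a number on that limit (a Linux-only sketch, assuming psutil is installed) is to compare the current process count against the kernel's system-wide PID limit and the per-user process limit:

```python
# A rough saturation check for a Linux host: current process count versus
# kernel.pid_max and the per-user RLIMIT_NPROC soft limit. psutil is assumed.
import resource

import psutil

running = len(psutil.pids())

with open("/proc/sys/kernel/pid_max") as f:
    pid_max = int(f.read())

soft_nproc, _ = resource.getrlimit(resource.RLIMIT_NPROC)

print(f"Running processes: {running}")
print(f"System-wide PID limit (kernel.pid_max): {pid_max}")
print(f"Per-user process limit (RLIMIT_NPROC soft): {soft_nproc}")
print(f"Headroom before hitting pid_max: {pid_max - running}")
```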

How to validate memory-intensive workloads scale in the cloud

Memory is a surprisingly difficult thing to get right in cloud environments. The amount of memory (also called RAM, or random-access memory) in a system indirectly determines how many processes it can run and how large those processes can get. You might be able to run a dozen database instances on a single host, but that same host may struggle to run a single large language model.
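To see why, it helps to run the arithmetic. Here is a toy sketch, with hypothetical per-workload footprints and psutil supplying the memory reading:

```python
# A toy sizing sketch: how many copies of a workload fit in available memory.
# The per-workload footprints are hypothetical; measure your own in practice.
import psutil

GIB = 1024 ** 3
available = psutil.virtual_memory().available

workloads = {"database instance": 4 * GIB, "large language model": 80 * GIB}

for name, footprint in workloads.items():
    fits = available // footprint
    print(f"{name}: ~{fits} instance(s) fit in {available / GIB:.1f} GiB of available memory")
```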