
July 2020

It's upstream-first with Ocean for kops

Many of Spot’s AWS customers use Kubernetes Operations (kops) to self-manage their Kubernetes clusters. The tool significantly simplifies cluster setup, lifecycle management via instance groups, and Kubernetes Day 2 operations, and it can generate Terraform configurations, making it a popular choice for deploying production-grade k8s clusters.
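For illustration, here is a minimal sketch of that workflow driven from Python. The cluster name and S3 state store are placeholders, and the flags shown assume the standard kops CLI; adapt them to your environment.

```python
import subprocess

# Hypothetical names for illustration only.
CLUSTER = "demo.k8s.example.com"
STATE = "s3://my-kops-state-store"

def kops(*args: str) -> None:
    """Run a kops command and fail loudly if it errors."""
    subprocess.run(
        ["kops", *args, f"--name={CLUSTER}", f"--state={STATE}"],
        check=True,
    )

# Define the cluster and its instance groups (no cloud resources created yet).
kops("create", "cluster", "--zones=us-east-1a,us-east-1b", "--node-count=3")

# Instead of applying directly, emit Terraform so the change can go through
# the usual plan/apply review workflow.
kops("update", "cluster", "--target=terraform", "--out=./kops-terraform")
```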

Technical introduction to Ocean by Spot: Serverless infrastructure engine for containers and Kubernetes

When it comes to modern container orchestration, there are a variety of control plane solutions for managing your applications in a containerized environment. Users can opt for managed services (e.g., Amazon EKS and ECS, Google GKE, and Azure AKS) or run their own orchestration with Kubernetes. However, the dynamic nature of containers introduces operational complexities that can make your cloud infrastructure difficult to manage.

Understanding Kubernetes Cluster Autoscaler: Features, Limitations and Alternatives

There are different tools and mechanisms for autoscaling in Kubernetes at both the application and infrastructure layers to help users manage their cluster resources. In this article, we’ll explore two infrastructure autoscaling tools for Kubernetes: Ocean by Spot and the open source Cluster Autoscaler.

AWS Fargate pricing: how to optimize billing and save costs

AWS Fargate is a managed service that enables you to run containers in Amazon Elastic Kubernetes Service (EKS) or Elastic Container Service (ECS). Since Fargate is serverless, you don’t need to provision or manage servers or clusters. However, you do need to package your applications in containers, define resource requirements, and configure permissions and networking policies. Fargate pricing is based on the virtual CPU (vCPU) and memory (GB of RAM) resources your tasks request, billed for the time they run.
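As a rough worked example (the rates below are illustrative approximations; always check the AWS pricing page for your region, since prices change over time):

```python
# Illustrative per-hour rates, roughly in line with us-east-1 circa 2020.
PRICE_PER_VCPU_HOUR = 0.04048
PRICE_PER_GB_HOUR = 0.004445

def fargate_task_cost(vcpu: float, memory_gb: float, hours: float) -> float:
    """Cost of one Fargate task: (vCPU rate + memory rate) times runtime."""
    return (vcpu * PRICE_PER_VCPU_HOUR + memory_gb * PRICE_PER_GB_HOUR) * hours

# Example: a 0.5 vCPU / 1 GB task running 24x7 for a 30-day month.
monthly = fargate_task_cost(vcpu=0.5, memory_gb=1, hours=24 * 30)
print(f"~${monthly:.2f} per month per task")  # roughly $17.77 at these rates
```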

Instant scale up for even the most dynamic ECS clusters

One of the key capabilities of Ocean by Spot is “headroom”: the ability to maintain a dynamic buffer of spare capacity for immediate scale-up. Ocean continuously predicts which workloads are most likely to require scale-up and adjusts headroom in line with this prediction, enabling immediate scheduling of new tasks without waiting for infrastructure provisioning. This shortens the time to execution for these workloads and dramatically speeds up the scale-up process.
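As a conceptual sketch only (this is not the Ocean API; the field names and sizing heuristic below are hypothetical), headroom can be thought of as a number of spare “units” sized from recent scheduling activity:

```python
from dataclasses import dataclass

@dataclass
class Headroom:
    """A spare-capacity buffer, expressed as N identical 'units'."""
    cpu_per_unit_m: int       # millicores reserved per unit
    memory_per_unit_mib: int  # MiB reserved per unit
    num_units: int

def predict_headroom(recent_scaleups: list[tuple[int, int]],
                     buffer_pct: float = 0.1) -> Headroom:
    """Size the buffer from recently scheduled workloads (cpu_m, mem_mib),
    keeping roughly buffer_pct of that demand free for instant scheduling."""
    if not recent_scaleups:
        return Headroom(0, 0, 0)
    avg_cpu = sum(c for c, _ in recent_scaleups) // len(recent_scaleups)
    avg_mem = sum(m for _, m in recent_scaleups) // len(recent_scaleups)
    units = max(1, round(len(recent_scaleups) * buffer_pct))
    return Headroom(avg_cpu, avg_mem, units)

# Example: ten recent tasks, each ~500m CPU / 1024 MiB -> keep 1 unit spare.
print(predict_headroom([(500, 1024)] * 10))
```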

Log aggregation and the journey to optimized logs

Ever experienced bad logging, whether it’s the wrong log, the wrong information, or any of a multitude of other logging woes? We’ve lost count of the times we’ve happily gone and added log lines, only to find out it was all for naught. The frustrations are endless. What is meant to be magic for your code, the ultimate savior when debugging, has become the ultimate frustration.

Kubernetes Secrets - The good, the bad and the ugly

Secrets, by definition, should be kept secret, whichever tool you’re using. While there are plenty of best practices for keeping your Kubernetes secrets actually secret, there are some loopholes that can compromise their security, and might be taken advantage of by malicious entities. This post will cover prevalent best practices for securing your secrets on Kubernetes along with some new approaches for secrets management.
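One reason Secrets need extra care: by default their values are only base64-encoded, not encrypted. The stdlib-only sketch below (names are illustrative) builds a Secret manifest and shows how easily the plaintext is recovered by anyone who can read the object, or read etcd if encryption at rest isn’t enabled.

```python
import base64
import json

def make_secret(name: str, data: dict[str, str]) -> dict:
    """Build a Kubernetes Secret manifest; values are base64-encoded, NOT encrypted."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {k: base64.b64encode(v.encode()).decode() for k, v in data.items()},
    }

secret = make_secret("db-credentials", {"password": "s3cr3t"})
print(json.dumps(secret, indent=2))

# Recovering the plaintext is trivial for anyone with read access:
print(base64.b64decode(secret["data"]["password"]).decode())  # -> s3cr3t
```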

Leveraging cloud computing pricing models for greater cost efficiency

As we discussed in our previous post on cloud cost analysis and optimization, with over 60% of all cloud costs attributable to compute, compute infrastructure spend should be a top priority. In this post, we will focus on the public cloud vendors’ pricing models and how best to leverage each of them for maximum cost optimization.
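As a back-of-the-envelope illustration (all prices and discount percentages below are assumptions for the sake of the example, not vendor figures), blending pricing models across a fleet might look like this:

```python
# Illustrative blended-cost calculation: assumed discounts only; actual
# savings vary by vendor, region, instance family, and commitment term.
ON_DEMAND_HOURLY = 0.10  # hypothetical on-demand price per instance
DISCOUNTS = {"on_demand": 0.0, "reserved": 0.40, "spot": 0.70}

def blended_hourly(mix: dict[str, float]) -> float:
    """Hourly cost per instance for a fleet split across pricing models
    (the fractions in `mix` should sum to 1)."""
    return sum(share * ON_DEMAND_HOURLY * (1 - DISCOUNTS[model])
               for model, share in mix.items())

# Baseline load on reserved capacity, burst on spot, a small on-demand fallback.
mix = {"reserved": 0.5, "spot": 0.4, "on_demand": 0.1}
print(f"${blended_hourly(mix):.3f}/hr vs ${ON_DEMAND_HOURLY:.3f}/hr all on-demand")
```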