
The latest News and Information on Containers, Kubernetes, Docker and related technologies.

Auto Scaling of Kubernetes Workloads Using Custom Application Metrics

Orchestration platforms such as Kubernetes and OpenShift help customers reduce costs by enabling on-demand, scalable compute resources. Customers can manually scale out and scale in their Kubernetes compute resources as needed. Autoscaling is the process of automatically adjusting compute resources to meet a system's performance requirements. As workloads grow, systems require additional resources to sustain performance and handle increasing demand.
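Scaling on a custom application metric is typically done with a HorizontalPodAutoscaler backed by a metrics adapter (such as prometheus-adapter). The sketch below is illustrative: the deployment name, metric name, and thresholds are assumptions, and the custom metric must actually be exposed through an adapter in your cluster.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api              # assumed target deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          # custom metric, assumed to be served by a metrics adapter
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"   # scale out when per-pod average exceeds 100 req/s
```

With this in place, the HPA controller adjusts the replica count between 2 and 10 based on the per-pod average of the custom metric rather than raw CPU or memory.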

Data Sovereignty Demystified: What You Need to Know

As data continues to flow across borders, understanding data sovereignty is more important than ever. Kunal Kushwaha explores the laws and regulations governing data storage and transfer, and the implications of data sovereignty in the UK and India. Learn how data sovereignty affects individuals, businesses, and governments, and discover the challenges and opportunities that arise from it. For organizations looking to maintain control over their data, Civo offers Sovereign Cloud solutions in the UK and India.

Real-Time, Automated Resource Optimization for Kubernetes Workloads

Struggling with underutilized Kubernetes resources or rising cloud costs? Learn how Pepperdata Capacity Optimizer delivers real-time, automated resource optimization for Kubernetes and Amazon EMR workloads—helping teams reduce costs and boost performance without manual tuning. In this video, discover how Pepperdata helps DevOps, platform engineering, and FinOps teams.

Heroku vs AWS: Differences & What to Choose for Startups & Mid-Size Companies in 2025

Heroku and AWS offer distinct benefits for startups and mid-size companies. This guide compares pricing, scalability, security, and developer experience to help you choose the right cloud platform based on your team’s needs and growth goals.

What's Holding Back AI Adoption in India?

Earlier this year, I spent a few weeks in India, visiting universities, speaking at meetups, and catching up with founders. What stood out wasn’t just the excitement about AI, but the focus on what it can actually do today. The curiosity about GenAI and big-picture questions around AGI is there, but most conversations centered around real needs: learning faster, applying for jobs, and getting healthier.

Pepperdata In Collaboration with AWS | Optimize Utilization and Cost for Kubernetes Workloads

In this AWS Startup Partner Spotlight, discover how Pepperdata empowers cloud-native startups to optimize their Kubernetes and Amazon EMR workloads in real time. With automated resource optimization, companies can reduce costs by an average of 30% while increasing utilization by up to 80%—without any manual tuning. Whether you're scaling rapidly or managing unpredictable workloads, Pepperdata ensures your infrastructure runs efficiently and cost-effectively from day one.

Stop Guessing, Start Measuring: Optimizing Rancher Continuous Delivery With Fleet Benchmarks

Rancher Continuous Delivery (known as Fleet) can be used in a workflow to deploy applications to many clusters. With its GitOps support, it enables downstream clusters to pull updates from a Git repository. We know of users who monitor several hundred Git repositories and deploy to a thousand clusters. To make this scale possible, several intermediate steps are necessary. First, the application is converted into separate bundles, which are then targeted at clusters.
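The repository-to-cluster wiring described above is driven by a GitRepo custom resource. A minimal sketch, assuming an illustrative repository URL, path, and cluster label:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample-app             # illustrative name
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-examples   # assumed repo URL
  branch: main
  paths:
    - simple                   # directory within the repo to bundle
  targets:
    - name: prod
      clusterSelector:
        matchLabels:
          env: prod            # assumed label on downstream clusters
```

Fleet watches the repository, converts the contents of each listed path into bundles, and distributes those bundles to every downstream cluster matching the selector.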

Why Manual Tuning Fails: A Better Way to Optimize Kubernetes Workloads

As a data platform engineer, you’re tasked with running complex workloads—Apache Spark jobs, AI/ML pipelines, batch ETL—across dynamic Kubernetes environments. Performance matters. Time spent tuning matters. And so does cost. But if you’re still relying on manual resource tuning to optimize your workloads, you’re playing a losing game. Sure, you can tweak CPU and memory requests by hand. You can comb through Prometheus metrics, look at job logs, estimate peaks.
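The "tweak by hand" workflow above usually means editing static requests and limits in a pod spec, like the illustrative fragment below (container name, image, and sizes are assumptions), then redeploying and re-checking the metrics:

```yaml
spec:
  containers:
    - name: spark-executor     # illustrative name
      image: spark:3.5.0       # assumed image
      resources:
        requests:
          cpu: "2"             # hand-picked guess based on observed peaks
          memory: 4Gi
        limits:
          cpu: "4"
          memory: 8Gi
```

Because these values are fixed while the workload is not, every guess is stale as soon as the job mix changes, which is exactly why manual tuning becomes a losing game at scale.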

Comprehensive Guide to Developing and Deploying a Python API with Docker and Kubernetes (Part I)

In the evolving landscape of software development, containerization and orchestration have become pivotal. Docker and Kubernetes stand at the forefront of this transformation, offering scalable and efficient solutions for application deployment. This guide provides a detailed walkthrough on developing a Python API, containerizing it with Docker, and deploying it using Kubernetes, ensuring a robust and production-ready application.
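As a taste of the first step, here is a minimal sketch of a Python API using only the standard library. A guide like this one would more likely use a framework such as FastAPI or Flask; the stdlib version keeps the example self-contained, and the route and payload are illustrative.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Tiny JSON API with a single /health endpoint."""

    def do_GET(self):
        if self.path == "/health":
            # Serve a JSON status payload, the kind of endpoint a
            # Kubernetes liveness/readiness probe would hit.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the example

def serve(port=8080):
    # Blocks forever; run e.g. `serve()` as the container entrypoint.
    HTTPServer(("", port), HealthHandler).serve_forever()
```

Containerizing this (Part of what the guide covers) is then a matter of copying the file into an image and pointing a Kubernetes probe at `/health`.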

Our Biggest Platform Release in Years: Virtual Providers and Virtual Machines

Cycle.io is taking a giant leap forward in 2025. Today, we're announcing the biggest platform release in years -- a release that catapults Cycle into a new era of hybrid infrastructure orchestration and cements its status as a true alternative to both Kubernetes and VMware. The release introduces two massively impactful features: Virtual Providers and Virtual Machines.