
2026 - Redgate Flyway - Starting strong with Oracle

Deploying changes to Oracle databases can be complex: working across multiple schemas, handling dependencies, and accounting for environment differences all add friction. Flyway has been helping teams bring order and automation to Oracle development for over 15 years, and in 2026 we’re accelerating that investment even further. Here’s a look at the latest enhancements available today and what’s coming next for Oracle users.

AWS EC2 Vs. Azure VMs Vs. GCE: Understanding The Real Cost Of Cloud VMs

AWS EC2, Azure Virtual Machines, and Google Compute Engine (GCE) look similar on paper, but each provider prices capacity, discounts, idle time, and commitment terms differently, and the bills reflect it: the same VM configuration can cost 20-40% more or less depending on which cloud you choose and how your workload runs.
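As a rough illustration of why the same VM shape produces different bills, the sketch below compares an on-demand rate against a committed-use discount for always-on versus bursty workloads. Every number here is an invented placeholder, not a real AWS, Azure, or GCP price.

```python
# Hypothetical illustration of how pricing models change the effective
# monthly bill for the same VM shape. All rates are invented examples.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, utilization=1.0, commit_discount=0.0, committed=False):
    """Effective monthly cost for one VM.

    On-demand: you pay only for the hours the VM actually runs.
    Committed: you pay for every hour of the term, but at a discount.
    """
    if committed:
        # A commitment bills the full month regardless of idle time.
        return HOURS_PER_MONTH * hourly_rate * (1 - commit_discount)
    return HOURS_PER_MONTH * utilization * hourly_rate

on_demand_rate = 0.10  # hypothetical $/hour for a 2 vCPU / 8 GiB VM

# Always-on workload: a 30% commitment discount wins easily.
always_on_od = monthly_cost(on_demand_rate, utilization=1.0)
always_on_ci = monthly_cost(on_demand_rate, commit_discount=0.30, committed=True)

# Bursty workload running 40% of the time: on-demand wins despite the discount.
bursty_od = monthly_cost(on_demand_rate, utilization=0.4)
bursty_ci = monthly_cost(on_demand_rate, commit_discount=0.30, committed=True)

print(f"always-on: on-demand ${always_on_od:.2f} vs committed ${always_on_ci:.2f}")
print(f"bursty:    on-demand ${bursty_od:.2f} vs committed ${bursty_ci:.2f}")
```

The crossover point depends on utilization and the discount depth, which is exactly why identical configurations diverge in cost across providers and workloads.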

5 key takeaways from the 2026 State of Software Delivery

AI has made it easier than ever to write code. Shipping it is a different story. Today we released the 2026 State of Software Delivery report, sponsored by Thoughtworks. In it, we analyzed more than 28 million CI/CD workflows across thousands of engineering teams. The picture that emerged is clear: teams are producing more code than ever, but fewer of them are able to turn that activity into software that actually reaches customers.

Build and test your first Kubernetes operator with Go, Kubebuilder, and CircleCI

Kubernetes operators extend the Kubernetes API with custom logic, automating tasks like provisioning, configuration, and policy enforcement. Instead of managing these tasks manually or with ad hoc scripts, operators codify your workflows into controllers that run natively inside the cluster. In this tutorial, you’ll build a simple operator using Go and Kubebuilder, a framework that scaffolds much of the boilerplate so you can focus on core logic.

GPU-as-a-Service: The network's critical role in accelerated computing

The explosion of AI has created a continuous demand for computing power. At the heart of this need sits one critical resource: GPUs. They have become the hardware of choice for AI and machine learning, particularly deep learning workloads that operate on enormous data sets. However, as organisations race to train larger models and deliver faster inference, many are discovering that owning and operating GPU infrastructure isn’t always practical.

Predict, compare, and reduce costs with our S3 cost calculator

Previously I have written about how useful public cloud storage can be when starting a new project without knowing how much data you will need to store. As datasets grow over time, however, the costs of public cloud storage can become overwhelming. This is where an on-premises or co-located self-hosted storage system becomes advantageous, offering benefits in cost, performance, security, and data sovereignty.
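A back-of-the-envelope version of that comparison can be sketched as below: a cloud bill driven by stored volume plus egress, against a self-hosted bill driven by amortized hardware plus operating cost. All prices are hypothetical placeholders; substitute current list prices and your own hardware quotes before drawing conclusions.

```python
# Rough comparison of public cloud object storage vs a self-hosted system.
# Every rate here is a hypothetical placeholder, not a vendor price.

def cloud_storage_monthly(tb_stored, tb_egress,
                          storage_per_tb=23.0,   # hypothetical $/TB-month stored
                          egress_per_tb=90.0):   # hypothetical $/TB transferred out
    """Monthly cloud bill: storage plus egress."""
    return tb_stored * storage_per_tb + tb_egress * egress_per_tb

def self_hosted_monthly(tb_stored,
                        capex_per_tb=150.0,      # hypothetical hardware $/TB
                        amortize_months=60,      # spread capex over 5 years
                        opex_per_tb=2.0):        # hypothetical power/ops $/TB-month
    """Monthly self-hosted cost: amortized hardware plus operations."""
    return tb_stored * (capex_per_tb / amortize_months + opex_per_tb)

# With these illustrative numbers, cloud cost grows with both volume and
# egress, while self-hosted cost grows only with volume.
for tb in (1, 50, 500):
    cloud = cloud_storage_monthly(tb, tb_egress=tb * 0.1)
    onprem = self_hosted_monthly(tb)
    print(f"{tb:4d} TB  cloud ${cloud:9.2f}/mo  self-hosted ${onprem:9.2f}/mo")
```

The point of a calculator like this is not the absolute numbers but the shape of the curves: egress-heavy workloads diverge from self-hosted costs much faster than cold archives do.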

Improve performance and reliability with APM Recommendations

SREs and application developers rely on telemetry data to understand and improve their systems. As organizations scale and evolve, those systems generate an ever-growing volume of metrics, logs, and traces. But more data alone does not make it easier to improve performance or reliability: identifying meaningful optimizations still requires careful investigation and analysis.