Operations | Monitoring | ITSM | DevOps | Cloud

LTS vs. upgrades: which future are you building for?

How should businesses decide between sticking with an LTS release and moving to a continuous upgrade model? In this episode, we explore the trade-offs, from stability and security to innovation and agility, and why flexibility in your upgrade policy is key to long-term success. We break down when LTS makes sense, when frequent upgrades deliver the most value, and how to balance both to keep your business secure, stable, and ready for what’s next.

Getting closer to space with Canonical #ubuntu #space #shorts

@EuropeanSpaceAgency is scaling to support more missions than ever. Canonical makes it possible with open source infrastructure built for space. Watch the full video to see how we're helping ESA automate, scale, and future-proof its operations. Subscribe for more tech stories from space.

Building your AI infra: our tips

Modular architecture: Decouple compute from storage so each can scale independently. This makes it easier to adapt to growing or shifting workloads over time.
Future-ready hardware: Select GPUs and CPUs not just for current workloads but with an eye on scalability, including support for newer accelerator types.
Scalable design: Ensure the system allows seamless addition of compute nodes or storage without a full redesign, as in the sizing sketch below.
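To make the decoupling concrete, here is a minimal sizing sketch in Python. The per-node capacities and workload figures are hypothetical assumptions chosen for illustration, not recommendations for any particular hardware; the point is only that compute and storage demands translate into node counts independently of each other.

```python
# Illustrative only: all figures below are hypothetical placeholders,
# not sizing guidance for any particular product or vendor.

from math import ceil

# Assumed per-node capacities (hypothetical hardware profile).
GPU_PER_COMPUTE_NODE = 8      # accelerators per compute node
TB_PER_STORAGE_NODE = 120.0   # usable capacity per storage node, in TB

def nodes_needed(total_demand: float, per_node_capacity: float) -> int:
    """Round up to the number of nodes required to cover a demand."""
    return ceil(total_demand / per_node_capacity)

def plan(gpus_required: int, dataset_tb: float) -> dict:
    """Size compute and storage tiers independently.

    Because the tiers are decoupled, a growing dataset only adds
    storage nodes, and a heavier training schedule only adds compute
    nodes; neither forces a redesign of the other tier.
    """
    return {
        "compute_nodes": nodes_needed(gpus_required, GPU_PER_COMPUTE_NODE),
        "storage_nodes": nodes_needed(dataset_tb, TB_PER_STORAGE_NODE),
    }

# Today's workload vs. next year's: storage grows 4x while compute only doubles.
print(plan(gpus_required=32, dataset_tb=200))  # {'compute_nodes': 4, 'storage_nodes': 2}
print(plan(gpus_required=64, dataset_tb=800))  # {'compute_nodes': 8, 'storage_nodes': 7}
```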

Running AI without blowing up your storage

Storage is often underestimated: In infrastructure discussions, compute and networking get most of the attention, while storage is treated as secondary. For AI workloads, that can be a costly oversight.
Data throughput for specialized hardware: AI infrastructure powered by GPUs can process massive volumes of data at unprecedented speeds. This puts immense pressure on the storage system to keep up.
Scale-out performance: An on-prem, scale-out, software-defined storage setup allows you to meet high performance demands, grow capacity as needed, and stay in control of infrastructure costs, as the bandwidth sketch below illustrates.
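As a rough illustration of that throughput pressure, the sketch below estimates how much aggregate read bandwidth a GPU fleet might demand and how many scale-out storage nodes would be needed to serve it. The per-GPU and per-node figures are assumptions made up for the example, not benchmarks of any real system.

```python
# Back-of-the-envelope check: can the storage tier keep the GPUs fed?
# Every constant here is a hypothetical assumption for illustration.

from math import ceil

def required_read_bandwidth(num_gpus: int, gbps_per_gpu: float) -> float:
    """Aggregate read bandwidth (GB/s) the training cluster will demand."""
    return num_gpus * gbps_per_gpu

def storage_nodes_for_bandwidth(required_gbps: float, gbps_per_node: float) -> int:
    """Scale-out sizing: add nodes until aggregate bandwidth covers the demand."""
    return ceil(required_gbps / gbps_per_node)

# Hypothetical figures: 64 GPUs each streaming ~1.5 GB/s of training data,
# served by storage nodes that each sustain ~6 GB/s of reads.
demand = required_read_bandwidth(num_gpus=64, gbps_per_gpu=1.5)          # 96 GB/s
nodes = storage_nodes_for_bandwidth(demand, gbps_per_node=6.0)           # 16 nodes

print(f"Aggregate demand: {demand:.0f} GB/s -> {nodes} storage nodes")
```

The same arithmetic also shows why a scale-out design matters: if the fleet doubles, the demand doubles, and the answer is to add nodes rather than replace the storage tier.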

Is on-prem the top choice to run AI?

Subscribe. Fuel your curiosity. In this episode, we break down what we’ve learned from teams running AI at scale, and why on-premises infrastructure is making a strong comeback. We’re seeing a shift: performance, cost control, data sovereignty, and platform flexibility are driving conversations about on-prem strategies for AI. There are no one-size-fits-all answers, but if you’re building or scaling AI, this episode might help you think a few steps ahead.