
What craft means for Canonical

Last month, Jon Seager (our Vice President for Ubuntu Engineering) wrote about crafting software: “Multiple Canonical products have craft in their names: Snapcraft, Charmcraft, Rockcraft (and there are others in the works). Our craft products are tools for making software, for the software craftsperson. To be a maker of tools comes with responsibilities – when you decide what tools should be like, you are also deciding how people should work.”

Building your AI infra, our tips

Modular architecture: Decouple compute from storage so each can scale independently. This makes it easier to adapt to growing or shifting workloads over time.
Future-ready hardware: Select GPUs and CPUs not just for current workloads but with an eye on scalability, including support for newer accelerator types.
Scalable design: Ensure the system allows seamless addition of compute nodes or storage without a full redesign.
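To make the first and third points concrete, here is a minimal sketch (illustrative numbers, not a Canonical reference design) of what decoupling buys you: each pool grows by adding nodes, without touching or redesigning the other.

```python
class Pool:
    """A homogeneous pool of nodes; capacity grows linearly with node count."""

    def __init__(self, capacity_per_node):
        self.capacity_per_node = capacity_per_node
        self.nodes = 0

    def add_nodes(self, n):
        # Scaling out is just adding nodes -- no redesign of the pool.
        self.nodes += n

    @property
    def capacity(self):
        return self.nodes * self.capacity_per_node


# Hypothetical units: compute measured in GPUs, storage in TB.
compute = Pool(capacity_per_node=8)    # 8 GPUs per compute node (assumption)
storage = Pool(capacity_per_node=100)  # 100 TB per storage node (assumption)

compute.add_nodes(4)  # training demand grows: scale compute only
storage.add_nodes(2)  # dataset grows: scale storage only

print(compute.capacity, storage.capacity)  # prints: 32 200
```

Because the two pools are independent objects, a growing dataset never forces you to buy GPUs, and a new training run never forces you to buy disks.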

Running AI without blowing up your storage

Storage is often underestimated: In infrastructure discussions, compute and networking get most of the attention, while storage is treated as secondary. For AI workloads, that can be a costly oversight.
Data throughput for specialized hardware: AI infrastructure powered by GPUs can process massive volumes of data at unprecedented speeds. This puts immense pressure on the storage system to keep up.
Scale-out performance: An on-prem, scale-out, software-defined storage setup allows you to meet high performance demands, grow capacity as needed, and stay in control of infrastructure costs.
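A back-of-envelope calculation shows how quickly GPU clusters outrun a storage system. All figures below are assumptions for illustration, not vendor specifications:

```python
import math

num_gpus = 64
ingest_per_gpu = 2.0      # GB/s of training data each GPU consumes (assumption)
node_throughput = 10.0    # GB/s one storage node can sustain (assumption)

# Aggregate read bandwidth needed to keep every GPU fed.
required_bandwidth = num_gpus * ingest_per_gpu                    # 128.0 GB/s

# In a scale-out design, meeting that demand is a node count, not a redesign.
storage_nodes = math.ceil(required_bandwidth / node_throughput)   # 13 nodes

print(f"{required_bandwidth} GB/s aggregate -> {storage_nodes} storage nodes")
```

Even with modest per-GPU ingest rates, a mid-sized cluster demands over 100 GB/s of sustained throughput, which is why storage deserves a seat at the design table from day one.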

Is on-prem the top choice to run AI?

In this episode, we break down what we’ve learned from teams running AI at scale, and why on-premises infrastructure is making a strong comeback. We’re seeing a shift: performance, cost control, data sovereignty, and platform flexibility are driving conversations about on-prem strategies for AI. There are no one-size-fits-all answers, but if you’re building or scaling AI, this episode might help you think a few steps ahead.

Are you running AI the smart way?

Data locality: AI models often rely on large datasets. Locating compute close to the data reduces transfer times and improves training performance.
Latency sensitivity: Real-time AI applications, like recommendation systems or edge analytics, depend on low-latency environments. This can be more easily tuned in private or hybrid setups.
Hardware specialization: Some AI workloads benefit from custom hardware like GPUs or TPUs. Private cloud allows more control over this, while public cloud offers broader access but less customization.
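The data-locality point is easy to quantify. A rough illustration (all figures are assumptions) of moving a training dataset over a wide-area link versus reading it over a local cluster fabric:

```python
dataset_tb = 50       # size of the training dataset in TB (assumption)
wan_gb_s = 1.0        # effective GB/s over a wide-area link (assumption)
local_gb_s = 25.0     # effective GB/s over a local cluster fabric (assumption)


def transfer_hours(size_tb, rate_gb_s):
    """Hours to move size_tb terabytes at rate_gb_s gigabytes per second."""
    return size_tb * 1000 / rate_gb_s / 3600


print(f"Over WAN:    {transfer_hours(dataset_tb, wan_gb_s):.1f} h")
print(f"Local fabric: {transfer_hours(dataset_tb, local_gb_s):.1f} h")
```

With these assumed rates, the same dataset takes roughly half a day to pull across a WAN but well under an hour to stream locally, which is the whole argument for locating compute next to the data.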

The Tech Behind Europe's Space Missions | Canonical x ESA

“Open source software is… the glue for everything that everyone does, from sending an email through to managing critical operations, not just space operations.” The European Space Agency (ESA) runs missions ranging from investigating Earth’s forests, to exploring Jupiter’s moons, to deflecting incoming asteroids.

What is Linux Support?

In the world of enterprise IT, “support” can mean many things. For some, it’s a safety net – insurance for the day something breaks. For others, it’s the difference between a minor hiccup and a full-scale outage. At Canonical, it means a simple, comprehensive subscription that covers your whole stack, so that everything you build works the way you want it to, for the people who use it.