Markham, ON, Canada
1999
  |  By Kubex
Industry analyst recognition means something different from an award. GigaOm does not hand out trophies. They evaluate products against a defined capability framework and tell the market where vendors actually stand. By that measure, Kubex has been named a Leader in two of GigaOm’s 2026 Radar Reports: Kubernetes Resource Management and Cloud Resource Optimization. In the Kubernetes report, we are positioned as an Outperformer. In Cloud Resource Optimization, a Fast Mover.
  |  By Kubex
The early era of AI was defined by experimentation, standing up isolated environments, and finding the first practical use cases. Today, the conversation is different. Enterprises are no longer asking whether AI matters. They are asking how to scale it sustainably, securely, and economically. That shift is giving rise to the AI factory: a repeatable, governed, production-ready environment where data scientists, platform teams, and application teams can build, train, deploy, and operate AI at scale.
  |  By Kubex
TL;DR: Most Kubernetes clusters waste GPU compute through over-provisioned pod requests and suboptimal node selection. This guide covers 10 tools that fix this across four layers: resource lifecycle (Kubex, ScaleOps, Cast.ai), hardware partitioning (GPU Operator, MIG, time-slicing), inference serving (Triton, KServe), and observability (DCGM Exporter, NFD). For most teams, the biggest gains are at the resource lifecycle layer: no model changes required.
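The over-provisioning point above can be made concrete with a back-of-envelope calculation: a pod that requests a whole GPU but keeps it only partially busy strands the remainder. This sketch uses hypothetical numbers and is not taken from any of the tools named in the guide:

```python
# Rough estimate of GPU capacity stranded by over-provisioned pod requests.
# Pod figures below are illustrative, not real cluster data.

def stranded_gpus(pods):
    """Sum the requested-but-idle GPU fraction across pods.

    Each pod is a (gpus_requested, avg_utilization) pair, where
    avg_utilization is the mean busy fraction of the requested GPUs (0.0-1.0).
    """
    return sum(req * (1.0 - util) for req, util in pods)

# Three inference pods, each pinning a full GPU but averaging ~30% busy:
pods = [(1, 0.30), (1, 0.25), (1, 0.35)]
waste = stranded_gpus(pods)
total = sum(req for req, _ in pods)
print(f"{waste:.2f} GPUs stranded out of {total} requested")
```

In this toy example, more than two of the three requested GPUs sit idle on average, which is the kind of gap the resource-lifecycle tools target without requiring model changes.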
  |  By Kubex
In the modern cloud infrastructure landscape, we don’t have a data problem; we have an actionable interpretation gap. Engineering teams are often drowning in metrics that describe a crisis without providing a clear path to remediation. Traditional FinOps, SRE, and DevOps work has become a reactive loop of dashboard-watching and manual firefighting.
  |  By Kubex
By 2026, the GPU shortage isn’t a supply-chain hiccup anymore. It’s baked into the system. Even after pouring billions into CapEx, most enterprises still want 40% more GPU capacity than they actually have. And it’s not because they’re chasing moonshots. Technology companies are training foundation models while serving inference for millions of users on the same clusters. AI labs are juggling fine-tuning, evaluation, and real-time experimentation side by side.
  |  By Kubex
Over the past few years, Kubex has evolved from a cloud optimization product into a Kubernetes-centric solution, shifting its focus from cost and waste visibility to fully automated resource optimization. As that evolution happened, one of the earliest design decisions we made began to show its limits: how the product was configured.
  |  By Kubex
Kubex releases data from a survey of over 500 U.S. software developers, revealing a disconnect between cost sensitivity and scrutiny on the one hand, and resource efficiency on the other.
  |  By Kubex
Enterprises operating at cloud scale today face a growing reality: managing infrastructure performance and cost in silos no longer works. Kubernetes, multi-cloud environments, and GPU-accelerated workloads deliver immense agility and capability, but they also introduce complexity that outpaces traditional monitoring and cost governance approaches.
  |  By Kubex
When I joined Kubex last year, the company was already well aware of the growing power of Large Language Models. As a company focused on intelligent resource optimization for Kubernetes, GPUs, and cloud infrastructure, generative AI didn’t feel like a threat so much as a natural extension of where the industry was heading. Kubex had already invested heavily in machine learning, but it was becoming clear that foundation models could unlock an entirely new class of capabilities for our customers.
  |  By Densify
Densify has announced Kubex AI, a major leap forward in how organizations optimize complex Kubernetes and AI environments. This new solution combines verticalized AI for resource optimization with a conversational interface, empowering anyone—regardless of technical background—to access expert-level analytics and automation through simple, natural-language interactions.
  |  By Kubex
In this episode, Andrew Hillier and Bijit Ghosh discuss the evolving landscape of AI: the growing prominence of inference over training, hybrid cloud strategies, balancing cost with performance, and the orchestration of complex hardware environments. The conversation also touches on emerging concepts like AI factories, the challenges of sovereign cloud, and how enterprises are navigating data gravity and regulatory constraints. It's a deep dive into optimizing AI infrastructure, managing costs, and the disruptive changes that are transforming both technology and business outcomes.
  |  By Densify
Densify and The New Stack bring you a session on practical ways to reduce developer burden, and how Kubex can do exactly that.
  |  By Densify
Andrew Hillier sits down with Dan Ciruli, who leads the cloud native product management team at Nutanix. Dan has some great stories from his days at Google, back when cloud native and Kubernetes were just getting started, along with the knowledge and wisdom he picked up along the way.
  |  By Densify
Kubernetes, container resources, requests and limits, sizing, the impact of getting things wrong, CPU limits, JVMs, HPA and VPA, and whether Karpenter fixes the request-and-limit problem: we've got a great episode for you today! Thanks for joining us on Densify Talks! We welcome Daniele Polencic, one of the lead instructors at LearnK8s, which specializes in container and Kubernetes technologies.
  |  By Densify
Download this whitepaper to understand the rise of capacity operations (CapOps), an online, operational discipline with the mandate of "continuous resource optimization." By refocusing on the new, more dynamic capabilities of cloud and container infrastructure, CapOps enables organizations to once again ensure that there are sufficient resources to meet the demands of applications (without allocating too much), filling a gap left by the evolution of DevOps and FinOps practices.
  |  By Densify
Public cloud providers have introduced reserved capacity commitments as a way to take control of your cloud costs. AWS brands these as Reserved Instances (RIs), Azure calls them Reserved VM Instances, and Google Cloud offers the similar Committed Use Discounts. With hundreds of different instance types available and without knowing your workload's exact requirements, how can you possibly take advantage of the benefits of RIs?
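One way to frame the commitment decision the paragraph raises: a reservation only pays off when the instance actually runs more than a break-even fraction of the term. A simplified sketch, using illustrative rates rather than real AWS, Azure, or Google Cloud pricing:

```python
def breakeven_utilization(on_demand_hourly, reserved_effective_hourly):
    """Fraction of the commitment term an instance must run for the
    reservation to cost less than paying on-demand only for actual usage.

    reserved_effective_hourly is the committed cost amortized over every
    hour of the term, since it is owed whether the instance runs or not.
    """
    return reserved_effective_hourly / on_demand_hourly

# Illustrative rates: $0.10/h on-demand vs. $0.06/h effective reserved.
be = breakeven_utilization(0.10, 0.06)
print(f"Reservation wins if the instance runs more than {be:.0%} of the term")
```

With these made-up rates, the reservation is only worthwhile above 60% utilization, which is why picking the right instance families to commit to depends on knowing workload requirements up front.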
  |  By Densify
Infrastructure as code (IaC) is a method to provision and manage IT infrastructure through the use of source code, rather than through standard operating procedures and manual processes. Tools like HashiCorp Terraform, AWS CloudFormation, or Ansible help you automate the infrastructure deployment process in a repeatable and consistent manner, driving speed, simplicity, and efficiency.

Densify is a predictive analytics engine that removes the guesswork from optimizing cloud and container environments. Our patented technology precisely determines the resources applications and workloads require to run efficiently and safely. Leading service providers and enterprises use Densify as a foundation to drastically lower infrastructure costs, and at the same time, reduce risk and deliver better performance for their businesses.

Densify’s Unique Cloud & Container Resource Management Approach:

  • Precise: Optimization is impossible without meticulously accurate analytics that produce actions your application owners will trust and allow.
  • Unifying: Policy and transparency that unify Finance, Engineering, Operations, and application owners to drive continuous cost optimization.
  • Integrated: Connects with your ecosystem to feed the processes and systems required to confidently optimize.

Container Infrastructure Optimization & Control.