What’s Next for Cloud Native: Highlights from KubeCon North America 2025
KubeCon + CloudNativeCon North America once again brought together thousands of developers, maintainers, operators, and end users from across the cloud native ecosystem. More than 9,000 attendees gathered in Atlanta, with nearly half joining for the first time, reflecting the accelerating global adoption of Kubernetes and open source innovation.

For Broadcom, KubeCon is where we connect directly with the community that has shaped Kubernetes from its earliest days. It's where we learn, share, and collaborate on the technologies that define modern cloud infrastructure. Here are the major themes that stood out this year and how we are helping customers take advantage of what's next.

Cloud Native and AI Continue to Advance Together

A clear message emerged across the keynotes and technical sessions: the future of AI is cloud native, and the future of cloud native is AI. Whether it was Adobe's "Maximum Acceleration" keynote, Niantic's real-time ML workflows, or Cohere's enterprise AI architecture, it became evident that Kubernetes has become the foundation for training, serving, and governing AI models.
These were some of the main takeaways:

- Inference is still the dominant enterprise AI workload, projected to drive hundreds of billions of dollars in investment over the next decade.
- AI workloads need portability and interoperability across data centers and public clouds.
- Dynamic Resource Allocation (DRA) is now the standard Kubernetes API for orchestrating accelerators and other specialized resources, including GPUs, network interfaces, and fine-grained device sharing. DRA went GA in Kubernetes v1.34.
- Multi-cluster and multi-cloud operations are becoming standard patterns.

For more than two decades, we have engineered the operational foundations now required for large-scale AI: predictable scheduling, isolation boundaries, memory and CPU efficiency, and robust lifecycle automation. These strengths directly support the direction Kubernetes is heading. Our upstream work, including etcd improvements, leadership on long-term lifecycle stability, conformance engagement, and contributions across SIGs, ensures our platforms adopt these emerging AI-native standards in a way enterprises can trust. This year, the CNCF formally launched Kubernetes AI Conformance, validating the essential set of capabilities required for portable and interoperable AI workloads.
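To make the DRA point above concrete, here is a minimal sketch of how a workload requests an accelerator through the `resource.k8s.io/v1` API that went GA in Kubernetes v1.34. All names here (`gpu.example.com`, `example-gpu`, `single-gpu`, the container image) are hypothetical placeholders; the actual device class and selector expressions depend on the DRA driver installed in the cluster.

```yaml
# Hypothetical DeviceClass: matches devices published by a DRA driver
# named "gpu.example.com" (placeholder; supplied by your hardware vendor).
apiVersion: resource.k8s.io/v1
kind: DeviceClass
metadata:
  name: example-gpu
spec:
  selectors:
  - cel:
      expression: device.driver == "gpu.example.com"
---
# Template from which a per-Pod ResourceClaim is generated,
# requesting exactly one device from the class above.
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: example-gpu
---
# Pod that references the claim; the scheduler allocates a matching
# device and the container gets access to it via the named claim.
apiVersion: v1
kind: Pod
metadata:
  name: inference-pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: model-server
    image: registry.example.com/model-server:latest  # placeholder image
    resources:
      claims:
      - name: gpu
```

Compared with the older device-plugin model, the claim-based approach lets the scheduler reason about device attributes and sharing up front, which is what makes accelerator orchestration portable across clusters and clouds.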