Our journey to AI-centricity, part 1: Building on a stable foundation
At Red Hat, our IT and Engineering functions encounter the same challenges and make the same decisions our customers face every day, from infrastructure optimization and application delivery to automating and enhancing the security of our global business. Right now, almost every organization we talk to is navigating the complexities of an AI journey, and we're in that same boat.

As users of our own products—because we love and believe in the technology we build—we want to pull back the curtain on our internal experience. We hope that the lessons we've learned through some foresight and some trial and error might help you navigate your own path.

Our move toward AI didn't begin with a model. It began with a massive cleanup of our technical debt.

Standardizing a fragmented infrastructure

A few years ago, Red Hat's IT department was struggling to manage a fragmented landscape of virtual machines (VMs) and containers across multiple platforms, including Red Hat Virtualization, Red Hat OpenStack Platform, and the public cloud. This fragmentation meant that we lacked a consistent way to deploy or manage workloads. Simple tasks were slowed down by "it works here, but not there" bottlenecks, creating constant operational friction.
We realized that speed and innovation are impossible when you're fighting your own infrastructure every day. To solve this, we migrated all of our workloads to Red Hat OpenShift, creating a single, consistent platform spanning bare metal and the public cloud. We moved virtualized workloads from Red Hat Virtualization and Red Hat OpenStack Platform to Red Hat OpenShift Virtualization, and we now run our AI workloads on Red Hat OpenShift AI, part of the Red Hat AI portfolio.
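To make the migration target concrete, here is a minimal sketch of what a virtualized workload looks like once it runs as a native OpenShift object. OpenShift Virtualization builds on the upstream KubeVirt `kubevirt.io/v1` API, so a VM is declared as a `VirtualMachine` resource alongside containers; the resource name and disk image below are hypothetical, not taken from Red Hat's actual environment.

```yaml
# Minimal VirtualMachine manifest sketch (KubeVirt API used by
# OpenShift Virtualization). Names and image are illustrative only.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: migrated-app-vm        # hypothetical workload name
spec:
  running: true                # start the VM when the object is created
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi        # guest memory request, scheduled like any pod
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/example/rhel9-guest:latest  # hypothetical image
```

Because the VM is just another Kubernetes resource, it is deployed, monitored, and managed with the same tooling (`oc apply`, GitOps pipelines, cluster policies) as containerized applications, which is what eliminates the "it works here, but not there" split between platforms.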
Original post: https://www.redhat.com/en/blog/our-journey-ai-centricity-part-1-building-stable-foundation