Our journey to AI-centricity, part 2: Crafting a strategy that scales

2026-03-31 ~1 min read www.redhat.com #kubernetes

⚡ TL;DR

In the first part of this series, we discussed the messy and challenging work of fixing our foundation—standardizing on Red Hat OpenShift and cleaning up years of fragmented data. With that foundation in place, we faced a new challenge: how to integrate AI into how Red Hatters work without creating new internal barriers or security risks.

📝 Summary

When gen AI first arrived, we made a mistake common to many enterprises: we led with a policy of "no." Our first move was to release a dense legal document so restrictive that it inadvertently discouraged people from exploring the technology altogether. Instead of a collaborative rollout that got our users excited about what's possible, we created a culture of apprehension.

The result was a surge in shadow AI. Associates were still using the tools—the demand was too high to ignore—but they were doing it in the dark, using non-sanctioned tools without any governance or protection. We realized that by trying to over-control the journey, we had actually increased our risk and stifled the very innovation we needed. We had to pivot from a policing-the-technology mindset to one of empowering our people to use it safely.

We've since shifted toward a culture of curiosity.
We now encourage every Red Hatter to experiment with both in-house and third-party LLMs that we provide. The goal is to let AI handle the repetitive, time-consuming tasks that slow us down.