What’s new with Red Hat OpenShift AI 3.3 UI: Moving from pilot to production
Authors: Jenny Yi, Jehlum Vitasta Pandit, Taylor Smith

With our previous release of Red Hat OpenShift AI, we established a solid foundation for your enterprise AI infrastructure. Today, with the release of OpenShift AI 3.3, we are tackling the polarizing forces that often keep AI projects from reaching production: the need for rigorous governance versus the demand for rapid developer access. OpenShift AI 3.3 introduces a suite of tools designed to manage a centralized hub of AI assets while optimizing for the multimodel, multiagent future.

As enterprises move beyond single-model use cases, discoverability becomes a bottleneck. Platform teams need a central source of truth for their AI assets: a place to register and version models before they are configured for deployment, and a way to view the models already deployed. They also need guidance on how best to deploy those models, because it is hard to assess hardware requirements and to predict the latency and throughput to expect. The AI hub aims to provide exactly that. It is now the central repository for your organization's AI assets, starting with large language models (LLMs) in OpenShift AI 3.3 and extending to Model Context Protocol (MCP) servers in future releases.
In OpenShift AI 3.3, the AI hub provides performance insights and guidance from our Red Hat AI model validation program on the trade-offs among performance, cost, and hardware requirements. This helps platform teams steer developers toward the most efficient configurations before deployment begins.

If you're configuring and managing your own GPUs and deploying AI models on them yourself, building AI applications is tough. Most developers, AI engineers, and data scientists would rather start with an endpoint for a model that's already up and running. Asking them to do all of this extra work slows them down, reduces time to value, and is neither scalable nor efficient in terms of cost, time, or governance.
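To make the "start from a ready endpoint" experience concrete, here is a minimal sketch of what consuming a served model can look like. It assumes the model is exposed behind an OpenAI-compatible chat completions API (as is common with vLLM-based serving); the endpoint URL, token, and model name are placeholders, not real values from this release.

```python
import json
from urllib import request

# Placeholder values -- substitute your own served-model endpoint,
# access token, and model name.
ENDPOINT = "https://maas.example.com/v1/chat/completions"
API_KEY = "YOUR_TOKEN"
MODEL = "granite-3-8b-instruct"

def build_chat_request(prompt: str) -> request.Request:
    """Build an OpenAI-compatible chat completion request for a served model."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Summarize our deployment policy.")
# response = request.urlopen(req)  # uncomment against a live endpoint
```

The point of the sketch is that the developer's starting surface is a URL, a token, and a model name rather than GPU provisioning and serving configuration; everything below that line is the platform team's concern.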
Open the original post ↗ https://www.redhat.com/en/blog/whats-new-red-hat-openshift-ai-33-ui-moving-pilot-production