Docs
Curated Kubernetes content from AKS, EKS, GKE, OpenShift, Rancher/K3s and more—auto‑aggregated daily.
- 2026-04-02 Red Hat Blog
Red Hat and NVIDIA: Setting standards for high-performance AI inference New
Red Hat is proud to announce industry-leading results from the latest MLPerf Inference v6.0 benchmarks, achieved through deep engineering co-design with NVIDIA. Results at a glance cover Qwen3-VL-235B (multimodal vision model), GPT-OSS-120B, and Whisper Large-V3 (speech-to-text), delivering greater efficiency and ROI. By Ashish Kamra.
#kubernetes - 2026-04-02 Red Hat Blog
Running LLMs dynamically, in production, on limited resources, is hard. We think there’s room for another approach… New
The promise of large language models (LLMs) is clear, but running them dynamically, in production, on limited resources is hard: inference cost is the real bill, the hardware math can be challenging, and static partitioning might not fit every use case. The post explores a different approach built around kvcached and its companion project Sardeenz, including how it works in practice, deployment (one container, one port), what it isn't, and where it fits. By Guillaume Moutier, Xingqi Cui, Jiarong Xing, and Yifan Qiao.
#kubernetes - 2026-04-02 Red Hat Blog
Take your automation to the next level with Ansible Content Collections for Windows, Splunk, AIOps, MCP, and more New
One of the strengths of Red Hat Ansible Automation Platform is its flexible automation of an array of use cases across ITOps, with multiple options to jumpstart new automation projects using Ansible Content Collections. Highlights include comprehensive Microsoft Windows automation, AIOps with Splunk (the Insights to Action solution), new AI content for Ansible Automation Platform, automated AI infrastructure and operations management on the cloud, virtual infrastructure automation, and expanded multivendor network automation. By Steve Fulmer.
#kubernetes - 2026-04-01 DigitalOcean
Run Advanced Reasoning on DigitalOcean with Arcee AI's Trinity Large-Thinking New
Arcee AI's Trinity Large-Thinking is now available in Public Preview on DigitalOcean's Agentic Inference Cloud, giving developers the ability to run frontier-class reasoning workloads without managing infrastructure or stitching together complex systems. DigitalOcean is partnering with Arcee to bring Trinity Large-Thinking to AI builders via Serverless Inference on day one. By DigitalOcean, published April 1, 2026.
#kubernetes - 2026-04-01 VMware Cloud Foundation Blog
Accelerate Database as a Service with new VMware Data Services Manager Proof of Value Service from AxelCore New
Deploying a private cloud Database-as-a-Service (DBaaS) is rarely just a technical challenge; it's also an organizational one. The post walks through real-world proof (the Broadcom IT blueprint), the AxelCore perspective on freedom and flexibility, and an eight-week engagement that ends with a data-driven roadmap.
#vmware #cloud-foundation #kubernetes - 2026-04-01 VMware Cloud Foundation Blog
Announcing the i7i.metal-24xl Instance for VMware Cloud on AWS New
Covers key specifications, regional availability, cluster configuration and flexibility, purchasing i7i.metal-24xl subscriptions, and deploying and migrating to the new instance.
#vmware #cloud-foundation #kubernetes - 2026-04-01 AWS Containers Blog (EKS)
Building PCI DSS-Compliant Architectures on Amazon EKS New
Covers understanding PCI DSS compliance and the available AWS resources, node provisioning considerations and options for PCI DSS environments on EKS, shared tenancy vs. dedicated hosts, and recommended best practices for PCI DSS on EKS.
#eks #aws - 2026-04-01 DigitalOcean
Now Available: DigitalOcean Cloud Security Posture Management (CSPM) New
Keeping cloud infrastructure secure at scale is challenging: infrastructure drift, exposed services, and sprawling identities create risk, and teams don't always have the time or expertise to maintain a consistent security posture across their environments. DigitalOcean Cloud Security Posture Management (CSPM) addresses these challenges, letting you scan, prioritize, and fix issues in minutes. By Grace Morgan, updated April 1, 2026.
#kubernetes - 2026-04-01 Red Hat Blog
Red Hat AI tops MLPerf Inference v6.0 with vLLM on Qwen3-VL, Whisper, and GPT-OSS-120B New
Red Hat is proud to announce strong results from the latest industry-standard MLPerf Inference v6.0 benchmark, covering Qwen3-VL-235B (multimodal vision model), GPT-OSS-120B (reasoning model), Whisper Large-V3 (speech-to-text), and Llama-2-70B on AMD MI350X, along with key takeaways and what comes next. By Ashish Kamra, Diane Feddema, Michael Goin, Michey Mehta, Naveen Miriyalu, Nikhil Palaskar, Saša Zelenović, Aanya Sharma, Alberto Perdomo, Harika Pothina, Samuel Monson, and Sayali Bhavsar.
#kubernetes - 2026-04-01 Red Hat Blog
Why customers are choosing Red Hat AI for real business outcomes New
Outlines four business reasons customers choose Red Hat AI, starting with the freedom to deploy where the business needs it.
#kubernetes