Docs

Curated Kubernetes content from AKS, EKS, GKE, OpenShift, Rancher/K3s and more—auto‑aggregated daily.

  • 2025-09-17
    CNCF

    CNCF Welcomes 20 New Silver Members Reflecting Broader Cloud Native and AI Adoption

    New members span DevOps, platform engineering, and agentic AI, underscoring cloud native’s critical role in modern infrastructure across industries. SAN FRANCISCO, CA – September 17, 2025 – The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, today announced the addition of 20 new Silver Members, reinforcing the continued momentum of cloud native adoption across industries and further strengthening the foundation’s global community. CNCF Silver Members receive a variety of benefits valued at over $300,000 USD.

    #cncf
  • 2025-09-17
    AWS Containers Blog (EKS)

    Use Raspberry Pi 5 as Amazon EKS Hybrid Nodes for edge workloads

    The post walks through running a Raspberry Pi 5 as an Amazon EKS Hybrid Node for edge workloads: creating the EKS cluster, setting up a VPN server, joining the Pi as a remote node, installing Cilium as the Container Network Interface, and deploying a sample edge application with a DynamoDB table, a sensor workload, and a frontend dashboard. Since its launch, Amazon Elastic Kubernetes Service (Amazon EKS) has powered tens of millions of clusters so that users can accelerate application deployment, optimize costs, and use the flexibility of Amazon Web Services (AWS) for hosting containerized applications. Amazon EKS eliminates the operational complexities of maintaining Kubernetes control plane infrastructure, while offering seamless integration with AWS resources and infrastructure.
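
    To make the remote-node registration step concrete: EKS Hybrid Nodes join a cluster through the nodeadm agent, which reads a NodeConfig manifest. The sketch below is illustrative only; the cluster name, region, and SSM activation values are placeholders, and the exact schema should be checked against the EKS Hybrid Nodes documentation rather than taken from here.

    ```yaml
    # Illustrative nodeadm NodeConfig for a hybrid node (all values are placeholders).
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        name: pi-edge-cluster        # placeholder cluster name
        region: us-west-2            # placeholder region
      hybrid:
        ssm:
          activationCode: "<activation-code>"  # from an AWS SSM hybrid activation
          activationId: "<activation-id>"
    ```

    With a config along these lines on the device, the bootstrap is roughly `nodeadm install` followed by `nodeadm init -c file:///path/to/nodeConfig.yaml`; the article's VPN setup is what lets the on-premises node reach the cluster's control plane.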

    #eks #aws
  • 2025-09-16
    Tigera

    Calico Whisker vs. Traditional Observability: Why Context Matters in Kubernetes Networking

    Are you tired of digging through cryptic logs to understand your Kubernetes network? In today’s fast-paced cloud environments, clear, real-time visibility isn’t a luxury; it’s a necessity. Traditional logging and metrics often fall short, leaving you without the context needed to troubleshoot effectively. The post introduces Calico Whisker, explains what sets it apart from traditional observability, and walks through two practical scenarios: safely rolling out a new network policy and uncovering hidden security risks.

    #tigera
  • 2025-09-16
    Kubernetes Blog

    Kubernetes v1.34: Moving Volume Group Snapshots to v1beta2

    Volume group snapshots were introduced as an Alpha feature with the Kubernetes 1.27 release and moved to Beta in the Kubernetes 1.32 release. The recent release of Kubernetes v1.34 moved that support to a second beta.
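
    For a feel for the API: a volume group snapshot selects a set of PersistentVolumeClaims by label and snapshots them together as one consistent group. A minimal sketch, assuming the v1beta2 schema keeps the Beta field layout (the names, namespace, and class are hypothetical, and the class must point at a CSI driver that supports group snapshots):

    ```yaml
    apiVersion: groupsnapshot.storage.k8s.io/v1beta2
    kind: VolumeGroupSnapshot
    metadata:
      name: app-group-snapshot
      namespace: demo
    spec:
      volumeGroupSnapshotClassName: csi-group-snapclass  # hypothetical class name
      source:
        selector:
          matchLabels:
            app: my-app   # every PVC in the namespace carrying this label is snapshotted as a group
    ```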

    #kubernetes
  • 2025-09-16
    KodeKloud Blog (Kubernetes)

    Best DevOps Courses in 2025: Learning Paths to Boost Your Career

    2025-ready skills: IT Foundations → Linux & Networking → Git → Containers → Kubernetes → CI/CD → IaC → GitOps → Observability → Security → Platform Engineering → Cloud → AI. AI everywhere: LLMs, MCPs, RAG, anomaly detection, and copilots are now part of daily DevOps workflows.

    #kodekloud #kubernetes
  • 2025-09-16
    VMware Cloud Foundation Blog

    Deploy Distributed LLM Inference with GPUDirect RDMA over InfiniBand in VMware Private AI

    At the VMware Explore 2025 keynote, Chris Wolf announced DirectPath enablement for GPUs with VMware Private AI, marking a major step forward in simplifying and scaling enterprise AI infrastructure. By granting VMs exclusive, high-performance access to NVIDIA GPUs, DirectPath allows organizations to fully harness GPU capabilities without added licensing complexity.

    #vmware #cloud-foundation #kubernetes
  • 2025-09-16
    Red Hat Blog

    Fedora 43 Beta now available

    Today, the Fedora Project is excited to announce that the beta version of Fedora Linux 43 - the latest version of the free and open source operating system - is now available. Learn more about the new and updated features of Fedora 43 Beta and don’t forget to make sure that your system is fully up to date before upgrading from a previous release.

    #kubernetes
  • 2025-09-16
    Red Hat Blog

    Unlocking AI innovation: GPU-as-a-Service with Red Hat

    Graphics processing units (GPUs) are key to both generative and predictive AI. Data scientists, machine learning engineers, and AI engineers rely on GPUs to experiment with AI models, and to train, tune, and deploy them. The post lays out the GPU challenge facing ITOps and the key components Red Hat uses to deliver GPU-as-a-Service.

    #kubernetes
  • 2025-09-16
    Red Hat Blog

    Use the RHEL command-line assistant offline with this new developer preview

    An offline version of the Red Hat Enterprise Linux (RHEL) command-line assistant powered by RHEL Lightspeed is now available as a developer preview to existing Red Hat Satellite subscribers. This delivers the power of the RHEL command-line assistant in a self-contained format that runs locally on a workstation or an individual RHEL system.

    #kubernetes