
BGP Route Advertisement with Kube-OVN on Harvester

In multi-tenant or hybrid environments, Kubernetes workloads (VMs, pods, and services) need to be reachable from the broader network. The traditional answer is static routes scattered across every upstream router, which breaks as soon as the cluster grows or moves. BGP solves this cleanly: the cluster advertises its own CIDRs dynamically, and every router learns them automatically.

This post documents a proof-of-concept lab that validates BGP route propagation between Kube-OVN's built-in speaker, two VyOS 1.4.x (Sagitta) routers, and a Harvester cluster, all running as Hyper-V VMs on a single Windows host.

The end state we are aiming for is a successful ping from router-02 to a Kube-OVN pod IP, with every hop learned via BGP:

[Screenshot: router-02 successfully pinging a Kube-OVN pod IP, confirming end-to-end reachability over BGP-learned routes]
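On the cluster side, this kind of peering is driven by the kube-ovn-speaker arguments. A minimal sketch of the relevant container spec — the image tag, neighbor address, and AS numbers below are illustrative placeholders, not values from the lab:

```yaml
# Excerpt of a kube-ovn-speaker DaemonSet container spec (illustrative values).
containers:
  - name: kube-ovn-speaker
    image: kubeovn/kube-ovn:v1.12.0     # version is an assumption
    args:
      - --neighbor-address=10.0.100.1   # upstream VyOS router
      - --neighbor-as=65001             # router's ASN
      - --cluster-as=65000              # ASN the cluster advertises from
      - --announce-cluster-ip=true      # also advertise Service ClusterIPs
```

Individual pods and subnets are then opted in with the `ovn.kubernetes.io/bgp` annotation, which tells the speaker which addresses to advertise.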

Per-Namespace Egress IPs on Harvester with Kube-OVN VpcEgressGateway

This article is the result of one of my very first deep dives into Harvester (aka SUSE Virtualization) that is not related to storage. It describes how to configure dedicated egress IPs per tenant (i.e., per Namespace) on Harvester using Kube-OVN's VpcEgressGateway. We will venture into unexplored territory here, with release candidates 🔖, experimental features 🧪, and hotfixes 🔥 on top of bugs 🪳 that are yet to be reported 📑.

[Diagram: network architecture overview showing Harvester node, ProviderNetwork, VpcEgressGateway, and tenant VM connectivity]
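For orientation, a VpcEgressGateway resource looks roughly like the sketch below. The field names follow my reading of the Kube-OVN CRD; the names, namespace, and subnets are hypothetical, so check the Kube-OVN reference before reusing them:

```yaml
apiVersion: kubeovn.io/v1
kind: VpcEgressGateway
metadata:
  name: tenant-a-egress
  namespace: tenant-a
spec:
  replicas: 1
  externalSubnet: ext-net        # subnet attached to the ProviderNetwork
  policies:
    - snat: true                 # masquerade tenant traffic behind the gateway IP
      subnets:
        - tenant-a-subnet        # the tenant's overlay subnet
```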

CSI for PowerFlex on OpenShift with Multiple Networks

Managing multiple networks for storage workloads on OpenShift is not optional: it is essential for performance and isolation. Dell PowerFlex, with its CSI driver, delivers dynamic storage provisioning, but multi-network setups require proper configuration.

This guide explains how to enable multi-network support for CSI PowerFlex on OpenShift, including prerequisites, network attachment definitions, and best practices for high availability.
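On OpenShift, the additional storage networks are declared as Multus NetworkAttachmentDefinitions. A minimal macvlan sketch — the interface name, namespace, and IP range are placeholders for illustration:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: powerflex-data1
  namespace: vxflexos            # namespace where the CSI driver runs
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens224",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.10.0/24"
      }
    }
```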

Enable Storage Multi-tenancy on Kubernetes with PowerScale

Dell PowerScale is a scale-out NAS solution designed for high-performance, enterprise-grade file storage. In multi-tenant environments, such as shared Kubernetes clusters, isolating workloads and data access is critical.

PowerScale addresses this need through Access Zones, which logically partition the cluster to enforce authentication boundaries, export rules, and quota policies. The Dell CSI driver maps Kubernetes StorageClass resources to specific Access Zones, providing per-tenant isolation at the storage layer.

This setup is particularly useful when multiple teams share a common PowerScale backend but require strict separation of data and access controls. This approach proved extremely valuable when building a GPU-as-a-Service AI Factory.
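The per-tenant mapping is expressed in the StorageClass. A sketch for the Dell PowerScale (Isilon) CSI driver, where the zone name, path, and service IP are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-a-powerscale
provisioner: csi-isilon.dellemc.com
parameters:
  AccessZone: tenant-a-zone      # tenant's Access Zone on the array
  IsiPath: /ifs/data/tenant-a    # base path for provisioned volumes
  AzServiceIP: "192.168.20.10"   # SmartConnect/service IP of the zone
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Each tenant gets its own StorageClass bound to its own zone, so volumes provisioned by one namespace are invisible to the others at the array level.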

Use Harvester with Dell Storage

Co-authored with Parasar Kodati.

Dell CSI drivers for PowerStore, PowerMax, PowerFlex, and PowerScale have all been tested and are compatible with KubeVirt. This guide provides instructions for installing Dell CSI for PowerMax on Harvester, though the steps are very similar regardless of the storage backend.

Tested on:

  • Harvester v1.3.1
  • CSM v2.11
  • PowerMax protocols: Fibre Channel, iSCSI, and NFS

🌩️🛟 Disaster Recovery for VMs on Kubernetes

Author(s): Pooja Prasannakumar & Florian Coulombel

Kubernetes is no longer just a container orchestrator. As organizations modernize infrastructure, there’s growing interest in using Kubernetes to manage virtual machines (VMs) alongside cloud-native workloads—while still meeting familiar expectations like disaster recovery (DR).

In this post, we’ll walk through a practical, GitOps-friendly DR approach for VMs running on Kubernetes using:

  • KubeVirt to run VMs on Kubernetes
  • Dell Container Storage Modules (CSM) for storage and replication
  • CSM Replication to replicate VM disks across clusters
  • Argo CD + Kustomize to manage deployment and failover via GitOps
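One way to structure the GitOps side is a Kustomize base shared by both clusters, with a per-site overlay that Argo CD switches to during failover. A hypothetical layout (directory and patch names are invented for illustration):

```yaml
# overlays/dr-site/kustomization.yaml (illustrative, not from the post)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                  # shared VirtualMachine + PVC manifests
patches:
  - path: storage-patch.yaml    # rebinds PVCs to the replicated target volumes
```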

🔒🧰 Hardening Kubernetes CSI Drivers: Reducing CAP_SYS_ADMIN Without Breaking Storage

Many Kubernetes storage drivers still rely on the powerful—and notoriously over-broad—Linux capability CAP_SYS_ADMIN to perform host-level operations. While it enables critical actions like filesystem mounts, it also substantially expands the attack surface of your cluster.

This post explains why CSI node plugins often end up needing CAP_SYS_ADMIN, what breaks when you remove it, and several concrete hardening strategies using tools like seccomp, AppArmor, SELinux, and controlled privilege elevation.
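As a baseline, the node plugin can drop every capability except the ones it demonstrably needs, instead of running fully privileged. A hedged sketch — whether SYS_ADMIN can ultimately be dropped, and the seccomp profile path, depend on the driver:

```yaml
# Container securityContext for a CSI node plugin (illustrative)
securityContext:
  privileged: false
  capabilities:
    drop: ["ALL"]
    add: ["SYS_ADMIN"]         # still required for mount(2) by many drivers
  seccompProfile:
    type: Localhost
    localhostProfile: profiles/csi-node.json   # custom allowlist, hypothetical path
```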

Best Practices for Deployment and Life Cycle Management of Dell CSM Modules

Co-authored with Parasar Kodati.

The Dell CSM Kubernetes Operator packages the CSI driver and other storage services (Container Storage Modules) for observability, replication, authorization, and node resiliency. This single operator supports PowerFlex, PowerStore, PowerScale, and Unity platforms (see Support Matrix). This post lays out best practices for deployment and lifecycle management of Dell CSMs.
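With the operator, a driver and its modules are declared in a single ContainerStorageModule resource. An illustrative sketch — the exact fields and version strings should be checked against the operator's sample manifests:

```yaml
apiVersion: storage.dell.com/v1
kind: ContainerStorageModule
metadata:
  name: powerflex
  namespace: vxflexos
spec:
  driver:
    csiDriverType: "powerflex"
    configVersion: v2.11.0     # illustrative version
  modules:
    - name: replication        # enable CSM Replication alongside the driver
      enabled: true
```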