# Solved: Is it possible to redeploy a Proxmox VM but keep certain disks?

2026-01-02 · admin
## Executive Summary

TL;DR: Redeploying a Proxmox VM while preserving critical data disks is a common challenge arising from OS issues, upgrades, or template changes, and the primary risk is unintended data loss. This guide presents three robust strategies (detach/re-attach, create a new VM with the old disks as a fallback, or clone the data disks) to ensure storage integrity during VM rebuilds or OS changes. Each method offers a different balance of safety, complexity, and resource consumption, so you can choose the approach that fits your operational requirements.

## Key Takeaways

- Proxmox supports three distinct strategies for redeploying VMs while preserving data disks: detaching/re-attaching, creating a new VM with the old disks as a fallback, or cloning the data disks.
- The key CLI commands are `qm config` to identify disks, `qm shutdown` and `qm detach` to prepare them, `qm destroy` for old VMs, `qm create` for new VMs, `qm set` to attach disks, and `qm clone_disk` to copy data.
- The choice depends on risk tolerance and resource availability: "Detach, Delete, Recreate, Re-attach" is simple but lacks rollback; "Create New VM, Detach from Old, Attach to New" offers a safety net; and "Clone Data Disks" provides maximum safety and flexibility at the cost of extra storage and potentially longer downtime.

## Understanding the Challenge: Redeploying Proxmox VMs with Persistent Disks

As a DevOps engineer managing Proxmox environments, you've likely encountered scenarios where a virtual machine requires a fresh operating system installation, a template change, or an upgrade, but its data disks must remain untouched. This requirement commonly arises for the reasons below.

## Symptoms: Why You Need to Keep Specific Disks

- OS Corruption or Instability: The guest OS becomes unbootable or unstable, necessitating a reinstall.
- OS Upgrade/Downgrade: Migrating to a newer (or older) OS version that requires a clean install, while application data or databases reside on separate volumes.
- Template Updates: Replacing a VM based on an outdated Proxmox template with a new one, but needing to connect existing persistent storage.
- Hardware Resource Changes: Reconfiguring VM hardware (e.g., CPU, RAM, network) that might be easier with a fresh VM definition, while retaining data.

The primary risk in these situations is unintended data loss. Simply deleting a VM without properly detaching or managing its associated disks will permanently erase all data, including crucial application files, databases, or user data. This guide provides robust, step-by-step solutions to navigate this challenge safely.

## Solution 1: Detach Disks, Delete VM, Recreate, and Re-attach

## Concept

This is the most straightforward method when you are confident in completely replacing the VM's operating system environment. You detach the data disks from the existing VM, delete the VM, create a new VM with a fresh OS disk, and then re-attach the previously detached data disks.

## Advantages

- Simplicity: Easy to understand and execute for experienced Proxmox users.
- Clean Slate: Ensures a completely fresh OS environment.
- No Extra Storage: Requires no additional temporary storage beyond the new OS disk.

## Disadvantages

- No Rollback: Once the original VM is deleted, there is no easy way to revert to its previous state (unless you have a backup).
- Downtime: The data disks are unavailable during the entire process.
- Risk of Error: Requires careful identification of the correct disks to avoid accidental deletion.

## Step-by-Step Example

Let's assume we have a VM with ID 100, which has its OS on `scsi0` and a crucial data disk on `scsi1` (e.g., `local-lvm:vm-100-disk-1`).
1. Identify Data Disks: First, check the VM's configuration to identify your data disks. You can do this via the Proxmox GUI (VM > Hardware tab) or the CLI:
```bash
qm config 100
```
Output might look like:

```text
boot: order=scsi0;ide2;net0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: old-webserver
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-0,size=32G   # OS disk
scsi1: local-lvm:vm-100-disk-1,size=100G  # IMPORTANT data disk
scsihw: virtio-scsi-pci
smbios1: uuid=...
sockets: 1
vmgenid: ...
```

Note that `scsi1` corresponds to `local-lvm:vm-100-disk-1`.
2. Shutdown the VM:

```bash
qm shutdown 100
# Wait for the VM to stop, or use "qm stop 100" for an immediate (less graceful) halt
```
3. Detach Data Disks: Using the identified disk ID (`scsi1` in our example):

```bash
qm detach 100 scsi1
```

Verify the disk is detached and now appears as "unused" in the VM's hardware configuration. The underlying disk image (e.g., `local-lvm:vm-100-disk-1`) remains on your storage.
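To confirm from the CLI, you can re-read the configuration; a short sketch of what to look for (the exact unused index may differ on your system):

```bash
# After detaching, the volume should appear as an unusedN entry
# in the VM's configuration rather than under scsi1.
qm config 100 | grep -E '^unused'
# Expected output (index may vary):
# unused0: local-lvm:vm-100-disk-1
```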
4. Delete the Old VM: Once you've confirmed the data disks are detached, you can remove the old VM configuration. This action is irreversible for the VM configuration.

Caution: a detached disk still appears in the VM's configuration as an `unusedN` entry, and depending on your Proxmox version, `qm destroy` can remove disks that are still referenced in that configuration, including unused ones. Before destroying, either reassign the volume to another VM first (the ordering used in Solution 2) or double-check in the GUI/config that the volume will be left on storage.

```bash
qm destroy 100
```
5. Create a New VM: Create your new VM (e.g., ID 101) with the desired OS and configuration, making sure it has its own OS disk:

```bash
qm create 101 --name new-webserver --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32G,ssd=1
# You will likely install an OS from an ISO after creation
```
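For completeness, a sketch of attaching an installer ISO and setting the boot order; the ISO path is a placeholder for whatever image you have uploaded to a storage with ISO content enabled:

```bash
# Attach an installer ISO (hypothetical path) and boot from the OS disk first,
# falling back to the CD-ROM for the initial install.
qm set 101 --ide2 local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom
qm set 101 --boot order='scsi0;ide2;net0'
```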
6. Attach the Detached Data Disks: Now attach the previously detached disk (`local-lvm:vm-100-disk-1`) to the new VM (101). In the Proxmox GUI, a disk shows up under Hardware as an unused disk only for the VM that owns it, so attaching a volume that belonged to another VMID is easiest from the CLI: specify the full volume ID, which usually follows the pattern `<storage_name>:vm-<old_vmid>-disk-<disk_number>`:

```bash
qm set 101 --scsi1 local-lvm:vm-100-disk-1
```

Ensure you pick an unused SCSI/VirtIO slot (`--scsi1` here). The disk will now appear in the new VM's hardware list.
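If you are unsure of the exact volume ID, you can list the volumes a storage holds for a given VMID; a small sketch using `pvesm` (the storage name `local-lvm` is carried over from the example):

```bash
# List volumes on the storage that still belong to VMID 100;
# the output includes the full volid to pass to qm set.
pvesm list local-lvm --vmid 100
```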
7. Start New VM and Verify:

```bash
qm start 101
```

Boot the new VM and, within the guest OS, verify that the data disk is present and accessible (you might need to mount it, depending on the OS and previous configuration).
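Inside a Linux guest, a quick verification might look like the following sketch; the device name and mount point are assumptions that depend on your disk layout:

```bash
# Identify the data disk and its filesystem (device name may differ, e.g. /dev/sdb)
lsblk -f
# Mount it at an assumed mount point and check the contents
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data
ls /mnt/data
```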
## Solution 2: Create New VM, Detach from Old, Attach to New (Old VM as Fallback)

## Concept

This method is similar to Solution 1 but adds a layer of safety by keeping the original VM intact until the new VM is fully operational with the data disks. It is ideal when you want to test the new setup thoroughly before committing to deleting the old environment.

## Advantages

- Safety Net: The old VM remains available as a fallback in case of issues with the new deployment.
- Reduced Risk: Minimizes the chance of accidental data loss during the transition.
- Parallel Testing: Allows testing the new VM with the old data disks while the original VM still exists (though it is shut down for the disk detachment).

## Disadvantages

- Temporary Resources: The old VM's configuration and OS disk persist temporarily, consuming some resources.
- Manual Cleanup: Requires explicit deletion of the old VM once the new one is validated.

## Step-by-Step Example

Continuing with VM 100 (`old-webserver`), we want to create VM 102 (`new-webserver`).

1. Identify Data Disks: As in Solution 1, identify the data disk on VM 100. Let's assume it is still `scsi1` (`local-lvm:vm-100-disk-1`):

```bash
qm config 100
```
2. Create a New VM (and install OS): Create VM 102 with your desired new OS, installed on its own dedicated disk (e.g., `scsi0`):

```bash
qm create 102 --name new-webserver-fallback --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32G,ssd=1
# Install the OS from an ISO
```
3. Shutdown the Old VM:

```bash
qm shutdown 100
```
4. Detach Data Disks from Old VM: Detach the data disk from VM 100. It will become an "Unused Disk" on your storage:

```bash
qm detach 100 scsi1
```
5. Attach Detached Data Disks to New VM: Attach `local-lvm:vm-100-disk-1` to VM 102, using an available SCSI/VirtIO slot (e.g., `scsi1`):

```bash
qm set 102 --scsi1 local-lvm:vm-100-disk-1
```
6. Start New VM and Verify: Boot VM 102 and confirm that the data disk is present and correctly mounted within the guest OS:

```bash
qm start 102
```
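Because the disk may enumerate under a different device name on the new VM, mounting by filesystem UUID is more robust; a sketch for a Linux guest (the UUID shown is a placeholder):

```bash
# Find the filesystem UUID of the data disk
blkid /dev/sdb1
# Then reference it in /etc/fstab so the mount survives device renaming, e.g.:
# UUID=1234abcd-...  /mnt/data  ext4  defaults  0  2
```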
7. Optional: Delete Old VM: Once you are absolutely certain that VM 102 is functioning as expected and the data is accessible, you can delete VM 100. As in Solution 1, first verify that VM 100's configuration no longer references the data disk (e.g., as an `unusedN` entry), so the destroy cannot take the volume with it:

```bash
qm destroy 100
```
## Solution 3: Clone Data Disks to a New VM

## Concept

This method provides the highest level of safety and flexibility by cloning the critical data disk(s) from your existing VM to a new one. The original VM and its disks remain completely untouched, acting as a perfect fallback. This is particularly useful for testing new configurations, performing major upgrades, or creating parallel environments without risking the original data.

## Advantages

- Maximum Safety: The original VM and its data disks are left intact, providing an instant rollback option.
- Zero Risk to Original Data: You work with a copy, eliminating any chance of accidental data loss on the source.
- Parallel Testing: The old and new VMs can run side by side (if appropriate for the application) during the transition.

## Disadvantages

- Storage Consumption: Requires enough free storage space for a full copy of the data disk(s).
- Increased Downtime: The cloning process itself takes time, during which the data disk is being copied.
- Data Drift: If the original VM continues to write data after the clone, the clone will not reflect the latest changes.

## Step-by-Step Example

We will clone VM 100's data disk (`local-lvm:vm-100-disk-1`) and attach it to a new VM 103.

1. Identify Data Disks: As before, identify the data disk on VM 100. Let's use `scsi1` (`local-lvm:vm-100-disk-1`):

```bash
qm config 100
```
2. Create a New VM (and install OS): Create VM 103 with your desired new OS, installed on its own dedicated disk:

```bash
qm create 103 --name cloned-webserver-test --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32G,ssd=1
# Install the OS from an ISO
```
3. Shutdown the Source VM (Optional but Recommended): While cloning can sometimes be done on a running VM, shutting it down ensures data consistency for the clone:

```bash
qm shutdown 100
```
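If you cannot afford to shut the source VM down, one hedged middle ground is to quiesce its filesystems via the QEMU guest agent for the duration of the copy; this assumes `qemu-guest-agent` is installed and enabled in the guest:

```bash
# Flush and freeze guest filesystems so on-disk state is consistent
qm guest cmd 100 fsfreeze-freeze
# ... perform the clone here, keeping the freeze window as short as possible ...
qm guest cmd 100 fsfreeze-thaw
```

Note that freezing for the full duration of a large copy can stall the guest; for big disks, a storage-level snapshot taken inside a short freeze window is the more practical approach.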
4. Clone the Data Disk: Use the `qm clone_disk` command, specifying the source VMID, the source disk ID, the target storage, the target VMID, and the destination disk ID (e.g., `scsi1` on the new VM). Proxmox handles the cloning and attachment:

```bash
qm clone_disk 100 scsi1 local-lvm --format raw --target-vm 103 --dest scsi1
```

Where:

- `100`: source VMID
- `scsi1`: source disk ID on VM 100
- `local-lvm`: target storage for the cloned disk
- `--format raw`: disk format (optional; defaults to the storage's preferred format)
- `--target-vm 103`: the VM to which the cloned disk should be attached
- `--dest scsi1`: the disk ID within VM 103 (e.g., `scsi1`, `virtio1`, etc.)

This command creates a new disk volume (e.g., `local-lvm:vm-103-disk-1`) by copying `local-lvm:vm-100-disk-1`, and then attaches it as `scsi1` to VM 103.
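Not every `qm` version ships a disk-level clone subcommand, so if the above is unavailable on your installation, a storage-level copy followed by an attach achieves the same result. A minimal sketch, assuming `local-lvm` is an LVM-thin storage backed by the thin pool `pve/data` (adjust volume group, pool name, and size to your setup):

```bash
# Create a 100G thin volume for VM 103 and block-copy the source disk into it
lvcreate -V 100G -T pve/data -n vm-103-disk-1
dd if=/dev/pve/vm-100-disk-1 of=/dev/pve/vm-103-disk-1 bs=4M status=progress conv=fsync
# Attach the copy to the new VM
qm set 103 --scsi1 local-lvm:vm-103-disk-1
```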
5. Start New VM and Verify: Boot VM 103 and confirm the cloned data disk is accessible within the guest OS:

```bash
qm start 103
```
6. Resume Old VM (Optional): If you only shut down the old VM for consistency during cloning, you can now restart it. Once VM 103 is fully validated, you can decide whether to keep VM 100 as an archive or delete it:

```bash
qm start 100
```
## Solution Comparison

Here's a comparison of the three methods to help you choose the best approach for your specific scenario:

| Method | Rollback / Safety | Extra Storage | Downtime / Complexity |
|---|---|---|---|
| 1. Detach, Delete, Recreate, Re-attach | None once the old VM is destroyed (backups only) | None beyond the new OS disk | Data disks offline for the whole process; simple |
| 2. New VM with old VM as fallback | Old VM kept intact until the new one is validated | Old VM's OS disk persists temporarily | Short detach/attach window; moderate |
| 3. Clone data disks to a new VM | Maximum; original VM and disks untouched | Full copy of each data disk | Clone time adds downtime; watch for data drift |

## Conclusion

Redeploying Proxmox VMs while preserving critical data disks is a common task that, when approached systematically, can be executed safely and efficiently. By understanding the nuances of each method, whether it's the directness of detaching and re-attaching, the safety net of creating a new VM while keeping the old, or the robust isolation offered by cloning data disks, you can choose the strategy that best fits your operational requirements and risk tolerance. Always ensure you have recent backups before performing any major VM or disk operations.

Read the original article on TechResolve.blog
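On the backup point, a quick sketch of a one-off backup with Proxmox's built-in `vzdump` before touching VM 100; the storage name is an assumption for wherever your backups live:

```bash
# Snapshot-mode backup of VM 100 to an assumed backup storage named "backups"
vzdump 100 --storage backups --mode snapshot --compress zstd
```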
Tags: how-to, tutorial, guide, dev.to, ai, server, network, database