NFS - Network File System
Shiny Space finally has a network file sharing system that seems like a solid setup. Previously, the issues were:
- We don't want to share anything from the Proxmox installation itself, because a hypervisor should do only one thing: host VMs.
- If we set up a VM and allocate storage to it, that storage is included in backups and is hard to move from one VM to another if the setup changes later. Having to take 2-terabyte backups regularly is not fun.
There are ways to exclude the network share from backups, and ways to move the allocated storage from one VM to another while keeping the systems separate, but... Complexity demon smiles.
So we've landed at the current option:
Direct Disk Passthrough. https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
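Per the wiki, the passthrough itself is a single `qm set` command. A rough sketch — the VM ID (105) and the disk serial below are hypothetical placeholders for this setup:

```sh
# Find the stable by-id path for the disk
# (safer than /dev/sdX, which can change between boots)
ls -l /dev/disk/by-id/

# Attach the physical disk to the NFS host VM as an extra SCSI device
# (105 and the serial are placeholders — use your own VM ID and disk ID)
qm set 105 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```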
This lets us give one of our VMs direct access to a physical disk. Since it's a physical disk, it's very easy to disconnect it from one VM and attach it to another! And while it IS included in backups by default, it's far easier to just detach it, take a backup of the very small nfs host VM, and reattach it.
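The detach/backup/reattach dance could look roughly like this — again with a hypothetical VM ID, storage name, and disk ID:

```sh
# Detach the passthrough disk from the VM config
# (the data on the disk itself is untouched)
qm set 105 --delete scsi1

# Back up the now-small NFS host VM
vzdump 105 --storage local --mode snapshot

# Reattach the disk afterwards
qm set 105 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```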
The nfs host VM is a small Alpine Linux VM with nfs-utils installed. Available NFS "disks" are directories in /storage/, made available through entries in /etc/exports. This is easily managed by Ansible - if we need to connect a new host to our NFS, we can simply add a line to /etc/exports that contains the host's IP and the target directory, for example /storage/media.
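An exports entry of that shape might look like this — the client IP and the options are illustrative, not prescribed by this setup:

```sh
# /etc/exports on the nfs host VM
# <directory>  <client-ip>(<options>)
/storage/media  192.168.1.20(rw,no_subtree_check)

# Apply changes without restarting the NFS server
exportfs -ra
```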
On clients, it should be easy to manage as well - all we need is an /etc/fstab entry, and the share is mounted automatically on boot.
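A minimal client-side sketch, assuming the nfs host is reachable at 192.168.1.10 and we mount under /mnt/media (both hypothetical); `_netdev` tells the system to wait for the network before mounting:

```sh
# /etc/fstab on a client
# <server>:<export>            <mountpoint>  <type>  <options>         <dump> <pass>
192.168.1.10:/storage/media    /mnt/media    nfs     defaults,_netdev  0      0

# Mount everything from fstab now, without rebooting
mount -a
```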
One issue is restarting the hypervisor - for the fstab mount to work, both netvm1 and the nfs host must already be running. Conveniently, Proxmox offers a "boot order" option: we set netvm1 to "1" so it starts first, the nfs host to "5" to leave some room, and everything else has no value (or maybe "50" later as a default).
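The boot order can also be set from the shell via `qm set --startup`; the VM IDs here are hypothetical:

```sh
# Start netvm1 first, then the nfs host; other VMs keep the default order
qm set 101 --startup order=1   # netvm1
qm set 105 --startup order=5   # nfs host VM
```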