Remount NFS share on ProxMox

Introduction

When a mounted NFS share on a ProxMox server becomes unavailable for an extended period, the server can fail to restore the mount once the share becomes reachable again. This can be fixed by rebooting the ProxMox server (and, if you're unlucky, the whole cluster), but there is a more elegant way.
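If you are not sure whether a mount is actually hanging, a quick sanity check (using the mount point from the example below; substitute your own) is to stat the mount point with a timeout, since a plain ls or stat on a stale NFS mount can block indefinitely:

root@ayaka:~# timeout 5 stat /mnt/pve/backup_remote

If the command times out instead of returning, the mount is hanging.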

Fixing the NFS mount

To fix the NFS mount without rebooting your ProxMox server, log in on the command line as root. First, find the storage ID of the hanging mount (typically the same name as displayed in the web interface):

root@ayaka:~# pvesm status
got timeout
unable to activate storage 'backup_remote' - directory '/mnt/pve/backup_remote' does not exist or is unreachable
Name                 Type     Status           Total            Used       Available        %
ISO                   nfs     active      6542483968      5682793728       859149568   86.86%
backup                nfs     active      6542483968      5683334144       859149824   86.87%
backup_remote         nfs   inactive               0               0               0    0.00%
local                 dir     active        18077696         6398836        11678860   35.40%
local-lvm         lvmthin     active         2097152               0         2097152    0.00%
vms                   nfs     active       532508160       275493120       257015040   51.74%

As we can see, the backup_remote share is the one having the issue here.

To get it to mount again, we first disable it:

root@ayaka:~# pvesm set backup_remote --disable

And then we force a lazy unmount of the stale mount point (-f forces the unmount even though the NFS server may be unreachable, and -l lazily detaches the filesystem so that busy references do not block it):

root@ayaka:~# umount -f -l /mnt/pve/backup_remote
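Before re-enabling the storage, you can optionally confirm that the stale mount is really gone; this check is not part of the original procedure, just a sanity check. findmnt prints nothing (and exits non-zero) if the path is no longer mounted:

root@ayaka:~# findmnt /mnt/pve/backup_remote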

Finally, we enable the storage again, which causes ProxMox to remount it:

root@ayaka:~# pvesm set backup_remote --delete disable

Checking the status again after a few seconds, we can see that backup_remote is marked as "active" again:

root@ayaka:~# pvesm status
Name                 Type     Status           Total            Used       Available        %
ISO                   nfs     active      6542483968      5682955520       858987776   86.86%
backup                nfs     active      6542483968      5683496192       858987776   86.87%
backup_remote         nfs     active      2473882624      1044884480      1303257088   42.24%
local                 dir     active        18077696         6402552        11675144   35.42%
local-lvm         lvmthin     active         2097152               0         2097152    0.00%
vms                   nfs     active       532508160       275493120       257015040   51.74%
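If this happens regularly, the steps above can be wrapped in a small script. This is a minimal sketch, not part of the original guide; it assumes the storage ID is passed as the first argument and that the mount point follows ProxMox's usual /mnt/pve/&lt;storage ID&gt; layout:

#!/bin/bash
# remount-nfs.sh - remount a hanging NFS storage on ProxMox without rebooting
# Usage: remount-nfs.sh <storage ID>
set -eu

STORAGE="$1"
MOUNTPOINT="/mnt/pve/${STORAGE}"

# Temporarily disable the storage so ProxMox stops trying to activate it
pvesm set "$STORAGE" --disable

# Force a lazy unmount of the stale mount point
umount -f -l "$MOUNTPOINT"

# Re-enable the storage; ProxMox will mount it again automatically
pvesm set "$STORAGE" --delete disable

# Show the storage status so you can confirm the share is active again
pvesm status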