TrueNAS NFS Best Practices for Linux Environments


NFS is the standard protocol for file sharing in Linux environments. Combined with TrueNAS, it forms a powerful storage platform that covers everything from simple home directories to shared storage for Proxmox VE clusters. However, there is a significant difference between an NFS setup that merely works and one that is optimally configured — in terms of performance, security, and stability.

This article covers the most important best practices for NFS on TrueNAS in production Linux environments: from protocol selection and export options to performance tuning.

NFSv3 vs NFSv4 vs NFSv4.1/pNFS

The choice of NFS version directly impacts security, performance, and feature set.

Criterion               NFSv3                               NFSv4                          NFSv4.1/pNFS
Port usage              Multiple (rpcbind, mountd, statd)   Single port (2049)             Single port (2049)
Firewall friendliness   Complex (dynamic ports)             Simple                         Simple
Authentication          IP-based (AUTH_SYS)                 Kerberos-capable (RPCSEC_GSS)  Kerberos-capable (RPCSEC_GSS)
ID mapping              Numeric UID/GID                     Username-based (idmapd)        Username-based (idmapd)
Delegation              No                                  Yes (client caching)           Yes (client caching)
Parallel access         No                                  No                             Yes (pNFS)
Recommendation          Legacy systems only                 Standard for production        Large cluster environments

Recommendation: NFSv4 is the standard for new deployments. Use NFSv3 only for older systems that do not support NFSv4.
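To confirm which version a client actually negotiated, inspect the mount flags. The snippet below extracts the vers= field from nfsstat -m style output; the sample line is hard-coded for illustration — on a live client, run nfsstat -m directly instead:

```shell
# Sample output in the style of `nfsstat -m`; on a real client,
# replace the here-string with the actual command output.
sample='/mnt/nfs-data from 10.0.10.1:/mnt/tank/nfs-data
 Flags: rw,vers=4.2,rsize=1048576,wsize=1048576,hard,proto=tcp,timeo=600'

# Extract the negotiated protocol version
echo "$sample" | grep -o 'vers=[0-9.]*'
```

If the output shows vers=3 even though the server offers NFSv4, the client is likely mounting with -t nfs instead of -t nfs4, or the export has NFSv4 disabled.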

Configuring TrueNAS NFS Shares

The setup is done via the web GUI under Sharing > Unix Shares (NFS).

Create a Share

Sharing > Unix Shares (NFS) > Add
  Path:           /mnt/tank/nfs-data
  Description:    Linux NFS Share
  Enabled:        Yes
  NFSv4:          Enabled

Export Options in Detail

Export options control which clients can access the share and with what permissions.

Authorized Networks:   10.0.10.0/24
Authorized Hosts:      (empty = all hosts on network)
maproot User:          root
maproot Group:         wheel
mapall User:           (empty)
mapall Group:          (empty)
Security:              sys

maproot maps the client’s root user to a specific user on TrueNAS. mapall maps all users to a single user — useful when all clients should have identical access.

Security note: Only set maproot when root access is genuinely required. For general file shares, mapall set to an unprivileged user is the safer choice.
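For a typical group share where clients should not retain individual privileges, a safer export could look like this (a sketch — the account names nfsuser and nfsgroup are placeholders for an unprivileged account and group you create on TrueNAS beforehand):

```
Authorized Networks:   10.0.10.0/24
maproot User:          (empty)
maproot Group:         (empty)
mapall User:           nfsuser
mapall Group:          nfsgroup
Security:              sys
```

With this setting, every client write lands on the server as nfsuser:nfsgroup, regardless of the client-side UID.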

Client-Side Mount Configuration

Manual Mount

mount -t nfs4 10.0.10.1:/mnt/tank/nfs-data /mnt/nfs-data

Permanent Mount via /etc/fstab

10.0.10.1:/mnt/tank/nfs-data  /mnt/nfs-data  nfs4  rw,hard,intr,rsize=1048576,wsize=1048576,timeo=600,retrans=2,_netdev  0 0

Key mount options explained:

  • hard — Retries indefinitely on server failure (recommended for production). The alternative soft aborts after timeout and can lead to data loss.
  • intr — Historically allowed interrupting hung NFS operations. Ignored since Linux kernel 2.6.25 (SIGKILL can always interrupt a hung mount); harmless to keep for compatibility with older systems.
  • rsize/wsize=1048576 — Read and write block size of 1 MB. Maximizes throughput for large files.
  • _netdev — Waits for network availability before attempting the mount during boot.
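Requested and effective options can differ — the server may cap rsize/wsize below what the client asked for — so verify the options on the live mount. The sketch below extracts the effective wsize from mount-style output; the sample line is hard-coded here — on a real client, use `findmnt -t nfs4 -o TARGET,OPTIONS` or `mount | grep nfs4` instead:

```shell
# Sample line in the style of `mount | grep nfs4`; on a real client,
# feed the actual command output instead of this variable.
sample='10.0.10.1:/mnt/tank/nfs-data on /mnt/nfs-data type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,hard,timeo=600)'

# Extract the effective write block size
echo "$sample" | grep -o 'wsize=[0-9]*'
```

If the effective value is lower than requested, the server-side limit won — tuning the fstab entry further has no effect in that case.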

Kerberos Authentication with NFSv4

In security-sensitive environments, Kerberos replaces IP-based authentication with cryptographic identity verification.

Security Levels

Level      Description
sec=sys    Default — UID/GID-based, no encryption
sec=krb5   Kerberos authentication, data unencrypted
sec=krb5i  Authentication + integrity checking
sec=krb5p  Authentication + encryption (highest security)

Setting Up the Kerberos Client

apt install krb5-user nfs-common

Configure the realm in /etc/krb5.conf:

[libdefaults]
    default_realm = EXAMPLE.LAN

[realms]
    EXAMPLE.LAN = {
        kdc = dc01.example.lan
        admin_server = dc01.example.lan
    }

Deploy the keytab on the client and mount with Kerberos:

mount -t nfs4 -o sec=krb5p 10.0.10.1:/mnt/tank/nfs-data /mnt/nfs-secure
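To make the Kerberos-secured mount persistent, the fstab pattern from above only needs the sec option added (a sketch, assuming the same server and the /mnt/nfs-secure mount point used in the manual mount):

```
10.0.10.1:/mnt/tank/nfs-data  /mnt/nfs-secure  nfs4  rw,hard,sec=krb5p,rsize=1048576,wsize=1048576,_netdev  0 0
```

Note that the client needs a valid keytab at boot for this mount to succeed unattended.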

Performance Tuning

Server Side: Increasing NFS Threads

TrueNAS starts 16 NFS threads by default. For environments with many clients or high load, this value should be increased.

Services > NFS > Settings
  Number of Servers:  32 (or 64 for >20 clients)

Alternatively, check via CLI — on TrueNAS CORE (FreeBSD):

sysctl vfs.nfsd.server_max_nfsds

On TrueNAS SCALE (Linux):

cat /proc/fs/nfsd/threads

Client Side: Optimizing rsize and wsize

Value                  Use case
rsize/wsize=65536      Conservative, good for many small files
rsize/wsize=262144     Balanced
rsize/wsize=1048576    Maximum, optimal for large files and backups

async vs sync

  • sync (TrueNAS default): Every write is committed to disk before acknowledging the client. Safe but slower.
  • async: Writes are buffered. Significantly faster, but risks data loss during power failure.

Recommendation: sync for production data, async only for temporary data or backup targets where source data is secured elsewhere.
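On TrueNAS, this behavior is controlled per dataset via the ZFS sync property rather than in the NFS export itself. A sketch, assuming the tank/nfs-data dataset from the examples above (sync=disabled corresponds to fully asynchronous writes and carries the data-loss risk described above):

```
# Show the current setting for the dataset
zfs get sync tank/nfs-data

# standard = honor client sync requests (default); disabled = async
zfs set sync=standard tank/nfs-data
```

A SLOG device (separate ZFS intent log on fast flash) can recover much of the sync performance penalty without giving up the safety guarantee.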

Proxmox NFS Storage Integration

Proxmox VE natively supports NFS as a storage backend — ideal for ISO images, container templates, and backups.

Datacenter > Storage > Add > NFS
  ID:       truenas-nfs
  Server:   10.0.10.1
  Export:   /mnt/tank/nfs-data
  Content:  ISO image, Container template, VZDump backup file
  NFS Version: 4.2

Proxmox automatically mounts the NFS share on all cluster nodes. For VM disks, we still recommend iSCSI, since block storage carries lower protocol overhead.

autofs: Dynamic Mounting

In environments with many NFS shares or workstations, autofs is a better alternative to static fstab entries. Shares are mounted on access and automatically unmounted after a period of inactivity.

apt install autofs

In /etc/auto.master:

/mnt/nfs  /etc/auto.nfs  --timeout=300

In /etc/auto.nfs:

data  -rw,hard,intr,rsize=1048576,wsize=1048576  10.0.10.1:/mnt/tank/nfs-data
media -ro,soft  10.0.10.1:/mnt/tank/nfs-media

Then enable the service:

systemctl enable --now autofs

Accessing /mnt/nfs/data automatically mounts the share. After 300 seconds of inactivity, it is unmounted again.

Troubleshooting

Stale File Handles

Occurs when the server-side export is modified or the share is recreated while the client still has an active mount.

umount -f /mnt/nfs-data
mount -a

Permission Denied

Most common causes: UID/GID mismatch between client and server, or missing network authorization in the export options.

# Check current NFS exports on TrueNAS
showmount -e 10.0.10.1

# Verify UID/GID on the client
id username

Slow Performance

# Display NFS statistics on the client
nfsstat -c

# Check mount options of the active mount
mount | grep nfs

# Test network throughput
dd if=/dev/zero of=/mnt/nfs-data/testfile bs=1M count=1024 oflag=direct

Typical causes: incorrect rsize/wsize values, missing Jumbo Frames, too few NFS threads on the server.
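To separate network effects from disk effects, it helps to compare the NFS dd result with a local write baseline (a sketch — writes a temporary file under /tmp and removes it afterwards):

```shell
# Local write baseline for comparison with the NFS dd test above;
# prints only dd's summary line (bytes copied, duration, throughput).
dd if=/dev/zero of=/tmp/nfs-baseline bs=1M count=64 2>&1 | tail -n 1

# Clean up the test file
rm -f /tmp/nfs-baseline
```

If the local baseline is also slow, the bottleneck is the client's disk or caching, not the NFS path.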

Monitoring with DATAZONE Control

NFS storage is business-critical in many environments — an outage affects all connected clients simultaneously. With DATAZONE Control, we comprehensively monitor your NFS infrastructure:

  • Mount availability: Automatic detection of stale mounts and failed NFS connections
  • Performance metrics: NFS operations per second, latency, throughput
  • Server health: NFS thread utilization, ZFS pool status, storage capacity
  • Client monitoring: Mount status and NFS error rates across all Linux clients

Thresholds trigger automatic alerts — before a slow NFS connection turns into a production outage.

Conclusion

NFS on TrueNAS is a proven solution for file sharing in Linux environments. With the right configuration — NFSv4, appropriate export options, optimized mount parameters, and Kerberos for security — it becomes a reliable and high-performance storage platform.

The key takeaways: NFSv4 as the standard, a dedicated storage network, rsize/wsize tuned to the workload, sync for production data, and continuous monitoring.


Looking to set up NFS storage on TrueNAS for your Linux environment? Contact us — we configure your NFS infrastructure from planning to monitoring.

Need IT consulting?

Contact us for a no-obligation consultation on Proxmox, OPNsense, TrueNAS and more.

Get in touch