Network isolation is a fundamental requirement in any virtualization environment. Traditionally, this involves physical switches with VLAN configuration and manually maintained Linux bridges on each hypervisor. Starting with Proxmox VE 8.1, SDN (Software-Defined Networking) provides an integrated solution that manages virtual networks centrally through the web interface — independent of the physical network infrastructure.
## What Is Proxmox SDN?
SDN separates the logical network configuration from physical hardware. Instead of configuring bridges, VLANs, and firewall rules individually on each cluster node, administrators define virtual networks centrally in Proxmox. The configuration is automatically distributed and applied to all nodes in the cluster.
This solves a concrete problem: in a Proxmox cluster with three nodes and ten isolated networks, without SDN you would need to manually configure ten bridges or VLAN interfaces on each node — a total of 30 configuration entries that must be kept in sync. With SDN, this happens automatically.
## Architecture: Zones, VNets, and Subnets
Proxmox SDN uses a three-tier hierarchy:
Zones define the network technology and determine how traffic is transported between nodes. Each zone uses a specific type — Simple, VLAN, VXLAN, or EVPN.
VNets (Virtual Networks) are the virtual networks within a zone. VMs and containers connect to VNets, not to physical bridges. A VNet appears on the nodes as a bridge through which Proxmox controls traffic flow.
Subnets define IP ranges and gateway addresses for a VNet. Optionally, DHCP and DNS can be provided through integrated dnsmasq.
Controllers sit alongside this hierarchy: they manage routing information between nodes in EVPN zones. Simple, VLAN, and VXLAN zones do not require a controller.
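On disk, this hierarchy maps to plain-text files under /etc/pve/sdn/, one per tier, which the pmxcfs cluster file system replicates to every node. A sketch of the layout (the exact set of files depends on which features you configure):

```
/etc/pve/sdn/
├── zones.cfg        # zone definitions (type, transport options)
├── vnets.cfg        # VNets, each referencing its zone
├── subnets.cfg      # subnets, each referencing its VNet
└── controllers.cfg  # only present when EVPN controllers are defined
```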
## Zone Types Overview
| Zone Type | Transport | Nodes | Use Case |
|---|---|---|---|
| Simple | Local bridge | Single node | Test environments, standalone servers |
| VLAN | 802.1Q tags | Cluster | Leverage existing VLAN infrastructure |
| VXLAN | UDP overlay (port 4789) | Cluster | Network isolation without physical VLANs |
| EVPN | VXLAN + BGP routing | Cluster | Inter-VNet routing, complex topologies |
Simple is suitable for single servers or quick tests. No overlay, no controller — a VNet is created as a local bridge on the node.
VLAN uses the existing switch infrastructure with 802.1Q tagging. Useful when physical switches already provide VLANs and you only want to simplify the Proxmox configuration.
VXLAN is the best choice for most cluster environments. A Layer 2 overlay network over UDP transports traffic between nodes, independent of the physical network. Thanks to the 24-bit VXLAN Network Identifier (VNI), up to 16 million isolated networks are possible.
EVPN extends VXLAN with BGP-based routing between VNets. Typically not required for SMB environments.
## Setting Up a Simple Zone
For a single Proxmox node, a Simple zone is sufficient. Create a new zone under Datacenter > SDN > Zones:
```
Zone ID: simple1
Type: Simple
```
Then create a VNet under Datacenter > SDN > VNets:
```
VNet ID: vnet-internal
Zone: simple1
```
Define a subnet for the VNet:
```
Subnet: 10.10.10.0/24
Gateway: 10.10.10.1
SNAT: enabled (for internet access)
```
After configuring, activate the changes via SDN > Apply. The bridges are only created on the nodes after applying.
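Applying writes the generated interface definitions to /etc/network/interfaces.d/sdn on each node. For the example above, the resulting stanza looks roughly like this. This is a sketch: the exact options vary by version, and the vmbr0 uplink in the SNAT rule is an assumption about your host's default bridge.

```
auto vnet-internal
iface vnet-internal
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # generated by the SNAT option; assumes vmbr0 as uplink
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
```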
## VXLAN Zone for Multi-Node Clusters
In a cluster with multiple nodes, VXLAN provides the cleanest solution. Nodes communicate through an overlay network that requires no specially configured physical switch.
```
Zone ID: vxlan1
Type: VXLAN
Peers: 192.168.1.10,192.168.1.11,192.168.1.12
MTU: 1450
```
The peers are the IP addresses of the cluster nodes on the management network. The MTU should be set to 1450 because the VXLAN encapsulation adds 50 bytes to each packet: with a physical MTU of 1500, this leaves 1450 bytes for the payload. With jumbo frames (MTU 9000), the value can be increased to 8950.
VNets in a VXLAN zone require a unique VXLAN tag number (VNI), comparable to a VLAN ID:
```
VNet ID: vnet-prod
Zone: vxlan1
Tag: 100

VNet ID: vnet-dev
Zone: vxlan1
Tag: 200
```
Both VNets are completely isolated from each other. VMs in vnet-prod cannot access vnet-dev.
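Expressed in the SDN configuration files, the zone and the two VNets end up as entries like the following. This is a sketch based on the Proxmox section-config format; verify the exact key names against the files generated on your version.

```
# /etc/pve/sdn/zones.cfg
vxlan: vxlan1
        peers 192.168.1.10,192.168.1.11,192.168.1.12
        mtu 1450

# /etc/pve/sdn/vnets.cfg
vnet: vnet-prod
        zone vxlan1
        tag 100

vnet: vnet-dev
        zone vxlan1
        tag 200
```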
## Assigning VNets to VMs and Containers
After applying, the VNets appear as selectable bridges in the VM and container configuration. In the web interface under Hardware > Network Device, select the desired VNet as bridge:
```
Bridge: vnet-prod
Model: VirtIO (paravirtualized)
```
Alternatively via the CLI:
```
qm set 100 -net0 virtio,bridge=vnet-prod
pct set 200 -net0 name=eth0,bridge=vnet-dev,ip=dhcp
```
The assignment works identically for KVM VMs and LXC containers.
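The CLI commands above simply write the corresponding line into the guest's configuration file. For VM 100, /etc/pve/qemu-server/100.conf then contains a net0 entry along these lines (the MAC address is illustrative, Proxmox generates one automatically):

```
net0: virtio=BC:24:11:2A:7F:01,bridge=vnet-prod
```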
## DHCP and DNS with dnsmasq
Since Proxmox VE 8.1, SDN can provide an integrated DHCP server per subnet. The configuration is done in the subnet settings:
```
Subnet: 10.10.10.0/24
Gateway: 10.10.10.1
DHCP Range: 10.10.10.100 - 10.10.10.200
DNS Server: 10.10.10.1
```
Proxmox automatically starts a dnsmasq instance for the subnet. The DHCP server assigns IP addresses within the defined range and registers hostnames. For production environments, a dedicated DNS server is still recommended — the integrated DHCP is best suited for development and test environments.
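Under the hood, the DHCP range lands in the subnet definition, and the zone must have dnsmasq selected as its DHCP backend. A sketch of the resulting entries (key names may differ slightly between versions; check the generated files):

```
# /etc/pve/sdn/zones.cfg
simple: simple1
        dhcp dnsmasq

# /etc/pve/sdn/subnets.cfg
subnet: simple1-10.10.10.0-24
        vnet vnet-internal
        gateway 10.10.10.1
        dhcp-range start-address=10.10.10.100,end-address=10.10.10.200
```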
## SDN vs Linux Bridges vs Open vSwitch
| Criterion | Linux Bridge | Open vSwitch | Proxmox SDN |
|---|---|---|---|
| Configuration | Manual per node | Manual per node | Centralized in cluster |
| VLAN support | Manual (vlan-aware) | Yes (flow rules) | Yes (VLAN zone) |
| Overlay networks | No | Yes (GRE, VXLAN) | Yes (VXLAN, EVPN) |
| Automatic distribution | No | No | Yes (SDN Apply) |
| Integrated DHCP | No | No | Yes (dnsmasq) |
| Web GUI | Basic interfaces only | No | Full support |
| Complexity | Low | High | Medium |
Linux bridges remain a solid choice for simple setups with few networks. SDN becomes worthwhile when multiple isolated networks are needed across a cluster. Open vSwitch (OVS) is still an option but is increasingly being replaced by Proxmox SDN.
## Monitoring with DATAZONE Control
Network isolation alone is not enough — the networks must also be monitored. DATAZONE Control covers SDN networks across the cluster:
- Interface status: Monitor bridge and VXLAN interfaces across all nodes
- Traffic analysis: Detect bandwidth utilization per VNet
- Configuration drift: Identify deviations between SDN configuration and actual state on nodes
- Alerting: Notifications for failed overlay tunnels or faulty bridges
This ensures you maintain visibility even as the number of virtual networks grows.
## Conclusion
Proxmox SDN significantly simplifies network management in virtualization environments. Instead of manually maintaining bridges and VLANs on each node, you define networks once centrally. VXLAN provides the most flexible solution for cluster environments: complete Layer 2 isolation without dependency on physical switch configuration. For SMB environments with three to five nodes and ten to twenty isolated networks, SDN delivers a substantial improvement in clarity and maintainability.
Looking to set up Proxmox SDN in your cluster or migrate existing networks? Contact us — we plan the network architecture and implement the configuration.