LACP, Bonding, Bridges, and VLAN Tagging


This article provides some brief information on setting up bonded interfaces in Linux, configuring Link Aggregation Control Protocol (LACP), and setting up tagged VLANs. Articles on these topics often assume background knowledge, so this one provides some background and resources to verify that knowledge or fill it in where it’s lacking. This is not intended to be a detailed or deep dive into networking in Linux, just enough information and configuration for the current requirement. There are other settings and options available. Feel free to continue reading on the topics, or even take a class, for more information.

As always links to some of the pages I used for this article are in the references section at the end. If you find a better link, by all means, let me know and I’ll add it to the references.

Link Aggregation Control Protocol

We’ll start with LACP. We use LACP on a bonded interface to provide redundancy and load balancing and to improve performance. A collection of LACP interfaces is called a Link Aggregation Group (LAG). Unlike other options, LACP collects all interfaces into a single, wider trunk for traffic. If we have 4 gigabit interfaces, we have a 4 gigabit trunk and not 4 individual 1 gigabit pipes. In my example below, I have 12 network interfaces in a LAG for bond0, giving a 12 gigabit trunk.


mode

To enable LACP, set the bonding mode to 802.3ad.


lacp_rate

The lacp_rate flag has two options, 0 and 1. If set to 0 (slow; the default), an LACPDU is sent to the peer (such as the switch) every 30 seconds. If set to 1 (fast), an LACPDU is sent to the peer every 1 second.


xmit_hash_policy

There are three policy settings for this option.

  • layer2 (Data Link) – The default policy. The transmit hash is based on MAC addresses, so all traffic between a given pair of hosts stays on a single slave interface in the bonded LACP.
  • layer2+3 (Data Link + Network) – The transmit hash is based on MAC and IP addresses, so traffic to a given destination stays on the same slave interface. More balanced, so it provides good performance and stability.
  • layer3+4 (Network + Transport) – Creates a transmit hash based on the upper (Transport) layer whenever possible. This allows multiple connections to span multiple slave interfaces in the bonded LACP, though a single connection will still not span multiple interfaces. Reverts to layer2 (Data Link) behavior for non-IP traffic. Note that this isn’t fully compliant with the 802.3ad standard. It can be the best option for performance, but as noted, it’s not fully compliant.
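To make the hashing concrete, here is a small sketch of how the layer2 policy picks a slave: the kernel XORs the last byte of the source and destination MAC addresses and takes the result modulo the number of slaves. The MAC bytes and slave count below are made-up values for illustration.

```shell
# Simplified layer2 transmit hash: XOR the last byte of the source and
# destination MAC addresses, then take the result modulo the slave count.
# All values here are hypothetical.
src=0x45     # last byte of the source MAC
dst=0x6d     # last byte of the destination MAC
slaves=8     # number of slaves in the bond
echo $(( (src ^ dst) % slaves ))   # prints the chosen slave index: 0
```

Because the inputs never change for a given pair of hosts, all of their traffic lands on the same slave, which is why layer2 alone can leave some links idle.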


miimon

This is how often, in milliseconds, the bond checks the bonded interfaces to ensure they’re still active. A good value is 100; note that the driver’s own default is 0, which disables monitoring, so set it explicitly.

You can verify the setting by checking sysfs.

$ cat /sys/devices/virtual/net/bond0/bonding/miimon
100

Bonded Interfaces

Network Bonding is essentially combining multiple network interfaces into a single interface. There are several modes you can select to manage the behavior of the bonded interface. For our purposes, we’re using mode 4 which is the 802.3ad specification, Dynamic Link Aggregation.

Bonding Module

It’s very likely that whatever system you’re going to set up bonding on will already have the bonding module available. But just to be sure, you’ll want to verify it.

# modinfo bonding
filename:       /lib/modules/3.10.0-693.21.1.el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
author:         Thomas Davis, and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
retpoline:      Y
rhelversion:    7.4
srcversion:     33C47E3D00DF16A17A5AB9C
intree:         Y
vermagic:       3.10.0-693.21.1.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        03:DA:60:92:F6:71:13:21:B5:AC:E1:2E:84:5D:A9:73:36:F7:67:4D
sig_hashalgo:   sha256

If modinfo doesn’t return any information, you’ll want to load the bonding module.

# modprobe --first-time bonding
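If you’d rather script the check than eyeball modinfo, here is a small sketch; the bonding_loaded helper is my own, not a standard tool.

```shell
# bonding_loaded succeeds when lsmod-style output on stdin contains a
# bonding entry (module names appear at the start of each lsmod line).
bonding_loaded() {
  grep -q '^bonding'
}

# Check the running kernel and suggest the fix if the module is absent.
if lsmod 2>/dev/null | bonding_loaded; then
  echo "bonding module is loaded"
else
  echo "load it with: modprobe --first-time bonding"
fi
```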

Network Interfaces

The servers have a 4-port network card in addition to the four ports on the motherboard. I will be creating an 8 network interface bond.

  • eno5-eno8 – On Board
  • ens1f0-ens1f3 – Network Card


For CentOS, multiple files are created in the /etc/sysconfig/network-scripts directory: one for the bond0 interface and one for each interface listed above.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"
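Only the BONDING_OPTS line is shown above. A complete ifcfg-bond0 might look like the following sketch; the entries other than BONDING_OPTS are typical values for this kind of setup, not taken from the original file.

```sh
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"
```

Note there’s no IPADDR here; as described later, the bridge interface carries the IP address, not the bond.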

For CentOS, we’ll create a file for each member interface. This is just the first file, but each interface file will be the same except for the DEVICE and NAME entries.

# vi /etc/sysconfig/network-scripts/ifcfg-eno5
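The contents of the slave files aren’t shown above. For the eno5 port, a sketch of what such a file might contain (the MASTER and SLAVE entries are the essential part; the rest are assumed typical values):

```sh
DEVICE=eno5
NAME=bond0-slave-eno5
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```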

For Ubuntu, which I’m using as my KVM server, you’ll edit the single /etc/netplan/01-netcfg.yaml file. Remember that spacing is critical when working in YAML. Initially you’ll disable dhcp4 for all interfaces, then set up the bonding configuration.

$ cat 01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
    eno2:
      dhcp4: no
    enp8s0f0:
      dhcp4: no
    enp8s0f1:
      dhcp4: no
    enp12s0f0:
      dhcp4: no
    enp12s0f1:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
        - enp8s0f0
        - enp8s0f1
        - enp12s0f0
        - enp12s0f1
      parameters:
        mode: 802.3ad
        transmit-hash-policy: layer3+4
        mii-monitor-interval: 100

Bond Status

You can check the bonding status by viewing the current configuration in /proc.

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: be:70:a3:ab:10:6d
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: enp13s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:ca:12:45
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: be:70:a3:ab:10:6d
    port key: 9
    port priority: 255
    port number: 1
    port state: 77
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1
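If you check this file often, the per-slave status can be summarized with a little awk. This helper is my own, not part of the bonding driver; it counts slaves whose MII status is up, skipping the bond’s own MII Status line at the top of the report.

```shell
# Count slave interfaces reporting "MII Status: up" in bonding status
# output like the report above. The first MII Status line belongs to the
# bond itself, so only lines after a "Slave Interface:" header count.
count_up_slaves() {
  awk '/^Slave Interface:/ { inslave = 1 }
       inslave && /^MII Status: up/ { n++ }
       END { print n + 0 }'
}

# Hypothetical use against a live bond:
if [ -r /proc/net/bonding/bond0 ]; then
  count_up_slaves < /proc/net/bonding/bond0
fi
```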


Bridge Interfaces

A bridge interface connects two networks together at layer 2. Generally you’re connecting a VM network, such as with KVM, to a physical or bonded interface.


You can get some bridge information using the brctl command.

# brctl show
bridge name	bridge id		STP enabled	interfaces
br2719		8000.98f2b3288d26	no		bond2.719
br2751		8000.98f2b3288d26	no		bond2.751
br2752		8000.98f2b3288d26	no		bond2.752
br700		8000.98f2b3288d24	no		bond0.700
br710		8000.98f2b3288d25	no		bond1.710
br750		8000.98f2b3288d24	no		bond0.750
virbr0		8000.525400085dad	yes		virbr0-nic
virbr1		8000.525400a5a4ea	yes		virbr1-nic

When using nmcli to create a bridge, you can connect an interface that doesn’t have a connection profile or one that already has a connection profile. A connection profile is NetworkManager’s stored configuration for an interface, not the interface itself.

First create the bridge interface.

# nmcli con add type bridge con-name br810 ifname br810

For an interface without a connection profile, create a slave profile for each port.

# nmcli con add type ethernet slave-type bridge con-name br810-eno5 \
    ifname eno5 master br810
# nmcli con add type ethernet slave-type bridge con-name br810-eno6 \
    ifname eno6 master br810

However, if the interface already has a connection profile, you just designate the bridge as its master.

# nmcli con mod bond0.810 master br810

For Ubuntu systems, you’ll again edit the /etc/netplan/01-netcfg.yaml file to add the bridge information. The bridge name below (br0) is a placeholder, since the original name wasn’t preserved; use whatever name your host expects.

  bridges:
    br0:
      dhcp4: false
      nameservers:
        search: ['internal.pri']
      interfaces:
        - bond0

VLAN Tagging

VLAN tagging simply lets you tag the traffic leaving the underlying interface with a VLAN ID so that the switch knows which network the traffic belongs to and where it needs to be sent.
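As a concrete illustration (not something you configure by hand), the tag itself is a 4-byte 802.1Q header inserted into each Ethernet frame: the TPID 0x8100 followed by a 16-bit TCI carrying the priority, drop-eligible bit, and 12-bit VLAN ID. For VLAN 810, used later in this article:

```shell
# Compute the 802.1Q TCI for VLAN 810 with default priority.
vid=810
pcp=0    # priority code point (bits 15-13)
dei=0    # drop eligible indicator (bit 12)
tci=$(( (pcp << 13) | (dei << 12) | vid ))
printf '0x8100 0x%04x\n' "$tci"    # prints: 0x8100 0x032a
```

Those four bytes are what the switch inspects to decide which VLAN a frame belongs to.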


This will describe setting up one of the Dell servers. This is the interface hierarchy. At the top level is the bridged interface followed by the bonded and then physical interfaces.

  • br810 – Bridged interface. This has the IP Assignments.
    • bond0.810 – Bonded VLAN tagged interface, references br810.
      • bond0 – The underlying bonded interface. This has the BONDING_OPTS line.
        • eno5 – The lower physical interface and references bond0.
        • eno6 – The lower physical interface.
        • eno7 – The lower physical interface.
        • etc…

Here is the list of IP addresses. This is for a dual-interface system, which my Dell is. For the Ubuntu-based Dell box, it’s a single interface as described above. The only real change when viewing the examples below should be the IP address for each system.

Domain is internal.pri.

Current Server Name | New Server Name | App/IP Address | Gateway | Management/IP Address | Gateway

The production Dell system has 8 interfaces: eno5-eno8 and ens1f0-ens1f3. We’ll set up the bridge, the bonded VLAN-tagged interface, the bonded interface, and then connect all the interfaces to the bond.

Since this work was done at the console, the example output below will not have all the information.

The configuration process is to create the bonded interface first. Then add the underlying physical interfaces. Next add a bridged interface. And finally configure the VLAN for the tagged interface.

# nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3" \
    ipv4.method disabled ipv6.method ignore

The bond0 interface doesn’t get an IP address; the bridge interface does. Hence the ipv4.method disabled option.

Next add the physical interfaces to the bond. Since they already have connection profiles, we don’t need to further configure the interfaces.

# nmcli con add type bond-slave ifname eno5 master bond0
# nmcli con add type bond-slave ifname eno6 master bond0
# nmcli con add type bond-slave ifname eno7 master bond0
# nmcli con add type bond-slave ifname eno8 master bond0
# nmcli con add type bond-slave ifname ens1f0 master bond0
# nmcli con add type bond-slave ifname ens1f1 master bond0
# nmcli con add type bond-slave ifname ens1f2 master bond0
# nmcli con add type bond-slave ifname ens1f3 master bond0
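Those eight commands can also be generated in a loop. This sketch just echoes them so you can review the output first; pipe it to sh as root to actually run them.

```shell
# Generate the eight nmcli bond-slave commands for this host's ports.
for nic in eno5 eno6 eno7 eno8 ens1f0 ens1f1 ens1f2 ens1f3; do
  echo "nmcli con add type bond-slave ifname $nic master bond0"
done
```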

Now we configure a bridge interface. Make sure you change the IP address.

# nmcli con add type bridge con-name br810 ifname br810 ip4

We’ll need to add the gateway as well. And change the gateway address too.

# nmcli con mod br810 ipv4.method manual ipv4.gateway

Finally add the VLAN interface on top of the bond0 interface but with the bridge interface as the master.

# nmcli con add type vlan con-name bond0.810 ifname bond0.810 dev bond0 \
    id 810 master br810 slave-type bridge

That should bring the interface up. You can review the status by:

# more /proc/net/bonding/bond0

As the 810 network is a service network that isn’t reachable from outside, a second bridge and VLAN have been configured. Make sure you change the IP address.

# nmcli con add type bridge con-name br700 ifname br700 ip4

Add the gateway. And change the gateway address too.

# nmcli con mod br700 ipv4.method manual ipv4.gateway

And the VLAN.

# nmcli con add type vlan con-name bond0.700 ifname bond0.700 dev bond0 \
    id 700 master br700 slave-type bridge

Finally a route is needed to ensure we can access the servers. Make sure you change that last IP to the gateway for the Management IP address.

# nmcli con mod br700 +ipv4.routes ""

And remove the default route for br700. This changes the ‘DEFROUTE=yes’ option to ‘DEFROUTE=no’ and removes the gateway if one was set (as above).

# nmcli con mod br700 ipv4.never-default yes


This entry was posted in Computers, KVM.
