View Issue Details

ID: 0017162
Project: CentOS-7
Category: kernel
View Status: public
Last Update: 2020-03-17 14:13
Reporter: dragle
Priority: normal
Severity: minor
Reproducibility: always
Status: new
Resolution: open
Platform:
OS: CentOS
OS Version: 6
Product Version:
Target Version:
Fixed in Version:
Summary: 0017162: Bond MII Status: down in new kernel with ARP monitoring (was MII Status: up in previous kernel)
Description: After upgrading the kernel from 2.6.32-754.27.1.el6.x86_64 to 2.6.32-754.28.1.el6.x86_64, I found that my public network bond (using ARP monitoring in active/backup) behaves differently. Specifically, the MII Status on the primary NIC of the bond now reports "down", whereas in the previous kernel (and for as long before that as I know) it reported "up". I.e., with the older kernel:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
ARP Polling Interval (ms): 1000
ARP IP target/s (n.n.n.n form): 1.1.1.1

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: <snip>
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: <snip>
Slave queue ID: 0

And with the newer kernel:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
ARP Polling Interval (ms): 1000
ARP IP target/s (n.n.n.n form): 1.1.1.1

Slave Interface: eth0
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: <snip>
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: <snip>
Slave queue ID: 0

Since I'm using ARP monitoring:

BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=1.1.1.1 arp_validate=3 primary=eth0"

perhaps the MII Status is irrelevant anyway; but I want to make sure there are no unforeseen problems (i.e., that if a NIC bounces, failover works properly and the bond is able to go back to eth0 when it comes back up). As far as I can tell the bond is working properly.
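For spot-checking the per-slave state, the slave MII status lines can be pulled out of the bonding proc file with a short awk sketch. The sample input is embedded here so the snippet is self-contained; on a live system you would instead pipe in `cat /proc/net/bonding/bond0`:

```shell
# Print per-slave MII status from bonding proc output.
# The bond-level "MII Status" line (before any "Slave Interface")
# is skipped because no slave name has been seen yet.
awk '
  /^Slave Interface:/ { slave = $3 }                        # remember slave name
  /^MII Status:/ && slave { print slave, $3; slave = "" }   # per-slave status only
' <<'EOF'
MII Status: up
Slave Interface: eth0
MII Status: down
Slave Interface: eth3
MII Status: up
EOF
```

With the output shown in this report, the snippet prints `eth0 down` and `eth3 up`.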
Steps To Reproduce: In kernel 2.6.32-754.27.1.el6.x86_64, eth0 is always MII Status: up.

In kernel 2.6.32-754.28.1.el6.x86_64, eth0 is always MII Status: down.
Tags: bond

Activities

dragle (reporter)   2020-03-17 13:15   ~0036526

Dang, reported this in the wrong project. Should be over in CentOS6. Can it be moved or do I need to close/resubmit?
dragle (reporter)   2020-03-17 14:13   ~0036527

Update: Changing the arp_interval to 500 "fixes" the problem; the MII status on the eth0 NIC is now "up".
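The bonding driver exposes its parameters through sysfs, so the interval can be inspected and changed at runtime without tearing down the bond. A sketch, assuming the bond is named bond0 as in this report (note a runtime change is not persistent, so BONDING_OPTS should be updated as well for reboots):

```shell
# Check the current ARP polling interval (in ms)
cat /sys/class/net/bond0/bonding/arp_interval

# Lower it to 500 ms at runtime (as root)
echo 500 > /sys/class/net/bond0/bonding/arp_interval
```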

Issue History

Date Modified Username Field Change
2020-03-17 13:13 dragle New Issue
2020-03-17 13:13 dragle Tag Attached: bond
2020-03-17 13:15 dragle Note Added: 0036526
2020-03-17 14:13 dragle Note Added: 0036527