View Issue Details

ID: 0005526
Project: CentOS-6
Category: qemu-kvm
View Status: public
Last Update: 2017-11-03 03:48
Reporter: bmalynovytch
Assigned To:
Status: new
Resolution: open
Platform: 64 bits
OS: CentOS
OS Version: 6
Product Version: 6.2
Summary: 0005526: KVM Guest with virtio network loses network connectivity
Description: Running KVM guests with virtio network interfaces, a guest will (probably under some unidentified circumstances) stop receiving packets.
A tcpdump on the affected interface shows only ARP requests being sent by the server, all of them unanswered.

To fix the problem temporarily, an ifdown followed by an ifup on the interface lets it run smoothly again for a while.
Another workaround is to switch to e1000 interfaces.
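The temporary workaround described above amounts to bouncing the interface inside the guest (a sketch; the interface name eth0 is illustrative):

```shell
# Temporary fix: take the affected interface down and bring it back up
# inside the guest. Connectivity reportedly resumes for a while.
ifdown eth0 && ifup eth0
```

The other workaround, switching the guest NIC model to e1000, is done in the libvirt guest XML by changing the interface's <model type='virtio'/> element to <model type='e1000'/>.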
Steps To Reproduce:
- Run a KVM guest with virtio network interfaces
- Send some traffic (the crash occurred with HTTP, memcached, a custom app, and MySQL traffic on different servers)
- Wait and see (and cross your fingers)
Additional Information: Running KVM on Intel motherboards and processors, CentOS 6 64 bits.
Guests are CentOS 6 64 bits as well, with virtio net, blk and balloon.

Hypervisors have bond network interfaces, with VLAN support.
Tags: No tags attached.




2012-02-21 08:58

reporter   ~0014515

I forgot to mention that neither the guest nor the hypervisor shows any error / warning / message, either in dmesg or in /var/log/messages.


2012-03-08 14:48

reporter   ~0014630

I've seen this as well over the past couple of days, but on Windows Server guests. After a relatively short period of heavy network IO, the guest will stop receiving all traffic from the network. No errors are logged and disabling/re-enabling the virtio network interface within the guest brings it back.

The combinations we've seen it on -

CentOS 5.7 x86_64 w/ KVM, Windows Server 2003r2 guest & Virtio drivers 1.16
CentOS 6.2 x86_64 w/ KVM, Windows Server 2008r2 guest & Virtio drivers 1.16

I loaded version 1.22 of the Virtio drivers on the guests yesterday, but haven't tried reproducing the issue yet.


2012-03-08 16:17

reporter   ~0014631

Same issue right now with one guest that just rebooted, with e1000 drivers :'(
I did an ifdown and ifup within the guest to solve the problem.


2012-03-27 11:30

reporter   ~0014747

We have a new dedicated hosted server that I installed CentOS 6.2 on, using it to run two CentOS 6.2 guests and one Windows guest. It crashes at least once a week with this problem, and ifdown/ifup doesn't solve it; so far only rebooting seems to fix it. Nothing is really running on the host but the KVM manager. It's driving me crazy, as I had finally convinced my organization that hosted virtualization was the way to go, and now, with this first attempt, it's crazy unstable with no reasons logged. Happy to help if there's anything I can do.


2012-03-27 11:44

reporter   ~0014748

Does anyone know if this is a problem specific to KVM? I'm thinking about starting over and using Xen instead.


2012-03-27 12:25

reporter   ~0014749

I read somewhere that the problem seemed to exist with Xen in some circumstances.
Not completely sure though.


2012-05-15 12:07

reporter   ~0015082

I saw a similar issue on CentOS 6.2 when using KVM machines with network devices specified by "-net nic" on the command line. Switching to the "-device virtio-net-pci,netdev=..." syntax fixed it. Maybe, it's the same problem here?
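For reference, the two command-line styles mentioned here look roughly like this (a sketch; the tap interface name and netdev id are made-up values):

```shell
# Legacy syntax that reportedly showed the problem:
qemu-kvm -net nic,model=virtio -net tap,ifname=tap0,script=no

# -device syntax that reportedly avoided it:
qemu-kvm -netdev tap,id=net0,ifname=tap0,script=no \
         -device virtio-net-pci,netdev=net0
```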


2012-05-15 12:13

reporter   ~0015083

@rex: thank you for your note.
I just checked on my servers: libvirtd already launches the guests with "-device virtio-net-pci".



2012-05-23 06:17

reporter   ~0015134

I face the same problem. My guess is that using virtio (net) and virtio-blk at the same time triggers it.
My environment:
Host PC: CentOS 6.2
Guest PC: CentOS 6.2 with virtio (net) and virtio-blk (disk)

We created almost 70 guest VMs, and the network crash occurred in 6 VMs within two weeks.
We have two Ethernet interfaces; it only happened on one of them.
On the affected VMs, some could only receive packets and some could only send packets; restarting the network made them work again.


2012-06-23 01:47

reporter   ~0015322

I have similar issues with Ubuntu 12 and am now trying CentOS... we'll see.


2012-06-25 06:46

reporter   ~0015337

After updating qemu-kvm it works better, but the problem still happens after about 2 TB of transfer.


2012-06-25 08:19

reporter   ~0015338

The qemu-kvm tool can be updated, but kvm-kmod depends on the kernel version, so I suspect kvm-kmod for kernel 2.6.32 still has a bug.


2012-06-26 00:47

reporter   ~0015342

Using CentOS 5.4 as the guest OS works well, so it seems CentOS 6.2 guests are the ones affected under KVM.


2012-07-03 06:44

reporter   ~0015361

It sounds like it is (or was?) due to a memory accounting issue in the virtio-net driver:
;a=commitdiff;h=4b727361f0bc7ee7378298941066d8aa15023ffb;hp=e1ac50f64691de9a095ac5d73cb8ac73d3d17dba


2012-07-09 00:59

reporter   ~0015377

Using vhost will resolve this problem.


2012-07-09 15:25


bug5526.patch (418 bytes)   
--- virtio_net.c.dist	2012-06-13 14:41:19.000000000 -0700
+++ virtio_net.c	2012-07-09 08:01:10.743492470 -0700
@@ -142,6 +142,7 @@
 	skb->data_len += f->size;
 	skb->len += f->size;
+	skb->truesize += PAGE_SIZE;
 	*len -= f->size;
@@ -267,7 +268,6 @@
 	hdr = skb_vnet_hdr(skb);
-	skb->truesize += skb->data_len;
 	dev->stats.rx_bytes += skb->len;


2012-07-09 15:29

manager   ~0015378

I have uploaded a centos patch ( bug5526.patch ) created from the bug fix reported in comment 15361. A candidate for the centosplus kernel.


2012-07-10 09:38

reporter   ~0015383

Could that be related to this bug:
* Tue Apr 24 2012 Michal Novotny <> - qemu-kvm-
- kvm-virtio-add-missing-mb-on-enable-notification.patch [bz#804578]
- Resolves: bz#804578
  (KVM Guest with virtio network driver loses network connectivity)

CentOS 6.2 has a newer qemu-kvm which might fix the problem.


2012-07-10 09:38

reporter   ~0015384

I meant CentOS 6.3 of course.


2013-03-08 11:27

reporter   ~0016620

I'm also affected by this bug.

CentOS 6.3 2.6.32-279.el6.x86_64
Intel hardware / Broadcom chipset (HP DL360 G7) with bonded and bridged interfaces (two pairs)
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)

Moving guests to e1000 instead of virtio /seems/ to be working.


2013-03-08 14:10

reporter   ~0016621

excerpt from /var/log/messages when error occurs
Mar 8 13:36:06 holly kernel: br0: port 12(vnet20) entering disabled state
Mar 8 13:36:07 holly kernel: br1: port 12(vnet21) entering disabled state
Mar 8 13:36:07 holly kernel: device vnet21 left promiscuous mode
Mar 8 13:36:07 holly kernel: br1: port 12(vnet21) entering disabled state
Mar 8 13:36:09 holly kernel: device vnet12 entered promiscuous mode
Mar 8 13:36:09 holly kernel: br0: port 8(vnet12) entering learning state
Mar 8 13:36:09 holly kernel: device vnet13 entered promiscuous mode
Mar 8 13:36:09 holly kernel: br1: port 8(vnet13) entering forwarding state
Mar 8 13:36:09 holly qemu-kvm: Could not find keytab file: /etc/qemu/ No such file or directory
Mar 8 13:36:12 holly kernel: br0: port 8(vnet12) entering forwarding state

changing from virtio to e1000 has not worked in my case.


2013-06-24 11:37

reporter   ~0017588

Same issue over here. We have upgraded to CentOS 6.4 but the bug still persists.
Is there some plan to put this bugfix in a release any time soon?

Help would be much appreciated.



2013-06-24 16:15

manager   ~0017589

Is anyone interested in testing a kernel with the patch reported by Jason Wang of RH :


2013-06-24 16:32

reporter   ~0017590

I may be able to test it as long as it comes as a CentOS package. Can this be done?


2013-06-24 16:47

manager   ~0017591

What I can offer is a centosplus kernel with the patch applied for testing. If this fixes the issue, the patch will be added to the next release of the official plus kernel.


2013-06-24 21:00

reporter   ~0017592

I have to get this approved first, but I think it's possible. Do you know if this bugfix worked on other distributions? I tried to trace it but I just couldn't.



2013-06-25 05:43

manager   ~0017594

Anyone wishing to try the centosplus kernel that has the referenced patch (see note 15361) can download it from:

Please note that the packages are not signed and are provided for testing purposes only.


2013-07-08 21:18

reporter   ~0017632

I can test this. Do we need to install the new kernel on the guests, the hypervisor, or both?


2013-07-08 21:49

manager   ~0017633

Please test it on the guests.


2013-07-09 03:37

reporter   ~0017634

Thanks. I just installed the updated kernel on one of our production guests that is exhibiting this problem. We are running CentOS 6.4. We totally lose connectivity on eth0 about twice a day, so I should be able to report back soon whether this updated kernel fixes the problem or not.

Interestingly, we also have eth1 configured on a private IP range, and when eth0 goes down I'm still able to get into the machine from another local server via eth1 and restart networking. I'm guessing that could be because eth1 is running under a less heavy load than eth0.

Let me know if I can provide any additional info about our configuration.


2013-07-10 18:36

manager   ~0017635


Any test result with the patched kernel?


2013-07-10 18:42

reporter   ~0017636

So far we haven't had any reports of the problem from staff. I did get a connection refused trying to login last night, but by the time I got into the console via the host to run a tcpdump against eth0, it was working again. I'm not entirely sure if the problem is completely resolved but will continue to report updates over the next couple of days. It definitely seems to have helped.


2013-07-10 18:49

manager   ~0017637

Thanks for the update. Yes, please continue to monitor the system.


2013-07-16 17:43

manager   ~0017684


How about the last 6 days? A kernel update is out upstream. I'm going to include the patch in the plus kernel.


2013-07-16 17:50

reporter   ~0017685

So far, about once a day or so we're still losing networking on eth0. The difference this time is that I can run a tcpdump -i eth0 and "wake up" the interface so it starts working again. This is a different problem than before where I had to restart networking. I'm not sure if this should now be considered a separate issue, but we are still consistently experiencing this problem. Next time this happens I'll record a screencast so you can see the behavior.


2013-07-16 19:56

manager   ~0017688


Thanks for the update.

My feeling is that this thread consists of a mix of different issues. Yours might not be the same as what the OP reported. And some issue may be fixed with the patch provided.


2013-07-31 00:01

reporter   ~0017743

We have the exact same problem described by btallent and mlorenzo.

Running 6.4 (Final) 2.6.32-358.11.1.el6.x86_64

We're running within VMware and have had the problem with both VMXNET3 and E1000. We don't have a reliable way to cause the problem; it only happens under load, yet network load testing has not caused failures.

As with btallent, we have a private (eth1) and public (eth0) network, and when eth0 is unresponsive, eth1 is always online. No log entries of any kind except for my ifdown/ifup commands.

I have not found any other resource on the web that describes this problem better, we've been plagued with it since July 4th when this VM went live.


2013-07-31 19:23

reporter   ~0017747

Upgraded to 2.6.32-358.14.1.el6.x86_64, Problem still recurs.


2013-08-23 20:09

reporter   ~0017840

My kernel version is 2.6.32-358.14.1.el6.x86_64 and I had the same problem.

Temporary solution: reboot the server.

Does anyone have information to add to this issue?


2013-09-04 12:20

reporter   ~0017933


We had a similar issue on a hypervisor hosting around 30 VPSes (bridged environment).

There were 4 IP ranges on the server. We saw the problem occur on only 1 IP range.

- Changing the network adapter to Intel e1000 (we also tried Realtek) did not change anything. Rebooting and restarting libvirt do not work: the problem just comes back a bit later.

We found 3 ways to fix this issue:

1) Migrate to another host: after a migration to another hypervisor, we no longer have this problem. Chances are the issue is not inside the VPS itself.

2) Create an image of the VPS and reinstall from it. When you reinstall, choose a different IP address, on another IP range where you don't see this issue.

3) If you cannot change the IP of a VPS and cannot perform a migration, temporarily change the IP of the affected VPS. Once there is no longer any VPS on the range, back up your config, delete the problematic range from your configuration, and then re-configure the range in question using your previous settings. Things should be fine right after following one of these steps; each of them worked for us. So I assume the issue concerns the IP configuration on KVM / Xen.

If someone else has fixed this differently, please let us know :)


2013-10-19 16:17

manager   ~0018198

Could people update the status with the latest kernel 2.6.32-358.23.2.el6 (distro or centosplus kernel) ?


2013-11-08 11:44

reporter   ~0018309

Hi everybody,
I have a similar problem.
HW is:
SuperMicro X9DRW
Ethernet Intel I350

kernel is 2.6.32-358.18.1.el6.x86_64

Centos 6.4 64bit


2014-01-27 21:21

reporter   ~0019135

CentOS 6.5 on minimal hardware and RAM. Kernel is 2.6.32-431.3.1.el6.

Our ongoing case is: KVM guests get disconnected from the network under high system load and never regain connectivity. Reboot from the host is the only solution.

After months of dealing with these random shutdowns, I found this:

and this:

They mention there is a kernel bug in the networking portion that, in my layman's terms, drops bridged connections on virtual interfaces when faced with big packets.

The Debian folks seem to be experiencing a similar issue. According to a post on their site, the fix is on its way in kernel 3.10.X (Due out in Redhat 7 as per and presumably CentOS 7).

Here's the Debian link:

The workaround is to turn off TSO and GSO on each interface, e.g.,

ethtool -K eth0 tso off gso off

We tried this last night, and so far no crashes. I'll try to report back if this fixes our problem.
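To make the offload workaround above persist across reboots on CentOS 6, the ETHTOOL_OPTS variable in the interface's ifcfg file can be used (a sketch; eth0 is illustrative, and the exact feature list should match what you disabled by hand):

```shell
# Line appended to /etc/sysconfig/network-scripts/ifcfg-eth0;
# initscripts pass this string to ethtool when the interface comes up.
ETHTOOL_OPTS="-K eth0 tso off gso off"
```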



2014-01-28 08:25

reporter   ~0019136

Thanks Kevin for your feedback!
Could you update us in a couple of days?



2014-01-30 06:57

reporter   ~0019170

Following up on my note, above, we've gone 72 hours without any downtime. I'll try to report back after about a week has elapsed or if we have a related crash.

Prior to doing the mentioned workaround, I think 10 days is about as long as we've ever gone without a crash. So after only 3 days, we're not out of the woods, yet...



2014-01-31 14:47

reporter   ~0019181

No, folks, I'm sorry to report that one of our guests went down inexplicably overnight, after just 4 days of uptime. The "ethtool" work-around I mentioned in my above note does not seem to help.

One good outcome is that we found a new way to bring back the KVM guest after it has lost network connectivity to the host:

We put the guest into the "paused" state from the KVM host. Then when we changed the state back to "un-paused", the network connectivity had returned as normal! And indeed, the guest was still up and running, it had just been orphaned from its host.
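The pause/unpause recovery described above maps onto virsh like this (a sketch; the guest name "myguest" is hypothetical):

```shell
virsh suspend myguest   # put the guest into the "paused" state
virsh resume myguest    # un-pause it; network connectivity reportedly returned
```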



2014-02-07 03:04

reporter   ~0019236


Here is how it goes:

error: Failed to create domain from /etc/libvirt/qemu/9099.xml
error: Unable to create tap device viif3099: Device or resource busy

We saw this from Tiger (we can ignore as per

--CONFIG-- [con010c] Filesystem 'devtmpfs' used by 'udev' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem
--CONFIG-- [con010c] Filesystem 'cgroup' used by 'cgroup' is not recognised as a valid filesystem

Also, I can see this in the libvirt logs:

2014-02-07 02:17:28.690+0000: 2228: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9026.xml: Operation not permitted
2014-02-07 02:17:31.387+0000: 2224: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9027.xml: Operation not permitted
2014-02-07 02:17:31.387+0000: 2229: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9029.xml: Operation not permitted
2014-02-07 02:17:36.451+0000: 2230: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9034.xml: Operation not permitted
2014-02-07 02:17:36.451+0000: 2229: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9038.xml: Operation not permitted
2014-02-07 02:18:00.135+0000: 2231: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9032.xml: Operation not permitted
2014-02-07 02:18:03.313+0000: 5056: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9039.xml: Operation not permitted
2014-02-07 02:18:03.313+0000: 2225: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9039.xml: Operation not permitted
2014-02-07 02:27:22.510+0000: 8769: error : virNetDevGetIndex:656 : Unable to get index for interface viif1002: No such device
2014-02-07 02:27:28.422+0000: 2225: error : virDomainDeleteConfig:11955 : cannot remove config /etc/libvirt/qemu/9032.xml: Operation not permitted

It always starts with these logs:

2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/cpu/libvirt/qemu/1008//vcpu1 (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/cpu/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/cpuacct/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/cpuset/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/memory/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/devices/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/freezer/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.026+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/blkio/libvirt/qemu/1008/ (16)
2014-02-07 02:10:46.226+0000: 2484: error : virCgroupRemoveRecursively:715 : Unable to remove /sys/fs/cgroup/cpu/libvirt/qemu/1008//vcpu1 (16)

On our side, this is happening while our backups are running... always. If we perform manual backups of individual VMs then it works, but as soon as dd finishes with the last ones, this mess happens.


2014-02-07 03:21

reporter   ~0019237

We are using virtio for most of our VMs.

When this happens, there is absolutely nothing to do. If we try to create or destroy the domain through virsh, all we get is this:

error: Failed to create domain from /etc/libvirt/qemu/9099.xml
error: Unable to create tap device viif3099: Device or resource busy

So we have needed to reboot, over and over, since last month. I remember there was a cgroup-lite update a couple of weeks ago; it seems this started right after.

Install: cgroup-lite:amd64 (1.1.5, automatic)

We need to reboot the system each time this happens. Due to a previous bug, our virtualization platform doesn't allow switching to e1000 if the VMs were created using virtio, to avoid buggy VMs. We will try that as a last resort only.

Can someone else confirm it's a virtio bug? It seems to be a libvirt / cgroup related issue on our side. We are running Ubuntu, but it's the exact same kind of bug.

We start receiving ARP only if we reboot the VMs, not at the time this happens. We need a complete reboot of the server to fix it.

I'm getting sick and tired of this bug. :P


2014-02-07 03:52

reporter   ~0019238

This bug is similar to this one :

When I said "We start receiving ARP only if we reboot the vm's," I should have written instead: "We start receiving ARP only if we restart our bridge network."


2014-02-24 05:02

reporter   ~0019346

I also have this bug but found that disabling the vhost-net handling of the virtual interface works around it. The scenario outlined in (which includes a patch) seems promising to me.

The symptoms include the associated vhost host process CPU looping for a period of up to 30 minutes before something unlocks and normal networking resumes.


2014-06-20 00:41

reporter   ~0019973

Hi guys, have you managed to fix this or do you know of a workaround?

I'm having the same problem on RHEL 6.4 + virtio-win 1.6.5 + Win2008R2.

The quick fix is always to restart the NIC inside the VM.


2014-06-20 01:31

reporter   ~0019974

Workaround is to disable offload on all interfaces (bridge and device names are placeholders):
ethtool --offload <bridge> gso off tso off sg off gro off
ethtool --offload <dev> gso off tso off sg off gro off


2014-06-20 01:40

reporter   ~0019975

I assume you are talking about the host, not the VM (which is Win 2008R2 in my case)?

Kevn indicated it didn't fix the problem?


2014-06-20 01:54

reporter   ~0019976

I should also mention that I'm running Win2003R2, RHEL5, and RHEL6 VMs on the same KVM host servers, and they are not affected.


2014-06-20 02:22

reporter   ~0019977

As mentioned, the workaround I found for both Windows and Linux guests was to tell libvirt to disable vhost-net for the interfaces in question.

In the guest's XML config file insert a driver directive as below:

     <interface type='network'>
       <mac address='52:54:00:f3:29:76'/>
       <source network='default'/>
       <target dev='vnet1'/>
       <model type='virtio'/>
+      <driver name='qemu'/>
       <alias name='net2'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
     </interface>
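After adding the directive (e.g. via "virsh edit"), the guest has to be fully shut down and started again for the NIC to be re-created without vhost-net. One way to check that vhost is no longer in use is to look for the per-VM vhost kernel thread on the host (a sketch; thread naming can vary by kernel version):

```shell
# With vhost-net active, the host shows a [vhost-<qemu pid>] kernel thread
# per virtio NIC; after the change it should be gone for this guest.
ps -ef | grep '\[vhost-'
```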


2014-06-20 02:33

reporter   ~0019978

My system version info:
  Linux kernel: 2.6.32-431.17.1.el6.x86_64


2014-07-31 17:02

reporter   ~0020562

I faced probably the same problem installing Xen 4.3.2 with Gentoo kernel 3.12.21.

The DomU's interface hangs after a short time under heavy network load (starting at ~10 MByte/s). From the outside it looks like the instance has crashed, but deactivating and reactivating the interface, e.g. from "xl console <domU name>" with /etc/init.d/net.eth0 stop/start, restores normal operation.

After 3 days of testing/searching I found a workaround. By setting the following options with ethtool, I could successfully prevent my domU's interfaces from hanging:

ethtool --offload <network device> gso off tso off sg off gro off

This became my solution.

I also posted my other experiences with this bridged network configuration in the Gentoo wiki.


2014-08-01 02:50

reporter   ~0020564

I've managed to fix the issue (1 month without problems) by upgrading to the latest virtio-win device drivers (virtio-win-1.7.1-1.el6_5).

I'm running a RHEL 6.4 + virtio-win-1.7.1-1.el6_5 host with Win2008R2 guests now and everything seems stable.


2015-07-05 15:10

reporter   ~0023568

I have the same problem with CentOS 6.6, kernel 2.6.32-504.23.4.el6.x86_64.
I regain network access by removing and reloading the virtio driver:
# modprobe -r virtio_net
# modprobe virtio_net


2015-12-28 12:33

reporter   ~0025207

Does someone know what happened to this bug?
I seem to have the same issue over here with two identical CentOS 7 servers. Both run 2 identical virtual servers, running CentOS 7 guests.
I manage them over ssh, and from time to time I get a hang while updating with yum update. Sometimes it is the guest that hangs, but I have also had (today, for example) the host machine do it.
It always works the same way: running the update over ssh, the process goes along and at one point (seemingly always when it applies the actual updates, not while downloading) it stops. From that point on the server does not respond even to ping, as if the interface were totally down.
When the host hangs then of course the guests are also unreachable; otherwise, if a guest hangs, only that one guest goes offline.

Not sure if it is the same problem, but googling it I ended up here.


2017-11-03 03:48

reporter   ~0030517

I got the same issue here; I will try the workaround (disable vhost-net) provided by chrismaltby and report the result here.

Issue History

Date Modified Username Field Change
2012-02-21 08:55 bmalynovytch New Issue
2012-02-21 08:58 bmalynovytch Note Added: 0014515
2012-03-08 14:48 briancamp Note Added: 0014630
2012-03-08 16:17 bmalynovytch Note Added: 0014631
2012-03-27 11:30 kdwinton Note Added: 0014747
2012-03-27 11:44 kdwinton Note Added: 0014748
2012-03-27 12:25 bmalynovytch Note Added: 0014749
2012-05-15 12:07 rex Note Added: 0015082
2012-05-15 12:13 bmalynovytch Note Added: 0015083
2012-05-23 06:17 ang639 Note Added: 0015134
2012-06-23 01:47 schorsch Note Added: 0015322
2012-06-25 06:46 ang639 Note Added: 0015337
2012-06-25 08:19 ang639 Note Added: 0015338
2012-06-26 00:47 ang639 Note Added: 0015342
2012-07-03 06:44 bmalynovytch Note Added: 0015361
2012-07-09 00:59 ang639 Note Added: 0015377
2012-07-09 15:25 toracat File Added: bug5526.patch
2012-07-09 15:29 toracat Note Added: 0015378
2012-07-10 09:38 sternkop Note Added: 0015383
2012-07-10 09:38 sternkop Note Added: 0015384
2013-03-08 11:27 jharasym Note Added: 0016620
2013-03-08 14:10 jharasym Note Added: 0016621
2013-06-24 11:37 mlorenzo Note Added: 0017588
2013-06-24 16:15 toracat Note Added: 0017589
2013-06-24 16:32 mlorenzo Note Added: 0017590
2013-06-24 16:47 toracat Note Added: 0017591
2013-06-24 21:00 mlorenzo Note Added: 0017592
2013-06-25 05:43 toracat Note Added: 0017594
2013-07-08 21:18 btallent Note Added: 0017632
2013-07-08 21:49 toracat Note Added: 0017633
2013-07-09 03:37 btallent Note Added: 0017634
2013-07-10 18:36 toracat Note Added: 0017635
2013-07-10 18:42 btallent Note Added: 0017636
2013-07-10 18:49 toracat Note Added: 0017637
2013-07-16 17:43 toracat Note Added: 0017684
2013-07-16 17:50 btallent Note Added: 0017685
2013-07-16 19:56 toracat Note Added: 0017688
2013-07-31 00:01 cstrom Note Added: 0017743
2013-07-31 19:23 cstrom Note Added: 0017747
2013-08-23 20:09 trcintra Note Added: 0017840
2013-09-04 12:20 uname-r Note Added: 0017933
2013-10-19 16:17 toracat Note Added: 0018198
2013-11-08 11:44 wohnout Note Added: 0018309
2014-01-27 21:21 Kevn Note Added: 0019135
2014-01-28 08:25 bmalynovytch Note Added: 0019136
2014-01-30 06:57 Kevn Note Added: 0019170
2014-01-31 14:47 Kevn Note Added: 0019181
2014-02-07 03:04 bugos Note Added: 0019236
2014-02-07 03:21 bugos Note Added: 0019237
2014-02-07 03:52 bugos Note Added: 0019238
2014-02-24 05:02 chrismaltby Note Added: 0019346
2014-06-20 00:41 james.shirley Note Added: 0019973
2014-06-20 01:31 vadikgo Note Added: 0019974
2014-06-20 01:40 james.shirley Note Added: 0019975
2014-06-20 01:54 james.shirley Note Added: 0019976
2014-06-20 02:22 chrismaltby Note Added: 0019977
2014-06-20 02:33 chrismaltby Note Added: 0019978
2014-07-31 17:02 mpajak Note Added: 0020562
2014-08-01 02:50 james.shirley Note Added: 0020564
2015-07-05 15:10 javierwilson Note Added: 0023568
2015-12-28 12:33 gabor Note Added: 0025207
2017-11-03 03:48 jackhuang Note Added: 0030517