View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0017875 | CentOS-7 | kernel | public | 2020-11-21 10:42 | 2020-11-21 11:21 |
Reporter | yinzg |
Priority | high | Severity | major | Reproducibility | always |
Status | closed | Resolution | not fixable |
Product Version | 7.4.1708 |
Target Version | | Fixed in Version | |
Summary | 0017875: The nvme driver or hot plug works abnormally when a hot plug is performed with multiple namespaces on an NVMe SSD |
Description | The nvme driver or hot plug works abnormally when a hot plug is performed with multiple namespaces on an NVMe SSD. The nvme driver hangs. The dmesg output is as follows:

[10895.483889] nvme nvme8: I/O 78 QID 0 timeout, reset controller
[10895.535248] INFO: task kworker/u449:0:4587 blocked for more than 120 seconds.
[10895.535256] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10895.535259] kworker/u449:0 D ffff881dd6bf1140 0 4587 2 0x00000080
[10895.535278] Workqueue: nvme nvme_reset_work [nvme]
[10895.535281] ffff881035667d50 0000000000000046 ffff88102f221fa0 ffff881035667fd8
[10895.535285] ffff881035667fd8 ffff881035667fd8 ffff88102f221fa0 ffff881fd9794680
[10895.535288] ffff881fd9794db0 ffff881dd6bf1268 0000000000000000 ffff881dd6bf1140
[10895.535293] Call Trace:
[10895.535305] [<ffffffff816a94c9>] schedule+0x29/0x70
[10895.535318] [<ffffffff81301d05>] blk_mq_freeze_queue_wait+0x75/0xe0
[10895.535327] [<ffffffff810b1910>] ? wake_up_atomic_t+0x30/0x30
[10895.535335] [<ffffffffc09852e9>] nvme_wait_freeze+0x39/0x50 [nvme_core]
[10895.535339] [<ffffffffc097e43a>] nvme_reset_work+0x59a/0x8a3 [nvme]
[10895.535347] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[10895.535351] [<ffffffff810a9638>] worker_thread+0x278/0x3c0
[10895.535355] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[10895.535359] [<ffffffff810b098f>] kthread+0xcf/0xe0
[10895.535362] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535369] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[10895.535372] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535403] INFO: task kworker/12:20:20121 blocked for more than 120 seconds.
[10895.535405] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10895.535406] kworker/12:20 D ffff88203bd8f0c8 0 20121 2 0x00000080
[10895.535417] Workqueue: pciehp-51 pciehp_power_thread
[10895.535418] ffff881e9ea97d60 0000000000000046 ffff881dfab32f70 ffff881e9ea97fd8
[10895.535421] ffff881e9ea97fd8 ffff881e9ea97fd8 ffff881dfab32f70 ffff88203bd8f0c0
[10895.535425] ffff88203bd8f0c4 ffff881dfab32f70 00000000ffffffff ffff88203bd8f0c8
[10895.535428] Call Trace:
[10895.535432] [<ffffffff816aa3e9>] schedule_preempt_disabled+0x29/0x70
[10895.535442] [<ffffffff816a8317>] __mutex_lock_slowpath+0xc7/0x1d0
[10895.535446] [<ffffffff816a772f>] mutex_lock+0x1f/0x2f
[10895.535449] [<ffffffff8137f6df>] pciehp_power_thread+0x8f/0xc0
[10895.535452] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[10895.535456] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[10895.535459] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[10895.535462] [<ffffffff810b098f>] kthread+0xcf/0xe0
[10895.535465] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535468] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[10895.535471] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535474] INFO: task kworker/12:90:21364 blocked for more than 120 seconds.
[10895.535475] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10895.535476] kworker/12:90 D ffff88203bd8f0c8 0 21364 2 0x00000080
[10895.535481] Workqueue: pciehp-51 pciehp_power_thread
[10895.535482] ffff881e9ea9fd60 0000000000000046 ffff881fe03b3f40 ffff881e9ea9ffd8
[10895.535485] ffff881e9ea9ffd8 ffff881e9ea9ffd8 ffff881fe03b3f40 ffff88203bd8f0c0
[10895.535488] ffff88203bd8f0c4 ffff881fe03b3f40 00000000ffffffff ffff88203bd8f0c8
[10895.535491] Call Trace:
[10895.535495] [<ffffffff816aa3e9>] schedule_preempt_disabled+0x29/0x70
[10895.535499] [<ffffffff816a8317>] __mutex_lock_slowpath+0xc7/0x1d0
[10895.535503] [<ffffffff816a772f>] mutex_lock+0x1f/0x2f
[10895.535506] [<ffffffff8137f683>] pciehp_power_thread+0x33/0xc0
[10895.535509] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[10895.535513] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[10895.535516] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[10895.535519] [<ffffffff810b098f>] kthread+0xcf/0xe0
[10895.535522] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535525] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[10895.535528] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535531] INFO: task kworker/12:218:22799 blocked for more than 120 seconds.
[10895.535532] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10895.535533] kworker/12:218 D ffff88203bd8f0c8 0 22799 2 0x00000080
[10895.535538] Workqueue: pciehp-51 pciehp_power_thread
[10895.535539] ffff882036adbd60 0000000000000046 ffff881fe3710fd0 ffff882036adbfd8
[10895.535542] ffff882036adbfd8 ffff882036adbfd8 ffff881fe3710fd0 ffff88203bd8f0c0
[10895.535545] ffff88203bd8f0c4 ffff881fe3710fd0 00000000ffffffff ffff88203bd8f0c8
[10895.535548] Call Trace:
[10895.535551] [<ffffffff816aa3e9>] schedule_preempt_disabled+0x29/0x70
[10895.535555] [<ffffffff816a8317>] __mutex_lock_slowpath+0xc7/0x1d0
[10895.535559] [<ffffffff816a772f>] mutex_lock+0x1f/0x2f
[10895.535562] [<ffffffff8137f683>] pciehp_power_thread+0x33/0xc0
[10895.535565] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[10895.535568] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[10895.535572] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[10895.535575] [<ffffffff810b098f>] kthread+0xcf/0xe0
[10895.535578] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535581] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[10895.535583] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535586] INFO: task kworker/12:245:23091 blocked for more than 120 seconds.
[10895.535587] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10895.535588] kworker/12:245 D 0000000000000300 0 23091 2 0x00000080
[10895.535593] Workqueue: pciehp-51 pciehp_power_thread
[10895.535594] ffff88203ae33b10 0000000000000046 ffff881fd8fd3f40 ffff88203ae33fd8
[10895.535597] ffff88203ae33fd8 ffff88203ae33fd8 ffff881fd8fd3f40 ffff88203ae33c60
[10895.535600] 7fffffffffffffff ffff88203ae33c58 ffff881fd8fd3f40 0000000000000300
[10895.535603] Call Trace:
[10895.535606] [<ffffffff816a94c9>] schedule+0x29/0x70
[10895.535610] [<ffffffff816a6fd9>] schedule_timeout+0x239/0x2c0
[10895.535618] [<ffffffff8105ab43>] ? x2apic_send_IPI_mask+0x13/0x20
[10895.535626] [<ffffffff810c4593>] ? try_to_wake_up+0x183/0x340
[10895.535629] [<ffffffff816a987d>] wait_for_completion+0xfd/0x140
[10895.535632] [<ffffffff810c4810>] ? wake_up_state+0x20/0x20
[10895.535636] [<ffffffff810a987d>] flush_work+0xfd/0x190
[10895.535640] [<ffffffff810a5df0>] ? move_linked_works+0x90/0x90
[10895.535644] [<ffffffffc097ddb3>] nvme_remove+0x63/0x150 [nvme]
[10895.535652] [<ffffffff8136ba59>] pci_device_remove+0x39/0xc0
[10895.535661] [<ffffffff8143f63f>] __device_release_driver+0x7f/0xf0
[10895.535664] [<ffffffff8143f6d3>] device_release_driver+0x23/0x30
[10895.535670] [<ffffffff81363ec4>] pci_stop_bus_device+0x94/0xa0
[10895.535673] [<ffffffff81363fc2>] pci_stop_and_remove_bus_device+0x12/0x20
[10895.535677] [<ffffffff8137fb60>] pciehp_unconfigure_device+0xb0/0x1b0
[10895.535679] [<ffffffff8137f5d2>] pciehp_disable_slot+0x52/0xd0
[10895.535682] [<ffffffff8137f6e7>] pciehp_power_thread+0x97/0xc0
[10895.535686] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[10895.535689] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[10895.535693] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[10895.535696] [<ffffffff810b098f>] kthread+0xcf/0xe0
[10895.535699] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535702] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[10895.535705] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[10895.535712] INFO: task nvme:196591 blocked for more than 120 seconds.
[10895.535714] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[10895.535715] nvme D 0000000000000000 0 196591 196290 0x00000080
[10895.535718] ffff880e57f8fcc0 0000000000000082 ffff880e16360000 ffff880e57f8ffd8
[10895.535721] ffff880e57f8ffd8 ffff880e57f8ffd8 ffff880e16360000 ffff880e57f8fe10
[10895.535724] 7fffffffffffffff ffff880e57f8fe08 ffff880e16360000 0000000000000000
[10895.535727] Call Trace:
[10895.535730] [<ffffffff816a94c9>] schedule+0x29/0x70
[10895.535734] [<ffffffff816a6fd9>] schedule_timeout+0x239/0x2c0
[10895.535742] [<ffffffff810c1309>] ? ttwu_do_wakeup+0x19/0xd0
[10895.535746] [<ffffffff810c149d>] ? ttwu_do_activate.constprop.92+0x5d/0x70
[10895.535749] [<ffffffff810c4593>] ? try_to_wake_up+0x183/0x340
[10895.535752] [<ffffffff816a987d>] wait_for_completion+0xfd/0x140
[10895.535755] [<ffffffff810c4810>] ? wake_up_state+0x20/0x20
[10895.535758] [<ffffffff810a987d>] flush_work+0xfd/0x190
[10895.535762] [<ffffffff810a5df0>] ? move_linked_works+0x90/0x90
[10895.535766] [<ffffffffc097b45e>] nvme_pci_reset_ctrl+0x2e/0x40 [nvme]
[10895.535771] [<ffffffffc0986964>] nvme_dev_ioctl+0x154/0x220 [nvme_core]
[10895.535779] [<ffffffff812151cd>] do_vfs_ioctl+0x33d/0x540
[10895.535790] [<ffffffff812b75df>] ? file_has_perm+0x9f/0xb0
[10895.535793] [<ffffffff81215471>] SyS_ioctl+0xa1/0xc0
[10895.535797] [<ffffffff816b4fc9>] system_call_fastpath+0x16/0x1b
[10959.539074] pciehp 0000:d7:00.0:pcie04: Slot(51-1): Link Down
[10959.539108] pciehp 0000:d7:00.0:pcie04: Slot(51-1): Link Down event queued; currently getting powered on
[10966.761276] pciehp 0000:d7:00.0:pcie04: Slot(51-1): Card present
[10966.763097] pciehp 0000:d7:00.0:pcie04: Slot(51-1): Link Up
[10966.763160] pciehp 0000:d7:00.0:pcie04: Slot(51-1): Link Up event ignored; already powering on
[11015.252985] INFO: task kworker/u449:0:4587 blocked for more than 120 seconds.
[11015.252992] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11015.252995] kworker/u449:0 D ffff881dd6bf1140 0 4587 2 0x00000080
[11015.253012] Workqueue: nvme nvme_reset_work [nvme]
[11015.253015] ffff881035667d50 0000000000000046 ffff88102f221fa0 ffff881035667fd8
[11015.253019] ffff881035667fd8 ffff881035667fd8 ffff88102f221fa0 ffff881fd9794680
[11015.253022] ffff881fd9794db0 ffff881dd6bf1268 0000000000000000 ffff881dd6bf1140
[11015.253026] Call Trace:
[11015.253038] [<ffffffff816a94c9>] schedule+0x29/0x70
[11015.253051] [<ffffffff81301d05>] blk_mq_freeze_queue_wait+0x75/0xe0
[11015.253061] [<ffffffff810b1910>] ? wake_up_atomic_t+0x30/0x30
[11015.253068] [<ffffffffc09852e9>] nvme_wait_freeze+0x39/0x50 [nvme_core]
[11015.253073] [<ffffffffc097e43a>] nvme_reset_work+0x59a/0x8a3 [nvme]
[11015.253080] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[11015.253084] [<ffffffff810a9638>] worker_thread+0x278/0x3c0
[11015.253088] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[11015.253092] [<ffffffff810b098f>] kthread+0xcf/0xe0
[11015.253095] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253101] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[11015.253105] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253129] INFO: task kworker/12:20:20121 blocked for more than 120 seconds.
[11015.253130] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11015.253132] kworker/12:20 D ffff88203bd8f0c8 0 20121 2 0x00000080
[11015.253142] Workqueue: pciehp-51 pciehp_power_thread
[11015.253144] ffff881e9ea97d60 0000000000000046 ffff881dfab32f70 ffff881e9ea97fd8
[11015.253147] ffff881e9ea97fd8 ffff881e9ea97fd8 ffff881dfab32f70 ffff88203bd8f0c0
[11015.253150] ffff88203bd8f0c4 ffff881dfab32f70 00000000ffffffff ffff88203bd8f0c8
[11015.253154] Call Trace:
[11015.253158] [<ffffffff816aa3e9>] schedule_preempt_disabled+0x29/0x70
[11015.253167] [<ffffffff816a8317>] __mutex_lock_slowpath+0xc7/0x1d0
[11015.253172] [<ffffffff816a772f>] mutex_lock+0x1f/0x2f
[11015.253175] [<ffffffff8137f6df>] pciehp_power_thread+0x8f/0xc0
[11015.253178] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[11015.253182] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[11015.253186] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[11015.253189] [<ffffffff810b098f>] kthread+0xcf/0xe0
[11015.253192] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253195] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[11015.253198] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253200] INFO: task kworker/12:90:21364 blocked for more than 120 seconds.
[11015.253202] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11015.253203] kworker/12:90 D ffff88203bd8f0c8 0 21364 2 0x00000080
[11015.253207] Workqueue: pciehp-51 pciehp_power_thread
[11015.253209] ffff881e9ea9fd60 0000000000000046 ffff881fe03b3f40 ffff881e9ea9ffd8
[11015.253212] ffff881e9ea9ffd8 ffff881e9ea9ffd8 ffff881fe03b3f40 ffff88203bd8f0c0
[11015.253215] ffff88203bd8f0c4 ffff881fe03b3f40 00000000ffffffff ffff88203bd8f0c8
[11015.253217] Call Trace:
[11015.253221] [<ffffffff816aa3e9>] schedule_preempt_disabled+0x29/0x70
[11015.253225] [<ffffffff816a8317>] __mutex_lock_slowpath+0xc7/0x1d0
[11015.253229] [<ffffffff816a772f>] mutex_lock+0x1f/0x2f
[11015.253232] [<ffffffff8137f683>] pciehp_power_thread+0x33/0xc0
[11015.253235] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[11015.253238] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[11015.253242] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[11015.253245] [<ffffffff810b098f>] kthread+0xcf/0xe0
[11015.253248] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253251] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[11015.253254] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253256] INFO: task kworker/12:218:22799 blocked for more than 120 seconds.
[11015.253257] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[11015.253258] kworker/12:218 D ffff88203bd8f0c8 0 22799 2 0x00000080
[11015.253263] Workqueue: pciehp-51 pciehp_power_thread
[11015.253264] ffff882036adbd60 0000000000000046 ffff881fe3710fd0 ffff882036adbfd8
[11015.253267] ffff882036adbfd8 ffff882036adbfd8 ffff881fe3710fd0 ffff88203bd8f0c0
[11015.253270] ffff88203bd8f0c4 ffff881fe3710fd0 00000000ffffffff ffff88203bd8f0c8
[11015.253273] Call Trace:
[11015.253276] [<ffffffff816aa3e9>] schedule_preempt_disabled+0x29/0x70
[11015.253280] [<ffffffff816a8317>] __mutex_lock_slowpath+0xc7/0x1d0
[11015.253284] [<ffffffff816a772f>] mutex_lock+0x1f/0x2f
[11015.253287] [<ffffffff8137f683>] pciehp_power_thread+0x33/0xc0
[11015.253290] [<ffffffff810a881a>] process_one_work+0x17a/0x440
[11015.253293] [<ffffffff810a94e6>] worker_thread+0x126/0x3c0
[11015.253297] [<ffffffff810a93c0>] ? manage_workers.isra.24+0x2a0/0x2a0
[11015.253300] [<ffffffff810b098f>] kthread+0xcf/0xe0
[11015.253303] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40
[11015.253306] [<ffffffff816b4f18>] ret_from_fork+0x58/0x90
[11015.253308] [<ffffffff810b08c0>] ? insert_kthread_work+0x40/0x40

Why does the drive work abnormally?
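The hung-task reports above repeat the same pattern for several workers. A short shell pipeline can summarize which distinct tasks the kernel flagged as blocked; this is a sketch in which the inlined `log` variable stands in for a saved dmesg capture:

```shell
# Extract the distinct task names the kernel reported as blocked.
# The sample lines below are copied from the dmesg output above;
# in practice you would pipe `dmesg` itself through the same filter.
log='[10895.535248] INFO: task kworker/u449:0:4587 blocked for more than 120 seconds.
[10895.535403] INFO: task kworker/12:20:20121 blocked for more than 120 seconds.
[10895.535712] INFO: task nvme:196591 blocked for more than 120 seconds.'
printf '%s\n' "$log" |
  sed -n 's/.*INFO: task \([^ ]*\) blocked.*/\1/p' |
  sort -u
```

Running the full log through this filter shows the deadlock shape at a glance: one `nvme_reset_work` worker, several `pciehp_power_thread` workers, and the userspace `nvme` process are all blocked on each other.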
Tags | No tags attached. |
abrt_hash | |
URL | |
The kernel version is 3.10.0-693.el7.x86_64, in CentOS 7.4.
The kernel you are using lost any form of support two and a half years ago, when CentOS 7.5 was launched. We are now at CentOS 7.9 / kernel 3.10.0-1160.6.1.el7.x86_64, and we do not support anything older. Feel free to open a new bug if you can reproduce your problem after updating your OS.
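The advice above boils down to a version check before re-filing. A minimal sketch, using GNU `sort -V` to compare the running kernel against the 7.9 kernel named in the note (the helper name `kernel_at_least` is hypothetical):

```shell
# Succeeds when kernel version $1 is at least version $2 (GNU sort -V).
kernel_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# The reporter's 7.4 kernel vs. the 7.9 kernel mentioned in the note:
if kernel_at_least "3.10.0-693.el7.x86_64" "3.10.0-1160.6.1.el7.x86_64"; then
  echo "kernel is recent enough"
else
  echo "update needed: sudo yum update kernel && sudo reboot"
fi
```

On a live system, the first argument would be `$(uname -r)`; here the reporter's 693 build sorts below the 1160 build, so the check reports that an update is needed.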
Date Modified | Username | Field | Change |
---|---|---|---|
2020-11-21 10:42 | yinzg | New Issue | |
2020-11-21 10:45 | yinzg | Note Added: 0037975 | |
2020-11-21 11:21 | ManuelWolfshant | Status | new => closed |
2020-11-21 11:21 | ManuelWolfshant | Resolution | open => not fixable |
2020-11-21 11:21 | ManuelWolfshant | Note Added: 0037976 | |
2020-11-21 11:21 | ManuelWolfshant | Note Edited: 0037976 | View Revisions |