View Issue Details

ID: 0015252
Project: CentOS-7
Category: centos-release
View Status: public
Last Update: 2019-01-15 14:23
Reporter: felix
Priority: high
Severity: major
Reproducibility: sometimes
Status: new
Resolution: open
Product Version: 7.4.1708
Target Version:
Fixed in Version:
Summary: 0015252: system hung on rwsem_down_read_failed
Description: Some processes hang in rwsem_down_read_failed; running cat /proc/<pid>/cmdline against them also hangs.
Call trace as below:
kernel: INFO: task axserver:202459 blocked for more than 120 seconds.
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: bidserver D ffff887bf26d7de0 0 202459 198052 0x00000080
kernel: ffff887bf26d7db8 0000000000000086 ffff88541f65eeb0 ffff887bf26d7fd8
kernel: ffff887bf26d7fd8 ffff887bf26d7fd8 ffff88541f65eeb0 ffff88541f65eeb0
kernel: ffff887f6af44538 ffffffffffffffff ffff887f6af44540 ffff887bf26d7de0
kernel: Call Trace:
kernel: [<ffffffff816a9ad9>] schedule+0x29/0x70
kernel: [<ffffffff816ab10d>] rwsem_down_read_failed+0x10d/0x1a0
kernel: [<ffffffff81331ba8>] call_rwsem_down_read_failed+0x18/0x30
kernel: [<ffffffff816a8d70>] down_read+0x20/0x40
kernel: [<ffffffff816b07dc>] __do_page_fault+0x37c/0x450
kernel: [<ffffffff816b08e5>] do_page_fault+0x35/0x90
kernel: [<ffffffff816acb08>] page_fault+0x28/0x30
Steps To Reproduce: It happens intermittently; no reliable reproduction steps have been found so far.
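
Until the hang can be reproduced on demand, here is a minimal diagnostic sketch (not part of the original report; it assumes root and a kernel that exposes /proc/<pid>/stack, as the CentOS 7 kernel does) for capturing the blocked tasks the next time it recurs, without reading /proc/<pid>/cmdline, which itself takes the task's mmap_sem and can hang the same way:

# List all tasks in uninterruptible (D) sleep and dump their kernel stacks.
for pid in $(ps -eo pid,stat | awk '$2 ~ /^D/ {print $1}'); do
    echo "== PID $pid ($(cat /proc/$pid/comm)) =="
    cat /proc/$pid/stack
done

# Or have the kernel log every blocked task's stack to dmesg in one shot:
echo w > /proc/sysrq-trigger
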
Tags: No tags attached.

Activities

richardl (reporter)   2018-11-26 23:42   ~0033147

I seem to be having a similar issue, although I am not sure it is the same. (In my case I use DRBD, LVM thin, dm-writeboost, RAID, KVM, etc.)

Nov 26 17:13:46 newfront kernel: INFO: task kswapd0:64 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: kswapd0 D ffff8839f6450000 0 64 2 0x00000000
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff977a2ec7>] ? global_dirty_limits+0x37/0x160
Nov 26 17:13:46 newfront kernel: [<ffffffff97d19e59>] schedule_preempt_disabled+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d17c17>] __mutex_lock_slowpath+0xc7/0x1d0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16fff>] mutex_lock+0x1f/0x2f
Nov 26 17:13:46 newfront kernel: [<ffffffffc049f0b8>] shrink+0x38/0x170 [dm_bufio]
Nov 26 17:13:46 newfront kernel: [<ffffffff977a9f9e>] shrink_slab+0xae/0x340
Nov 26 17:13:46 newfront kernel: [<ffffffff97815751>] ? vmpressure+0x21/0x90
Nov 26 17:13:46 newfront kernel: [<ffffffff977add21>] balance_pgdat+0x4b1/0x5e0
Nov 26 17:13:46 newfront kernel: [<ffffffff977adfc3>] kswapd+0x173/0x440
Nov 26 17:13:46 newfront kernel: [<ffffffff976bef10>] ? wake_up_atomic_t+0x30/0x30
Nov 26 17:13:46 newfront kernel: [<ffffffff977ade50>] ? balance_pgdat+0x5e0/0x5e0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bdf21>] kthread+0xd1/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97d255e4>] ret_from_fork_nospec_begin+0xe/0x21
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: INFO: task dmeventd:1917 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: dmeventd D ffff8839f6450fd0 0 1917 1 0x00000080
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18f39>] schedule+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d1a56d>] rwsem_down_read_failed+0x10d/0x1a0
Nov 26 17:13:46 newfront kernel: [<ffffffff9795f3d8>] call_rwsem_down_read_failed+0x18/0x30
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18060>] down_read+0x20/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffffc0875683>] dm_pool_get_metadata_transaction_id+0x23/0x60 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffffc086f18e>] pool_status+0x21e/0x750 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffff977a18b5>] ? __alloc_pages_nodemask+0x405/0x420
Nov 26 17:13:46 newfront kernel: [<ffffffffc023444f>] retrieve_status+0xaf/0x1c0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235196>] table_status+0x66/0xb0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235b20>] ctl_ioctl+0x220/0x4f0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235130>] ? dm_get_live_or_inactive_table.isra.3+0x30/0x30 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235dfe>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffff97834100>] do_vfs_ioctl+0x360/0x550
Nov 26 17:13:46 newfront kernel: [<ffffffff978dcddf>] ? file_has_perm+0x9f/0xb0
Nov 26 17:13:46 newfront kernel: [<ffffffff97834391>] SyS_ioctl+0xa1/0xc0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256d5>] ? system_call_after_swapgs+0xa2/0x146
Nov 26 17:13:46 newfront kernel: [<ffffffff97d2579b>] system_call_fastpath+0x22/0x27
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256e1>] ? system_call_after_swapgs+0xae/0x146
Nov 26 17:13:46 newfront kernel: INFO: task drbd_s_eelko:14961 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: drbd_s_eelko D ffffffff98216480 0 14961 2 0x00000080
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff977a2ec7>] ? global_dirty_limits+0x37/0x160
Nov 26 17:13:46 newfront kernel: [<ffffffff97d19e59>] schedule_preempt_disabled+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d17c17>] __mutex_lock_slowpath+0xc7/0x1d0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16fff>] mutex_lock+0x1f/0x2f
Nov 26 17:13:46 newfront kernel: [<ffffffffc049f0b8>] shrink+0x38/0x170 [dm_bufio]
Nov 26 17:13:46 newfront kernel: [<ffffffff977a9f9e>] shrink_slab+0xae/0x340
Nov 26 17:13:46 newfront kernel: [<ffffffff97815751>] ? vmpressure+0x21/0x90
Nov 26 17:13:46 newfront kernel: [<ffffffff977ad1e2>] do_try_to_free_pages+0x3c2/0x4e0
Nov 26 17:13:46 newfront kernel: [<ffffffff977ad3fc>] try_to_free_pages+0xfc/0x180
Nov 26 17:13:46 newfront kernel: [<ffffffff97d0f2a4>] __alloc_pages_slowpath+0x457/0x724
Nov 26 17:13:46 newfront kernel: [<ffffffff977a18b5>] __alloc_pages_nodemask+0x405/0x420
Nov 26 17:13:46 newfront kernel: [<ffffffff977ec058>] alloc_pages_current+0x98/0x110
Nov 26 17:13:46 newfront kernel: [<ffffffffc08c5d5c>] alloc_send_buffer+0x8c/0x110 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08c9152>] __conn_prepare_command+0x62/0x80 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08c91b1>] conn_prepare_command+0x41/0x80 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08cae97>] drbd_send_dblock+0xd7/0x480 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc089ee8d>] process_one_request+0x16d/0x320 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08a4822>] drbd_sender+0x3c2/0x420 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08c7320>] ? _get_ldev_if_state.part.30+0x110/0x110 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08c73a7>] drbd_thread_setup+0x87/0x1c0 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffffc08c7320>] ? _get_ldev_if_state.part.30+0x110/0x110 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffff976bdf21>] kthread+0xd1/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97d255e4>] ret_from_fork_nospec_begin+0xe/0x21
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: INFO: task qemu-kvm:32114 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: qemu-kvm D ffff8839f6450000 0 32114 1 0x00000080
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff976cea89>] ? ttwu_do_wakeup+0x19/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d19e59>] schedule_preempt_disabled+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d17c17>] __mutex_lock_slowpath+0xc7/0x1d0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16fff>] mutex_lock+0x1f/0x2f
Nov 26 17:13:46 newfront kernel: [<ffffffffc049f0b8>] shrink+0x38/0x170 [dm_bufio]
Nov 26 17:13:46 newfront kernel: [<ffffffff9794edb5>] ? cpumask_next_and+0x35/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff977a9f9e>] shrink_slab+0xae/0x340
Nov 26 17:13:46 newfront kernel: [<ffffffff97815751>] ? vmpressure+0x21/0x90
Nov 26 17:13:46 newfront kernel: [<ffffffff977ad1e2>] do_try_to_free_pages+0x3c2/0x4e0
Nov 26 17:13:46 newfront kernel: [<ffffffff977ad3fc>] try_to_free_pages+0xfc/0x180
Nov 26 17:13:46 newfront kernel: [<ffffffff97d0f2a4>] __alloc_pages_slowpath+0x457/0x724
Nov 26 17:13:46 newfront kernel: [<ffffffff977a18b5>] __alloc_pages_nodemask+0x405/0x420
Nov 26 17:13:46 newfront kernel: [<ffffffff977ec058>] alloc_pages_current+0x98/0x110
Nov 26 17:13:46 newfront kernel: [<ffffffff9779bf3e>] __get_free_pages+0xe/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97834ba0>] __pollwait+0xa0/0xf0
Nov 26 17:13:46 newfront kernel: [<ffffffff9786f2df>] eventfd_poll+0x2f/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff9783619a>] do_sys_poll+0x2aa/0x550
Nov 26 17:13:46 newfront kernel: [<ffffffff97834b00>] ? poll_initwait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97834d70>] ? poll_select_copy_remaining+0x150/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff97836793>] SyS_ppoll+0x1b3/0x1d0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256d5>] ? system_call_after_swapgs+0xa2/0x146
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256e1>] ? system_call_after_swapgs+0xae/0x146
Nov 26 17:13:46 newfront kernel: [<ffffffff97d2579b>] system_call_fastpath+0x22/0x27
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256e1>] ? system_call_after_swapgs+0xae/0x146
Nov 26 17:13:46 newfront kernel: INFO: task worker:3573 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: worker D ffff8839f6bceeb0 0 3573 1 0x00000080
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ec0>] ? bit_wait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ec0>] ? bit_wait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18f39>] schedule+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d168a9>] schedule_timeout+0x239/0x2c0
Nov 26 17:13:46 newfront kernel: [<ffffffff9779756d>] ? find_get_pages_tag+0xfd/0x240
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ec0>] ? bit_wait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d1844d>] io_schedule_timeout+0xad/0x130
Nov 26 17:13:46 newfront kernel: [<ffffffff97d184e8>] io_schedule+0x18/0x20
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ed1>] bit_wait_io+0x11/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d169f7>] __wait_on_bit+0x67/0x90
Nov 26 17:13:46 newfront kernel: [<ffffffff977962a1>] wait_on_page_bit+0x81/0xa0
Nov 26 17:13:46 newfront kernel: [<ffffffff976befd0>] ? wake_bit_function+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff977963d1>] __filemap_fdatawait_range+0x111/0x190
Nov 26 17:13:46 newfront kernel: [<ffffffff97796464>] filemap_fdatawait_range+0x14/0x30
Nov 26 17:13:46 newfront kernel: [<ffffffff97798566>] filemap_write_and_wait_range+0x56/0x90
Nov 26 17:13:46 newfront kernel: [<ffffffff9785d2eb>] blkdev_fsync+0x1b/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97853277>] do_fsync+0x67/0xb0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256d5>] ? system_call_after_swapgs+0xa2/0x146
Nov 26 17:13:46 newfront kernel: [<ffffffff97853583>] SyS_fdatasync+0x13/0x20
Nov 26 17:13:46 newfront kernel: [<ffffffff97d2579b>] system_call_fastpath+0x22/0x27
Nov 26 17:13:46 newfront kernel: [<ffffffff97d256e1>] ? system_call_after_swapgs+0xae/0x146
Nov 26 17:13:46 newfront kernel: INFO: task kworker/1:0:17462 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: kworker/1:0 D ffff8839f6bcaf70 0 17462 2 0x00000080
Nov 26 17:13:46 newfront kernel: Workqueue: kcopyd do_work [dm_mod]
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffffc08e6905>] ? is_suspended_quorum.part.18+0x55/0x90 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18f39>] schedule+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffffc08bfe47>] drbd_make_request+0x167/0x360 [drbd]
Nov 26 17:13:46 newfront kernel: [<ffffffff976bef10>] ? wake_up_atomic_t+0x30/0x30
Nov 26 17:13:46 newfront kernel: [<ffffffff9791f1cb>] generic_make_request+0x10b/0x320
Nov 26 17:13:46 newfront kernel: [<ffffffff9785acbb>] ? __bio_add_page+0x20b/0x2b0
Nov 26 17:13:46 newfront kernel: [<ffffffff9791f450>] submit_bio+0x70/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffffc0236447>] dispatch_io+0x1a7/0x3c0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235f50>] ? dm_copy_name_and_uuid+0xc0/0xc0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235f80>] ? list_get_page+0x30/0x30 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0236b50>] ? dm_kcopyd_do_callback+0x40/0x40 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc023676e>] dm_io+0x10e/0x2e0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235f50>] ? dm_copy_name_and_uuid+0xc0/0xc0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0235f80>] ? list_get_page+0x30/0x30 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc023784f>] run_io_job+0xbf/0x1a0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0236b50>] ? dm_kcopyd_do_callback+0x40/0x40 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0236f80>] process_jobs+0x60/0x100 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0237790>] ? dm_kcopyd_client_create+0x210/0x210 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0237088>] do_work+0x68/0x90 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffff976b613f>] process_one_work+0x17f/0x440
Nov 26 17:13:46 newfront kernel: [<ffffffff976b72dc>] worker_thread+0x22c/0x3c0
Nov 26 17:13:46 newfront kernel: [<ffffffff976b70b0>] ? manage_workers.isra.24+0x2a0/0x2a0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bdf21>] kthread+0xd1/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97d255e4>] ret_from_fork_nospec_begin+0xe/0x21
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: INFO: task kworker/u16:2:2201 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: kworker/u16:2 D ffff8839f6bcdee0 0 2201 2 0x00000080
Nov 26 17:13:46 newfront kernel: Workqueue: dm_bufio_cache work_fn [dm_bufio]
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff976db7ec>] ? dequeue_entity+0x11c/0x5e0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d19e59>] schedule_preempt_disabled+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d17c17>] __mutex_lock_slowpath+0xc7/0x1d0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16fff>] mutex_lock+0x1f/0x2f
Nov 26 17:13:46 newfront kernel: [<ffffffffc049f287>] work_fn+0x97/0x1d0 [dm_bufio]
Nov 26 17:13:46 newfront kernel: [<ffffffff976b613f>] process_one_work+0x17f/0x440
Nov 26 17:13:46 newfront kernel: [<ffffffff976b71d6>] worker_thread+0x126/0x3c0
Nov 26 17:13:46 newfront kernel: [<ffffffff976b70b0>] ? manage_workers.isra.24+0x2a0/0x2a0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bdf21>] kthread+0xd1/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97d255e4>] ret_from_fork_nospec_begin+0xe/0x21
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: INFO: task kworker/u16:4:2797 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: kworker/u16:4 D ffff8839f6bcdee0 0 2797 2 0x00000080
Nov 26 17:13:46 newfront kernel: Workqueue: dm-thin do_worker [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ec0>] ? bit_wait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18f39>] schedule+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d168a9>] schedule_timeout+0x239/0x2c0
Nov 26 17:13:46 newfront kernel: [<ffffffff976caba4>] ? __wake_up+0x44/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff976fa982>] ? ktime_get_ts64+0x52/0xf0
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ec0>] ? bit_wait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d1844d>] io_schedule_timeout+0xad/0x130
Nov 26 17:13:46 newfront kernel: [<ffffffff97d184e8>] io_schedule+0x18/0x20
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ed1>] bit_wait_io+0x11/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d169f7>] __wait_on_bit+0x67/0x90
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16ec0>] ? bit_wait+0x50/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff97d16b61>] out_of_line_wait_on_bit+0x81/0xb0
Nov 26 17:13:46 newfront kernel: [<ffffffff976befd0>] ? wake_bit_function+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffffc049fdb9>] dm_bufio_write_dirty_buffers+0xf9/0x1d0 [dm_bufio]
Nov 26 17:13:46 newfront kernel: [<ffffffffc05c5c6c>] dm_bm_flush+0x1c/0x20 [dm_persistent_data]
Nov 26 17:13:46 newfront kernel: [<ffffffffc05c8cf3>] dm_tm_commit+0x33/0x40 [dm_persistent_data]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0874f05>] __commit_transaction+0x395/0x3a0 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0876597>] dm_pool_commit_metadata+0x37/0x60 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffffc086eef6>] commit+0x36/0xb0 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0873b0d>] do_worker+0x87d/0x8d0 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffff976b613f>] process_one_work+0x17f/0x440
Nov 26 17:13:46 newfront kernel: [<ffffffff976b71d6>] worker_thread+0x126/0x3c0
Nov 26 17:13:46 newfront kernel: [<ffffffff976b70b0>] ? manage_workers.isra.24+0x2a0/0x2a0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bdf21>] kthread+0xd1/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97d255e4>] ret_from_fork_nospec_begin+0xe/0x21
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: INFO: task kworker/u16:0:3396 blocked for more than 120 seconds.
Nov 26 17:13:46 newfront kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 17:13:46 newfront kernel: kworker/u16:0 D ffff8839f6bcdee0 0 3396 2 0x00000080
Nov 26 17:13:46 newfront kernel: Workqueue: writeback bdi_writeback_workfn (flush-253:11)
Nov 26 17:13:46 newfront kernel: Call Trace:
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18f39>] schedule+0x29/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff97d1a56d>] rwsem_down_read_failed+0x10d/0x1a0
Nov 26 17:13:46 newfront kernel: [<ffffffff97799d35>] ? mempool_alloc_slab+0x15/0x20
Nov 26 17:13:46 newfront kernel: [<ffffffff9795f3d8>] call_rwsem_down_read_failed+0x18/0x30
Nov 26 17:13:46 newfront kernel: [<ffffffff97d18060>] down_read+0x20/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffffc0875d07>] dm_thin_find_block+0x37/0x90 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffffc0870f59>] thin_map+0x169/0x2c0 [dm_thin_pool]
Nov 26 17:13:46 newfront kernel: [<ffffffffc022d136>] __map_bio+0x96/0x190 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc022b680>] ? queue_io+0x80/0x80 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc022d437>] __clone_and_map_data_bio+0x177/0x280 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc022d811>] __split_and_process_bio+0x2d1/0x520 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffffc022dd7d>] dm_make_request+0x11d/0x1a0 [dm_mod]
Nov 26 17:13:46 newfront kernel: [<ffffffff9791f1cb>] generic_make_request+0x10b/0x320
Nov 26 17:13:46 newfront kernel: [<ffffffff9791f450>] submit_bio+0x70/0x150
Nov 26 17:13:46 newfront kernel: [<ffffffff9785b865>] ? bio_alloc_bioset+0x115/0x310
Nov 26 17:13:46 newfront kernel: [<ffffffff97857497>] _submit_bh+0x127/0x160
Nov 26 17:13:46 newfront kernel: [<ffffffff97857712>] __block_write_full_page+0x162/0x380
Nov 26 17:13:46 newfront kernel: [<ffffffff9785cfb0>] ? I_BDEV+0x10/0x10
Nov 26 17:13:46 newfront kernel: [<ffffffff9785cfb0>] ? I_BDEV+0x10/0x10
Nov 26 17:13:46 newfront kernel: [<ffffffff97857ade>] block_write_full_page+0xce/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff9785d8e8>] blkdev_writepage+0x18/0x20
Nov 26 17:13:46 newfront kernel: [<ffffffff977a1d09>] __writepage+0x19/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff977a2804>] write_cache_pages+0x254/0x4e0
Nov 26 17:13:46 newfront kernel: [<ffffffff977a1cf0>] ? global_dirtyable_memory+0x70/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff977a2add>] generic_writepages+0x4d/0x80
Nov 26 17:13:46 newfront kernel: [<ffffffff9785d8a5>] blkdev_writepages+0x35/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff977a3b81>] do_writepages+0x21/0x50
Nov 26 17:13:46 newfront kernel: [<ffffffff9784cfc0>] __writeback_single_inode+0x40/0x260
Nov 26 17:13:46 newfront kernel: [<ffffffff9784da54>] writeback_sb_inodes+0x1c4/0x490
Nov 26 17:13:46 newfront kernel: [<ffffffff9784ddbf>] __writeback_inodes_wb+0x9f/0xd0
Nov 26 17:13:46 newfront kernel: [<ffffffff9784e5f3>] wb_writeback+0x263/0x2f0
Nov 26 17:13:46 newfront kernel: [<ffffffff9783ab0c>] ? get_nr_inodes+0x4c/0x70
Nov 26 17:13:46 newfront kernel: [<ffffffff9784ef7b>] bdi_writeback_workfn+0x2cb/0x460
Nov 26 17:13:46 newfront kernel: [<ffffffff976b613f>] process_one_work+0x17f/0x440
Nov 26 17:13:46 newfront kernel: [<ffffffff976b71d6>] worker_thread+0x126/0x3c0
Nov 26 17:13:46 newfront kernel: [<ffffffff976b70b0>] ? manage_workers.isra.24+0x2a0/0x2a0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bdf21>] kthread+0xd1/0xe0
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40
Nov 26 17:13:46 newfront kernel: [<ffffffff97d255e4>] ret_from_fork_nospec_begin+0xe/0x21
Nov 26 17:13:46 newfront kernel: [<ffffffff976bde50>] ? insert_kthread_work+0x40/0x40

gchen (reporter)   2018-11-27 04:11   ~0033148

Hi felix,
   Do you have a vmcore generated for this issue? If so, could you send it to me for analysis?
   I have a similar issue as well; my report is 0015489.
Thanks.
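
For anyone who needs to produce such a vmcore, here is a rough sketch of one way to capture it automatically the next time the hang trips the hung-task detector (this describes a typical CentOS 7 kdump setup and is not taken from this report; it also assumes crashkernel= memory is already reserved on the kernel command line):

# Install and enable kdump so a panic writes a vmcore (typically under /var/crash/).
yum install -y kexec-tools
systemctl enable kdump
systemctl start kdump

# Turn a task blocked for longer than hung_task_timeout_secs into a panic,
# and therefore into a dump:
echo 1 > /proc/sys/kernel/hung_task_panic
# Make the setting persistent across reboots:
echo 'kernel.hung_task_panic = 1' > /etc/sysctl.d/99-hung-task.conf
sysctl --system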

di_ps (reporter)   2019-01-15 14:23   ~0033591

I also have a similar issue on 7.5: https://bugs.centos.org/view.php?id=15711
[6175405.104220] [<ffffffffa6d18f39>] schedule+0x29/0x70
[6175405.104252] [<ffffffffa6d1a56d>] rwsem_down_read_failed+0x10d/0x1a0
[6175405.104262] [<ffffffffa695f3d8>] call_rwsem_down_read_failed+0x18/0x30
[6175405.104291] [<ffffffffa6d18060>] down_read+0x20/0x40
[6175405.104301] [<ffffffffa68985d2>] proc_pid_cmdline_read+0xb2/0x560
[6175405.104334] [<ffffffffa68d549c>] ? security_file_permission+0x8c/0xa0
[6175405.104342] [<ffffffffa681f0af>] vfs_read+0x9f/0x170
[6175405.104347] [<ffffffffa681ff7f>] SyS_read+0x7f/0xf0
[6175405.104380] [<ffffffffa6d2579b>] system_call_fastpath+0x22/0x27

Issue History

Date Modified Username Field Change
2018-09-07 08:54 felix New Issue
2018-11-26 23:42 richardl Note Added: 0033147
2018-11-27 04:11 gchen Note Added: 0033148
2019-01-15 14:23 di_ps Note Added: 0033591