View Issue Details

ID: 0018054
Project: CentOS-8
Category: selinux-policy
View Status: public
Last Update: 2021-02-18 18:01
Reporter: bsandhu
Assigned To:
Priority: normal
Severity: minor
Reproducibility: always
Status: new
Resolution: open
Product Version: 8.3.2011
Summary: 0018054: SELinux is preventing /usr/sbin/mdadm from read access on the blk_file nvme0n1p1
Description: SELinux does not prevent read access on any of my other three block devices, only on the two NVMe drives in an mdadm RAID 1 array on a Gigabyte Technology Co., Ltd. X570 AORUS ELITE motherboard. For complete SELinux messages run: sealert -l a68bb103-4cd2-4761-a558-56e5426eeeb5.
The same error is reported for nvme1n1p1.
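For reference, the label on the affected partitions can be confirmed with ls -Z; based on the target context reported by sealert below, the expected output is roughly the following (device paths assumed from the messages above):
# ls -Z /dev/nvme0n1p1 /dev/nvme1n1p1
system_u:object_r:nvme_device_t:s0 /dev/nvme0n1p1
system_u:object_r:nvme_device_t:s0 /dev/nvme1n1p1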
Steps To Reproduce: A continuous stream of the following log entries (on average about five every minute):
setroubleshoot
SELinux is preventing /usr/sbin/mdadm from read access on the blk_file nvme0n1p1. For complete SELinux messages run: sealert -l a68bb103-4cd2-4761-a558-56e5426eeeb5
PRIORITY 3
SYSLOG_FACILITY 1
SYSLOG_IDENTIFIER setroubleshoot
_BOOT_ID 34b38c0208d1474abcf4c18c1a9d4592
_CAP_EFFECTIVE 0
_CMDLINE /usr/libexec/platform-python -Es /usr/sbin/setroubleshootd -f
_COMM setroubleshootd
_EXE /usr/libexec/platform-python3.6
_GID 976
_HOSTNAME kvm.locky
_MACHINE_ID 9df6a916df98445cab976e1d0375da4c
_PID 17907
_SELINUX_CONTEXT system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023
_SOURCE_REALTIME_TIMESTAMP 1612836406468141
_SYSTEMD_CGROUP /system.slice/dbus.service
_SYSTEMD_INVOCATION_ID 27a437e9ff3e44d4af3b5b8ca7fec4f0
_SYSTEMD_SLICE system.slice
_SYSTEMD_UNIT dbus.service
_TRANSPORT syslog
_UID 976
__CURSOR s=b72b15e2a42d40d5a6b15102c1e60027;i=1faa;b=34b38c0208d1474abcf4c18c1a9d4592;m=d9d7d49c;t=5baddbd169e35;x=b751fe2f7bcf6e40
__MONOTONIC_TIMESTAMP 3654800540
__REALTIME_TIMESTAMP 1612836406468149
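The stream can be watched live by filtering the journal on the syslog identifier shown in the entries above (one possible filter; any equivalent journalctl match works):
# journalctl -f SYSLOG_IDENTIFIER=setroubleshoot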
Additional Information: sealert -l a68bb103-4cd2-4761-a558-56e5426eeeb5
SELinux is preventing /usr/sbin/mdadm from read access on the blk_file nvme1n1p1.

***** Plugin catchall (100. confidence) suggests **************************

If you believe that mdadm should be allowed read access on the nvme1n1p1 blk_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# ausearch -c 'mdadm' --raw | audit2allow -M my-mdadm
# semodule -X 300 -i my-mdadm.pp

Additional Information:
Source Context system_u:system_r:pcp_pmcd_t:s0
Target Context system_u:object_r:nvme_device_t:s0
Target Objects nvme1n1p1 [ blk_file ]
Source mdadm
Source Path /usr/sbin/mdadm
Port <Unknown>
Host kvm.locky
Source RPM Packages mdadm-4.1-14.el8.x86_64
Target RPM Packages
SELinux Policy RPM selinux-policy-targeted-3.14.3-54.el8_3.2.noarch
Local Policy RPM selinux-policy-targeted-3.14.3-54.el8_3.2.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name kvm.locky
Platform Linux kvm.locky 4.18.0-240.10.1.el8_3.x86_64 #1 SMP Mon Jan 18 17:05:51 UTC 2021 x86_64 x86_64
Alert Count 6197
First Seen 2021-02-08 04:08:22 CST
Last Seen 2021-02-08 20:10:56 CST
Local ID a68bb103-4cd2-4761-a558-56e5426eeeb5

Raw Audit Messages
type=AVC msg=audit(1612836656.18:1231): avc: denied { read } for pid=25036 comm="mdadm" name="nvme1n1p1" dev="devtmpfs" ino=4473 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file permissive=0

type=SYSCALL msg=audit(1612836656.18:1231): arch=x86_64 syscall=openat success=no exit=EACCES a0=ffffff9c a1=556bdf719440 a2=4000 a3=0 items=0 ppid=25035 pid=25036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=mdadm exe=/usr/sbin/mdadm subj=system_u:system_r:pcp_pmcd_t:s0 key=(null)

Hash: mdadm,pcp_pmcd_t,nvme_device_t,blk_file,read
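For reference, the module that the suggested ausearch | audit2allow pipeline generates from this AVC should look roughly like the following sketch, reconstructed from the source and target contexts above (the exact version number and require-block layout may differ). Note that the source domain is pcp_pmcd_t, i.e. mdadm is apparently being executed from the PCP pmcd daemon's domain rather than a dedicated mdadm domain:
# cat my-mdadm.te
module my-mdadm 1.0;

require {
	type pcp_pmcd_t;
	type nvme_device_t;
	class blk_file read;
}

#============= pcp_pmcd_t ==============
allow pcp_pmcd_t nvme_device_t:blk_file read;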

I ran the recommended steps to allow access for now by executing:
ausearch -c 'mdadm' --raw | audit2allow -M my-mdadm
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i my-mdadm.pp
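Whether the package is actually installed can be verified afterwards with a standard semodule query:
# semodule -l | grep my-mdadm
my-mdadm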

After a reboot, NetworkManager failed to start and the log flood continues, with one added error for each of the NVMe SSDs. The log entry is shown for nvme1n1p1; the error for nvme0n1p1 is the same:
setroubleshoot
failed to retrieve rpm info for /dev/nvme1n1p1
PRIORITY 3
SYSLOG_FACILITY 1
SYSLOG_IDENTIFIER setroubleshoot
_BOOT_ID f1ed38e4926044a2a908167dd15a8e7e
_CAP_EFFECTIVE 0
_CMDLINE /usr/libexec/platform-python -Es /usr/sbin/setroubleshootd -f
_COMM setroubleshootd
_EXE /usr/libexec/platform-python3.6
_GID 976
_HOSTNAME kvm.locky
_MACHINE_ID 9df6a916df98445cab976e1d0375da4c
_PID 6063
_SELINUX_CONTEXT system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023
_SOURCE_REALTIME_TIMESTAMP 1612837805198245
_SYSTEMD_CGROUP /system.slice/dbus.service
_SYSTEMD_INVOCATION_ID eb09036ba9f34e84bf0bbc9b79af3069
_SYSTEMD_SLICE system.slice
_SYSTEMD_UNIT dbus.service
_TRANSPORT syslog
_UID 976
__CURSOR s=2af6e9edc14d4b81a7ada30ae28fd733;i=10b5;b=f1ed38e4926044a2a908167dd15a8e7e;m=668cd19;t=5bade10758bae;x=96de5b61380047f
__MONOTONIC_TIMESTAMP 107531545
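The "failed to retrieve rpm info" message is presumably setroubleshoot trying to look up the package owning the device node; nodes on devtmpfs are created by the kernel/udev and are not owned by any RPM, which a direct query confirms:
# rpm -qf /dev/nvme1n1p1
file /dev/nvme1n1p1 is not owned by any package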
Tags: No tags attached.

Activities

bsandhu (reporter)   ~0038256
2021-02-18 18:01

SELinux is preventing /usr/sbin/mdadm from open access on the blk_file /dev/nvme1n1p1. For complete SELinux messages run: sealert -l e385619e-161b-4f61-90c6-5ecf2efebe14
SELinux is preventing /usr/sbin/mdadm from open access on the blk_file /dev/nvme0n1p1. For complete SELinux messages run: sealert -l e385619e-161b-4f61-90c6-5ecf2efebe14
AVC Message for setroubleshoot, dropping message
failed to retrieve rpm info for /etc/selinux/targeted
The detailed recommendation was to report this as a bug (which I did) and, as a workaround, to run these two commands (which I also did):
# ausearch -c 'mdadm' --raw | audit2allow -M my-mdadm
# semodule -X 300 -i my-mdadm.pp
The two commands above ran successfully. Now I am getting flooded with the following log entries for each of my two NVMe drives:
setroubleshoot
Unable to add audit event: node=kvm.locky type=AVC msg=audit(1613671189.199:142374): avc: denied { read } for pid=185466 comm="mdadm" name="nvme1n1p1" dev="devtmpfs" ino=21616 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file permissive=0
-ditto- for nvme0n1p1
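Since open is now being denied in addition to read, regenerating the module from all accumulated mdadm AVCs should produce a combined rule along these lines (a sketch; audit2allow without -M just prints the rules it would build, and it merges permissions for the same source/target pair). This only widens the workaround and is not a fix for the underlying labeling bug:
# ausearch -c 'mdadm' --raw | audit2allow
#============= pcp_pmcd_t ==============
allow pcp_pmcd_t nvme_device_t:blk_file { open read };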

Issue History

Date Modified Username Field Change
2021-02-09 02:31 bsandhu New Issue
2021-02-18 18:01 bsandhu Note Added: 0038256