View Issue Details
| ID | Project | Category | View Status | Date Submitted | Last Update |
|---|---|---|---|---|---|
| 0016893 | CentOS-8 | kmod | public | 2020-01-06 01:19 | 2020-09-21 09:27 |

| Target Version | Fixed in Version |
|---|---|
| | |

Summary: 0016893: 2 identical NVMe disks are recognized as 1 multipath NVMe device
Description: I'm trying to set up a local workstation/virtualization host and bought 2 identical NVMe disks for RAID 1.
Entering storage setup, the 2 devices are recognized as 1 multipath device, which is obviously wrong. I tried just adding the device and proceeding with the installation, which resulted in an installed system that can't boot.
After removing 1 disk, the installation went flawlessly.
Steps To Reproduce:
- Add 2 Lexar NM610 NVMe disks
- Install CentOS Stream using the official DVD ISO
- At the storage selection step, instead of two selectable basic NVMe disks, multipath creates a single /dev/mapper/mpatha specialized disk
- Unable to install CentOS
Additional Information:
- Model name: Lexar 1TB NM610 M.2 2280 PCIe G3x4
- Part number: LNM610-1TRB
Tags: No tags attached.
I have experienced this exact issue with 2 identical NVMe disks. I am unable to remove any disks from the physical unit to proceed with the installation and have instead had to pick Ubuntu.
I can confirm this behaviour in both CentOS 7 and 8. I'm using the ISO image and the same happens for both. I have two identical 500GB NVMe drives and two identical 2TB NVMe drives. The first pair is identified correctly, but the second pair, the 2TB drives, is seen as a single multipath device. There's nothing wrong with the devices and they work fine on their own. So far I have not found a way around this, but it's essential that I do. Will update if I find a solution.
@draytoc I've found that in my case removing my first NVMe (which had Windows installed) worked for me.
Then all I had to do was update the GRUB boot config to refresh the boot images.
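For reference, on CentOS 8 that refresh is usually done by regenerating the GRUB config with grub2-mkconfig (a sketch; the output path depends on whether the machine boots via BIOS or UEFI):

```shell
# Regenerate the GRUB configuration so the boot entries are rebuilt.
# BIOS systems keep the config at /boot/grub2/grub.cfg;
# on UEFI systems it is typically /boot/efi/EFI/centos/grub.cfg.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```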
Thanks @colinM9991. I need to configure both the 500GB drives and the 2TB drives in mdraid, so I would like to achieve this from the setup process. I'm using a SATA DOM for /boot.
I've been trying to find out how to disable multipath from the kernel command line, but to no avail. I think some distros allow multipath=off, but I'm not sure about CentOS.
@draytoc I tried 'inst.nompath' during the installation of my Fedora instance (this bug also exists on Fedora, presumably with the same installer, Anaconda? My Linux knowledge is very poor) and had no luck.
According to the following docs, you may be able to use 'inst.nompath'; see if you have any success: https://docs.centos.org/en-US/8-docs/standard-install/assembly_custom-boot-options/
Thanks again @colinM9991. I tried inst.nompath and it is completely ignored.
Finally I tried adding dm-multipath.blacklist=1 to the kernel command line, and this allowed me to see the NVMe drives and set everything up. So the issue is resolved for me.
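For anyone else trying this: the option can be typed at the installer boot menu (press Tab or 'e' on the boot entry and append it to the kernel line), and made persistent on an installed system with grubby. A sketch, assuming CentOS 8's grubby tool:

```shell
# At the installer boot menu, append to the kernel (linux/linuxefi) line:
#   dm-multipath.blacklist=1
# On an already-installed system, make the option persistent for all kernels:
sudo grubby --update-kernel=ALL --args="dm-multipath.blacklist=1"
# After a reboot, verify the option took effect:
cat /proc/cmdline
```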
This issue also exists as Bug 1845915 in Fedora's Bugzilla, and as is the case here, there has been no reported progress since June 2020. Maybe I mistyped something, but draytoc's blacklist method on the kernel boot command line did not work for me. This issue exists in the latest Fedora, CentOS 8.2, and CentOS Stream, since all use the same Anaconda setup utility.

My workaround, as some have noted, is to remove one of my two NVMe drives. Luckily I had an SSD the same size as the remaining NVMe, which allowed me to put the NVMe and the SSD in RAID-1. After the OS is installed:
- Manually fail the SSD and remove it from the array.
- Power off the computer, add the 2nd NVMe, and reboot. The OS comes up fine; mdadm reports the missing drive from the RAID pair.
- Clone the 1st NVMe's partitions to the 2nd NVMe drive, then manually add the 2nd NVMe to the RAID-1 config.

And all is roses until my next OS re-install ....
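A sketch of that mdadm shuffle, with illustrative device names (/dev/md0 for the array, /dev/sda1 for the stand-in SSD, /dev/nvme0n1 and /dev/nvme1n1 for the two NVMe drives; substitute your own):

```shell
# 1. Fail the stand-in SSD and remove it from the RAID-1 array
sudo mdadm /dev/md0 --fail /dev/sda1
sudo mdadm /dev/md0 --remove /dev/sda1
# (power off, install the 2nd NVMe, reboot)

# 2. Clone the partition table from the surviving NVMe to the new one
sudo sfdisk -d /dev/nvme0n1 | sudo sfdisk /dev/nvme1n1

# 3. Add the new NVMe's partition back into the degraded array
sudo mdadm /dev/md0 --add /dev/nvme1n1p1
# Watch the resync progress
cat /proc/mdstat
```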
@bsandhu I'm surprised that using dm-multipath.blacklist=1 didn't work for you. Were your NVMe drives being seen as a single multipath device?
I think this issue is specific to certain hardware. It doesn't happen for us in revision 2 of the system board we are using, with the same NVMe drives. It might be specific to certain system boards and certain drives / NVMe controller firmware.
| Date | User | Action |
|---|---|---|
| 2020-01-06 01:19 | quyetnd | New Issue |
| 2020-06-04 21:37 | ColinM9991 | Note Added: 0037037 |
| 2020-06-29 16:41 | draytoc | Note Added: 0037258 |
| 2020-06-29 18:35 | ColinM9991 | Note Added: 0037265 |
| 2020-06-29 20:19 | draytoc | Note Added: 0037267 |
| 2020-06-29 21:40 | ColinM9991 | Note Added: 0037269 |
| 2020-06-30 10:39 | draytoc | Note Added: 0037273 |
| 2020-09-18 17:38 | bsandhu | Note Added: 0037710 |
| 2020-09-21 09:27 | draytoc | Note Added: 0037713 |