View Issue Details

ID: 0016941
Project: CentOS-8
Category: anaconda
View Status: public
Last Update: 2020-01-31 16:35
Reporter: mshopf
Priority: high
Severity: block
Reproducibility: always
Status: new
Resolution: open
Platform: x86_64
OS: CentOS
OS Version: 8
Product Version: 8.0.1905
Target Version:
Fixed in Version:
Summary: 0016941: Installation fails on pre-partitioned software RAID
Description: I have a server that has its OS on a RAID 1 of two SSDs. I wanted to install CentOS as the new operating system while keeping the old one available, so I shrank the original filesystem, repartitioned the SSDs, and created new RAIDs on the new partitions, using v0.9 metadata for the OS array to be on the safe side for booting. The old OS boots and works just fine.
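
For reference, a minimal sketch of how arrays like these are typically created with mdadm; the exact commands were not recorded in this report, so the metadata versions and device names below are simply inferred from the layout shown next:

# mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-devices=2 /dev/sdi3 /dev/sdj3
# mdadm --create /dev/md3 --metadata=1.2 --level=1 --raid-devices=2 /dev/sdi4 /dev/sdj4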

The disk layout for both SSDs is as follows:
# fdisk -l /dev/sdi

Disk /dev/sdi: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders, total 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00008b43

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1            2048     4196351     2097152   82  Linux swap / Solaris
/dev/sdi2   *     4196352   134219775    65011712   fd  Linux raid autodetect
/dev/sdi3       134219776   264243199    65011712   fd  Linux raid autodetect
/dev/sdi4       264243200   586072367   160914584   fd  Linux raid autodetect
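
Both SSDs carry the same layout. A common way to replicate an MBR partition table from one disk to another (an assumption for illustration, not necessarily how it was done here) is to pipe an sfdisk dump back into sfdisk:

# sfdisk -d /dev/sdi | sfdisk /dev/sdj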

The first RAID partition is a RAID 1 with the old OS:
# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Fri Dec 13 17:58:06 2019
     Raid Level : raid1
     Array Size : 65011648 (62.00 GiB 66.57 GB)
  Used Dev Size : 65011648 (62.00 GiB 66.57 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Jan 9 15:14:56 2020
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f2d83e21:53b00d3b:04894333:532a878b
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8      130        0      active sync   /dev/sdi2
       1       8      146        1      active sync   /dev/sdj2
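
The kernel's runtime view of all assembled arrays can be cross-checked against /proc/mdstat (not captured in this report):

# cat /proc/mdstat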

I created new RAIDs on the new partitions (abbreviated):
# mdadm -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Thu Jan 9 15:36:39 2020
     Raid Level : raid1
     Array Size : 65011648 (62.00 GiB 66.57 GB)
[...]
           UUID : 15d9716d:e9e47889:a6b0c1bc:0e02d2b0 (local to host zuse2)
         Events : 0.11

    Number   Major   Minor   RaidDevice State
       0       8      131        0      active sync   /dev/sdi3
       1       8      147        1      active sync   /dev/sdj3
# mdadm -D /dev/md3 (this one with v1.2 metadata, because it doesn't need to be bootable):
/dev/md3:
        Version : 1.2
  Creation Time : Thu Jan 9 15:37:11 2020
     Raid Level : raid1
     Array Size : 160783424 (153.34 GiB 164.64 GB)
[...]
           Name : zuse2:sys (local to host zuse2)
           UUID : 46a71480:7534c6bf:943a2006:983ac7bd
         Events : 1

    Number   Major   Minor   RaidDevice State
       0       8      132        0      active sync   /dev/sdi4
       1       8      148        1      active sync   /dev/sdj4
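
The per-member superblocks were not captured above; they can be inspected directly with mdadm --examine, which prints the metadata version and array UUID that each partition advertises to anything scanning the disks, the installer included:

# mdadm --examine /dev/sdi3 /dev/sdj3 /dev/sdi4 /dev/sdj4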


I tried to install CentOS on sdi3+sdj3, with and without pre-creating a RAID there, with sdi1+sdj1 reserved as swap (not RAID!) and sdi4+sdj4 to be assembled as a system data RAID 1 (OS-independent).

Installation always fails after booting into X: all I see is a black screen with a mouse pointer. When booting without the SSDs attached I do get the graphical user interface, so graphics itself is working. Installing with the text interface doesn't help either.
After switching back to VT1, the text console shows the following error:
ERROR:pygi-argument.c:1006:_pygi_argument_to_object: code should not be reached

anaconda.log records the following traceback (all log files are attached):
[...]
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 264, in handle_device
    device = helper_class(self, info).run()
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/partition.py", line 112, in run
    self._devicetree._add_device(device)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/devicetree.py", line 158, in _add_device
    raise ValueError("device is already in tree")
ValueError: device is already in tree
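
To see roughly what blivet's populator is working from at that point, the udev properties of a member partition can be dumped from the installer's shell (a diagnostic sketch, not output captured in this report):

# udevadm info --query=property /dev/sdi3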

I checked the UUIDs of the partitions, and indeed the members of each (potential) RAID have identical UUIDs, but that seems to be normal (according to what I read), since all members of an array share the array's UUID. Notably, the v0.90 members (sdi2/sdj2, sdi3/sdj3) carry only this shared UUID, while the v1.2 members (sdi4/sdj4) each also have a distinct UUID_SUB:

# blkid /dev/sdi? /dev/sdj?
/dev/sdi1: UUID="42e567fd-92e2-4245-b143-6f6fead66003" TYPE="swap"
/dev/sdi2: UUID="f2d83e21-53b0-0d3b-0489-4333532a878b" TYPE="linux_raid_member"
/dev/sdi3: UUID="15d9716d-e9e4-7889-a6b0-c1bc0e02d2b0" TYPE="linux_raid_member"
/dev/sdi4: UUID="46a71480-7534-c6bf-943a-2006983ac7bd" UUID_SUB="4264cf23-e3f9-99b9-7978-478f5a7c18cf" LABEL="zuse2:sys" TYPE="linux_raid_member"
/dev/sdj1: UUID="823a2345-5b62-4f28-8618-e1184e1c4032" TYPE="swap"
/dev/sdj2: UUID="f2d83e21-53b0-0d3b-0489-4333532a878b" TYPE="linux_raid_member"
/dev/sdj3: UUID="15d9716d-e9e4-7889-a6b0-c1bc0e02d2b0" TYPE="linux_raid_member"
/dev/sdj4: UUID="46a71480-7534-c6bf-943a-2006983ac7bd" UUID_SUB="99872eb9-362f-38bb-6d06-71c0b9d20480" LABEL="zuse2:sys" TYPE="linux_raid_member"
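
If blivet is indeed tripping over v0.90 members that expose only the shared array UUID, one untested workaround sketch would be to recreate the target array with v1.2 metadata, so that each member also gets its own UUID_SUB; recent GRUB2 can generally boot from v1.2 RAID 1 as well. Note that this destroys anything already on md2:

# mdadm --stop /dev/md2
# mdadm --zero-superblock /dev/sdi3 /dev/sdj3
# mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sdi3 /dev/sdj3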


What's happening here, and more importantly - how do I install CentOS on this system? What additional information would be necessary to debug the problem?
Steps To Reproduce: Unknown; probably try to install CentOS 8 on a system with pre-partitioned software RAID 1...
Additional Information: For completeness, I used the network installation boot medium. Will try 8.1.1911 next.
Tags: No tags attached.

Relationships

has duplicate: 0016942 (closed, Issue Tracker) Installation fails on pre-partitioned software RAID

Activities

mshopf (reporter), 2020-01-17 10:48

Attached: anaconda.log (4,670 bytes), X.log (22,823 bytes), dbus.log (3,502 bytes), dnf.librepo.log (293 bytes)

mshopf (reporter), 2020-01-17 12:03

Attached: ifcfg.log (7,939 bytes), packaging.log (2,463 bytes), program.log (5,244 bytes), storage.log (34,701 bytes), syslog.log (250,388 bytes)

mshopf (reporter), 2020-01-22 11:45, note ~0036093

Same issue with 8.1.1911. The only difference is that the language selection dialog pops up, with an error dialog as an overlay. I will upload a new set of log files.

mshopf (reporter), 2020-01-22 11:51

Attached: anaconda-2.log (10,220 bytes), dbus-2.log (3,460 bytes)

mshopf (reporter), 2020-01-22 11:53

Attached: hawkey.log (51 bytes), ifcfg-2.log (7,939 bytes), packaging-2.log (10,236 bytes)

mshopf (reporter), 2020-01-22 12:04

Attached: program-2.log (12,334 bytes), storage-2.log (59,290 bytes), syslog-2.log (249,499 bytes), X-2.log (22,833 bytes)

mshopf (reporter), 2020-01-22 12:07, note ~0036094

Oracle Linux 8.1 has the same issue. I suspect the bug is in the base RHEL system.

raktenadmin (reporter), 2020-01-31 14:27, note ~0036169

Same problem here.

Attached: anaconda-2020-01-31-13:47:30.590171-2005.tar.gz (130,977 bytes)

Issue History

Date Modified Username Field Change
2020-01-17 10:48 mshopf New Issue
2020-01-17 10:48 mshopf File Added: anaconda.log
2020-01-17 10:48 mshopf File Added: X.log
2020-01-17 10:48 mshopf File Added: dbus.log
2020-01-17 10:48 mshopf File Added: dnf.librepo.log
2020-01-17 12:03 mshopf File Added: ifcfg.log
2020-01-17 12:03 mshopf File Added: packaging.log
2020-01-17 12:03 mshopf File Added: program.log
2020-01-17 12:03 mshopf File Added: storage.log
2020-01-17 12:03 mshopf File Added: syslog.log
2020-01-18 18:00 toracat Relationship added has duplicate 0016942
2020-01-22 11:45 mshopf Note Added: 0036093
2020-01-22 11:51 mshopf File Added: anaconda-2.log
2020-01-22 11:51 mshopf File Added: dbus-2.log
2020-01-22 11:53 mshopf File Added: hawkey.log
2020-01-22 11:53 mshopf File Added: ifcfg-2.log
2020-01-22 11:53 mshopf File Added: packaging-2.log
2020-01-22 12:04 mshopf File Added: program-2.log
2020-01-22 12:04 mshopf File Added: storage-2.log
2020-01-22 12:04 mshopf File Added: syslog-2.log
2020-01-22 12:04 mshopf File Added: X-2.log
2020-01-22 12:07 mshopf Note Added: 0036094
2020-01-31 14:27 raktenadmin File Added: anaconda-2020-01-31-13:47:30.590171-2005.tar.gz
2020-01-31 14:27 raktenadmin Note Added: 0036169