RAID - DeviceDisappeared - /dev/md/2

raid

#1

Hi.
I use software RAID and have the following scenario.

Definitions of existing MD arrays:

ARRAY /dev/md/0 metadata=1.2 UUID=832db4cb:ea6a2d20:b200fd2d:65a72b50 name=<hostname>:0
ARRAY /dev/md/2 metadata=1.2 UUID=95910e75:3c5fce9e:5689640d:86695109 name=<hostname>:2
ARRAY /dev/md/1 metadata=1.2 UUID=578a8e41:c8d3b616:8cbc0ce6:edc5058d name=<hostname>:1

md0 and md2 are RAID1; md1 is RAID10.

When the server is restarted, an alert is triggered for md0 and md2 (the RAID1 arrays only) indicating DeviceDisappeared.

Despite this alert, when I run cat /proc/mdstat everything looks normal, and I have no real problems; it is just this annoying message every time the server is restarted.

cat /proc/mdstat
Personalities : [raid1] [raid10]
md1 : active raid10 sda6[0] sdd6[3] sdc6[2] sdb6[4]
      1904433152 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 1/15 pages [4KB], 65536KB chunk

md2 : active (read-only) raid1 sda5[5] sdd5[7] sdc5[6] sdb5[4]
      23419904 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sda1[6] sdd1[7] sdc1[5] sdb1[4]
      975296 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>
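(For reference, the "everything is normal" check above can be scripted: a degraded md array shows a `_` inside the `[UUUU]`-style status brackets in /proc/mdstat. This is a minimal sketch that greps a sample line; on a real system you would grep the live file instead.)

```shell
# Sample mdstat line standing in for /proc/mdstat; on a real system use:
#   grep -q '\[U*_[U_]*\]' /proc/mdstat && echo degraded || echo healthy
sample='md2 : active (read-only) raid1 sda5[5] sdd5[7] sdc5[6] sdb5[4]
      23419904 blocks super 1.2 [4/4] [UUUU]'

# A '_' among the Us means a member device is missing or failed.
if printf '%s\n' "$sample" | grep -q '\[U*_[U_]*\]'; then
  status=degraded
else
  status=healthy
fi
echo "$status"
```

With all four members present ([UUUU]), this prints "healthy".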

In syslog, all that appears is:
mdadm-raid[341]: Generating udev events for MD arrays ... done.

I have already tried rebuilding the array, but the error persists.
Does anyone have any idea what can be done to fix it?

Best Regards,

Michael Voigt


#2

Everything indeed seems fine. Where does this message at boot come from? Can you post a "screenshot" of it?


#3

Hi,
I started noticing the problem when I made a change so that I could receive RAID notifications via email. Every time I restart the server, I receive an email with this content, saying that a RAID1 array has disappeared.
Only then did I realize that this problem has existed since this server was built.
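(For context, the notification change was just the standard mdadm monitoring setup; a sketch of the relevant lines, with a placeholder address and a path that may differ per distribution:)

```
# /etc/mdadm/mdadm.conf (on some distributions: /etc/mdadm.conf)
MAILADDR admin@example.com      # placeholder; alert emails go here
# Optionally run a program on each event instead of, or as well as, mail:
# PROGRAM /usr/sbin/handle-mdadm-event
```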
By analyzing syslog, I can only see this:

root@server:/var/log# cat syslog | grep udev
Jul 1 07:34:23 server systemd[1]: Starting udev Wait for Complete Device Initialization...
Jul 1 07:34:23 server systemd[1]: Starting udev Kernel Device Manager...
Jul 1 07:34:23 server systemd[1]: Started udev Kernel Device Manager.
Jul 1 07:34:23 server mdadm-raid[346]: Generating udev events for MD arrays ... done.
Jul 1 07:34:23 server systemd[1]: Started udev Wait for Complete Device Initialization.
Jul 1 07:34:23 server kernel: [1.726329] random: systemd-udevd: uninitialized urandom read (16 bytes read)
Jul 3 07:02:26 server systemd[1]: Starting udev Wait for Complete Device Initialization...
Jul 3 07:02:26 server systemd[1]: Starting udev Kernel Device Manager...
Jul 3 07:02:26 server systemd[1]: Started udev Kernel Device Manager.
Jul 3 07:02:26 server mdadm-raid[344]: Generating udev events for MD arrays ... done.
Jul 3 07:02:26 server systemd[1]: Started udev Wait for Complete Device Initialization.
Jul 3 07:02:26 server kernel: [1.725832] random: systemd-udevd: uninitialized urandom read (16 bytes read)

From what I was able to analyze, at boot time it has difficulty locating the RAID1 array, but shortly afterwards this resolves itself; even so, the message that the array disappeared is sent.
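(My guess, which I have not confirmed: the boot-time monitor effectively compares the arrays listed in mdadm.conf against the arrays actually assembled at that moment, and reports any that are missing. A toy shell illustration of that comparison, using shortened UUIDs from my config; this is not mdadm's real code.)

```shell
# Shortened UUIDs from the ARRAY lines in mdadm.conf:
conf_uuids='832db4cb 95910e75 578a8e41'
# Hypothetical boot-time state in which md2 (95910e75) is not yet assembled:
running_uuids='832db4cb 578a8e41'

# Collect every configured array that is not currently running.
missing=''
for u in $conf_uuids; do
  case " $running_uuids " in
    *" $u "*) : ;;                      # array is present; nothing to report
    *) missing="$missing $u" ;;         # array missing; this triggers the mail
  esac
done
echo "DeviceDisappeared:$missing"
```

In this hypothetical state the script prints "DeviceDisappeared: 95910e75", which matches the kind of alert I receive at every reboot.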

Do you have any clue what I can do to resolve this?

Best Regards,

Michael Voigt


#4

No, I am sorry - no idea.