RAID array does not reassemble after reboot

My RAID array does not assemble after a reboot.

I have an SSD that the system boots from, plus three hard drives that are part of the array. The system is Ubuntu 16.04.

The steps I followed are based mostly on this guide:

https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04#creating-a-raid-5-array

  1. Verify that I am good to go.

    lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT 

The output shows the sda, sdb and sdc devices, besides the SSD partitions. I verified that these really are the hard drives by looking at the output of:

 hwinfo --disk 

Everything matches.
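
For reference (this is not a step from the guide), one extra check I could have run before creating the array is making sure the drives do not already carry md metadata:

    # should report "No md superblock detected" for blank drives
    sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc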

  2. Create the array.

     sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc 

I ran cat /proc/mdstat to verify that everything is OK.

The output looks like this:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid5 sdc[3] sdb[1] sda[0]
          7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
          [=======>.............]  recovery = 37.1% (1449842680/3906887168) finish=273.8min speed=149549K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk

    unused devices: <none>
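
(Side note, not from the guide: while the rebuild runs, the progress line can be refreshed continuously with watch, in case that matters.)

    # re-runs cat /proc/mdstat every two seconds
    watch cat /proc/mdstat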

I waited until the process finished.

    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid5 sdc[3] sdb[1] sda[0]
          209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

    unused devices: <none>
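
For reference (not a step from the guide), the array state and its UUID can also be confirmed at this point with:

    sudo mdadm --detail /dev/md0
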
  3. Create and mount the filesystem.

    sudo mkfs.ext4 -F /dev/md0
    sudo mkdir -p /mnt/md0
    sudo mount /dev/md0 /mnt/md0
    df -h -x devtmpfs -x tmpfs

I put some data on it, and the output looks like this:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme0n1p2  406G  191G  196G  50% /
    /dev/nvme0n1p1  511M  3.6M  508M   1% /boot/efi
    /dev/md0        7.3T  904G  6.0T  13% /mnt/md0
  4. Save the array layout.

    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u
    echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
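
After these commands, /etc/mdadm/mdadm.conf should end with an ARRAY line and /etc/fstab with the mount entry, roughly like this (the ARRAY line is the same one quoted at the bottom of this post):

    # appended to /etc/mdadm/mdadm.conf
    ARRAY /dev/md0 metadata=1.2 spares=1 name=hinton:0 UUID=616991f1:dc03795b:8d09b1d4:8393060a

    # appended to /etc/fstab
    /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
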
  5. Reboot and verify that everything works.

After rebooting I try cat /proc/mdstat.
It does not show any active RAID devices.

 ls /mnt/md0 

is empty.

The following command does not print anything and does not help either:

 mdadm --assemble --scan -v 
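
For reference, a way to check whether the md superblocks are even still present on the drives after the reboot (my own assumption, not something the guide mentions) would be:

    # should print an ARRAY line for every array whose superblocks are found
    sudo mdadm --examine --scan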

Only the following brings the array back:

 sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc 

What should I have done differently?

Could it be related to the output of this?

 sudo dpkg-reconfigure mdadm 

The output shows:

    update-initramfs: deferring update (trigger activated)
    Generating grub configuration file ...
    Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
    Found linux image: /boot/vmlinuz-4.4.0-51-generic
    Found initrd image: /boot/initrd.img-4.4.0-51-generic
    Found linux image: /boot/vmlinuz-4.4.0-31-generic
    Found initrd image: /boot/initrd.img-4.4.0-31-generic
    Adding boot menu entry for EFI firmware configuration
    done
    update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
    Processing triggers for initramfs-tools (0.122ubuntu8.5) ...
    update-initramfs: Generating /boot/initrd.img-4.4.0-51-generic

The part that caught my attention is "start and stop actions are no longer supported; falling back to defaults".

Also, the output of /usr/share/mdadm/mkconf does not print any arrays at the end:

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR [email protected]

    # definitions of existing MD arrays

whereas the output of cat /etc/mdadm/mdadm.conf does:

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers
    # DEVICE /dev/sda /dev/sdb /dev/sdc

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR [email protected]

    # definitions of existing MD arrays

    # This file was auto-generated on Sun, 04 Dec 2016 18:56:42 +0100
    # by mkconf $Id$
    ARRAY /dev/md0 metadata=1.2 spares=1 name=hinton:0 UUID=616991f1:dc03795b:8d09b1d4:8393060a
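
For completeness, one thing I assume could also be checked (I have not done this yet) is whether the copy of mdadm.conf that ends up inside the initramfs actually contains the ARRAY line:

    # list mdadm-related files embedded in the current initramfs
    lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm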