I'm using a fresh install of Ubuntu 12.04 LTS with the ZFS PPA.
I've found that when I create a pool, it mounts and works fine, but after a reboot it shows as UNAVAIL and I can't find a way to bring it back.
Here is the log of a quick test to demonstrate:
root@nas1:~# zpool status
no pools available
root@nas1:~# zpool create data /dev/disk/by-id/scsi-360019b90b24d9300174d28912b1c485d /dev/disk/by-id/scsi-360019b90b24d9300174d28a610419bec
root@nas1:~# zpool status
  pool: data
 state: ONLINE
 scan: none requested
config:

        NAME                                        STATE     READ WRITE CKSUM
        data                                        ONLINE       0     0     0
          scsi-360019b90b24d9300174d28912b1c485d    ONLINE       0     0     0
          scsi-360019b90b24d9300174d28a610419bec    ONLINE       0     0     0

errors: No known data errors
root@nas1:~# shutdown -r now

Broadcast message from root@nas1
        (/dev/pts/0) at 10:41 ...

The system is going down for reboot NOW!

login as: root
Server refused our key
root@nas1's password:
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed May 23 10:42:09 BST 2012

  System load:  0.48              Users logged in:       0
  Usage of /:   6.0% of 55.66GB   IP address for eth0:   10.24.0.5
  Memory usage: 1%                IP address for eth1:   192.168.30.51
  Swap usage:   0%                IP address for eth2:   192.168.99.41
  Processes:    142

  Graph this data and manage this system at https://landscape.canonical.com/

Last login: Wed May 23 10:40:06 2012 from 192.168.100.35
root@nas1:~# zpool status
  pool: data
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
 scan: none requested
config:

        NAME                                        STATE     READ WRITE CKSUM
        data                                        UNAVAIL      0     0     0  insufficient replicas
          scsi-360019b90b24d9300174d28912b1c485d    UNAVAIL      0     0     0
          scsi-360019b90b24d9300174d28a610419bec    UNAVAIL      0     0     0
root@nas1:~#
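(Not from the original post, just a sketch of a common first recovery step for this symptom: export the stale cached pool configuration and re-import the pool by scanning its stable by-id paths. POOL is the pool name from the transcript above; DEVDIR is an assumption about where the device links live.)

```shell
#!/bin/sh
# Sketch only: try to recover an UNAVAIL pool after reboot by exporting
# the stale configuration and re-importing from /dev/disk/by-id.
POOL="data"
DEVDIR="/dev/disk/by-id"

# Guarded so the sketch is a harmless no-op on machines without ZFS.
if command -v zpool >/dev/null 2>&1; then
    zpool export -f "$POOL" 2>/dev/null   # discard the stale cached config
    zpool import -d "$DEVDIR"             # list pools visible under by-id
    zpool import -d "$DEVDIR" "$POOL"     # re-import the pool by those paths
fi
```

If the import scan cannot see the pool at all, the on-disk labels themselves are suspect rather than the cache.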
Edit:

Output of ls -l /dev/disk/by-id/scsi-*, as requested:

root@nas1:~# ls -l /dev/disk/by-id/scsi-*
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28912b1c485d -> ../../sdb
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28a610419bec -> ../../sdc
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28b1031dd786 -> ../../sdd
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28baf7edd45e -> ../../sde
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28c5ea9c6198 -> ../../sdf
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28d1db783151 -> ../../sdg
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28e6c0af4c8e -> ../../sdh
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28eeb7d87669 -> ../../sdi
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28f6ad29d90a -> ../../sdj
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28fca5534028 -> ../../sdk
Edit:
I've just done some further testing. Instead of using the IDs, I tried using sdb, sdc, and so on:
zpool create data sdb sdc sdd sde
Same result. It creates the pool, but after a reboot it is UNAVAIL.
Edit:

As requested, the output of zdb -l /dev/sdb:

~# zdb -l /dev/sdb
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

I ran this test right after creating a new pool and got the same result.
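(A sketch, not from the original post: ZFS writes four copies of its label to each member device, two near the start and two near the end, so all four failing to unpack means the whole device is unreadable to ZFS. This loop counts unreadable labels per device; the glob matching the by-id names shown earlier is an assumption about this particular machine.)

```shell
#!/bin/sh
# Sketch only: count unreadable ZFS labels on each pool member device.
# A healthy member prints its pool config for labels 0-3; a bad one
# prints "failed to unpack label N" four times, as in the output above.
for dev in /dev/disk/by-id/scsi-360019b90*; do
    [ -e "$dev" ] || continue    # skip if the glob matched nothing
    bad=$(zdb -l "$dev" 2>/dev/null | grep -c "failed to unpack")
    echo "$dev: $bad unreadable label(s)"
done
```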
Edit:

I've just tried a fresh install of Ubuntu 11.04 (to rule out a bug in the 12.04 release), and the pool survived the reboot there. So this is an issue with my 12.04 install. Tempted to just reinstall…
It turned out to be a faulty RAID controller handling the disks. I swapped the controller out and everything now works fine.
Fear not; simply:

cd /path_to_your_disks
zpool import -d . <name_of_your_pool>

In my case they were located in /disks. In your case they are probably in /dev/disk/by-id.
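Applied to the asker's layout, and assuming the pool name "data" from the transcript in the question, that would look like:

```shell
#!/bin/sh
# Sketch of the answer above for the asker's setup: point zpool import's
# -d scan at the stable by-id directory and import the pool by name.
if command -v zpool >/dev/null 2>&1; then
    cd /dev/disk/by-id && zpool import -d . data
fi
```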