ZFS pool reports a missing device, but it isn't missing

I'm running the latest Debian 7.7 x86 with ZFS on Linux.

After moving my computer to another room, if I run zpool status I get this:

      pool: solaris
     state: DEGRADED
    status: One or more devices could not be used because the label is missing or
            invalid. Sufficient replicas exist for the pool to continue
            functioning in a degraded state.
    action: Replace the device using 'zpool replace'.
       see: http://zfsonlinux.org/msg/ZFS-8000-4J
      scan: none requested
    config:

        NAME                                            STATE     READ WRITE CKSUM
        solaris                                         DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            11552884637030026506                        UNAVAIL      0     0     0  was /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1
            ata-Hitachi_HDS723020BLA642_MN1221F308D55D  ONLINE       0     0     0
            ata-Hitachi_HDS723020BLA642_MN1220F30N4JED  ONLINE       0     0     0
            ata-Hitachi_HDS723020BLA642_MN1220F30N4B2D  ONLINE       0     0     0
            ata-Hitachi_HDS723020BLA642_MN1220F30JBJ8D  ONLINE       0     0     0
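For scripting, the GUID of the failed vdev can be pulled straight out of this output. A minimal sketch, run here against a captured sample of the lines above (on a live system you would pipe `zpool status solaris` into the awk instead):

```shell
# Captured sample of the relevant config lines; on a real system use:
#   zpool status solaris | awk '$2 == "UNAVAIL" {print $1}'
status='            11552884637030026506                        UNAVAIL      0     0     0  was /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1
            ata-Hitachi_HDS723020BLA642_MN1221F308D55D  ONLINE       0     0     0'

# Print the first column of any line whose STATE column reads UNAVAIL
unavail=$(printf '%s\n' "$status" | awk '$2 == "UNAVAIL" {print $1}')
echo "$unavail"
```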

It says the unavailable disk used to be /dev/sdb1. After some investigation, I found that ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1 is just a symlink to /dev/sdb1, and it does exist:

 lrwxrwxrwx 1 root root 10 Jan 3 14:49 /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1221F308BR3D-part1 -> ../../sdb1 

If I check the SMART status with:

    # smartctl -H /dev/sdb
    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED

the disk is there. I can run fdisk on it, and everything else works.

If I try to detach it, like:

    zpool detach solaris 11552884637030026506
    cannot detach 11552884637030026506: only applicable to mirror and replacing vdevs

I've also tried using /dev/sdb, /dev/sdb1, and the long by-id name. Always the same error.

I can't replace it either, or seemingly do anything else. I even tried shutting down and restarting the computer, to no avail.

Unless I actually replace the hard drive itself, I can't see any solution.

Ideas?

[Update]

    # blkid
    /dev/mapper/q-swap_1: UUID="9e611158-5cbe-45d7-9abb-11f3ea6c7c15" TYPE="swap"
    /dev/sda5: UUID="OeR8Fg-sj0s-H8Yb-32oy-8nKP-c7Ga-u3lOAf" TYPE="LVM2_member"
    /dev/sdb1: UUID="a515e58f-1e03-46c7-767a-e8328ac945a1" UUID_SUB="7ceeedea-aaee-77f4-d66d-4be020930684" LABEL="q.heima.net:0" TYPE="linux_raid_member"
    /dev/sdf1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="9314525646988684217" TYPE="zfs_member"
    /dev/sda1: UUID="6dfd5546-00ca-43e1-bdb7-b8deff84c108" TYPE="ext2"
    /dev/sdd1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="1776290389972032936" TYPE="zfs_member"
    /dev/sdc1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="2569788348225190974" TYPE="zfs_member"
    /dev/sde1: LABEL="solaris" UUID="2024677860951158806" UUID_SUB="10515322564962014006" TYPE="zfs_member"
    /dev/mapper/q-root: UUID="07ebd258-840d-4bc2-9540-657074874067" TYPE="ext4"

After disabling mdadm and rebooting, the problem came back. I have no idea why sdb1 is marked as linux_raid_member. How do I clear it?
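Any device blkid flags as a stale md member can be filtered out of that output mechanically. A sketch against a captured sample of the lines above (on a live system, pipe `blkid` itself into the awk):

```shell
# Two captured blkid lines; on a real system use:
#   blkid | awk -F: '/linux_raid_member/ {print $1}'
blkid_out='/dev/sdb1: UUID="a515e58f-1e03-46c7-767a-e8328ac945a1" TYPE="linux_raid_member"
/dev/sdf1: LABEL="solaris" UUID="2024677860951158806" TYPE="zfs_member"'

# Keep only lines tagged linux_raid_member and print the device name
stale=$(printf '%s\n' "$blkid_out" | awk -F: '/linux_raid_member/ {print $1}')
echo "$stale"
```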

Just run zpool clear solaris and then post the results of zpool status -v.

It would be good to know the hardware involved and which controller you are using.


Edit

Looking at your blkid output, you have remnants of a previous Linux software RAID setup. You'll need mdadm --zero-superblock /dev/sdb1 to clear it.

After searching the internet, Server Fault, and Stack Overflow for a day and finding nothing, I asked this question, and the answer showed up in the related questions on the right. So I found the answer in this question:

Upgraded Ubuntu, all drives in one zpool marked as unavailable

For some reason, md had started running and brought up md0, and even though md0 didn't contain any disks (as shown in the error), it still causes this error.

So a simple

 mdadm --stop /dev/md0 

did the trick, and now my disks are resilvering again. Case closed.