I'm using rsnapshot on a Debian Wheezy server, recently upgraded from Squeeze. Since the upgrade, I've been getting the following error from the hourly cron job:
```
remote rm -rf /share/HDA_DATA/backup/rsnapshot/hourly.3
p1=-rf p2=/backup/rsnapshot/hourly.3/
remote cp -al /share/HDA_DATA/backup/rsnapshot/hourly.0 /share/HDA_DATA/backup/rsnapshot/hourly.1
p1=-al p2=/backup/rsnapshot/hourly.0
  Logical volume "rsnapshot" successfully removed
  Logical volume "rsnapshot" successfully removed
  Unable to deactivate open raid5-dl-real (254:4)
  Failed to resume dl.
----------------------------------------------------------------------------
rsnapshot encountered an error!  The program was invoked with these options:
/usr/bin/rsnapshot hourly
----------------------------------------------------------------------------
ERROR: Removal of LVM snapshot failed: 1280
```
Two of the LVM volumes are backed up correctly (`Logical volume "rsnapshot" successfully removed`), but then it reaches the volume `dl` in the VG `raid5` and fails with `Unable to deactivate open raid5-dl-real`.

My LVM snapshot is named `raid5/rsnapshot`. `raid5-dl-real` does not correspond to the snapshot name – the real device is `/dev/mapper/raid5-dl`.

So if this is the `dl` volume itself, why is LVM trying to deactivate it?
Note that originally it was a completely different volume doing this, so I removed that one from the backup. Now the problem has moved on to this one.
The rsnapshot log isn't very enlightening either:
```
[16/Jul/2013:17:26:26] /sbin/lvcreate --snapshot --size 512M --name rsnapshot /dev/raid5/dl
[16/Jul/2013:17:26:29] /bin/mount /dev/raid5/rsnapshot /mnt/lvm-snapshot
[16/Jul/2013:17:26:32] chdir(/mnt/lvm-snapshot)
[16/Jul/2013:17:26:32] /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded . /backup/rsnapshot/hourly.0/dl/
[16/Jul/2013:17:27:57] rsync succeeded
[16/Jul/2013:17:27:57] chdir(/root)
[16/Jul/2013:17:27:57] /bin/umount /mnt/lvm-snapshot
[16/Jul/2013:17:27:58] /home/share/scripts/rsnapshot_lvremove --force /dev/raid5/rsnapshot
[16/Jul/2013:17:29:02] /usr/bin/rsnapshot hourly: ERROR: Removal of LVM snapshot failed: 1280
[16/Jul/2013:17:29:02] rm -f /var/run/rsnapshot.pid
```
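For context, the snapshot handling in that log corresponds to rsnapshot's `linux_lvm_*` configuration options. A hypothetical rsnapshot.conf fragment matching the logged commands might look like this (the sizes, names, and paths are taken from the log above; the `backup` line and remaining values are assumptions – fields in rsnapshot.conf must be TAB-separated):

```
linux_lvm_cmd_lvcreate	/sbin/lvcreate
linux_lvm_cmd_lvremove	/home/share/scripts/rsnapshot_lvremove
linux_lvm_cmd_mount	/bin/mount
linux_lvm_cmd_umount	/bin/umount
linux_lvm_snapshotsize	512M
linux_lvm_snapshotname	rsnapshot
linux_lvm_vgpath	/dev
linux_lvm_mountpath	/mnt/lvm-snapshot
backup	lvm://raid5/dl/	dl/
```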
Any ideas?

Update – this has just started happening on a completely different server too. Same LVM problem.

One thing I have tried is redirecting the lvremove command to a script:
```bash
#!/bin/bash
sync
sleep 600
ls /dev/mapper/raid5-*-real
for i in /dev/mapper/raid5-*-real; do /sbin/dmsetup remove $i ; done
/sbin/lvremove --debug "$@"
```
So this syncs, sleeps for a while, then removes any `-real` device maps before attempting the lvremove.
Even after all of this, the removal usually still fails. Here is the output from rsnapshot; please ignore some of the errors, where one of the volumes was having problems, up until the point later on where the lvremove fails:
```
remote cp -al /share/HDA_DATA/backup/rsnapshot/hourly.0 /share/HDA_DATA/backup/rsnapshot/hourly.1
p1=-al p2=/backup/rsnapshot/hourly.0
  One or more specified logical volume(s) not found.
/dev/mapper/raid5-crypt-real
/dev/mapper/raid5-db-real
device-mapper: remove ioctl on raid5-crypt-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-db-real failed: Device or resource busy
Command failed
  Logical volume "rsnapshot" successfully removed
  One or more specified logical volume(s) not found.
/dev/mapper/raid5-crypt-real
/dev/mapper/raid5-db-real
/dev/mapper/raid5-db--var-real
device-mapper: remove ioctl on raid5-crypt-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-db-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-db--var-real failed: Device or resource busy
Command failed
  Logical volume "rsnapshot" successfully removed
  One or more specified logical volume(s) not found.
/dev/mapper/raid5-crypt-real
/dev/mapper/raid5-db-real
/dev/mapper/raid5-db--var-real
device-mapper: remove ioctl on raid5-crypt-real failed: Device or resource busy
Command failed
device-mapper: remove ioctl on raid5-db-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-db--var-real failed: No such device or address
Command failed
  /dev/raid5/rsnapshot: read failed after 0 of 4096 at 42949607424: Input/output error
  /dev/raid5/rsnapshot: read failed after 0 of 4096 at 42949664768: Input/output error
  /dev/raid5/rsnapshot: read failed after 0 of 4096 at 0: Input/output error
  /dev/raid5/rsnapshot: read failed after 0 of 4096 at 4096: Input/output error
  Logical volume "rsnapshot" successfully removed
  One or more specified logical volume(s) not found.
/dev/mapper/raid5-crypt-real
/dev/mapper/raid5-db-real
/dev/mapper/raid5-db--var-real
/dev/mapper/raid5-dl-real
device-mapper: remove ioctl on raid5-crypt-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-db-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-db--var-real failed: No such device or address
Command failed
device-mapper: remove ioctl on raid5-dl-real failed: Device or resource busy
Command failed
  Unable to deactivate open raid5-dl-real (254:25)
  Failed to resume dl.
----------------------------------------------------------------------------
rsnapshot encountered an error!  The program was invoked with these options:
/usr/bin/rsnapshot hourly
----------------------------------------------------------------------------
ERROR: Removal of LVM snapshot failed: 1280
```
In case it helps anyone, I was running into the problem described in Debian bug report 659762.

I identified the volume stuck in the suspended state with `dmsetup info` and reactivated it with `dmsetup resume`. That unblocked the LVM system.
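As a sketch of that recovery step: `dmsetup info` prints a `Name:` and `State:` field for each device-mapper device, so the suspended ones can be picked out with a small filter. The `find_suspended` helper name is my own, not part of the original answer:

```shell
#!/bin/bash
# find_suspended: read `dmsetup info` output on stdin and print the
# names of devices whose State is SUSPENDED.
# (Hypothetical helper; the Name:/State: fields are standard
# `dmsetup info` output.)
find_suspended() {
    awk '/^Name:/  { name = $2 }
         /^State:/ && $2 == "SUSPENDED" { print name }'
}

# Typical use (as root):
#   dmsetup info | find_suspended | while read -r dev; do
#       dmsetup resume "$dev"
#   done
```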