I am trying to learn DRBD with VirtualBox and CentOS 6.3. I have two VMs configured, node1 and node2. I copied a file to the mount point /data (which is /dev/drbd0) on node1, but it is not reflected in /data on node2.
Here is the configuration:
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

global {
    # do not participate in online usage survey
    usage-count no;
}

resource data {
    # write IO is reported as completed if it has reached both local
    # and remote disk
    protocol C;

    net {
        # set up peer authentication
        cram-hmac-alg sha1;
        shared-secret "s3cr3tp@ss";
        # default value 32 - increase as required
        max-buffers 512;
        # highest number of data blocks between two write barriers
        max-epoch-size 512;
        # size of the TCP socket send buffer - can tweak or set to 0 to
        # allow kernel to autotune
        sndbuf-size 0;
    }

    startup {
        # wait for connection timeout - boot process blocked
        # until DRBD resources are connected
        wfc-timeout 30;
        # WFC timeout if peer was outdated
        outdated-wfc-timeout 20;
        # WFC timeout if this node was in a degraded cluster (ie only had one
        # node left)
        degr-wfc-timeout 30;
    }

    disk {
        # the next two are for safety - detach on I/O error
        # and set up fencing - resource-only will attempt to
        # reach the other node and fence via the fence-peer
        # handler
        #on-io-error detach;
        #fencing resource-only;

        # no-disk-flushes;   # if we had battery-backed RAID
        # no-md-flushes;     # if we had battery-backed RAID

        # ramp up the resync rate
        # resync-rate 10M;
    }

    handlers {
        # specify the two fencing handlers
        # see: http://www.drbd.org/users-guide-8.4/s-pacemaker-fencing.html
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }

    # first node
    on node1 {
        # DRBD device
        device /dev/drbd0;
        # backing store device
        disk /dev/sdb;
        # IP address of node, and port to listen on
        address 192.168.1.101:7789;
        # use internal meta data (don't create a filesystem before
        # you create metadata!)
        meta-disk internal;
    }

    # second node
    on node2 {
        # DRBD device
        device /dev/drbd0;
        # backing store device
        disk /dev/sdb;
        # IP address of node, and port to listen on
        address 192.168.1.102:7789;
        # use internal meta data (don't create a filesystem before
        # you create metadata!)
        meta-disk internal;
    }
}
Here is cat /proc/drbd:
cat: /proc/data: No such file or directory
[root@node1 /]# cat /proc/drbd
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2013-09-27 16:00:43
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:543648 nr:0 dw:265088 dr:280613 al:107 bm:25 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:7848864
        [>...................] sync'ed:  6.5% (7664/8188)M
        finish: 7:47:11 speed: 272 (524) K/sec
I copied a file into /data on node1, but I cannot find that file in /data on node2. Can anyone help?
DRBD status on node1:
[root@node1 /]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2013-09-27 16:00:43
m:res   cs          ro                 ds                     p  mounted  fstype
0:data  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C  /data    ext3
...     sync'ed:    8.1%               (7536/8188)M
Correct me if I'm wrong, but IIRC you can only mount the FS on one node at a time. Let them finish syncing, unmount /data, switch the roles, mount it on node2, and you should see all the data.
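For example, a minimal sketch of that switchover, assuming the resource is named data (as in the configuration above) and the initial sync has completed:

[root@node1 /]# umount /data
[root@node1 /]# drbdadm secondary data

[root@node2 /]# drbdadm primary data
[root@node2 /]# mkdir -p /data
[root@node2 /]# mount /dev/drbd0 /data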
DRBD stands for Distributed Replicated Block Device. It is not a file system.
When you write a file on the primary node, the file system issues the writes. On the layer below, DRBD makes sure those writes are replicated to the secondary node. To the secondary node, these writes are just blocks of data. To see the files, you normally have to unmount the partition on the primary node and mount it on the secondary node.
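If you only want to confirm that the blocks are arriving on node2 without switching roles, you can check the resource state there, for example (using the resource name data from the question):

[root@node2 /]# cat /proc/drbd
[root@node2 /]# drbdadm role data     # should report Secondary/Primary
[root@node2 /]# drbdadm dstate data   # should report UpToDate/UpToDate once the sync has finished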
There is a solution nonetheless. For that you need a cluster file system. Such a file system allows you to mount the partition on both nodes at the same time; with common file systems such as ext4, this is not possible.
One example of such a cluster file system that works on top of DRBD is OCFS2. To use this file system and mount the partition on both servers simultaneously, your DRBD resource needs to be configured for dual-primary mode. That means there is no single primary node: both nodes can write to the resource at the same time, and the cluster file system makes sure the written data stays consistent.
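As a rough sketch only (not a complete, tested configuration), dual-primary mode in DRBD 8.3 is enabled in the resource definition roughly like this; only the additions relevant to dual-primary are shown:

resource data {
    net {
        # allow both nodes to be primary at the same time
        allow-two-primaries;
        # automatic split-brain recovery policies are strongly recommended
        # when running dual-primary
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    startup {
        # promote both nodes to primary when DRBD starts
        become-primary-on both;
    }
}

After changing the configuration you would typically run drbdadm adjust data on both nodes, promote both with drbdadm primary data, create the OCFS2 file system once with mkfs.ocfs2, set up the o2cb cluster stack, and then mount /dev/drbd0 on both nodes. Keep in mind that dual-primary without working fencing is an easy way to end up with a split brain.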