I have the following setup:
Oracle Solaris 10 – 5.10 Generic_147147-26 sun4v sparc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
Oracle Solaris Cluster 3.3u2 for Solaris 10 sparc
Oracle Solaris Cluster Geographic Edition 3.3u2 for Solaris 10 sparc
I installed Oracle Solaris 10 on ZFS, and I have a dedicated pool for /oradata. Whenever I restart/reboot the cluster, the ZFS pool is gone, and because of that the cluster cannot start the Oracle database resource/group. After every restart/shutdown of the cluster I have to do this manually:

zpool import db
clrg online ora-rg
...

What could be the cause?
The only thing I know of is the db zpool; this pool is imported by the ora-has resource, which I created as shown below (using the Zpools option):
# /usr/cluster/bin/clresourcegroup create ora-rg
# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus
# /usr/cluster/bin/clresource create -g ora-rg -t SUNW.HAStoragePlus -p Zpools=db ora-has

# zpool status db
  pool: db
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        db          ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0

errors: No known data errors

Booting in cluster mode
impdneilab1 console login:
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1 (nodeid = 1) with votecount = 1 added.
Apr 21 17:12:24 impdneilab1 sendmail[642]: My unqualified host name (impdneilab1) unknown; sleeping for retry
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1: attempting to join cluster.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Cluster has reached quorum.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1 (nodeid = 1) is up; new incarnation number = 1429629142.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Cluster members: impdneilab1.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: node reconfiguration #1 completed.
Apr 21 17:12:24 impdneilab1 cl_runtime: NOTICE: CMM: Node impdneilab1: joined cluster.
Apr 21 17:12:24 impdneilab1 in.mpathd[262]: Successfully failed over from NIC nxge1 to NIC e1000g1
Apr 21 17:12:24 impdneilab1 in.mpathd[262]: Successfully failed over from NIC nxge0 to NIC e1000g0
obtaining access to all attached disks
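To see what state the cluster actually leaves things in after a reboot, it can help to check the pool and the resource group before importing anything by hand. A minimal sketch, using the `db` pool and `ora-rg`/`ora-has` names from above (run as root on the cluster node):

```shell
# Is the pool visible to ZFS at all, or merely not imported?
zpool list db 2>/dev/null || zpool import   # "zpool import" with no args lists importable pools

# What does the cluster think the state of the resource group and resource is?
/usr/cluster/bin/clresourcegroup status ora-rg
/usr/cluster/bin/clresource status ora-has

# Show the resource group's properties, including whether it is allowed
# to start automatically when the cluster (re)forms.
/usr/cluster/bin/clresourcegroup show -v ora-rg
```

If the pool shows up under `zpool import` as exportable/importable and `ora-rg` is offline with no attempt to start it, the problem is at the resource-group level (it never called HAStoragePlus to import the pool), not at the ZFS level.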
Folks, I found my answer:
https://community.oracle.com/thread/3714952?sr=inbox
In a Geographic Edition configuration, this behavior of a single-node cluster is expected:

If the entire cluster goes down, the expected behavior is that Geographic Edition stops the protection groups on the local cluster at boot time. The reason is that a takeover may already have been initiated, or the storage/data may be incomplete or unavailable (if the primary site suffered a total failure, the fact that the cluster nodes have come back up does not mean the storage/data is intact and ready to resume the role that site had before the failure). That is why we require auto_start_on_new_cluster=false on the application RGs that are added to a protection group. After a cluster reboot, the user needs to intervene and start up, or perform the failback procedure, as appropriate.