CRS-1606 & CRS-1604: Troubleshooting "CSSD voting file is offline"

2022-05-05

Scenario: the CRS stack would not start. Log analysis turned up the following.

1. Key entries in alertrac01.log:

[cssd(10580)]CRS-1606:CSSD Insufficient voting files available [0 of 1]. Details in /u01/app/oracle/product/10.2/crs_1/log/rac01/cssd/ocssd.log.
2016-10-14 10:08:03.907
[cssd(10693)]CRS-1605:CSSD voting file is online: /dev/raw/raw3. Details in /u01/app/oracle/product/10.2/crs_1/log/rac01/cssd/ocssd.log.
2016-10-14 10:08:03.931
[cssd(10693)]CRS-1604:CSSD voting file is offline: /dev/raw/raw3. Details in /u01/app/oracle/product/10.2/crs_1/log/rac01/cssd/ocssd.log.
2016-10-14 10:08:03.931

2. Detailed entries in ocssd.log:

[    CSSD]2016-10-14 10:08:01.689 [1910019744] >TRACE:   clssnmReadNodeInfo: added node 2 (rac02) to cluster
[    CSSD]2016-10-14 10:08:01.690 [1910019744] >TRACE:   clssnmInitNMInfo: Initialized with unique 1476410881
[    CSSD]2016-10-14 10:08:01.695 [1910019744] >TRACE:   clssNMInitialize: Initializing with OCR id (183835705)
[    CSSD]2016-10-14 10:08:01.705 [1132575040] >TRACE:   clssnm_skgxninit: Compatible vendor clusterware not in use
[    CSSD]2016-10-14 10:08:01.705 [1132575040] >TRACE:   clssnm_skgxnmon: skgxn init failed
[    CSSD]2016-10-14 10:08:01.712 [1910019744] >TRACE:   clssnmNMInitialize: misscount set to (60)
[    CSSD]2016-10-14 10:08:01.714 [1910019744] >TRACE:   clssnmNMInitialize: Network heartbeat thresholds are: impending reconfig 30000 ms, reconfig start (misscount) 60000 ms
[    CSSD]2016-10-14 10:08:01.723 [1910019744] >TRACE:   clssnmDiskStateChange: state from 1 to 2 disk (0//dev/raw/raw3)
[    CSSD]2016-10-14 10:08:01.723 [1132575040] >TRACE:   clssnmvDPT: spawned for disk 0 (/dev/raw/raw3)
[    CSSD]2016-10-14 10:08:03.856 [1132575040] >TRACE:   clssnmDiskStateChange: state from 2 to 4 disk (0//dev/raw/raw3)
[    CSSD]2016-10-14 10:08:03.865 [1143064896] >TRACE:   clssnmvKillBlockThread: spawned for disk 0 (/dev/raw/raw3) initial sleep interval (1000)ms
[    CSSD]2016-10-14 10:08:03.874 [1910019744] >TRACE:   clssnmFatalInit: fatal mode enabled
[    CSSD]2016-10-14 10:08:03.896 [1164044608] >TRACE:   clssnmClusterListener: Spawned
[    CSSD]2016-10-14 10:08:03.897 [1164044608] >TRACE:   clssnmClusterListener: Listening on (ADDRESS=(PROTOCOL=tcp)(HOST=rac01_priv)(PORT=49895))
[    CSSD]2016-10-14 10:08:03.897 [1164044608] >TRACE:   clssnmClusterListener: Probing node rac02 (2), probcon(0x14f72470)
[    CSSD]2016-10-14 10:08:03.907 [1174534464] >TRACE:   clssgmclientlsnr: Spawned
[    CSSD]2016-10-14 10:08:03.907 [1164044608] >TRACE:   clsc_send_msg: (0x14f71f00) NS err (12571, 12560), transport (530, 111, 0)
[    CSSD]2016-10-14 10:08:03.907 [1164044608] >TRACE:   clssnmInitialMsg: node 2, rac02, con (0x14f72470)
[    CSSD]2016-10-14 10:08:03.907 [1164044608] >TRACE:   clssnmInitialMsg: send failed node 2, rac02, con (0x14f72470), rc 3
[    CSSD]2016-10-14 10:08:03.907 [1164044608] >TRACE:   clssnmDiscHelper: rac02, node(2) connection failed, con (0x14f72470), probe(0x14f72470)
[    CSSD]2016-10-14 10:08:03.907 [1164044608] >TRACE:   clssnmeventhndlr: node(2), con(0x14f72470) failed, probe((nil))ninf->con ((nil)) CONNCOMPLETE
[    CSSD]2016-10-14 10:08:03.931 [1143064896] >ERROR:   clssnmvDiskKillCheck: voting disk corrupted (0x00000000,0x00000000) (0//dev/raw/raw3)
[    CSSD]2016-10-14 10:08:03.931 [1143064896] >TRACE:   clssnmDiskStateChange: state from 4 to 3 disk (0//dev/raw/raw3)
[    CSSD]2016-10-14 10:08:03.931 [1153554752] >TRACE:   clssnmDiskPMT: disk offline (0//dev/raw/raw3)
[    CSSD]2016-10-14 10:08:03.931 [1153554752] >ERROR:   clssnmDiskPMT: Aborting, 1 of 1 voting disks unavailable
[    CSSD]2016-10-14 10:08:03.931 [1132575040] >TRACE:   clssnmvReadDskHeartBeat: read ALL for Joining
[    CSSD]2016-10-14 10:08:03.931 [1153554752] >ERROR:   ###################################
[    CSSD]2016-10-14 10:08:03.931 [1153554752] >ERROR:   clssscExit: CSSD aborting
[    CSSD]2016-10-14 10:08:03.931 [1153554752] >ERROR:   ###################################

From these logs we narrowed the problem down to the voting disk. However, node 2 showed no problems at all, and the voting disk itself was still present. With raw-device binding, device names can indeed change after a reboot, but we had already bound the disks by ID to fixed device names at the Linux level, so that scenario was ruled out here. In the end we decided that simply rebooting both nodes might resolve the problem. As it happened, another, less critical RAC cluster had hit a similar issue. After rebooting both RAC nodes, the problem was resolved.

Takeaway: 1) Even when the logs point at a problem with the underlying Linux device file, that does not necessarily mean the file is actually corrupted. Rebooting the affected server can clear many such problems; much like a MySQL deadlock issue we hit years ago, a reboot proved far more effective than ruling out suspects one by one.
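On the point about binding disks by ID at the Linux level: for reference, one common way to make a raw binding survive reboots on RHEL 5-era systems is a udev rule that matches the disk's SCSI WWID and recreates the binding at boot. This is only a sketch, the WWID below is a placeholder, and the exact `scsi_id` options vary between distribution releases:

```
# /etc/udev/rules.d/60-raw.rules (sketch; WWID is a placeholder)
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d /dev/%k", \
  RESULT=="36000PLACEHOLDERWWID", RUN+="/bin/raw /dev/raw/raw3 %N"

# Ownership and permissions for the raw node used by CSSD
ACTION=="add", KERNEL=="raw3", OWNER="oracle", GROUP="oinstall", MODE="0660"
```

Binding by WWID rather than by kernel name (sdc, sdd, ...) is exactly what prevents the device-name drift mentioned above.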
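Before resorting to a reboot, the block-level readability that CSSD complained about (clssnmvDiskKillCheck) can be sanity-checked from the OS. The sketch below is a minimal pre-check, assuming the device path /dev/raw/raw3 from the log; on a live node it should be run as root, and cross-checked with `crsctl query css votedisk` (run as the clusterware owner) and `raw -qa` for the binding table.

```shell
#!/bin/sh
# Minimal readability pre-check for a voting-disk device.
# The path /dev/raw/raw3 is taken from the ocssd.log above; adjust as needed.
check_votedisk() {
    dev="$1"
    if [ ! -e "$dev" ]; then
        echo "MISSING: $dev"
        return 1
    fi
    # Try to read the first 512-byte block, roughly the kind of read
    # CSSD's disk heartbeat needs to succeed.
    if dd if="$dev" of=/dev/null bs=512 count=1 2>/dev/null; then
        echo "READABLE: $dev"
    else
        echo "UNREADABLE: $dev"
        return 1
    fi
}

# Run as root on the affected node.
check_votedisk /dev/raw/raw3 || echo "voting disk pre-check failed"
```

If the device reads cleanly here while CSSD still reports CRS-1604, the cause is more likely stale device or clusterware state than real on-disk corruption, which is consistent with a reboot clearing the problem.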
