What are the SIDs on all the nodes? Is this really the same database? For example, you can have a 3-node cluster with a single ASM running on all the nodes, but database A clustered on node1/node2 while database B runs only on node 3.
Database B is then still started and stopped by the Clusterware, but it is not clustered. You can also check the output from:
ps -ef | grep -e lmd
The lmd process is started on clustered instances only.
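For illustration, this is roughly what the check looks like on a RAC node (the instance name ORCL1 and the PID are made-up examples; on a non-clustered instance the grep returns nothing):

```shell
# Filter the process list for the global enqueue service daemon (LMD);
# the trailing grep removes the grep process itself from the output.
ps -ef | grep -e lmd | grep -v grep
# On a clustered instance you would see a background process named like:
#   oracle   4711     1  0 10:02 ?  00:00:03 ora_lmd0_ORCL1
```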
EDITED: maybe I understand now. You do not have a RAC database. RAC is an active-active cluster, i.e. database instances run on multiple nodes at the same time.
You have standalone instances guarded by Oracle Clusterware, so you have an active-passive failover cluster, as described here: Using Oracle Clusterware to Protect
a Single Instance. Then you do NOT need the parameter cluster_database set to true; it only applies to RAC databases.
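A quick way to confirm this is to look at the parameter itself; a minimal sketch, assuming you can connect as SYSDBA on the node (on an active-passive setup the value should be FALSE):

```shell
# Show the cluster_database parameter from SQL*Plus (silent mode).
sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER cluster_database
EOF
```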
Check if autostart for Oracle Restart is enabled:
$ cat /etc/oracle/scls_scr/$HOSTNAME/oracle/ohasdstr
enable
If it is not enabled, then enable it:
crsctl enable has
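After enabling it, you can verify that the stack reports healthy; `crsctl check has` is a standard check for Oracle Restart (the exact message wording can vary between releases):

```shell
# Verify that Oracle High Availability Services is up after enabling autostart.
crsctl check has
# Expected on a healthy node (message text may differ slightly by version):
#   CRS-4638: Oracle High Availability Services is online
```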
Check if ASM autostart is enabled:
crsctl stat res ora.asm
If ASM is not registered in GI, add it with:
srvctl add asm ...
Check if used diskgroups are registered:
srvctl status diskgroup -g DATA
crsctl stat res ora.DATA.dg
If they are not registered, add them with:
srvctl add diskgroup ...
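As a hedged example, registering and starting a diskgroup could look like this (the name DATA is illustrative, and `-g` is the 11.2-style flag; newer releases also accept the long `-diskgroup` spelling):

```shell
# Register the diskgroup with Oracle Restart, then start and verify it.
srvctl add diskgroup -g DATA
srvctl start diskgroup -g DATA
srvctl status diskgroup -g DATA
```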
Check if the database is registered and autostart is enabled:
srvctl config database -d ORCL
If it is not, register and enable:
srvctl add database ...
srvctl enable database ...
Finally make sure you define the used ASM diskgroups as dependencies:
srvctl modify database -d ORCL -diskgroup "DATA,FRA"
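Putting the database steps together, a sketch of the full registration might look as follows; ORCL, the Oracle home path, and the diskgroup names are placeholders for your environment, and the short flags (`-d`, `-o`, `-a`, `-y`) are the 11.2-style spellings:

```shell
# Register the database with Oracle Restart, declare its ASM diskgroup
# dependencies, and request automatic startup.
srvctl add database -d ORCL \
    -o /u01/app/oracle/product/11.2.0/dbhome_1 \
    -a "DATA,FRA" -y AUTOMATIC
srvctl enable database -d ORCL
srvctl config database -d ORCL   # verify the stored configuration
```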
You can also check the listener:
srvctl config listener
If it does not exist, you can add it with:
srvctl add listener ...
You do not need to start or stop anything manually with sqlplus or lsnrctl. Oracle Restart takes care of that based on the defined start/stop options and dependencies.
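Once everything is registered, you can check the overall resource state in one go; `crsctl stat res -t` prints a table of all registered resources and their current/target states:

```shell
# Tabular overview of all resources managed by Oracle Restart / Clusterware.
crsctl stat res -t
# Status of a single database (ORCL is an example name):
srvctl status database -d ORCL
```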
What is node eviction?
The process of removing a failed node (failed due to various reasons) from the cluster is known as eviction. Prior to 11gR2, Oracle tried to prevent a split-brain situation by quickly rebooting the failed node. From 11gR2 onwards, the Clusterware will first attempt to clean up the failed resources. If the Clusterware is able to clean them up, OHASD will try to restart the CRS stack, and once that is done all the cluster resources on that node are started automatically. This is called rebootless fencing (or eviction). If the Clusterware cannot stop or clean up the failed resources, it will reboot the node.
Causes of node eviction
-Missing network heartbeat
-Missing disk heartbeat
-CPU starvation issues
-Hanging cluster processes
-May have more...
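To find out which of these causes applied on your system, the CSS daemon log is the usual starting point; a minimal sketch, assuming the 11gR2 log layout under the Grid home (the `$GRID_HOME` variable and the paths are assumptions for illustration):

```shell
# Search the Cluster Synchronization Services log for eviction messages.
grep -i "evict" "$GRID_HOME/log/$(hostname)/cssd/ocssd.log"
# The clusterware alert log on the same node is also worth checking.
grep -i "evict" "$GRID_HOME/log/$(hostname)/alert$(hostname).log"
```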
The same applies as mentioned above.
No, Oracle Clusterware decides that on its own.
If you want to learn more about it, you can google the term 'Rebootless Node Fencing or Eviction'. I promise you will find plenty of material to carry on with.