I have built a search head cluster in our Splunk 6.3 environment. Distributed searches, app rollout, and general searches all work fine until one specific member of the SHC is elected captain.
What would cause the cluster to work fine, but fail when this one specific host is elected captain?
The working status of the shcluster looks like this:
**Captain**:
dynamic_captain : 1
elected_captain : Mon Oct 26 18:47:19 2015
id : .....406DD1B10D4F
initialized_flag : 1
label : workingSH1.sh
maintenance_mode : 0
mgmt_uri : https://workingSH1.sh:8089
min_peers_joined_flag : 1
rolling_restart_flag : 0
service_ready_flag : 1
**Members**:
*problemcaptain.sh*
label : problemcaptain.sh
mgmt_uri : https://problemcaptain.sh:8089
mgmt_uri_alias : https://10.1.1.2:8089
status : Up
*workingSH1.sh*
label : workingSH1.sh
mgmt_uri : https://workingSH1.sh:8089
mgmt_uri_alias : https://10.1.2.1:8089
status : Up
*workingSH2.sh*
label : workingSH2.sh
mgmt_uri : https://workingSH2.sh:8089
mgmt_uri_alias : https://10.1.3.2:8089
status : Up
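For reference, the snapshot above (and the failing one below) was taken with Splunk's cluster status command, run from a cluster member; the path and credentials here are placeholders, adjust for your environment:

```shell
# Show SHC captain and member status from any cluster member
# ($SPLUNK_HOME and admin credentials are assumptions, not from the original post)
$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme
```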
The system works fine in this state: reports, alerts, and manual searches all run, and deployment of new apps (or changes to existing ones) from the deployer works as expected.
The cluster fails when the *problemcaptain.sh* host is elected captain. problemcaptain.sh then no longer shows up as a member in the status output, appearing only as the captain:
**Captain**:
dynamic_captain : 1
elected_captain : Mon Oct 26 18:27:32 2015
id : .....406DD1B10D4F
initialized_flag : 0
label : problemcaptain.sh
maintenance_mode : 0
mgmt_uri : https://problemcaptain.sh:8089
min_peers_joined_flag : 0
rolling_restart_flag : 0
service_ready_flag : 0
**Members**:
*workingSH1.sh*
label : workingSH1.sh
mgmt_uri : https://workingSH1.sh:8089
mgmt_uri_alias : https://10.1.2.1:8089
status : Up
*workingSH2.sh*
label : workingSH2.sh
mgmt_uri : https://workingSH2.sh:8089
mgmt_uri_alias : https://10.1.3.2:8089
status : Up
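Note that in the failing snapshot, `initialized_flag`, `min_peers_joined_flag`, and `service_ready_flag` have all flipped from 1 to 0. Since the `key : value` format of the output is regular, the two snapshots can be diffed programmatically; a minimal sketch (the `parse_status` helper is hypothetical, not part of Splunk):

```python
def parse_status(text):
    """Parse 'key : value' lines from `splunk show shcluster-status` output into a dict."""
    status = {}
    for line in text.splitlines():
        if " : " in line:
            key, _, value = line.partition(" : ")
            status[key.strip()] = value.strip()
    return status

# Captain flags from the working snapshot (workingSH1.sh as captain)
working = parse_status("""\
initialized_flag : 1
min_peers_joined_flag : 1
service_ready_flag : 1
""")

# Captain flags from the failing snapshot (problemcaptain.sh as captain)
failing = parse_status("""\
initialized_flag : 0
min_peers_joined_flag : 0
service_ready_flag : 0
""")

# Report every key whose value differs between the two states
changed = {k: (working[k], failing[k]) for k in working if failing.get(k) != working[k]}
print(changed)
# → {'initialized_flag': ('1', '0'), 'min_peers_joined_flag': ('1', '0'), 'service_ready_flag': ('1', '0')}
```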