**1) [Indexer Cluster] What does the following error message mean?**
1-16-2015 11:14:48.129 -0500 WARN CMMaster - event=removePeerBuckets peer=3611AD96-B6BB-4B66-BDC0-9A09442F718F peer_name=index19 bid=xenopsychology~5~3611AD96-B6BB-4B66-BDC0-9A09442F718F msg="Bucket is not on any other peer! Removing it."
What exactly does that mean?
**Here is information on it:**
**"CMMaster event=removePeerBuckets"** -> the peer is being removed from the cluster, either because 1) it is transitioning to Down (missed heartbeats, shutdown, etc.) or 2) it is being re-added (the peer received a re-add command to resync with the cluster master).
**"Bucket is not on any other peer..."** means that while the CM is removing the peer from the cluster, it is also removing that peer's buckets from its map. For this specific bucket, the removed peer held the last known copy, so after this point the CM no longer knows about the bucket.
If the removePeerBuckets event came from a re-add, you will often see the bucket added back very shortly after this line, because event=removePeerBuckets is immediately followed by an event=addPeer when the peer re-adds itself.
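As a side note, the `bid` field in the message encodes `<index>~<local bucket id>~<origin peer GUID>`. The sketch below (illustrative only, not a Splunk tool) parses the quoted WARN line into its key=value fields and splits the bucket ID, which can be handy when correlating these events across logs:

```python
import re

# The WARN line from the cluster master (CMMaster) quoted above.
line = ('1-16-2015 11:14:48.129 -0500 WARN CMMaster - event=removePeerBuckets '
        'peer=3611AD96-B6BB-4B66-BDC0-9A09442F718F peer_name=index19 '
        'bid=xenopsychology~5~3611AD96-B6BB-4B66-BDC0-9A09442F718F '
        'msg="Bucket is not on any other peer! Removing it."')

# Pull out key=value pairs; values are either quoted strings or bare tokens.
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))

# bid has the form <index>~<local bucket id>~<origin peer GUID>.
index_name, local_id, origin_guid = fields['bid'].split('~')

print(fields['event'])                # removePeerBuckets
print(index_name)                     # xenopsychology
print(origin_guid == fields['peer'])  # True: the removed peer originated this bucket
```

Here the origin GUID in the bucket ID matches the `peer` field, i.e. the peer being removed is the one that originally created the bucket.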
**2) [SHC] We are now seeing the following warning every few minutes.**
11-20-2015 12:08:16.550 -0800 WARN ConfReplicationThread - Error pushing configurations to captain=https://iosplunkprd-v33.capgroup.com:8089, consecutiveErrors=1: Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=7ed0975e094de48ed178a98b9ab025a38d63f17a; current_baseline_op_id=dc2670cde60d738e1c21a024434f39b727e3de24
Would re-bootstrapping help?
**Here is information on it:**
It will help to check whether configuration replication is actually failing. A transient acceptPush error can be part of normal operation during resolution of "merge conflicts".
It shouldn't need a destructive resync if consecutiveErrors=1. You should only consider a destructive resync if the banner is showing, i.e. if consecutiveErrors==&lt;some big number&gt;.
A question was raised: would consecutiveErrors=700 count as a big number? 700 is big enough to merit consideration of a resync, assuming that configuration replication is continuing to fail; but if changes have been made and configuration replication is now working, there is no reason to resync.
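The rule of thumb above (a destructive resync, performed with `splunk resync shcluster-replicated-config` on the affected member, is warranted only when errors have piled up AND replication is still failing) can be sketched as a small decision helper. This is illustrative only; the function name and the 700 threshold are assumptions drawn from the discussion, not part of Splunk:

```python
def should_destructive_resync(consecutive_errors: int,
                              replication_now_working: bool,
                              threshold: int = 700) -> bool:
    """Illustrative rule of thumb from the discussion above.

    Consider a destructive resync only when errors have accumulated
    to banner level AND configuration replication is still failing.
    """
    if replication_now_working:
        # Replication recovered after changes were made: no reason to resync.
        return False
    return consecutive_errors >= threshold

# A single transient acceptPush error during merge-conflict resolution: leave it alone.
print(should_destructive_resync(1, False))    # False
# 700 consecutive errors and still failing: a resync merits consideration.
print(should_destructive_resync(700, False))  # True
# 700 errors historically, but replication now works: no resync needed.
print(should_destructive_resync(700, True))   # False
```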