Channel: Questions in topic: "search-head-clustering"

Can you help me with my Search Head Cluster setup error: "Cannot start a Captain"?

I built a brand new Splunk environment (on 7.2.1) and am getting an error when attempting to set up the SH cluster, specifically when starting the cluster captain for the first time.

I started the process on the deployer and added the following stanza/values to the /etc/system/local/server.conf file, then restarted the deployer:

    [shclustering]
    pass4SymmKey = myPassword
    shcluster_label = myClusterName

I confirmed that the plain-text password I typed in is now encrypted (not in plain text).

I ran the SH cluster init command on all (3) SH members and restarted them, with no errors:

    /opt/splunk/bin/splunk init shcluster-config -auth admin:myPassword -mgmt_uri https://myDeploymentServer:8089 -replication_port 34567 -replication_factor 3 -conf_deploy_fetch_url https://myDeployer:8089 -shcluster_label myClusterName

I then attempted to start a SH captain (I just picked one of the SH members) and ran this command:

    /opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://mySearchHead1:8089,https://mySearchHead2:8089,https://mySearchHead3:8089" -auth admin:myPassword

... and I get this error message:

    uri=https://myDeploymentServer:8089/services/shcluster/member/consensus/pseudoid/last_known_state?output_mode=json, error=401 - Unauthorized. Is this member using the same pass4SymmKey as other members?

Interestingly, it appears to be coming from the deployment server, and I know that the myPassword value is correct. I use that one password all over the place when connecting to the deployment server, setting up the index cluster, etc. I noticed that there is a pass4SymmKey under 2 stanzas: [general] and [shclustering]. Does that matter? Any help would be much appreciated. Thank you! Joe
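For context on that last question: the two keys serve different roles. In server.conf, the pass4SymmKey under [general] secures general splunkd-to-splunkd communication (for example with a license master), while the one under [shclustering] is the shared secret the SHC members and the deployer use to authenticate each other, so that value must be identical on every member and on the deployer. A minimal sketch of how the two stanzas can coexist (the secrets shown are placeholders):

    # server.conf (on every SHC member and on the deployer)
    [general]
    # general splunkd-to-splunkd authentication; does not need to match the SHC secret
    pass4SymmKey = someOtherSecret

    [shclustering]
    # must be identical on every SHC member and on the deployer
    pass4SymmKey = myPassword
    shcluster_label = myClusterName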

How do you install Splunk IT Service Intelligence 4.0 on a search head and connect that search head to the production environment?

Hello. On a new server, we have been asked to install ITSI 4.0 and connect that search head to the production environment. Our production environment has an ITSI search head cluster.
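Not an ITSI-specific answer, but for the "connect to production" part, a search head is usually attached to the production indexers in one of two ways, depending on whether they are standalone peers or an indexer cluster. A rough sketch with placeholder hostnames and credentials:

    # standalone production indexers: add each one as a search peer
    /opt/splunk/bin/splunk add search-server https://prod-indexer1:8089 -auth admin:localPassword -remoteUsername admin -remotePassword remotePassword

    # production indexer cluster: attach the search head to the cluster master instead
    /opt/splunk/bin/splunk edit cluster-config -mode searchhead -master_uri https://cluster-master:8089 -secret clusterSecret
    /opt/splunk/bin/splunk restart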

Splunk IT Service Intelligence (ITSI) migration to a new search head cluster (SHC) from an old SHC

We want to migrate ITSI from one search head cluster to another search head cluster. We don’t want to uninstall ITSI on the current/primary working cluster until we know the new search head cluster is functioning. Is there an easy way to “disable” ITSI on the current/primary cluster so that it doesn’t continue and produce duplicate data? After we have completely migrated ITSI to the new search head cluster, we will uninstall ITSI from the old one. Just trying to make sure we don’t have 2 ITSI environments up and running at the same time.

Why is our search head cluster scheduler failing following deployment or rolling restart?

We have a problem with the scheduler failing following a search head cluster (SHC) deployment, which is resolved only if we manually change the captain after the deployment. This is not an ideal solution, and we want to sort out the root cause.

Following last night's deployment, we saw the following sequence of events (mostly from the debug logs): the SHC rolling restart begins, all peers are told to close down their searches in turn, and the restarts complete normally with no errors. Then the captain tells the peers to remove artifacts:

    DEBUG SHCMaster - remove artifact aid=scheduler~

Most work fine, but two fail with the following errors:

    DEBUG SHCMaster - event=SHPMaster::asyncReplicationArtifact sid=154~ status=failed msg=sid is not an artifact but a remote search job
    DEBUG SHCMaster - event=SHPMaster::asyncReplicationArtifact aid=154~ status=failed msg="Could not find artifact or sid"

From then on, the scheduler keeps repeating these errors, and no scheduled searches, accelerations, alerts, etc. run until the captain is transferred. I couldn't tell you if this is a symptom or a cause. I can hazard a guess that something went wrong with those searches, but what? And how do we stop it from happening?
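For reference, the manual captain change described above can be done with the transfer command from a cluster member; the -mgmt_uri points at the member that should take over captaincy (hostname and credentials here are placeholders):

    # make mySearchHead2 the new captain
    /opt/splunk/bin/splunk transfer shcluster-captain -mgmt_uri https://mySearchHead2:8089 -auth admin:myPassword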

How come our indexes are missing in our search head cluster?

We have a Search Head Cluster. The three search head cluster (SHC) members have the Indexers listed in the Search Peer. Everything looks good configuration wise but none of our existing indexes (we had a standalone search head we are using while we set up the SHC) are available to select when accessing a role or creating a new role. I thought there was an article about this in Splunk Answers yesterday, but I could not find it again. Any help is appreciated. Thx

Is there an alternative to having three members in a search head cluster?

I have deployed a Splunk Search Head Cluster with two search head members and a deployer. I read in [Captain election process has deployment implications][1] that a cluster must consist of a minimum of three members to participate in the dynamic captain election process. If I don't have the option to add a third member, and a cluster cannot function without a captain, can I use the "static captain" option to overcome the problem? Or are there any better alternatives or workarounds for this issue?

  [1]: https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/SHCarchitecture
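For what it's worth, the static captain route does exist for this situation; the docs describe it primarily as a recovery mechanism, but the commands are straightforward. A sketch, assuming mySearchHead1 should be the fixed captain (hostnames and credentials are placeholders):

    # on the member that should be the static captain
    /opt/splunk/bin/splunk edit shcluster-config -mode captain -captain_uri https://mySearchHead1:8089 -election false -auth admin:myPassword

    # on the other member
    /opt/splunk/bin/splunk edit shcluster-config -mode member -captain_uri https://mySearchHead1:8089 -election false -auth admin:myPassword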

How do you migrate searches, dashboards, etc. from a standalone search head to a new search head cluster?

Can I move the /splunk/etc/apps/search/local folder to the deployer's shcluster/apps/search/local folder (and then push the package)? Reading the documentation, it seems this would be a bad idea, but I want to move savedsearches.conf, macros, etc. Or would it be a better idea to copy these folders to the search heads individually and restart Splunk? This is from the docs:

> **Caution:** Do not use the deployer to push default apps, such as the search app, to the cluster members. In addition, make sure that no app in the configuration bundle has the same name as a default app. Otherwise, it will overwrite that app on the cluster members. For example, if you create an app called "search" in the configuration bundle, it will overwrite the default search app when you push it to the cluster members.
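One hedged way to stay within that caution: instead of touching the search app on the deployer, wrap the standalone search head's local knowledge objects in a custom app with a different name and push that. The app name and source paths below are illustrative only:

    # on the deployer: wrap the old standalone configs in a new app (not named "search")
    mkdir -p $SPLUNK_HOME/etc/shcluster/apps/migrated_search_content/local
    cp /path/from/standalone/etc/apps/search/local/savedsearches.conf \
       /path/from/standalone/etc/apps/search/local/macros.conf \
       $SPLUNK_HOME/etc/shcluster/apps/migrated_search_content/local/

    # push the bundle to any SHC member
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://mySearchHead1:8089 -auth admin:myPassword

Keep in mind that the deployer merges an app's local settings into default on the cluster members, so users can still override them locally afterward.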

Can you help me with my Amazon Web Services ELB and search head cluster issues?

I have a Splunk 7.1.2 cluster, using a search head cluster with an AWS load balancer. It works fine. The server.conf says:

    [settings]
    httpport = 443
    enableSplunkWebSSL = true
    privKeyPath = /path/to/mycert.key
    caCertPath = /path/to/mycert.pem

Now I'm deploying a brand new cluster on version 7.2.3, with the same server.conf, but the load balancer doesn't recognize the instances as healthy. In splunkd.log, for every check from the load balancer, which is a GET on *https://splunkhostIP/en-US/account/login?return_to=%2Fen-US%2F*, I receive these two messages:

    01-30-2019 21:27:18.107 +0000 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='handshake failure'.
    01-30-2019 21:27:18.107 +0000 WARN HttpListener - Socket error from 172.16.77.204:3955 while idling: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher

The IP in the message is the load balancer's internal IP, calling the instance for a health check. The old search head cluster instances don't show these same warning messages. The old cluster has the exact same setup except for the Splunk version. The certificate file is the same for both, and they behave exactly alike: calling them in the browser by name works, since the certificate is DigiCert-signed, and calling them by IP they complain about the certificate, but when I accept the "unsafe" warning they behave the same. I saw some issues with the same warning messages, but those issues are not like mine. I really appreciate any help.
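Not a definitive diagnosis, but "no shared cipher" during the health check generally means the load balancer's client hello offers only protocols/ciphers that Splunk Web is not configured to accept, and the TLS defaults have tightened across Splunk releases. It may be worth comparing the sslVersions and cipherSuite settings (web.conf, [settings] stanza) between the working 7.1.2 instances and the new 7.2.3 ones; the values below are illustrative placeholders, not recommendations:

    # web.conf
    [settings]
    # compare these between the 7.1.2 and 7.2.3 deployments
    sslVersions = tls1.1, tls1.2
    cipherSuite = <openssl cipher string that the ELB health check can negotiate>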

How do you calculate max search concurrency in a search head cluster and an indexer cluster environment?

I know how to calculate max search concurrency on a standalone instance:

    normal search           : max_hist_searches = max_searches_per_cpu (* default is 1) * cores + base_max_searches (* default is 6)
    normal real-time search : max_realtime_searches = max_rt_search_multiplier (* default is 1) * max_hist_searches
    saved search            : max_hist_scheduled_searches = max_searches_perc (* default is 50)/100 * max_hist_searches
    saved real-time search  : max_realtime_scheduled_searches = max_searches_perc (* default is 50)/100 * max_realtime_searches

But how would I calculate it for an environment like the one below?

    Search heads: 3 (* includes the captain)
    Indexers    : 4 (* does not include the cluster master)

Could someone tell me? Also, if there is a document that covers this, please point me to it.
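A worked illustration only (the core counts are assumed, they are not in the question): the limits above are evaluated per search head, and in a search head cluster the captain schedules jobs across the members, so the cluster's scheduler capacity is roughly the sum of the members' individual limits. The indexer count does not appear in these formulas; it affects how much work each search fans out to the indexers, not the concurrency caps on the search heads. Assuming each of the 3 members has 16 cores and default settings:

    per member : max_hist_searches               = 1 * 16 + 6  = 22
                 max_realtime_searches           = 1 * 22      = 22
                 max_hist_scheduled_searches     = 50/100 * 22 = 11
                 max_realtime_scheduled_searches = 50/100 * 22 = 11
    cluster    : roughly 3 * 22 = 66 concurrent historical searches
                 roughly 3 * 11 = 33 concurrent scheduled searches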

Search Head Cluster connected to Multiple Single Site Index Clusters

I have a search head cluster consisting of 3 search heads. This search head cluster is going to attach to 6 different single site index clusters. Is it possible to restrict all searches from querying every Index cluster? If I specify "srchIndexesDefault" as none, and specify the "srchIndexesAllowed" with the indexes that can be searched; if the indexes don't exist on some of the index clusters, will the indexers from that site still be searched? I am trying to maintain performance on the Index clusters and not have every cluster hit with every search.
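For the restriction part of the question, the two settings mentioned are configured per role in authorize.conf; a minimal sketch with a placeholder role and index names (this only shows the restriction itself, it does not settle whether peers holding none of the allowed indexes still receive the search):

    # authorize.conf (deployed to the SHC members)
    [role_team_a_user]
    importRoles = user
    # no indexes are searched by default
    srchIndexesDefault =
    # only these indexes may be searched when named explicitly
    srchIndexesAllowed = team_a_index;team_a_summary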

When I use Splunk with SAML, why can any SAML user access any app/dashboard (ignoring his/her role)?

Hello. I'm trying to configure SAML authentication in a search head cluster (3 peers). The configuration seems to be good, since I can log in with SAML users and I don't have any SAML errors in splunkd.log. Now I'm running tests, and for some reason Splunk is ignoring the mapped roles. I mean: I have one SAML user (user1) and I gave it the user role. I created a **test app** that only the admin role can read and write. When I log in with **user1**, I can see the **test app**, access it, and see all the content inside it. I tried similar tests with some other users, and the same thing happens every time. I checked Splunk Answers for similar cases and found these: https://answers.splunk.com/answers/227274/is-it-possible-to-use-saml-2-for-splunk-to-achieve.html https://answers.splunk.com/answers/551201/shc-with-saml-authentication-role-update-on-existi.html But none of those suggestions work for me. I tried **defaultRoleIfMissing** and **blacklistedAutoMappedRoles**, with the same result. The users exist both in SAML and in Splunk (we have a migration pending), and I checked the roles on the local side and all of them have the user role. Have I missed something? Any suggestions, please? Regards.
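For comparison, a stripped-down sketch of the pieces that normally drive the mapping in authentication.conf; the stanza name "saml" and the group names are placeholders for whatever your IdP actually asserts in the role attribute:

    # authentication.conf
    [authentication]
    authType = SAML
    authSettings = saml

    [saml]
    # IdP metadata, entityId, certificates, etc. omitted

    [roleMap_saml]
    # Splunk role = group value(s) sent by the IdP, semicolon-separated
    admin = splunk_admins
    user = splunk_users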

search head cluster apps

Hi, we use the deployer to distribute configs to the search head cluster. I want to add configs to the search heads that are independent of each other. Can I place an app in the /etc/apps directory of each search head without causing any syncing issues with the deployer? Thanks

In my Search Head Cluster, why does "show shcluster-status" show captain, but not the members' information?

After a recent bundle push from the deployer to our search head cluster (SHC) members running Splunk Enterprise version 7.2.4, the SHC is in a broken state with missing member information:

    [splunk@SH1 bin]$ ./splunk show shcluster-status
    Captain:
        dynamic_captain : 1
        elected_captain : Wed Feb 20 19:02:42 2019
        id : 718F33BC-E8A5-4EDB-AFAE-279860226B84
        initialized_flag : 0
        label : SH1
        mgmt_uri : https://SH1:8089
        min_peers_joined_flag : 0
        rolling_restart_flag : 0
        service_ready_flag : 0
    Members:

    [splunk@SH2 bin]$ ./splunk show shcluster-status
    Captain:
        dynamic_captain : 1
        elected_captain : Wed Feb 20 19:02:42 2019
        id : 718F33BC-E8A5-4EDB-AFAE-27986022
        initialized_flag : 0
        label : SH1
        mgmt_uri : https://SH1:8089
        min_peers_joined_flag : 0
        rolling_restart_flag : 0
        service_ready_flag : 0

    [splunk@SH3 bin]$ ./splunk show shcluster-status
    Captain:
        dynamic_captain : 1
        elected_captain : Wed Feb 20 19:02:42 2019
        id : 718F33BC-E8A5-4EDB-AFAE-279860226B84
        initialized_flag : 0
        label : SH1
        mgmt_uri : https://SH1:8089
        min_peers_joined_flag : 0
        rolling_restart_flag : 0
        service_ready_flag : 0
    Members:

It appears the election completed successfully, with all members voting SH1 to be the captain, but the member information just can't get updated. From SHC captain SH1's splunkd.log:

    02-20-2019 19:02:53.796 -0600 ERROR SHCRaftConsensus - failed appendEntriesRequest err: uri=https://SH3:8089/services/shcluster/member/consensus/pseudoid/raft_append_entries?output_mode=json, socket_error=Connection refused to https://SH3:8089

- Tried the procedure below to clean up RAFT and then bootstrap a static captain, but got the same result: https://docs.splunk.com/Documentation/Splunk/7.2.4/DistSearch/Handleraftissues#Fix_the_entire_cluster
- Confirmed all members have their serverName defined properly as their own names.
- Confirmed there is no network issue, as each member can reach each other's mgmt port 8089 through the curl command below:

    curl -s -k https://hostname:8089/services/server/info

- Also tried increasing the threads through the settings below and restarted Splunk on all members:

    # server.conf
    [httpServer]
    maxSockets = 1000000
    maxThreads = 50000

The issue remains the same. None of the SHC members are listed under "show shcluster-status", and the SHC remains broken, along with the kvstore cluster not being established.
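For completeness, the RAFT cleanup in the procedure linked above boils down to the following commands (the documented generic steps, not a guaranteed fix for this case; paths assume a default /opt/splunk install):

    # 1. stop Splunk on every SHC member
    /opt/splunk/bin/splunk stop

    # 2. on every member, clear the RAFT metadata
    /opt/splunk/bin/splunk clean raft

    # 3. start Splunk on every member
    /opt/splunk/bin/splunk start

    # 4. from one member, re-bootstrap the captain
    /opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://SH1:8089,https://SH2:8089,https://SH3:8089" -auth admin:password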

What is the impact of my server.pem expiring?

Hi Splunk professionals, I would like to know the impacts when the server.pem certificates in a SHC expire. I already understand that the following will happen when they expire in a SHC:

- port 8089 becomes unusable
- the kvstore becomes unusable
- replication between search heads stops working
- the lookup and inputlookup commands stop working

I want to confirm whether the SHC will also be unable to connect to the indexers when server.pem expires, since the 8089 port will not work. Is that correct? I would also like to know about any other impacts or concerns. I appreciate any opinions. Regards,
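One practical addition: it is easy to check when the certificates will actually expire before planning around it. A sketch using openssl against the default certificate path (adjust the path and hostname for your environment):

    # expiry date of the local default Splunk server certificate
    openssl x509 -enddate -noout -in /opt/splunk/etc/auth/server.pem

    # or check it live over a member's management port
    echo | openssl s_client -connect mySearchHead1:8089 2>/dev/null | openssl x509 -enddate -noout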

Question regarding Search head clustering

Hi, we have a small Splunk environment with one search head and one indexer, both on the same server box. Due to increasing Splunk usage recently, we are seeing a few performance issues (mainly with reports and alerts). Reports take a lot of CPU while generating, and this is affecting the concurrent searches. Now the idea is to have a server just for reports; increasing the CPU of the existing server is not an option.

1) Should we add a search head just for reports?
2) Can we have 2 search heads behind a single URL, as the URL naming is standard across the organization?

Are there any other better options? Regards, Pradeep

Search Head Cluster node falls into a restart loop

I deployed a Splunk indexer cluster as follows:

- 10.6.113.25 (peer node)
- 10.6.113.26 (master node)
- 10.6.113.27 (peer node)
- 10.6.113.28 (peer node)

And I want to deploy a search head cluster on the same nodes:

- 10.6.113.25 (search head)
- 10.6.113.27 (search head)
- 10.6.113.28 (search head)
- 10.6.113.32 (deployer)

I followed the [doc: Deploy a search head cluster](https://docs.splunk.com/Documentation/Splunk/6.2.2/DistSearch/SHCdeploymentoverview), ran `splunk init shcluster-config` on 25, 27, and 28, and restarted. But they fall into a restart loop. `system/local/server.conf` on node 25:

    [clustering]
    master_uri = https://10.6.113.26:8089
    mode = slave
    pass4SymmKey = $7$8BS4L6+X6dBUlQusZWdHNLNRkc7QurRQQnV3E9zG5YpWC5kAUj8=

    [replication_port://34567]

    [shclustering]
    conf_deploy_fetch_url = https://10.6.112.32:8089
    disabled = 0
    mgmt_uri = https://10.6.113.25:8089
    pass4SymmKey = $7$JN1I+/uq0kBl7+/fZZWOVKNO7LopWArjBLS7q4e1KO+sHlRb3pbNu28DsVoytuk=
    replication_factor = 2
    shcluster_label = shcluster1

And when it restarts, `splunkd.log` has:

    03-26-2019 14:05:41.395 +0800 INFO loader - Downloaded new baseline configuration; restarting ...

I'm not sure a node can be both a search head cluster member and an indexer cluster node.