Channel: Questions in topic: "search-head-clustering"
Viewing all 660 articles

How can I find out which apps and add-ons support Search Head Clustering?

Hi Splunkers, I have to implement Search Head Clustering (SHC) on my 4 search heads. One of the search heads has a lot of heavily used apps and add-ons installed, and I'm not sure which of them support SHC. I have a few questions:

1. How do I identify whether a given app/add-on supports SHC?
2. What happens to the KV stores once I implement SHC?
3. What is the difference between a deployer and a deployment server?

Some of the apps I have are:

Splunk Add-on Builder
Splunk App for CEF (because this app did not support SHC, we had to roll back the clustering changes last time)
Splunk App for ServiceNow
Splunk Add-on for Cisco ASA
Splunk Add-on for Cisco ISE
Splunk Add-on for CyberArk
Splunk Add-on for NetFlow (Splunk_TA_flowfix)
Splunk Add-on for Microsoft SQL Server
Splunk Add-on for Unix and Linux
Splunk Add-on for Oracle Database
Splunk Add-on for ServiceNow
Splunk Add-on for Symantec Endpoint Protection
Splunk Add-on for Microsoft Windows
Splunk for Palo Alto Networks

Can someone please help or share their experience with this? Thanks in advance.
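For question 3, a minimal sketch of how the two differ in practice may help (hostnames and credentials below are placeholders, not taken from the question): a deployment server pushes apps to forwarders and standalone instances, while a deployer distributes a configuration bundle to SHC members.

```
# On each SHC member's server.conf, point at the deployer:
[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089

# On the deployer, stage apps under $SPLUNK_HOME/etc/shcluster/apps/
# and push the bundle to any one member (it propagates to the rest):
splunk apply shcluster-bundle -target https://shc-member1.example.com:8089 -auth admin:changeme
```

Apps pushed this way land in the members' etc/apps with deployer-managed defaults, which is why SHC-aware apps matter: anything relying on instance-local state may not behave under this distribution model.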

Splunk App for Salesforce: How to install in a distributed environment?

I'd like to install the Splunk App for Salesforce in my test environment. I have a search head cluster, an indexer cluster, and heavy forwarders to deploy to (perhaps). Does anyone know what goes where? I tried deploying to my indexer cluster first, since there are indexes defined in the included indexes.conf, but I get a bunch of these messages during the deploy. So I'm doing something wrong, but I don't know what it is. Can anyone throw me a rope?

Invalid key in stanza [sfdc_event_log://EventLog] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 3: limit (value: 1000).
Invalid key in stanza [sfdc_event_log://EventLog] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 5: start_date (value: ).
Invalid key in stanza [sfdc_event_log://EventLog] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 9: compression (value: 1).
Invalid key in stanza [sfdc_object://LoginHistory] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 14: query (value: SELECT ApiType, ApiVersion, Application, Browser, ClientVersion, Id, LoginTime, LoginType, LoginUrl, Platform, SourceIp, Status, UserId FROM LoginHistory).

...plus 23 more messages like these.

Is the search head cluster label for identification purposes in the DMC or can it be used for each host system?

Documentation: "The `-shcluster_label` parameter is useful for identifying the cluster in the distributed management console." http://docs.splunk.com/Documentation/Splunk/6.4.6/DistSearch/SHCconfigurationoverview We are implementing a multisite search head cluster. We want to identify each host by the DNS alias associated with it, which is much more useful for us. Question: is the search head cluster label restricted to the cluster as a naming unit, or can it also be used as a label for each individual host system? Thank you.
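For reference, the label is a cluster-wide setting passed once per member at initialization, so every member of the same cluster should carry the same value (the URIs, ports, secret, and label below are placeholders):

```
splunk init shcluster-config -mgmt_uri https://sh1.example.com:8089 \
    -replication_port 34567 -secret mysecret -shcluster_label prod_shc
```

If the goal is identifying individual hosts, the usual per-host label is `serverName` in the `[general]` stanza of each member's server.conf, which can be set to the host's DNS alias.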

In splunkd.log, why do I receive repeating error "ERROR KVStorageProvider - An error occurred during the last operation...Cannot do an empty bulk write"?

The following error is repeated in splunkd.log:

ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '11', code: '22'): Cannot do an empty bulk write

The search head cluster appears to be functional, but I am concerned about the cause of this error.

Why does my data model fail when I create a calculated field using a lookup table?

Hello, I am hoping someone can give me a hand. I have a search head cluster where I am trying to build a data model with a calculated field based on a lookup. Since I have a distributed environment, the searches are streamed to the peers, and I then get several errors saying that the lookup table does not exist on the peers. I expect that behavior: when I use lookups in my searches, I need to use the attribute local=true to tell my search head not to go anywhere else to look for that lookup. The question is: how can I set up the calculated field in this data model with this lookup so that it works without errors?

Error sample:

Streamed search execute failed because: Error in 'lookup' command: The lookup table 'mylookupdefinition' does not exist or is not available
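One workaround commonly used for this class of error (a sketch based on assumptions about the setup, not a confirmed fix for this environment; the app and file names are placeholders) is to let the lookup file replicate to the search peers via the knowledge-bundle replication allowlist in distsearch.conf on the search head, so the streamed datamodel searches can resolve it remotely:

```
# distsearch.conf on the search head(s)
# Replicate this lookup file to the search peers as part of the
# knowledge bundle, so 'lookup' works in streamed searches.
[replicationWhitelist]
my_lookup = etc/apps/myapp/lookups/mylookupfile.csv
```

With the file available on the peers, the calculated field no longer needs local=true semantics that a data model cannot express.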

How do I disable the system configurations in the search head cluster?

I enabled the system configurations in the search head cluster. How do I disable them so they don't show up any more?

How to remove "learned" app entries in a Search Head cluster?

The "learned" app in combination with a Search Head cluster is causing us a real issue:

- Apps pushed from the deployer are put into the "default" folder on the SH members.
- In the SHC, the learned apps have entries in the "local" directory, which I believe Splunk has put there automatically.
- Although we defined the sourcetypes and props correctly on the deployer, they are NOT taking effect due to the entries in "learned/local".
- There is no option to delete the "learned" entries via the UI, and no entries are shown in "All configurations".

The only option left for us is to shut down all search heads and delete the entries manually at the same time. Are there any better options? How do you delete something from "local" in a SHC?

Is there any negative impact of clean-dispatch in large 6.5.2 search head cluster environments?

I'm getting this on 1 of my 27 search head members:

Search peer splunklog403 has the following message: Dispatch Manager: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.
Search peer splunklog403 has the following message: Dispatch Command: The number of search artifacts in the dispatch directory is higher than recommended (count=30929, warning threshold=5000) and could have an impact on search performance. Remove excess search artifacts using the "splunk clean-dispatch" CLI command, and review artifact retention policies in limits.conf and savedsearches.conf. You can also raise this warning threshold in limits.conf / dispatch_dir_warning_size.

I'm planning on running:

./splunk cmd splunkd clean-dispatch /tmp/old-dispatch-jobs/ -7d@d

What I am unsure of is the impact this will have on the other hosts NOT having this issue. I understand that the root cause of the large number of dispatched searches needs to be investigated, but for now I just need to clear it up. Thanks a bunch!
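Before running clean-dispatch, it can help to confirm how many artifacts the age cutoff would actually catch. A small helper along these lines (a sketch; the path in the example is the default dispatch location) counts artifact directories older than a given number of days by modification time:

```shell
# count_old_artifacts DIR DAYS: count dispatch artifact directories in DIR
# whose modification time is older than DAYS days -- roughly the set that
# clean-dispatch with a matching age argument would move out.
count_old_artifacts() {
  find "$1" -mindepth 1 -maxdepth 1 -type d -mtime +"$2" | wc -l
}

# Example (run on the affected member):
#   count_old_artifacts /opt/splunk/var/run/splunk/dispatch 7
```

Since clean-dispatch only touches the dispatch directory of the member it runs on, running it on splunklog403 should not directly affect the other 26 members' artifacts.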

Is Syntax Highlighting broken on a Splunk 6.5.2 Search Head Cluster?

I'm finding that across my 6.5.2 infrastructure, syntax highlighting works fine, with the exception of my search head cluster members. This is pervasive across both my production and test clusters. Is this perhaps a known issue? Under user-prefs.conf, search_syntax_highlighting shows a value of 1. Thank you.

How to set 1 Search Head cluster member to send all alerts?

We currently have a Search Head (SH) cluster with members at 2 different sites. One site is failing to send emails and create Jira tickets successfully. We are looking into the network changes needed to fix this, but in the meantime, is there a setting we can change so that all alerts come from the SH members at the working site, or even delegate just 1 member to send out all alerts? Thanks

Why am I unable to start the Splunk Monitoring Console?

Every time I try to enable the Splunk Monitoring Console, I get the following error:

User 'splunk-system-user' triggered the 'disable' action on app 'splunk_monitoring_console', and the following objects required a restart: checklist, dmc_alerts, splunk_monitoring_console_assets

It doesn't matter whether we try via the API or the config files; upon restarting Splunk, the application's state in app.conf is set to disabled. I am running Splunk version 6.5.1.2 with search head clustering enabled. Any ideas on how we can get the app to start?

adhoc_searchhead is not working in search head cluster

I added the following setting on one of the search heads to disable scheduled searches:

[shclustering]
adhoc_searchhead = true

After adding this, when I visit the "Jobs" tab, many saved searches are still in running mode and some are completed. I am not able to stop the running scheduled searches.
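For reference, a sketch of how this setting is usually applied (hedged: the restart requirement and in-flight behavior below are general expectations, not verified against this exact version): it goes in server.conf on the member that should serve only ad hoc searches, and it affects what the scheduler dispatches going forward rather than killing searches already running.

```
# server.conf on the member that should be ad-hoc only.
# Takes effect after restarting that member; scheduled searches
# already dispatched before the restart run to completion.
[shclustering]
adhoc_searchhead = true
```

So running jobs visible immediately after the change are expected; they should drain off rather than be stopped.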

Remove reference to host in mongodb

Hi, we have a search head cluster where a couple of the search heads were removed by shutting down the VMs. In other words, the search heads were not removed gracefully as they should have been. Now the remaining search heads are complaining because mongodb can't reach the removed search heads. I'm getting the following error messages:

2017-03-23T12:27:42.296Z I NETWORK [ReplExecNetThread-1919] getaddrinfo("prod-searchhead-x") failed: Name or service not known
2017-03-23T12:27:42.290Z I REPL [ReplicationExecutor] Error in heartbeat request to prod-searchhead-x:8191; Location18915 Failed attempt to connect to prod-searchhead-x:8191; couldn't initialize connection to host prod-searchhead-x, address is invalid

Does anyone know how to forcefully remove a host from mongodb in the search head cluster, so that we can get rid of these error messages?
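One sequence sometimes used after an ungraceful removal (verify against the docs for your Splunk version before trying it; the hostname is taken from the log messages, everything else is a sketch) is to remove the dead members from the SHC configuration and then reset the KV store cluster state so mongod rebuilds its replica set from the current membership:

```
# From any surviving member, remove the dead member from the SHC:
splunk remove shcluster-member -mgmt_uri https://prod-searchhead-x:8089

# Then, per surviving member, reset the KV store cluster state
# (clean kvstore requires splunkd to be stopped):
splunk stop
splunk clean kvstore --cluster
splunk start
```

After the restart, mongod should re-derive its replica set from the members that actually remain, which removes the stale heartbeat targets.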

Why are search head hosts unable to connect to master indexer?

I'm setting up some hosts to become a search head cluster that will be joined to an indexer cluster. There are five hosts, one of which is going to be the search head cluster captain. I've installed Splunk 6.5.2 on all of them, and it's up and running with no errors. I'm now at the point of turning them into search heads, and whether I use the CLI or the Splunk Web UI, they all return the same error:

Could not contact master. Check that the master is up, the master_uri=https://10.x.x.97:5500 and secret are specified correctly

But when I ran a netcat test on that port from each search head candidate to the master indexer, and from the master indexer to each of the hosts, it passed every time.

From each host to the master:
searchhead-001 Connection to 10.x.x.97 5500 port [tcp/fcp-addr-srvr1] succeeded!
searchhead-002 Connection to 10.x.x.97 5500 port [tcp/fcp-addr-srvr1] succeeded!
searchhead-004 Connection to 10.x.x.97 5500 port [tcp/fcp-addr-srvr1] succeeded!
searchhead-003 Connection to 10.x.x.97 5500 port [tcp/fcp-addr-srvr1] succeeded!

From the master to each host:
Connection to 10.x.x.203 5500 port [tcp/fcp-addr-srvr1] succeeded!
Connection to 10.x.x.200 5500 port [tcp/fcp-addr-srvr1] succeeded!
Connection to 10.x.x.202 5500 port [tcp/fcp-addr-srvr1] succeeded!
Connection to 10.x.x.201 5500 port [tcp/fcp-addr-srvr1] succeeded!

When setting them up, I copied and pasted the URI and secret from a text file, so I'm certain I'm not mistyping either one. The only host that has succeeded is the one that will be the search head captain, and there's nothing different about it at all. I've even gone so far as to reimage each of the other hosts, reinstall RHEL 6.8.4, and reinstall and set up Splunk 6.5.2, but I'm still getting the same results. What am I overlooking?
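One thing worth double-checking (an assumption, since only TCP-level netcat results are shown): master_uri must point at the master's splunkd management port (8089 by default), not merely a port the master has open. A raw TCP connect succeeding does not prove splunkd's REST API answers there. A quick probe (credentials are placeholders):

```
curl -k -u admin:changeme https://10.x.x.97:5500/services/server/info
```

If this returns the master's server-info XML, port 5500 really is the management endpoint; if it hangs, refuses, or returns something unexpected, the netcat success is misleading and the master_uri port is the likely culprit.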

Splunk Add-on for Atlassian JIRA Alerts: How should this add-on be deployed and configured in a Search Head Cluster?

I cannot successfully deploy the Splunk Add-on for Atlassian JIRA Alerts in a search head cluster and configure it properly to access my Jira server. I have managed to do it on a single-node search head. Any indication of the correct procedure to follow?

Splunk DB Connect v2: RPC_SERVER PORT_IS_IN_USE error

Hi, we have a clustered environment with n search heads, n indexers, and a cluster master; both the indexer and search head clusters are running Splunk Enterprise 6.4.1. We are using Splunk DB Connect 2.3.0 (residing on each search head) to read from a database and index the data.

Issue: the RPC server goes up and down too frequently, almost every other minute. The DBX logs give us errors such as "RPC_SERVER PORT_IS_IN_USE" and "rpc_service_is_called_to_halt". We have checked the ports, and no other service is using the RPC port. Changing the RPC port also works only temporarily. Please suggest how to resolve this. Thanks!

Search head cluster rolling restart issue

Hi, I am having an issue with my SH cluster. It was working fine; now there are no members. The captain is elected dynamically. All of the _flag options are 0 under the status. It seems as though none of the peers want to join. There are no errors in splunkd.log that suggest a problem related to this. If it were an issue with a pass4SymmKey change, surely that would show up in the logs? Any thoughts?

How do I replicate internal index data across a Hunk search head cluster with no indexers?

We currently have a search head cluster set up to use HDFS as a backend. We'd like the _audit index data to be the same on each box for statistics-gathering purposes, but the Splunk documentation guides on doing this all involve pushing the data to Splunk indexers, which we don't use. Has anyone else solved this problem?

Procedure to perform a non-rolling upgrade of a (Windows) Search Head Cluster - problem with service auto-restart

Hello, I have a 3-node (Windows 2012 R2 based) Search Head Cluster currently connected to a standalone indexer. I wanted to follow the non-rolling upgrade procedure (anticipating that it will soon be connected to a clustered indexer) described here: http://docs.splunk.com/Documentation/Splunk/latest/DistSearch/UpgradeaSHC Step 1 specifies stopping all cluster members and only restarting them after all nodes have been upgraded. However:

1. If I simply stop the Splunkd service, the MSI setup restarts the service automatically at the end of the upgrade.
2. If I stop the Splunkd service and temporarily change its startup type to Disabled, the upgrade fails and rolls back because the setup is not able to start the service.

In the end the upgrade worked (I manually stopped each node again after its upgrade), but I was not able to comply with the "official" non-rolling upgrade procedure. Is anybody proceeding differently and able to follow the official procedure? Regards.
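A workaround sometimes used for the MSI auto-start behavior (worth testing on one node first; the installer filename below is a placeholder) is to drive the upgrade from the command line with the documented LAUNCHSPLUNK property, which tells the installer not to start the service when it finishes:

```
msiexec.exe /i splunk-6.x.x-x64-release.msi AGREETOLICENSE=Yes LAUNCHSPLUNK=0 /quiet
```

With the service left stopped after each node's upgrade, all members can then be started together, matching the non-rolling procedure.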

Why is search head cluster not showing the search head members?

I am not able to view the search head members in the Members list. I have 2 search head nodes, one acting as captain (xxxxxxx02) and the other acting as a member. But I don't see the member listed when I run shcluster-status (output provided below) from the member node (xxxxxx01). I am not sure whether the member is part of the search head cluster.

[root@xxxxx01 users]# /opt/splunk/bin/splunk show shcluster-status
In handler 'shclusterstatus': Node is not captain. Current captain = https://xxxxx02:8089

[root@xxxxxx01 users]# /opt/splunk/bin/splunk show shcluster-status
Captain:
    dynamic_captain : 1
    elected_captain : Mon Apr 10 14:17:22 2017
    id : 85C9FA62-9AE7-47E5-A7D6-D114C2B15BCC
    initialized_flag : 0
    label : xxxxxxxx02
    mgmt_uri : https://xxxxx02:8089
    min_peers_joined_flag : 0
    rolling_restart_flag : 0
    service_ready_flag : 0
Members:
    xxxxxxx02
        label : xxxxxxx02
        mgmt_uri : https://xxxx02:8089
        mgmt_uri_alias : https://xxxxx02:8089
        status : Up

I get the same status output from the captain node (xxxxx02), which doesn't show server xxxxx01 listed as a member. How do I confirm whether the clustering is set up properly and both nodes are in the search head cluster?
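If the member never actually joined, re-running the join step is one way to find out; a sketch of the usual sequence (the URIs are constructed from the hostnames in the question and are otherwise assumptions):

```
# On the missing member (xxxxx01), after init shcluster-config has
# been run there, join by pointing at any existing member:
splunk add shcluster-member -current_member_uri https://xxxxx02:8089

# Alternatively, from the existing member/captain (xxxxx02):
splunk add shcluster-member -new_member_uri https://xxxxx01:8089
```

Either form should produce an explicit error (for example, a pass4SymmKey or replication-port mismatch) if the join is failing, which is more diagnostic than the one-sided shcluster-status output.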