I am trying to set up a search head cluster.
When I execute the command to elect a captain,
it throws an 'error=Connection refused'.
But I checked the ports, and they are in the listening state.
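For reference, the captain bootstrap command typically looks like the following (host names and credentials are placeholders); a "Connection refused" at this step often means a URI in `-servers_list` points at the wrong management port, or a member was never initialized and restarted:

    splunk bootstrap shcluster-captain \
        -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
        -auth admin:changeme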
↧
Why did I receive this error while clustering search heads? "error=Connection refused"
↧
Search Head Queue Pile Up
Hi Splunkers,
We have dashboards that refresh every minute.
We observed that after some time, our widgets stop refreshing and change to "Waiting for queued job to start". The users viewing the dashboard are non-admin users.
Please refer to the image below:
![alt text][1]
We closed the dashboard for the user at 8:37 AM and logged out. Even after logging out, we found that the queued searches were still being executed (see the query below). We verified this through Activity -> Job Manager as well (filtered on the user). We used the following query to monitor the dashboard-triggered searches that were waiting in the queue:
    index=_internal earliest=-300m@m "The maximum number of concurrent historical searches for this user based on their role quota has been reached." concurrency_limit=3 pale_test1 | timechart distinct_count(search_id)
Results below:
![alt text][2]
Environment: Search Head Cluster (3 search heads) posting requests to the index master through master_uri. There are 3 indexers in the environment.
Version: 6.6.2
1. Ideally, the search head should not execute the searches the dashboard queued after the dashboard is closed. Is there a configuration setting to enforce this?
2. As these are non-admin users, the current concurrency_limit is 3. We could raise the limit to 10 or 50 to clear the queue faster, but since we foresee a large number of users in the future, we are evaluating more performance-efficient options.
3. Please share any other options we should evaluate.
[1]: /storage/temp/218986-issue-1.png
[2]: /storage/temp/218984-issue-2.png
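For reference, the per-role quotas behind that message live in authorize.conf on the search heads; concurrency_limit=3 matches the default srchJobsQuota of the user role. A hedged sketch of raising it for a dedicated dashboard role (role name is illustrative):

    [role_dashboard_user]
    srchJobsQuota = 10
    # caps all members of the role combined (available in 6.5+)
    cumulativeSrchJobsQuota = 50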
↧
↧
KV Store changed status to failed...
Hi All,
We have a Splunk environment with 8 search heads in a cluster plus a search head quorum node. Recently, we had an issue with indexer clustering, which was resolved with the help of the Splunk support team. But since that issue, we have been seeing the error below on all search heads:
KV Store changed status to failed. Failed to establish communication with KVStore. See splunkd.log for details
Could you please help me understand this error and how to mitigate it?
Thanks in advance.
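As a first diagnostic step, checking the KV store state on each member (and then the mongod-related entries in splunkd.log) is usually suggested:

    ./splunk show kvstore-status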
↧
Palo Alto Networks App for Splunk: Is this app supported on a search head cluster?
Does anyone know if the new PA add-on is supported on a SHC? I deployed the add-on on a single SH, configured it to talk to Panorama, and then moved those configs to the SHC. Now, when I open the TA, it still asks me to configure it, and when I add the details again, it throws the error below:
500 Internal Server Error
Return to Splunk home page
An error occurred while reading the page template. See web_service.log for more details
↧
Splunk deployment in AWS with elastic search heads and indexers
I am planning to deploy Splunk (distributed search) in AWS.
Has anyone tested and/or verified whether it is possible to set up a deployment so that additional search heads spin up as more users need access? For example, say I have a total of 10 users: after 2 users log on to a search head, the next user would log on to a new search head instance elastically created in AWS. In this example there would be 2 users per SH, and if all 10 users were using Splunk, there would be 5 elastically deployed search heads. As users log off, search head instances would shut down until only one search head remains.
Is this possible? If so, where might I find documentation on this?
Thank you
↧
↧
Why is the searchhead captain skipping some searches?
Hi Splunkers,
We have a Search Head Cluster with 3 search heads. We have 70 searches that are supposed to run every minute.
We find that 14-15% of searches are getting skipped on the SH captain. We tried changing the captain and observed the same phenomenon on the new captain, too. We do not have any SH designated for ad-hoc searches.
Please see the image below: the other search heads are not experiencing any skips. Also note that the SH captain is running a higher number of searches.
![alt text][1]
Please let us know if there is a way to get around this.
[1]: /storage/temp/225585-sh-captain-skip-ratio.png
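One setting that may be worth evaluating (a hedged sketch, assuming the cluster can spare the captain's scheduler capacity) is to stop the captain from running scheduled searches itself, via server.conf on the cluster members:

    [shclustering]
    captain_is_adhoc_searchhead = true

The setting follows whichever member currently holds the captain role; a rolling restart is needed after deploying it.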
↧
How can index clusters and search head clusters interact with each other in Splunk Enterprise on AWS?
I am trying to learn Splunk and understand how to install Splunk Enterprise on AWS.
While reading through the documentation, I came across indexer clusters and search head clusters, but there is no documentation (that I can find) showing how these two clusters interact with each other.
The [Quick Start Guide][1] sets up both an indexer cluster and a search head cluster in one environment, but even there, there is no mention of how the two work together and relate to each other.
Any reference to relevant doc or explanation will be great.
Since I cannot make a hyperlink, the URL for the quick start guide is https://s3.amazonaws.com/quickstart-reference/splunk/enterprise/latest/doc/splunk-enterprise-on-the-aws-cloud.pdf
[1]: https://s3.amazonaws.com/quickstart-reference/splunk/enterprise/latest/doc/splunk-enterprise-on-the-aws-cloud.pdf
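For what it's worth, the basic wiring between the two clusters is that each search head cluster member is also configured as a search head of the indexer cluster, by pointing it at the cluster master. A hedged sketch (host name and secret are placeholders):

    splunk edit cluster-config -mode searchhead \
        -master_uri https://cluster-master.example.com:8089 -secret <key>
    splunk restart

After that, the search heads learn the set of indexer peers from the master and distribute searches to them.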
↧
Error while deploying apps to first member: Error while fetching apps baseline on target=https://1.1.1.1:8089: Non-200/201 status_code=401; {"messages":[{"type":"ERROR","text":"Unauthorized"}]}
I have set up the deployer and search heads, but I am unable to apply bundles to the search heads. I am running the following command on the deployer:
`sudo -u user ./splunk apply shcluster-bundle -target https://1.1.1.1:8089`
I get the following error:
Error while deploying apps to first member: Error while fetching apps baseline on target=https://1.1.1.1:8089: Non-200/201 status_code=401; {"messages":[{"type":"ERROR","text":"Unauthorized"}]}
Deployer and SH have the same secrets.
**Deployer server.conf shcluster entry:**

    [shclustering]
    pass4SymmKey = something
    shcluster_label = shcluster1

**Search head server.conf shcluster entry:**

    [shclustering]
    pass4SymmKey = something
    shcluster_label = shcluster1
    conf_deploy_fetch_url = 1.1.1.1:8089
Anyone know what's going on?
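One thing worth ruling out (a guess, since a 401 is an authentication failure rather than a key mismatch): `apply shcluster-bundle` authenticates against the target member with admin credentials, passed explicitly:

    sudo -u user ./splunk apply shcluster-bundle \
        -target https://1.1.1.1:8089 -auth admin:password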
↧
Why am I getting this error when trying to setup a SH cluster: Search Head Clustering is not enabled on this node, Raft REST endpoints are not available!
I am trying to set up a search head cluster. I followed the steps to configure the deployer, initialize the members, and set the captain. Any check of the shcluster status comes back with the following error on all members, and even on the deployer:
"Search Head Clustering is not enabled on this node. Raft REST endpoints are not available!"
The search heads are already attached to an indexer cluster. Can anyone explain why search head clustering simply will not enable?
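For reference, that error usually appears when a node has no (or incomplete) [shclustering] configuration; each member must be initialized and restarted before the captain is bootstrapped. A hedged recap, with host names, ports, and the label as placeholders:

    # on each member
    splunk init shcluster-config -auth admin:changeme \
        -mgmt_uri https://sh1.example.com:8089 \
        -replication_port 9887 \
        -secret mysecret -shcluster_label shcluster1
    splunk restart
    # then, on one member only
    splunk bootstrap shcluster-captain \
        -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
        -auth admin:changeme

Note that the deployer itself is not a cluster member, so this message on the deployer is expected.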
↧
↧
How can I associate a license master when there is no 'License' option under Settings in a search head cluster?
Hi all,
I have two search heads whose Enterprise licenses have expired, and I need to associate them with a master license server.
However, there's no 'License' option under Settings, which I can find on the deployers and indexers.
Also, during login I am redirected to a page prompting me to 'change license group'. Clicking Cancel should then redirect me to a page to associate with the master license server; instead, I am getting a 404 error.
Checking the log and here's what I found:
capabilities:21 - Access denied for path "/en-US/manager/system/licensing". Returning 404. Insufficient user permissions
May I know what's wrong with my search heads?
Thanks in advance
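As a workaround while the GUI path is unavailable, a search head can usually be pointed at the license master from the CLI (host name is a placeholder), followed by a restart:

    ./splunk edit licenser-localslave -master_uri https://license-master.example.com:8089
    ./splunk restart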
↧
Why are there errors on new Search Head Cluster member?
I recently added a new host to my search head cluster and am receiving a continuous stream of errors, as seen below, from the new host. Any idea how I can determine what is causing these errors and how to fix them?
Interestingly, when I look at a count of the alerts, the number of alerts per hour has gone steadily down by about 5-10 per hour since they first started:
![alt text][1]
I also noticed that the error seems to reference two apps that don't currently show any data: NetApp and Palo Alto. I'm not sure if they ever displayed data, as I have never used them, but I know that they have not displayed data for quite some time, long before these errors started. The "skipping" note in the error seems to indicate there is a lot more to the error than I can see, but I obviously don't know what, so I'm not sure whether other apps are referenced or not.
These are the steps I have tried to resolve the issue:
- Rolling restart of the SHC
- Remove, clean, and re-add the newest member
I haven't seen any problems while using the latest member; searching works, dashboards work, etc.
Here is one of the errors:
index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" "SHCMasterHTTPProxy - Low Level http request failure err=Deserialization failed."
> 02-12-2018 10:50:52.843 -0800 WARN SHCMasterHTTPProxy - Low Level http request failure err=Deserialization failed. Could not find expected key 'unique_guids_artifactids' (Reply: ConfigInfo: feed_name = , {\n CC2A8F3B-A392-4C0D-8914-F611CE068DFB -> ConfigItem: name=CC2A8F3B-A392-4C0D-8914-F611CE068DFB title= atomId= owner=system app= customActions={}; ArgsList: {artifacts_location_csv -> ParamType: _dataType=unset _isMultiValue=false {_values: {[0]='"artifact_id","artifact_log_entry",peer,"__mv_artifact_id","__mv_artifact_log_entry","__mv_peer"\n"scheduler__admin__postfix__RMD504f0506f29d1e837_at_1518456600_22508_3142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"3142118D-D20E-4C18-B6EC-EE7B69A5F00B",,,\n"scheduler__admin__postfix__RMD504f0506f29d1e837_at_1518456600_22508_3142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",,,\n"scheduler__admin__postfix__RMD504f0506f29d1e837_at_1518460200_25_3142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"3142118D-D20E-4C18-B6EC-EE7B69A5F00B",,,\n"scheduler__admin__postfix__RMD504f0506f29d1e837_at_1518460200_25_3142118D-D20E-4C18-B6EC-EE7B69A5F00B",0,"F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",,,\n"scheduler__admin__postfix__RMD51d56dd48c3688be1_at_1518456600_26467_F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",0,"3142118D-D20E-4C18-B6EC-EE7B69A5F00B",,,\n"scheduler__admin__postfix__RMD51d56dd48c3688be1_at_1518456600_26467_F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",0,"F6E7F7FE-DC53-456F-B8EC-B624BAF5E1B4",,,\n"scheduler__admin__postfix__RMD51d56dd48c3688be1_at_1518460200_0_CC2A8F3B-A392-4C0D-8914-F611CE068DFB",0,"314211 ...{skipping 103210 bytes}... 
_app_netapp","tsidx-perf-system-ontap",1,1518461700,,,,,\nnobody,SplunkforPaloAltoNetworks,"WildFire Reports - Retrieve Report",1,1518461460,,,,,\nadmin,"splunk_app_netapp","tsidx-perf-disk-ontap",1,1518461700,,,,,\nadmin,"splunk_app_netapp","tsidx-perf-quota-ontap",1,1518461700,,,,,\nadmin,"splunk_app_netapp","tsidx-perf-qtree-ontap",1,1518461700,,,,,\n'} (size=1)}, splunk_min_version -> ParamType: _dataType=unset _isMultiValue=false {_values: {[0]='6.5.0'} (size=1)}, } _m.size=14\n Messages:\n}\n)>
[1]: /storage/temp/228744-2018-02-12-11-06-13-error-timeline.png
↧
Is it possible to find "lost" code changes?
We had a case today where the two Search Heads were out of sync.
Based on [Search head clustering dashboards in the monitoring console][1]
[1]: http://docs.splunk.com/Documentation/Splunk/7.0.2/DistSearch/ViewSHCstatusinDMC#Troubleshoot_configuration_baseline_consistency
I ran `splunk resync shcluster-replicated-config`, which, according to the **Search Head Clustering: Status and Configuration** view of the Monitoring Console, fixed the issue. However, one user still sees an old version of his dashboard. Is there a way to recover it?
↧
Why are Security Roles, Indexes not showing up on SHC?
I have a number of indexes that exist only on the indexers. In the past, I was able to select them in the role management GUI, but now they do not appear. The `authorize.conf` on the search head cluster has them listed under the roles, as follows:
    [role_user]
    srchDiskQuota = 250
    srchIndexesAllowed = application;idx_appdev;idx_citrix;idx_fourd;idx_infrastructure;main;network;os;perfmon;server;wind
    srchIndexesDefault = application;idx_appdev;main;perfmon;server;windows;wineventlog;winevents
    srchMaxTime = 8640000
However, I do not see these in the GUI. Any ideas? Do I now have to create placeholder indexes with these names on the SHC for them to show up? Seems sloppy.
![alt text][1]
[1]: /storage/temp/227725-capture.jpg
↧
↧
Can we create a symbolic link for only search peer bundles in indexers?
Suppose Splunk is installed under SPLUNK_HOME, and the search peer bundles are located in SPLUNK_HOME/var/run/searchpeers. This directory keeps filling up, which eventually stops the indexer from indexing data. I want to create a symbolic link so that the searchpeers bundles live in another location (or drive), say abc/splunk, so that the disk does not fill up because of search peer bundles again.
1) The search heads are in a cluster and the indexers are not.
2) Is it possible to create the symlink **only for searchpeers** to the path abc/splunk without any consequences?
3) Will there be any other problems after creating the symlink for searchpeers?
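A sketch of the relocation in question (paths are the illustrative ones from above; Splunk should be stopped first):

    # stop Splunk on the indexer first
    $SPLUNK_HOME/bin/splunk stop
    # move the existing bundles to the new drive
    mv $SPLUNK_HOME/var/run/searchpeers /abc/splunk/searchpeers
    # link the old path to the new location
    ln -s /abc/splunk/searchpeers $SPLUNK_HOME/var/run/searchpeers
    $SPLUNK_HOME/bin/splunk start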
↧
What happens if during the Multisite Search Head Cluster split brain situation, we statically assign a captain on the secondary site, too?
Hi,
I have a theoretical question about Search Head Cluster(SHC) operation in a multisite environment:
- We have multiple sites.
- We have a multisite indexer cluster.
- We have an SHC cluster with nodes on both sites with Enterprise Security for example.
Situation: the two sites lose the connection between them.
- The primary site can elect a new captain. Life goes on on this site :)
- A captain cannot be elected dynamically on the secondary site; only ad-hoc searches can be run.
The question: what happens if, during the split-brain situation, we statically assign a captain on the secondary site too (or VMware boots up all the servers on both sides)? There would be 2 captains. Data accelerations and ES correlation searches would run and fill up indexes, summary tables, etc. on the local indexer cluster at both sites.
After recovery, can the indexer cluster merge these tables and indexes, or will it throw some interesting errors? Will it accelerate and summarize events twice and mess up all the notable events?
I know the double captain is not supported, but in some environments it can happen...
Regards,
Istvan
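For reference, the supported way to statically assign a captain during such an outage (host name is a placeholder) looks like this:

    # on the member being promoted to captain
    splunk edit shcluster-config -mode captain \
        -captain_uri https://sh-site2.example.com:8089 -election false
    # on every other surviving member
    splunk edit shcluster-config -mode member \
        -captain_uri https://sh-site2.example.com:8089 -election false

Doing this on both sites at once is exactly the unsupported double-captain scenario the question is about.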
↧
Why do skipped scheduled searches deactivate?
My Search Head Cluster (SHC) was skipping scheduled searches overnight. I've resolved the underlying issue, but most of the affected scheduled searches now show no "next scheduled time" and aren't running. If I disable/enable them, or simply click through the "edit schedule" dialog, the scheduled time is restored and the search runs next time. But scanning through > 1000 searches and doing this manually is a PITA.
Why did it happen? Is there a better way to restore?
Splunk Linux x64, 6.6.3
10 member SHC
28 indexers
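One avenue that might be scriptable instead of clicking through the UI (an untested sketch; host, credentials, search name, and cron expression are illustrative): re-saving each affected search's schedule over REST, which is roughly what the edit-schedule dialog does:

    curl -k -u admin:password \
        https://sh1:8089/servicesNS/nobody/search/saved/searches/My%20Alert \
        -d cron_schedule="*/5 * * * *"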
↧
Search Head Cluster that indexes data
Hello,
We are analysing Splunk in an HA environment with 2 or 3 Splunk Enterprise instances that replicate data, configurations, and apps between them. This can be accomplished with a Search Head Cluster (with a deployer) and an Indexer Cluster. Is it possible to have both clusters on the same machines? I mean, can a Search Head Cluster also store and index data and replicate it to the other members of the Search Head Cluster without needing a separate Indexer Cluster?
Thanks in advance,
↧
↧
Multiple Search Heads - Possible Clustering of Virtual and Physical?
What started as a plan to stand up a new/additional VM Search Head dedicated to a specific department in IT has turned into a possible first attempt at Search Head clustering.
In trying to segregate field extractions, dashboards, etc., I was going to stand up a virtual SH specifically for the use of one department at our company. Additionally, I thought that separate SHs might lessen the workload on Splunk, at least at the SH level, but the further I get in learning how to implement my plan, the more I wonder whether we'd actually be creating more workload on the indexers.
To the questions:
**1.** Will two dedicated, non-clustered search heads have a positive or negative impact on overall Splunk resources, mainly SH and IDX performance? Has anyone successfully implemented this layout, and do they recommend it?
**2.** If two stand-alone SHs are not the solution, and instead I just need to learn how to better use roles to isolate extractions, dashboards, and the like on my existing deployment, then is clustering a virtual SH with a physical SH acceptable? At this point the VM has less CPU/RAM than the physical host. The department it was originally meant for will likely not need as much power as the primary (physical) SH, and being a VM, its resources can be increased.
Thanks in advance for your time/thoughts on the matter!
↧
Why is there a failure error when integrating splunk SH cluster with Indexing Cluster?
We have 3 node indexer Cluster and have setup a 3 node Search Head(SH) cluster. We are trying to integrate the SH cluster with the index cluster. I'm running the following command:
    ./splunk edit cluster-config -mode searchhead -master_uri https://master:8089 -secret
I run this on each SH cluster member and they all give the following response:
Could not contact master. Check that the master is up, the master_uri=https://master:8089 and secret are specified correctly
I've confirmed that everything is correct. I tested with curl and telnet to make sure the master can be reached on 8089 from each SH. The search head cluster is up and running, as is the indexer cluster. I've tried specifying the master by both FQDN and IP address.
Does anyone have any other suggestions? This seems like such an easy fix but it's been driving me crazy for 2 days.
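Beyond network reachability, a common cause of this message is a pass4SymmKey mismatch: for indexer clustering the secret lives under the `[clustering]` stanza (not `[shclustering]`). Comparing what splunkd actually resolves on each node may help (a diagnostic suggestion, not a guaranteed fix):

    ./splunk btool clustering list --debug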
↧
Why is the LDAP User Default App Not Used in Search Head Cluster and Defaults to Launcher?
The default application configured for an LDAP user does not take effect within a Search Head Cluster. The LDAP configuration is deployed via an auth_ldap app containing authorize.conf and authentication.conf under **/shcluster/apps**, pushed from the deployer.
When LDAP users log into the search heads, they are defaulted to the /en-US/app/launcher/home page. How can I get the default application for each user to work?
Note the app was already defined for each Role.
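One approach that may be worth testing (an assumption, not confirmed for this setup): per-role default apps can also be set in user-prefs.conf, which the deployer can push inside an app under shcluster/apps (role and app names are illustrative):

    [role_ldap_user]
    default_namespace = my_default_app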
↧