When we run:
splunk bootstrap shcluster-captain -servers_list "http://X.X.X.X:8089,http://X.X.X.Y:8089,http://X.X.X.Z:8089" -auth admin:XXXXXXXX
we get the following errors:
10-25-2016 21:48:55.256 +0000 WARN HttpListener - Socket error from X.X.X.X while idling: error:1407609C:SSL routines:SSL23_GET_CLIENT_HELLO:http request
10-25-2016 21:48:55.259 +0000 ERROR HttpClientRequest - HTTP client error: Connection reset by peer (while accessing http://X.X.X.X:8089/services/shcluster/captain/members/2CA66831-3411-49C6-8E42-FB259BA52216)
10-25-2016 21:48:55.259 +0000 WARN SHCMasterHTTPProxy - Low Level http request failure err=failed method=POST path=/services/shcluster/captain/members/2CA66831-3411-49C6-8E42-FB259BA52216 captain=10.82.98.6:8089 rc=0 actual_response_code=502 expected_response_code=200 status_line="Connection reset by peer" socket_error="Connection reset by peer"
Any idea what is going on here?
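The `SSL23_GET_CLIENT_HELLO:http request` warning typically means a plain-HTTP request reached an SSL-enabled management port. A sketch of the likely fix, assuming the members run SSL on their management ports (the Splunk default), is to use `https` in the server list:

```
splunk bootstrap shcluster-captain \
    -servers_list "https://X.X.X.X:8089,https://X.X.X.Y:8089,https://X.X.X.Z:8089" \
    -auth admin:XXXXXXXX
```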
↧
While trying to set up a search head cluster in Azure, I receive "ERROR HttpClientRequest - HTTP client error: Connection reset by peer" - how to fix?
↧
Will the Octopus Deploy app work in a clustered environment?
I want to know if the Octopus Deploy app will work properly in a Splunk cluster environment, such as an indexer or search head cluster, or whether this app was built to run only on a single Splunk instance. My Splunk version is 6.4.2, running under Windows Server 2012 R2. Thanks
↧
Why do I receive an "Error while deploying apps to first member..." message when using shcluster-bundle?
Hello,
I have a search head and 2 indexers set up, as well as a standalone Splunk instance.
I have followed all the documentation to push out an app using the configuration bundle from the standalone Splunk instance to the indexers, but I keep getting the following error:
Error while deploying apps to first member: Error while fetching apps baseline on target=https://dsaw2k8ap010:8090:Non-200/201 status_code=401; {"messages":[{"type":"ERROR","text":"Unauthorized"}]
I have triple-checked everything, including the pass4SymmKey, which I set the same on both the indexers and the deployer (the standalone Splunk instance).
I just can't seem to get this to work. Any help would be appreciated.
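For reference, the key the members and deployer compare lives in the `[shclustering]` stanza of `server.conf`; a minimal sketch (the value is a placeholder, and Splunk re-encrypts it on disk after a restart, so compare the plaintext you set rather than the stored form):

```
# server.conf on the deployer and on every cluster member
[shclustering]
pass4SymmKey = <the same plaintext key everywhere>
```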
↧
Why is search head cluster bundle apply failing with error "Application does not exist"?
I have seen a couple of questions out there that are similar to this, but they weren't quite the same, and none had answers that seemed to apply to my situation.
I have a search head cluster that I made a couple updates to, but when I run the apply command, I get this error message:
Error while deploying apps to first member: Error while updating app=Splunk_TA_windows on target=https://xxx.xx.xxx.xx:8089: Non-200/201 status_code=404; {"messages":[{"type":"ERROR","text":"Application does not exist: Splunk_TA_windows"}]}
1) Splunk_TA_windows was not one of the apps that were updated.
2) Splunk_TA_windows DOES exist on all of the nodes, and I can see it when I connect to them.
I am not sure how to get past this error when attempting a push.
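One way to narrow this down (paths assume a default $SPLUNK_HOME; adjust to your install) is to compare what the deployer stages with what each member reports:

```
# On the deployer -- is the app present in the configuration bundle?
ls $SPLUNK_HOME/etc/shcluster/apps/ | grep -i splunk_ta_windows

# On each member -- is the app installed locally where splunkd expects it?
ls $SPLUNK_HOME/etc/apps/ | grep -i splunk_ta_windows
```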
↧
How do I get Splunk to limit the Windows file path to under 260 characters?
Getting the following error on one of our clustered indexers (and similar ones on the other indexers):
10-26-2016 16:20:03.362 -0500 ERROR SearchResultsWriter - Unable to open output file: path=C:\Program Files\Splunk\var\run\splunk\dispatch\remote_SplunkSH02_scheduler__admin_c3BsdW5rX2FwcF93aW5kb3dzX2luZnJhc3RydWN0dXJl__RMD5e93ff07c552f3ee0_at_1477516800_3187_F5AAE4E2-7A34-4327-8CDA-83913FB48502\index_buckets.csv.647C07D6-2813-4D98-AD2E-ED1FCACEB554.tmp error=The system cannot find the path specified.
Background: 3 Indexer Cluster, all running on Windows. 3 Search Head Cluster, also Windows.
The directories all exist, the permissions are set correctly, and the file itself does not exist. When these errors occur, the RAM usage goes through the roof and quite often it ends up crashing splunkd on the indexer.
Spoiler Alert:
I know why the error is occurring. It's because in all of M$'s glory, they still hard code the file path limit to 260 characters. This file path is 264 characters. Now, how do I get Splunk to limit the file paths to under 260 characters?
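The arithmetic can be checked directly. A sketch in Python (the shorter `C:\Splunk` home is a hypothetical relocation, not an existing install) that rebuilds the failing path from the log above and measures it against the 260-character MAX_PATH limit:

```python
# Rebuild the dispatch path from the error above and measure its length.
dispatch_dir = ("remote_SplunkSH02_scheduler__admin_"
                "c3BsdW5rX2FwcF93aW5kb3dzX2luZnJhc3RydWN0dXJl"
                "__RMD5e93ff07c552f3ee0_at_1477516800_3187_"
                "F5AAE4E2-7A34-4327-8CDA-83913FB48502")
filename = "index_buckets.csv.647C07D6-2813-4D98-AD2E-ED1FCACEB554.tmp"

def full_path(splunk_home):
    """Join the path components with Windows separators."""
    return "\\".join([splunk_home, "var", "run", "splunk", "dispatch",
                      dispatch_dir, filename])

print(len(full_path(r"C:\Program Files\Splunk")))  # 264 -- over the limit
print(len(full_path(r"C:\Splunk")))                # 250 -- back under 260
```

The dispatch directory and file names are generated by Splunk and not configurable, so the one lever left is a shorter $SPLUNK_HOME.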
↧
Why does "splunk show shcluster-status" show "last_conf_replication: Pending" after upgrade from 6.3.2 to 6.5.0?
Hi
Recently I upgraded my Splunk environment from 6.3.2 to 6.5.0. The environment has a search head cluster, an indexer cluster, and the manager server. After the upgrade, when I run `splunk show shcluster-status`, I noticed there is a new status called last_conf_replication, and it always shows Pending for the search head members, but the search head captain doesn't have this. Replication is running fine in the indexer cluster, so I am not quite sure what this new field is about and how to get the replication finished. Can anyone shed some light on this?
Thank you.
↧
In a search head cluster, is it expected behavior for only the captain to have all the alerts, not the other cluster members?
Hi at all,
I'm moving from a single search head (with four indexers) to a search head cluster.
I have three search heads: one working alone, and the other two configured as a SH cluster.
I enabled alerts both on the standalone SH and on the clustered ones.
I checked whether the standalone SH has the same triggered alerts as the other two SHs, and it does.
The strange thing I found is that of the clustered SHs, one (the captain) has all the alerts and the other has none!
Can someone help me understand whether this is expected behavior, before I put this cluster into production (also adding the standalone SH to it)?
I have already looked through documentation and answers, but from your experience, has anyone seen behavior like this?
Thank you.
Bye.
Giuseppe
↧
After upgrading to 6.5.0, why is search head cluster skipping about 50% of scheduled searches?
On a 10-node SHC deployment, after upgrading from 6.2.5 to 6.5.0, the system is skipping about 50% of the scheduled searches.
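For what it's worth, the scheduler's share of search concurrency is bounded by settings in `limits.conf`; a sketch with the documented defaults (not a recommendation, and not a confirmed cause of the 6.5.0 skipping):

```
# limits.conf
[scheduler]
max_searches_perc = 50   # % of total search concurrency the scheduler may use
auto_summary_perc = 50   # % of the scheduler's share reserved for auto-summarization
```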
↧
How to use KV Store in a search head cluster?
Hello
I have a search head cluster with 3 peers.
Now I want to use the KV Store within that cluster, with the `| inputlookup`/`| lookup` commands.
What is the correct way to do that?
When I manually created a KV Store collection on one of the search heads, it didn't replicate to the other peers, so I cannot run lookup queries from them.
As I understood from the docs, it shouldn't replicate through the SHC, but there should be a way to query the KV Store 'remotely'.
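For reference, a minimal sketch of defining a collection so every member agrees on it - the stanza names here are hypothetical, and the definition should live in an app pushed by the deployer rather than be created manually on one member:

```
# collections.conf
[my_state_collection]

# transforms.conf
[my_state_lookup]
external_type = kvstore
collection = my_state_collection
fields_list = _key, host, status
```

With that in place on all members, `| inputlookup my_state_lookup` works from any member: the collection data itself is replicated among SHC members by the KV store, while the definition travels via the deployer.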
↧
How to fix my Search Head Cluster when SH members are not in sync?
Just created a search head cluster with 3 nodes and a deployer. The deployer and 2 of the 3 SHC members are in sync, but one isn't. I have reissued the deploy command, but the issue persists. A dashboard is fine on 2 of the 3 search heads, but 1 fails to display anything, for just a few dashboards. Why? And how do I fix this?
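If the drift persists, one option (a sketch - note it discards the member's local replicated-config changes) is to resync the lagging member against the captain's baseline:

```
# On the out-of-sync member:
splunk resync shcluster-replicated-config
```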
↧
Splunk Enterprise Security: How to configure data enrichment?
As I am fairly new to SHC, I keep getting the same message in ES when attempting to edit or view Configure > Data Enrichment and any of the options related to Identity, or anything else, from the license manager and deployment server. Where is this properly configured, and can it still be done through Splunk Web, or only via the CLI?
"Current instance is running in SHC mode and is not able to add new inputs" is the message I receive when attempting to access Threat Intelligence and Identity Management, but not Lists and Lookups.
Thank you!
↧
How to control email sender's displayed name at receiver's inbox from all members in our search head cluster?
We have 4 servers in a search head cluster. When we receive Splunk alerts from 3 out of the 4 servers, they are displayed as received from "Splunk Alert". Emails from the last server are displayed as from `splunk@hostname`.
All 4 servers have identical $SPLUNK_HOME/etc/system/default/alert_actions.conf and local/alert_actions.conf files:
1) ...default/alert_actions.conf:
"...# from email address (name only, host will be appended automatically from mailserver)
from=splunk
subject = Splunk Alert: $name$
subject.alert = Splunk Alert: $name$
subject.report = Splunk Report: $name$
useNSSubject = 0"
2) ...local/alert_actions.conf:
[email]
from = splunk
pdf.header_left = none
pdf.header_right = none
Any ideas what might cause this situation? Our goal is to receive emails from all 4 servers as from "Splunk Alert".
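One thing that may be worth trying - an assumption rather than a confirmed fix, and `example.com` is a placeholder - is pinning a full RFC 5322 address with an explicit display name, so the receiving mail server doesn't derive one from the hostname:

```
# local/alert_actions.conf on every member
[email]
from = Splunk Alert <splunk@example.com>
```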
![alt text][1]
[1]: /storage/temp/169249-splunkalertfrom.png
↧
What is the recommended procedure to move an app from one Search Head Cluster to another SHC?
I need to move a few apps from SHC1 to SHC2. My plan is below. Critique please!
(SHC1 uses deployer Dply1, SHC2 Dply2)
* Stop all SHC members on SHC1
* Copy target-app entirely from SHC1 to all members of SHC2. Move the originals outside of $SPLUNK_HOME.
* Copy target-app/default, target-app/metadata/default.meta, and target-app/bin from SHC1 to Dply2 shcluster/apps. Move the originals outside of $SPLUNK_HOME.
* Copy each user's target-app directory from SHC1 to Dply2 shcluster/users. Move the originals outside of $SPLUNK_HOME.
* Remove target-app from Dply1 shcluster/apps
* Restart all SHC1 members
* Apply the shcluster bundle from both deployers, Dply1 and Dply2
* sacrifice goat or other farm animal
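For the bundle-apply step, a sketch of the command (URIs and credentials are placeholders), run once against each cluster:

```
# Against SHC2 via Dply2 (and likewise against SHC1 via Dply1):
splunk apply shcluster-bundle -target https://shc2-member1:8089 -auth admin:changeme
```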
↧
How to resolve "approaching the maximum number of historical searches" message received after moving to search head cluster?
Heyyy everyone, anyone run into this annoying message before?
We keep getting this after moving to a search head cluster:
> "The system is approaching the maximum number of historical searches that can be run concurrently. current=blahhh maximum=blahhahhh"
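For context, a hedged look at where that ceiling comes from - it is derived from settings in `limits.conf` (defaults shown; raising them trades search capacity for CPU and memory pressure):

```
# limits.conf
[search]
base_max_searches    = 6   # fixed floor added to the per-CPU allowance
max_searches_per_cpu = 1   # historical searches allowed per CPU core
```

The historical-search ceiling is roughly base_max_searches + max_searches_per_cpu × number of cores on each member, so a move to an SHC also changes how scheduled load lands on each node.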
↧
How to make the Splunk search head cluster deployer use mgmt_uri instead of mgmt_uri_alias?
I am using Docker containers for Splunk search head clustering. The deployer is not able to push the app bundle, because it uses mgmt_uri_alias instead of mgmt_uri:
./splunk show shcluster-status
Captain:
dynamic_captain : 1
elected_captain : Tue Nov 8 07:46:08 2016
id : E6351F32-3459-4777-85A2-026ADB829F7E
initialized_flag : 1
label : sh.member1
mgmt_uri : https://100.73.24.15:9100
min_peers_joined_flag : 1
rolling_restart_flag : 0
service_ready_flag : 1
Members:
sh.member2
label : sh.member2
mgmt_uri : https://100.73.24.15:9101
mgmt_uri_alias : https://172.17.0.1:9101
status : Up
sh.member1
label : sh.member1
mgmt_uri : https://100.73.24.15:9100
mgmt_uri_alias : https://172.17.0.1:9100
status : Up
./splunk apply shcluster-bundle -target https://100.73.24.15:9100 -auth admin:admin
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for
the details. Do you wish to continue? [y/n]: y
Error while deploying apps to first member: Error while fetching apps baseline on target=https://172.17.0.1:9101: Network-layer error: Connect Timeout
Since I am using Docker containers, the mgmt_uri_alias is not reachable outside the host machine. Is it possible to make the Splunk deployer use mgmt_uri directly to push the configuration?
Thanks.
↧
What is the best practice to change the Splunk admin password?
We are running a multi-cluster Splunk environment with a few indexers, a few search heads, a heavy forwarder, etc.
We need to change the Splunk admin password, and we will be doing it via the command line on each server.
Are there any potential problems we should be looking out for?
What else should we pay attention to prior to making the change?
Thank you in advance!
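For reference, a sketch of the CLI change itself (passwords are placeholders); the same command applies on indexers, search heads, and forwarders alike:

```
splunk edit user admin -password 'newPassword' -auth admin:oldPassword
```

Remember to update any stored copies of the old credential afterwards - deployment scripts, monitoring consoles, or anything else that authenticates to the management port as admin.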
↧
Can a search head deployer manage multiple search head clusters?
Hi,
Can the search head deployer manage multiple search head clusters (outside of running multiple instances)? We are looking at using our private cloud to implement multiple search head clusters, and hopefully manage them with one deployer.
↧
Why can't the deployment server / search head deployer see my search head cluster in the DMC?
Hi all,
I'm trying to set up the Distributed Management Console (DMC) for my search head cluster on my deployment server / search head deployer (it performs both functions) in my test environment. As it's my test environment, my license master is on production and this means that the indexer cluster master cannot see the search head cluster by default.
So, as recommended by [this documentation](http://docs.splunk.com/Documentation/Splunk/6.5.0/DMC/WheretohostDMC), I have set up the Splunk Monitoring Console / DMC on the search head deployer. However, no matter what I do, it does not allow the search head cluster to be managed by either the deployment server or the cluster master.
I've manually updated assets.csv in /opt/splunk/etc/apps/splunk_monitoring_console/lookups and tried restarting the instance after this, but not had much luck.
Does anyone have any ideas that might help?
Thanks and best regards,
Alex
↧
Why am I unable to see all LDAP users under Access Controls?
Running Splunk 6.4.3 in a Search Head Cluster using AD/Ldap authentication.
A user contacted me about adding some capabilities. When I looked under Access Controls -> Users, the person's ID was not visible. However, when I search audit.log, I see that he has logged in.
Additionally, when I search `| rest /services/admin/users | search roles=his_role | stats values(realname)`, I see a list of about 365 names (alphabetical by first name), which scrolls to the letter R or sometimes S, but seems to leave off the names toward the end of the alphabet.
So, I'm wondering if there's some sort of limit to the number of names that the REST call or the UI Access Controls module will return. As I mentioned at the top: the person, whose first name starts with "V", can log in; he's just not referenceable, so I can't look at his inherited roles and/or capabilities.
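One setting worth checking - a sketch, with a hypothetical stanza name - is the LDAP `sizelimit` in `authentication.conf`, which caps how many entries Splunk requests from the directory (the directory server may impose its own, lower limit as well):

```
# authentication.conf
[<your-LDAP-strategy-name>]
sizelimit = 1000   # default; raise it if more matching users should be returned
```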
Thank you.
↧
kvstore on Search Head Cluster - disk utilisation imbalance
I have a Splunk 6.4 environment with 3-member SH Cluster running kvstore without replication to the indexer tier.
The kvstore is not particularly heavily utilised, with only three user-defined collections. The biggest of these is a table with ~130,000 rows, while the other two are both <30,000 rows.
(the Cluster also runs Enterprise Security with some vendor apps installed for good measure. Between them, these also defined some collections, but their contribution is negligible - fewer than 10,000 rows in total)
All three lookups operate as state tables: they are frequently updated, with new data being written and old data deleted from them, and I suspect this could be a cause of the problem I'm seeing. The total size of the MongoDB files in
$SPLUNK_HOME/var/lib/splunk/kvstore/mongo
is:
SH1 - 13GB
SH2 - 2.8GB
SH3 - 12GB
The 2.8GB on SH2 looks almost plausible for the amount of data I have in my lookups, but the >10GB sizes on the other two SH's.. no way.
Checking the operation of the kvstore on each SHC member using (credentials are a placeholder; `-u` needs a user:password argument),
curl -k -u admin:changeme https://localhost:8089/services/server/introspection/kvstore/serverstatus
returns (albeit fairly incomprehensible) introspection data, so the kvstore on the two bloated SHC members shouldn't be stale and wouldn't benefit from a resync... or would it?
Thanks, folks.
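If a resync does turn out to be warranted, a hedged sketch of the sequence - run on one bloated member at a time, and only if the remaining members hold a healthy copy of the data:

```
splunk stop
splunk clean kvstore --local   # removes the local KV store database files
splunk start                   # the member resynchronizes from the cluster
```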
↧