New to Splunk; can anyone help me build a SH cluster? Any videos would be great. I tried reading the tutorials on Splunk but I'm still confused. I already have a practice environment set up.
http://docs.splunk.com/Documentation/Splunk/6.6.3/DistSearch/SHCdeploymentoverview
↧
Help setting up a search head cluster?
↧
SA-ldapsearch + TLS in a Search Head Cluster
Hi there,
Has anyone here succeeded in configuring SA-ldapsearch using TLS on a SHC?
We have successfully configured it on a Heavy Forwarder that is part of our architecture, but it does not work on a member of our Search Head Cluster, where it does not seem to even load the SSL settings.
Here are some details.
The SSL config is the same on both instances:
SH $ splunk cmd btool --app=SA-ldapsearch ssl list
[sslConfig]
caCertFile = /opt/splunk/etc/auth/ca.pem
sslVersions = tls
HF $ splunk cmd btool --app=SA-ldapsearch ssl list
[sslConfig]
caCertFile = /opt/splunk/etc/auth/ca.pem
sslVersions = tls
There is also an sslConfig stanza in another app, also identical:
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca.pem
serverCert = $SPLUNK_HOME/etc/auth/splunk-cert.pem
requireClientCert = true
sslVerifyServerCert = true
certCreateScript =
sslVersions = tls1.1, tls1.2
sslVersionsForClient = tls1.1, tls1.2
SA-ldapsearch.log on the HF (OK):
2017-09-11 18:56:32,218, Level=DEBUG, Pid=16868, File=search_command.py, Line=294, LdapTestConnectionCommand arguments: ['/opt/splunk/etc/apps/SA-ldapsearch/bin/ldaptestconnection.py', '__EXECUTE__', 'domain="default"']
2017-09-11 18:56:32,220, Level=DEBUG, Pid=16868, File=search_command_internals.py, Line=296, LdapTestConnectionCommand: ldaptestconnection domain="default"
2017-09-11 18:56:32,220, Level=DEBUG, Pid=16868, File=ldaptestconnection.py, Line=48, Command = ldaptestconnection domain="default"
2017-09-11 18:56:32,220, Level=DEBUG, Pid=16868, File=configuration.py, Line=47, Command = ldaptestconnection domain="default"
2017-09-11 18:56:32,242, Level=DEBUG, Pid=16868, File=configuration.py, Line=536, Configuration = ldaptestconnection(server=[Server(host='domain.local', port=636, use_ssl=True, allowed_referral_hosts=[(u'*', True)], tls=Tls(validate=2, version=2, ca_certs_file='/opt/splunk/etc/auth/ca.pem'), get_info=3), Server(host='domain.local', port=636, use_ssl=True, allowed_referral_hosts=[(u'*', True)], tls=Tls(validate=2, version=2, ca_certs_file='/opt/splunk/etc/auth/ca.pem'), get_info=3), Server(host='domain.local', port=636, use_ssl=True, allowed_referral_hosts=[(u'*', True)], tls=Tls(validate=2, version=2, ca_certs_file='/opt/splunk/etc/auth/ca.pem'), get_info=3)], credentials=CN=Splunk,OU=Admins,DC=domain,DC=local, alternatedomain=domain.local, basedn=dc=domain,dc=local, decode=True, paged_size=1000)
2017-09-11 18:56:32,242, Level=DEBUG, Pid=16868, File=ldaptestconnection.py, Line=65, Testing the connection to ldaps://domain.local:636
SA-ldapsearch.log on the SH (KO); it looks like the SSL parameters are not loaded:
2017-09-11 18:54:06,998, Level=DEBUG, Pid=9063, File=search_command.py, Line=294, LdapTestConnectionCommand arguments: ['/opt/splunk/etc/apps/SA-ldapsearch/bin/ldaptestconnection.py', '__EXECUTE__', 'domain="default"']
2017-09-11 18:54:07,000, Level=DEBUG, Pid=9063, File=search_command_internals.py, Line=296, LdapTestConnectionCommand: ldaptestconnection domain="default"
2017-09-11 18:54:07,000, Level=DEBUG, Pid=9063, File=ldaptestconnection.py, Line=48, Command = ldaptestconnection domain="default"
2017-09-11 18:54:07,001, Level=DEBUG, Pid=9063, File=configuration.py, Line=47, Command = ldaptestconnection domain="default"
2017-09-11 18:54:07,006, Level=ERROR, Pid=9063, File=search_command.py, Line=346, Traceback (most recent call last):
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/searchcommands/search_command.py", line 320, in process
self._execute(operation, reader, writer)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/searchcommands/generating_command.py", line 79, in _execute
for record in operation():
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/ldaptestconnection.py", line 49, in generate
configuration = app.Configuration(self)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/app/configuration.py", line 52, in __init__
self._read_configuration()
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/app/configuration.py", line 432, in _read_configuration
settings = self._read_default_configuration()
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/app/configuration.py", line 457, in _read_default_configuration
response = service.get('properties/ldap/default', namespace.owner, namespace.app, namespace.sharing)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 241, in wrapper
return request_fun(self, *args, **kwargs)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 62, in new_f
val = f(*args, **kwargs)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 586, in get
response = self.http.get(path, self._auth_headers, **query)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 1056, in get
return self.request(url, { 'method': "GET", 'headers': headers })
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 1108, in request
response = self.handler(url, message, **kwargs)
File "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 1226, in request
connection.request(method, path, body, head)
File "/opt/splunk/lib/python2.7/httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "/opt/splunk/lib/python2.7/httplib.py", line 1097, in _send_request
self.endheaders(body)
File "/opt/splunk/lib/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/opt/splunk/lib/python2.7/httplib.py", line 897, in _send_output
self.send(msg)
File "/opt/splunk/lib/python2.7/httplib.py", line 859, in send
self.connect()
File "/opt/splunk/lib/python2.7/httplib.py", line 1278, in connect
server_hostname=server_hostname)
File "/opt/splunk/lib/python2.7/ssl.py", line 352, in wrap_socket
_context=self)
File "/opt/splunk/lib/python2.7/ssl.py", line 579, in __init__
self.do_handshake()
File "/opt/splunk/lib/python2.7/ssl.py", line 808, in do_handshake
self._sslobj.do_handshake()
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:603)
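Not from the original post, but when a handshake fails like this it can help to probe the port directly and see which TLS versions it accepts. A sketch with openssl; the hostname, port, and CA path are placeholders for your environment:

```shell
# Probe the management port with TLS 1.2 only; a successful handshake
# prints the certificate chain, while a failure reproduces an alert
# like the one in the traceback above.
# sh1.example.com:8089 and the CA path are placeholders.
openssl s_client -connect sh1.example.com:8089 -tls1_2 \
    -CAfile /opt/splunk/etc/auth/ca.pem </dev/null
```

Repeating the probe with `-tls1` or `-tls1_1` narrows down which protocol versions the endpoint actually negotiates.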
Thanks in advance for any feedback!
↧
Can a search head cluster be implemented without integrating with a deployer?
I have a standalone search head connected to only one search peer. Now I am introducing another search head to the environment and trying to implement a search head cluster with two search heads.
Can I achieve that without integrating these search heads with a deployer instance, or is a deployer mandatory to implement a search head cluster?
↧
How do I update the certificate in the search head cluster?
One of our certificates is expiring soon. Where do I update the new certificate in a search head cluster in Splunk?
↧
SSL certificate for F5 VIP to search head cluster?
We're finishing up our migration from a single search head to a search head cluster. Our company uses F5 load balancers. Per this http://docs.splunk.com/Documentation/Splunk/latest/DistSearch/UseSHCwithloadbalancers , I had the web guys set me up with a VIP that points to our 2 search heads, using layer-7 processing and persistence.
In order to keep the clients from getting the SSL certificate warning every time they log in, I wanted to have a certificate made for the friendly 'splunk.company.com' URL. The web guys are telling me that because Splunk specifies a layer-7 profile, they can't have a cert on the VIP and it would have to be an individual cert on each search head, which I don't think would prevent the warnings in the browser...
Has anyone else run into this?
↧
Does Splunk support running a standalone search head next to a search head cluster?
While reading the guide for upgrading standalone search heads to a cluster, I noticed that you cannot add an existing search head.
It must be a new instance, or one cleaned using `splunk clean all`.
Because our existing instance has many custom scripts and settings, I don't want to wipe and upgrade it yet. Could this existing instance be used alongside (but separate from) the search head cluster?
↧
How do I distribute the search app bundles on a search head cluster?
In a search head cluster, we can use the deployer to distribute app bundles,
but I've always had a question.
If I need to update a configuration file in the search app, for example to add a lookup table in `/search/lookups/` or a static file (.js) in the `/search/appserver/static/` directory, how should I do it?
A. Copy `$SPLUNK_HOME/etc/apps/search` from a search head to `$SPLUNK_HOME/etc/shcluster/apps/` on the deployer,
then add the new lookup table or static file, and finally distribute the bundle with the `splunk apply` CLI.
B. Create a `search` directory directly under `$SPLUNK_HOME/etc/shcluster/apps` on the deployer, then create a `lookups` directory inside it, add the lookup table there, and finally distribute the bundle with the `splunk apply` CLI.
Which method is correct, A or B?
Please forgive my English; any help would be appreciated.
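For what it's worth, the directory layout described in option B can be sketched as follows; `SPLUNK_HOME`, the lookup filename, and the target URI are placeholders, not values from the post:

```shell
# Stage a minimal "search" app overlay on the deployer (method B):
# only the new lookup is shipped, not a copy of the whole search app.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
mkdir -p "$SPLUNK_HOME/etc/shcluster/apps/search/lookups"
cp my_lookup.csv "$SPLUNK_HOME/etc/shcluster/apps/search/lookups/"
# Push the bundle to any cluster member (placeholder URI):
"$SPLUNK_HOME/bin/splunk" apply shcluster-bundle -target https://sh1.example.com:8089
```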
↧
Alert Manager app: Can I integrate alerts to all search heads in a search head cluster?
Hi,
I have a search head cluster with 3 members. I want to integrate the Alert Manager app into the search head cluster in such a way that I can see all the alerts on every search head, OR all the alerts arrive on a single search head.
Right now, the Alert Manager app has been installed on all the search heads through the deployer, and the "alerts" index has also been created on all the search heads.
Whenever the scheduled searches run, different alerts appear on different search heads. Typically the search head that initiates the search is the one where the alert gets triggered in Alert Manager.
How do I integrate Alert Manager so that the alert gets triggered on either all the search heads or any one of them?
↧
Search head cluster dilemma -- Is there a way to reverse this configuration issue?
hi everyone:
I seem to have made a mistake on the cluster. I wanted to add a lookup table in the lookups directory of the search app (`$SPLUNK_HOME/etc/apps/search/lookups` on every cluster member). In order to make all the search heads (4 search heads) have the same configuration, I did the following steps:
Step 1: I copied the search app from one of the search heads to the deployer.
Step 2: Then I added a lookup table in the `$SPLUNK_HOME/etc/shcluster/apps/search/lookups/` directory on the deployer.
Step 3: I pushed the configuration changes to the cluster members with the `splunk apply shcluster-bundle -target https://xxxx:8089` command.
I thought that would give all members the same lookup table. Prior to this, all knowledge objects had been created through the GUI.
But then I found that I could not delete my own fields, alerts and other knowledge objects.
As an administrator, I can't delete my own knowledge objects; only about 1% of the knowledge objects can be deleted.
Did I make a mistake on the cluster? So now, how do I rescue my search head cluster and get it back to normal?
Can you tell me the steps?
See screenshot 1:
Two new directories (`default.old.date-bundle id`) were added on the search head, because I pushed two bundles through the deployer.
See screenshot 2:
I copied the entire search app (`$SPLUNK_HOME/etc/apps/search`) to the deployer, then made the configuration changes, and finally pushed to the cluster members.
Why was my method wrong? I always thought that a lookup table can only be called from the Search & Reporting app if it is placed in the lookups directory of the search app, and that if the lookup table is placed in another app's directory it cannot be called from Search & Reporting. Is that idea wrong?
![alt text][1]
![alt text][2]
[1]: /storage/temp/216644-01.png
[2]: /storage/temp/216645-02.jpg
↧
Unable to set static captain during search head clustering - RESOLVED
We have followed the link to create a search head cluster.
https://docs.splunk.com/Documentation/Splunk/6.5.3/DistSearch/Staticcaptain
However, it says that finally we need to initiate a captain, and then captain election will be random.
We connected all the servers to the deployment server, and when I ran the final command to elect a captain for the first time, I saw the error below.
I checked that port 8089 and the management port for search replication are open at the VM level on all the search heads, but it gives the following error:
[splunk@search1 bin]$ ./splunk bootstrap shcluster-captain -servers_list ":8089,:8089,:8089 " -auth admin:******
Failed to Set Configuration. One potential reason is captain could not hear back from all the nodes in a timeout period. Ensure all to be added nodes are up, and increase the raft timeout. If all nodes are up and running, look at splunkd.log for appendEntries errors due to mgmt_uri mismatch
Thanks for your reply.
Vikram.
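One thing the error message suggests checking is each member's configured mgmt_uri. A couple of standard Splunk CLI commands for that (run on each search head; output varies by environment):

```shell
# Show the effective shclustering settings, including mgmt_uri,
# as resolved by btool on this member:
splunk btool server list shclustering
# Once a captain exists, this shows each member and its status:
splunk show shcluster-status
```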
↧
Splunk search head cluster bundle push is very slow
Hey Splunkers,
I am running into issues with applying a search head cluster bundle.
The bundle is around 200 MB, including Splunk Enterprise Security, and the instances run in AWS.
When I run the usual apply shcluster-bundle command, everything works fine, except that it takes ~2 hours to push it (3 SHs).
The deployer is running on a t2.medium and the search heads on m4.xlarge instances. The CPU is not overwhelmed at all during the push, and I have also verified the bandwidth with iperf3; it is more than all right (~500 Mb/s). There are no searches running at the moment and no data is being indexed. I am just building and testing the infrastructure.
I tailed splunkd.log on the deployer during the push, and there were no WARN or ERROR messages about it either.
Do you have any idea what else to test and where the root cause could be?
Thank you for any feedback,
Marek
↧
Do I need to install deployment monitor, cluster master, search head cluster separately on the same machine?
I have one machine for the deployer, cluster master, deployment server, and license master. Do I really need a separate installation of these components on the same machine? If yes, please help me with the steps; if no, where would separate installations on the same machine be required for testing a distributed environment?
↧
Alert Manager not working on a clustered search head setup
We have set up Alert Manager in a clustered search head environment. We see alerts triggered in the Alert Manager app on some occasions and no alerts otherwise. The same is the case when we look at the events in index=alerts.
1. I checked the internal logs and do not see any errors here:
index=_internal sourcetype=splunkd component=sendmodalert action="alert_manager"
2. I can see that the incidents are present in the KV store with the current date:
| inputlookup incidents | eval eventtime=strftime(alert_time, "%D") | stats count by eventtime
Any idea what might be going wrong here? Is any configuration required on the indexes in an SHC setup?
↧
Upgrade of a Search Head Cluster (v6.4.2 > 7.0.0) - Can I do a rolling upgrade?
Hi at all,
I have to upgrade a Search Head Cluster from version 6.4.2 to 7.0.0 and I have a doubt:
in https://docs.splunk.com/Documentation/Splunk/7.0.0/DistSearch/UpgradeaSHC it is written:
> Starting with version 6.5, you can perform a rolling upgrade. This allows the cluster to continue operating during the upgrade. To use the rolling upgrade process, you must be upgrading from version 6.4 or later.

It's not clear to me whether I can perform a rolling upgrade directly from 6.4.2 to 7.0.0, or whether I must first upgrade from 6.4.2 to 6.5 (not a rolling upgrade) and only then perform the rolling upgrade to 7.0.0.
Has anyone already performed this upgrade?
bye.
Giuseppe
↧
map and sendemail commands in search head clustering
In my environment, I am building a search head cluster consisting of three search heads and one deployer.
I am also using an alert that sends mail individually, using the `map` command and the `sendemail` command, for logs that meet certain conditions.
However, when I checked this morning, only one alert had fired, and even though the result was one row, two mails were sent.
When I checked the internal logs, the line below appeared in the internal logs of two of the search heads at approximately the same time (a deviation of about 0.4 seconds):
"INFO sendemail:128 - Sending email..."
From this I concluded that the same search ran on two of the search heads.
Is there a workaround for this phenomenon?
Also, are the `sendemail` and `map` commands not recommended in a clustered setup?
And could that be the cause?
↧
Error message: "Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log "
I have deployed an app from the deployer to a search head clustering environment, but running any search inside the app shows the error: "Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info."
Inside search.log: `ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: Application does not exist: new_app`
Note: the same app works fine in the testing environment, but when deployed to prod it shows the above error.
↧
How to create a bunch of tags in a search head cluster
I am an admin in a Splunk 6.6.2 clustered environment. I created 10 tags through the GUI, and in my SHC the 10 tags get distributed to the other search heads. Next, I want to edit tags.conf with a UNIX text editor and create about 90 more tags. What actions need to take place after I save tags.conf in order to propagate the additional tags to the other search heads under the admin account? A rolling restart did not produce the desired effect. Thank you.
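For reference, a hand-edited tags.conf stanza of the kind described looks like this (the field, value, and tag names are made up for illustration):

```ini
# tags.conf: each stanza is a field=value pair;
# each attribute enables or disables a tag for it.
[eventtype=failed_login]
authentication = enabled
failure = enabled
```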
↧
Can you verify my plans for a search head cluster configuration?
Hi All,
I'm trying to create a SH cluster; here are the sequential steps I have. Please correct me.
**On the deployer**
[shclustering]
pass4SymmKey = shc@cluster
shcluster_label = sh_cluster
restart the deployer
**On all the search heads except deployer**
splunk init shcluster-config -auth admin:changeme -mgmt_uri https://respective_sh_ip:mgmt_port -replication_port rep_port -replication_factor 2 -conf_deploy_fetch_url https://deployer_ip:mgmt_port -secret shc@cluster -shcluster_label sh_cluster
restart the search heads after the configuration.
**Only on one search head and not deployer**
splunk bootstrap shcluster-captain -servers_list "https://sh1_ip:mgmt_port,https://sh2_ip:mgmt_port,https://sh3_ip:mgmt_port" -auth admin:admin
**Push configurations from deployer to sh member**.
Create an app and move it to etc/shcluster on the deployer, then:
splunk apply shcluster-bundle -target https://sh1_ip:mgmt_port
Thanks,
Allan
↧
How can we avoid data loss in the summary indexes when there is an indexing latency in the cluster?
We run into situations where summary indexes are incomplete because we have indexing latency in the cluster.
We usually set the same number of minutes for the **Earliest** and the **Run every** parameters...
![alt text][1]
[1]: /storage/temp/217914-amar3.jpg
What can be done? I think the issue is that the latency varies throughout the day and the week.
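One common pattern (a sketch, not the poster's configuration; the search name, summary index name, and all time values are assumptions) is to keep the same window length but shift the whole window back by more than the worst-case indexing latency, so the summary search only reads events that have already been indexed:

```ini
# savedsearches.conf sketch: runs every 10 minutes over a 10-minute
# window that ends 30 minutes ago, leaving a 30-minute latency buffer.
[summary_example]
cron_schedule = */10 * * * *
dispatch.earliest_time = -40m@m
dispatch.latest_time = -30m@m
action.summary_index = 1
action.summary_index._name = my_summary
```

The trade-off is that the summary always lags real time by the size of the buffer, so the buffer should be sized to the observed worst-case latency rather than the average.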
↧