There is quite a lot of documentation about doing a Splunk deployment, and I just want to see if anyone has a consolidated source/weblink on the subject. I am currently putting together a cluster and reading up on the activities. I have a simple setup running now with a search head cluster and an indexer cluster, but I still have many questions about a few things: do's and don'ts, best methods, how-tos, etc.
1. Looking for step-by-steps docs on bringing in my indexed data and search apps/configs ...
1a. looking for current step-by-step cluster deployment docs/links!
2. Looking for any nice, clear workflow diagrams or illustrated guides ...
3. Do I need a deployer, deployment server, master, and license server?
4. Is a captain for search head clusters still a current concept?
Thank you!
↧
Where can I find detailed documentation on best practices for deploying search head and indexer clusters?
↧
Why am I unable to add a new member to a search head cluster with constant "Your session is invalid. Please login." errors?
Hi all,
I have a running search head cluster on the latest Splunk version, 6.4.1.
I now need to add an additional member to the cluster, and proceeded as described here:
http://docs.splunk.com/Documentation/Splunk/6.4.1/DistSearch/Addaclustermember
I'm now struggling with the [Add the instance][1] step of the process.
When running the command on the new member, I receive a login prompt that asks for my username and password over and over again.
I checked the password and made sure it is correct. I double-checked by deliberately entering a wrong password, which immediately exits the "authentication loop".
splunk4.my.net:/opt/splunk # /opt/splunk/bin/splunk add shcluster-member -current_member_uri https://splunk3.my.net:8089
Your session is invalid. Please login.
Splunk username: admin
Password:
Your session is invalid. Please login.
Splunk username: admin
Password:
Your session is invalid. Please login.
Splunk username: admin
Password:
Your session is invalid. Please login.
Splunk username: admin
Password:
[and so on and on ....]
Thankfully there is a second way to add a new member to the cluster, so I ran the following command on one of the members already in the cluster. It also ends with an error:
splunk3.my.net::/opt/splunk # /opt/splunk/bin/splunk add shcluster-member -new_member_uri https://splunk4.my.net:8089
In handler 'shclustermemberconsensus': Failed to Set Configuration. One potential reason is captain could not hear back from all the nodes in a timeout period. Ensure all to be added nodes are up, and increase the raft timeout. If all nodes are up and running, look at splunkd.log for appendEntries errors due to mgmt_uri mismatch
I have seen the above error mentioned in other answers, which point to a password mismatch. So I added the pass4SymmKey in cleartext to server.conf, and also tried the already-hashed pass4SymmKey from another instance. The situation doesn't change.
I also checked the network connectivity by telnet to port 8089 from the new system to a cluster member and vice versa - both directions are working.
How can I add the new node to the search head cluster?
Any ideas or recommendations?
Btw: I have already started over several times, removing and reinstalling Splunk on the new node - every time with the same result.
Greets
Christian
[1]: http://docs.splunk.com/Documentation/Splunk/6.4.1/DistSearch/Addaclustermember#Add_the_instance
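For reference, a minimal sketch of the server.conf stanza involved on the joining member (hostname and secret are illustrative). To my understanding, mgmt_uri must exactly match the URI the other members use to reach this node (per the appendEntries mgmt_uri-mismatch hint in the error), and pass4SymmKey must be identical on every member, entered in cleartext and hashed by Splunk on restart:

```ini
# server.conf on the joining member (illustrative values)
[shclustering]
mgmt_uri = https://splunk4.my.net:8089
pass4SymmKey = yourSharedSecret
```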
↧
Will Search Head Pooling work in Splunk 6.4.1?
Hi,
We are upgrading to Splunk 6.4.1, but in multiple stages, the second stage being migrating to Search Head Clustering. Will Search Head Pooling work in Splunk 6.4.1? I know that it's deprecated, but will it still function?
↧
Why are admin settings not accessible after upgrading the search head cluster environment to 6.4.1?
Hidden admin settings are not accessible after upgrading the search head cluster environment to 6.4.1. As an admin user, I am not seeing admin-related settings (Server Settings, Server Controls, Licensing, Indexer clustering, Forwarder management, Distributed search) when I click "Show All Settings". It still shows the same objects that are available to all users.
Is anyone having the same issue after the upgrade?
Thanks,
sru
↧
Search Head Deployer in a SH Cluster: What happens to local?
I have been doing a few tests on how configurations are pushed when applying a shcluster bundle. However, I would like to find some definitive answers if at all possible.
On the deployer in shcluster/apps I have a Splunk app with
- appname/default/props.conf
- appname/default/transforms.conf
- appname/default/savedsearches.conf
- appname/local/props.conf
- appname/local/transforms.conf
- appname/local/savedsearches.conf
Now it appears when I apply the cluster bundle with
sudo -u splunk /opt/splunk/bin/splunk apply shcluster-bundle -target https://10.10.1.1:8089 -auth admin:changeme
The app gets pushed to the search head cluster members.
However, on the search heads, it appears everything in appname/local has been "merged" into appname/default. This is great, and I understand the reasoning: users can then make changes to the apps on the SH cluster, and only their changes are stored in appname/local. That way, if the apps are deployed again, they won't overwrite local users' changes to the app.
**First question:** Where is this deployment behavior documented? I would assume matching stanzas in local/props.conf override default/props.conf, but is this documented somewhere?
What happens to local really isn't covered here
http://docs.splunk.com/Documentation/Splunk/6.4.1/DistSearch/PropagateSHCconfigurationchanges
**Second question:** If I want to "take a snapshot" of an app from a search head in the cluster to "update" the deployer with the most recent version, is it just a matter of copying off the entire app directory?
I would remove any folders like appname/default.old.20160304-103301, which appear to be backups from the last deployment, then copy the app across to the deployer as the latest "version". I can see the documentation says you don't need to, but it seems like a good idea to "track" an app as it grows.
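To sketch that snapshot step (all paths here are illustrative stand-ins, not your real $SPLUNK_HOME), one way to copy an app off a member while skipping the deployer's default.old.* backup folders:

```shell
# Build a throwaway demo app so the sketch is self-contained;
# on a real member the source would be /opt/splunk/etc/apps/appname.
app=/tmp/demo-app
mkdir -p "$app/default" "$app/local" "$app/default.old.20160304-103301"
printf '[mysourcetype]\n' > "$app/default/props.conf"

# The actual snapshot copy, excluding the backup folders:
rsync -a --exclude 'default.old.*' "$app/" /tmp/demo-app-snapshot/
```

The trailing slashes matter to rsync: they copy the app's contents rather than nesting the directory one level deeper.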
**Bonus Knowledge**
I just discovered you have control over how the deployer handles lookups which is great. This is one of the reasons I have been hesitant to deploy at times.
splunk apply shcluster-bundle -target : -preserve-lookups true -auth :
http://docs.splunk.com/Documentation/Splunk/6.4.1/DistSearch/HowconfrepoworksinSHC
↧
Does anyone have a sample configuration for running a license master and a deployer on the same node?
I am planning to run a license master and a deployer on the same node. Can you please provide an example configuration? I am struggling to get them both to work together.
I have 2 sites:
Site 1 has:
1 x master indexer
2 x indexers
2 x search heads
1 combined license master/deployer
Site 2 has:
2 x indexers
1 x search head
The master and all 4 indexers form a single indexer cluster.
The 3 search heads form a single search head cluster.
Do I need 2 separate installs of Splunk, or is it possible to get it work with one install? Any help, sample configs or advice much appreciated.
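As a sketch (all values illustrative): to my understanding a single Splunk Enterprise install can hold both roles. The deployer role is just a server.conf stanza, and the license master role needs no extra stanza on the combined node; the other instances point at it as license slaves:

```ini
# server.conf on the combined license master / deployer node
[shclustering]
pass4SymmKey = yourShcSecret
shcluster_label = shcluster1

# server.conf on every OTHER instance (license slaves), pointing at this node:
# [license]
# master_uri = https://licensemaster.example.com:8089
```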
↧
What is the best way to recreate and deploy an app with a custom navigation bar from a 5.0.7 search head in a 6.3.3 search head cluster?
We currently have a standalone search head (5.0.7) with customization to the nav bar ( etc/apps/search/local/data/ui/nav/default.xml ) to help users quickly access searches and dashboards.
We are building a new search head CLUSTER (6.3.3), and I do not know how to recreate this when using the deployer for the cluster. It seems it would be a simple matter of deploying the default search app with our tweaks, but the docs specifically say not to deploy default apps like search.
Manually copying the file into the search app on all the cluster nodes seems backwards and would be annoying to keep up to date.
What's the best way to solve this?
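For reference, the nav file itself is plain XML, so one commonly suggested route is to put it in a small custom app's default/data/ui/nav/default.xml and push that app from the deployer, rather than touching the built-in search app. A minimal sketch (view, saved-search, and collection names are examples):

```xml
<nav search_view="search">
  <view name="search" default="true" />
  <view name="my_dashboard" />
  <collection label="Team searches">
    <saved name="Errors last 24h" />
  </collection>
</nav>
```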
↧
Will CSV files produced by the outputcsv command be replicated by the search head cluster?
Hi all,
I currently have 1 search head running all my scheduled searches. Some of these searches use the `outputcsv` command to export Splunk results for use in other systems. Will these CSV files be replicated by the search head cluster? I won't be able to control which search head produces the CSV, so I need to know if Splunk deals with this or not.
I've searched through the documentation, but haven't found anything explicit. Any help would be greatly appreciated!
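For what it's worth: to my knowledge, outputcsv writes its files to $SPLUNK_HOME/var/run/splunk/csv on whichever member happened to run the search, and those files are not replicated across the cluster. A workaround I have seen suggested is outputlookup, since lookup files do participate in SHC replication (the filename below is an example):

```
... | outputlookup export_results.csv
```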
Thanks
↧
How to troubleshoot why accounts and objects are not replicating in our Search Head Cluster?
Hi,
We are finding that numerous objects and accounts are not replicating across our Search Head Cluster. Are there any troubleshooting steps? Log entries to look at?
↧
In what order do we upgrade Splunk servers in our clustered environment from 6.3.0 to 6.4.1?
Hi,
We are upgrading all of our Splunk components from Splunk 6.3.0 to 6.4.1.
Following are the servers in the clustered environment.
a) License server
b) Deployment server
c) Cluster Master
d) Search Head Cluster Deployer
e) Search Heads
f) Indexers
g) Forwarders
What is the recommended order to upgrade the above servers?
Thanks,
Jitendra
↧
How to set up HTTP event collector in a search head cluster, and does the token need to be in a specific format?
I do not see an option for http event collector in Splunk Web.
We have a search head cluster and an indexer cluster.
Should I create an app on the deployer and push the configuration to all search heads?
Another question concerns the token that needs to be generated. Does it have to be in a specific format, or can any random token work?
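On the token format: the tokens Splunk generates are UUIDs, so a UUID-shaped value is a safe assumption if you mint your own for an inputs.conf HEC stanza pushed from the deployer. A minimal sketch (where the token ends up in inputs.conf is not shown):

```shell
# Mint a UUID-format token for an HEC [http://...] stanza (illustrative).
token=$(python3 -c 'import uuid; print(uuid.uuid4())')
echo "$token"
```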
Thanks a ton.
↧
"Too many search jobs found in the dispatch directory" - Can we run this command on our clustered search heads to clean it?
We are on search head clustering with 4 search heads and version 6.3.3.
Recently started seeing WARNING:
Too many search jobs found in the dispatch directory (found=3186, warning level=2000). This could negatively impact Splunk's performance, consider removing some of the old search jobs.
Can we run the below command on all our search heads to clean it?
splunk cmd splunkd clean-dispatch /apps/old-splunk_dispatch -1d
Will this only clean 1 day old jobs, or can we safely run it with -7d?
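On the age question: my understanding is that clean-dispatch moves jobs older than the given age out of the dispatch directory into the destination you name, so -7d should be just as safe and simply keep a week of jobs. To preview what a given threshold would catch before running it, a read-only sketch (the path here is a self-contained stand-in for $SPLUNK_HOME/var/run/splunk/dispatch):

```shell
# Self-contained demo: one "old" job directory and one fresh one.
dispatch=/tmp/demo-dispatch
mkdir -p "$dispatch/job_fresh" "$dispatch/job_old"
touch -t 202001010000 "$dispatch/job_old"   # backdate to look stale

# List job directories older than 7 days
# (roughly what clean-dispatch with -7d would target):
find "$dispatch" -mindepth 1 -maxdepth 1 -type d -mtime +7
```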
↧
Getting "Error while fetching apps baseline on target=http://:8089: Network-layer error: Connection reset by peer" trying to deploy apps to a search head cluster
I'm having an issue running the command:
splunk apply shcluster-bundle -target http://:8089
This yields the error:
Error while deploying apps to first member: ConfDeploymentException: Error while fetching apps baseline on target=http://:8089: Network-layer error: Connection reset by peer
After numerous attempts to change my pass4symkey under the shclustering stanza in my server.conf file, I'm still getting the same error. When we had originally set up the search head cluster, this worked properly, but I can't seem to find a way to debug this issue.
Adding the -debug flag gives the following:
(1ry) QUERYING: 'base_url:https://127.0.0.1:8089 auth_option:(null) relative_url:/services/apps/deploy command=POST'
In build_full_rest_url(): Composing URL from base=https://127.0.0.1:8089 + relative=/services/apps/deploy
In build_full_rest_url(): Composed URL=https://127.0.0.1:8089/services/apps/deploy
In make_simple_rest_call_online(): using_basic_auth=0
In make_simple_rest_call_online(): [Re-]Initialized HTTP request headers:
In make_simple_rest_call_online(): HTTP request response_code=500
Online REST call to /services/apps/deploy returned -1
Error while deploying apps to first member: ConfDeploymentException: Error while fetching apps baseline on target=http://:8089: Network-layer error: Connection reset by peer
(1ry) FAILED: 'HTTP/1.1 500 Error while deploying apps to first member: ConfDeploymentException: Error while fetching apps baseline on target=http://:8089: Network-layer error: Connection reset by peer'
Since I can restart all of my search heads and they successfully reform the cluster, I don't believe this is a pass4SymmKey issue. Any suggestions as to what could cause this?
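One quick check that seems worth trying (hostname and credentials below are illustrative): hit the first member's management port directly from the deployer, to see whether the REST layer itself answers and so separate a network/SSL problem from a bundle-deployment one:

```shell
curl -k -u admin:changeme https://sh-member1.example.com:8089/services/server/info
```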
↧
Adding a search head to a search head cluster: do we need to do anything other than giving it the same configuration as the existing members?
Hi Splunk experts!
We have a Splunk Enterprise Search Head Cluster with 3 Search Heads.
We need more cores, so we're adding a 4th physical server.
Is there anything special we need to do besides give it the same config as the other Search Heads? This is the first time since we built the Search Head Cluster that we have added an additional Search Head into it.
Thanks!
↧
Where do data model summaries reside in a distributed environment with search head and indexer clusters?
In a distributed environment with a search head cluster and an indexer cluster, where do data model acceleration summaries reside?
In the search head cluster or in the indexer cluster? I am asking this in order to estimate storage sizing requirements.
Assume that users only access the data through the search heads.
Does cluster replication / the bundle work with the available searchable copies of the data?
Thanks.
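For what it's worth, my understanding is that acceleration summaries live on the indexers, alongside the buckets they summarize, under tstatsHomePath, so the storage estimate belongs on the indexer side. A sketch of the relevant indexes.conf (index name illustrative; the tstatsHomePath shown is, I believe, the default):

```ini
[my_index]
homePath       = $SPLUNK_DB/my_index/db
coldPath       = $SPLUNK_DB/my_index/colddb
thawedPath     = $SPLUNK_DB/my_index/thaweddb
tstatsHomePath = volume:_splunk_summaries/my_index/datamodel_summary
```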
↧
Changing the pass4SymmKey for Search Head Clustering not being encrypted in the Deployer
We were having trouble with one of the search head cluster members, so we went in to ensure that all of the shclustering pass4SymmKey values were correct. After restarting the deployer's splunkd service, we found that the shclustering pass4SymmKey value did not get encrypted. Other pass4SymmKey values (clustering and general) are being encrypted, but not shclustering. This has happened on both 6.4.0 and 6.4.1.
We double-checked that the field is listed in the encrypt fields, and it is.
I'm figuring we are missing something. Any suggestions?
↧
Is there a way to migrate custom apps to a search head cluster without folding everything into default?
All,
We're looking for a way to migrate our custom apps from a standalone search head to a search head cluster without having all of our user-defined configurations (App shared) end up in the default folder. We typically create skeleton apps (literally just shells of apps with no knowledge objects) for each workgroup or department that uses Splunk, then users populate those apps with things that are relevant to them. This works well, but it results in lots of configuration in local. This configuration then, as expected, gets pulled into default if we try to use the Deployer to migrate it to a search head cluster. After reading the docs, browsing Answers (see option 2) and opening a case with Support, our options seem to be:
1. Deploy the apps as-is using the Deployer and explain to users that they can't delete anything that they previously shared at the App level. Not sure if there are other side-effects. Certainly not the end of the world, but something feels off about this; in my mind, the Deployer doesn't seem to be meant for these user-created apps.
2. We noticed [this answer][1] about a script called migrate_users.py. It seems like a good idea to use the API to export and then import objects in a sort of cluster-native way. We would deploy the skeleton apps using the Deployer then export/import. We mainly just have to deal with saved searches; if there's code that exists to retrieve all the saved searches from an app in a standalone search head and create them in a search head cluster using the REST API or CLI, that would be great! If not, any known gotchas if we tried to roll our own?
What do you all think? This can't be a huge problem, or I imagine there would be more talk about it, but it seems like almost everyone migrating to a search head cluster would be affected by it to some degree. Thanks for your thoughts!
[1]: https://answers.splunk.com/answers/331146/after-migration-from-our-standalone-search-head-to.html#answer-331180
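As a sketch of the REST route in option 2 (hostnames, app name, and credentials are all illustrative, and I'm not claiming this is what migrate_users.py does): list the saved searches on the old search head, then recreate each one on any single SHC member and let replication propagate it:

```shell
# 1) Export: list saved searches in an app on the standalone search head
curl -k -u admin:pass \
  "https://old-sh.example.com:8089/servicesNS/nobody/myapp/saved/searches?output_mode=json&count=0"

# 2) Import: create one on a cluster member (SHC replication does the rest)
curl -k -u admin:pass \
  "https://shc-member.example.com:8089/servicesNS/nobody/myapp/saved/searches" \
  -d name="My Search" \
  --data-urlencode search="index=main error | stats count"
```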
↧
What is the best practice for bringing down a search head cluster member?
I'm trying to find guidance on the best way to bring down a member of a search head cluster without impacting any of the searches currently running on it. What I have found in my limited testing is that following this documentation: http://docs.splunk.com/Documentation/Splunk/6.2.2/DistSearch/Removeaclustermember causes the searches running on that search head to be orphaned; they are no longer viewable in the job manager on the remaining active cluster members.
Is there a best practice when preparing for a scheduled maintenance to bring down a member of a search head cluster so that there will be no impact to the system as a whole?
↧
How to add a new app from the deployment server in a search head cluster?
I have a search head clustering environment. How can I add a new app from the deployment server? I tried to use the app builder in Splunk Web on the deployment server, but the new app was not replicated to the search heads, so I cannot see the new app.
↧
How do you manage the content of users' Splunk apps in a Search Head Cluster?
Our Splunk install has a (new) search head cluster. Previously we were running search head pooling, and I'm struggling to figure out how to manage the content of users' Splunk apps. With SHP, users could delete and change objects from any search head in the pool. Now the SHC promotes artifacts from local to default, and users cannot, for example, remove an unwanted dashboard or update a lookup file. I've tried to force the requested changes on the deployer, but more often than not the changes are not pushed out to the search heads in the SHC.
For a specific example, let's use the lookup file situation. On the SHP, to update the file, they would upload a new one, delete the old one, and change the perms on the new one to be "app level," promoting it into place. How can they accomplish this task in SHC?
Another example: User wanted a view removed. Since they can't do this, I went to the Deployer, deleted the XML file, and issued a `splunk apply shcluster-bundle` command. However, the file was not removed from SHC members. What is the proper way to do this?
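On the view-removal example: my understanding is that a deployer push adds and updates files but does not delete objects the members have already replicated, so the deletion has to happen through a member. A sketch via REST (hostname, app, view, and credentials are illustrative), which should let SHC replication remove the view cluster-wide:

```shell
curl -k -u admin:pass -X DELETE \
  "https://shc-member.example.com:8089/servicesNS/admin/myapp/data/ui/views/unwanted_view"
```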
↧