Will the Qualys VM App for Splunk Enterprise run on a search head cluster?
I don't see anything in the documentation about it. I know some of the third-party apps we've used haven't supported running on a search head cluster, and we've had to install them on a standalone search head instead.
↧
↧
How does metadata in a SAML world translate to a search head cluster?
We have one stand-alone search head and then also a search head cluster.
On the stand-alone we just implemented SAML/SSO, and one of the annoying pieces is that all saved objects (searches, macros, lookups, etc.) are "owned" by the SAML ID number of the user who created them. It's annoying that I then have to translate that ID back to a user name, but I can live with it.
More of a hassle is when our administrators want to publish something as being owned by the "admin" account. Since we can no longer sign on as that account to publish content, we have to go in and manually adjust the metadata files to change ownership. Again, a hassle, but at least it works.
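For reference, this is roughly the kind of stanza we end up hand-editing in an app's metadata/local.meta (the object name and permissions here are just illustrative):

    [savedsearches/example_search]
    owner = admin
    access = read : [ * ], write : [ admin ]
    export = system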
What I haven't yet figured out is how this will translate once we get SAML implemented in our search head cluster. I understand that when the cluster deployer pushes new configurations, the deployer's /local directories get bundled into the /default directories on the search nodes. But I haven't found anything in the documentation explaining whether default.meta and local.meta are merged the same way, since they live in their own /metadata directory.
Also... if default.meta and local.meta DO successfully change the ownership of the objects, since this is a change made at the deployer rather than on the nodes, does that mean we lose the ability to delete these objects from the web interface on the nodes? I suspect so, since they'd persist in the metadata on the deployer and would get re-created every time we do a new push. I also suspect they'd cause multiple errors, since the metadata would exist but the corresponding content stored on the nodes would be missing (i.e., the search name would persist but the search properties would be removed).
Can anyone point me to more in-depth documentation on how metadata and search head clustering interact?
Or, does anyone have better ideas on how to adjust object ownership other than manually editing metadata files?
Thank you!
↧
↧
Splunk DB Connect 2.4: How to resolve "AuthenticationError: Request failed: Session is not logged in" error on Heavy Forwarder?
Hello Guys,
I have a problem with Splunk DB Connect.
Splunk DB Connect 2.4 is installed on a heavy forwarder and I'm using a Search Head Cluster.
I keep getting this error in dbx2.log every time:
2017-04-21T13:41:13+0200 [INFO] [mi_base.py], line 190: action=caught_exception_in_modular_input_with_retries modular_input=mi_input://Data_URL retrying="5 of 6" error=Request failed: Session is not logged in.
Traceback (most recent call last)
File "$SPLUNK_HOME/db_connect/bin/dbx2/splunk_client/../../splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 300, in wrapper
"Request failed: Session is not logged in.", he)
AuthenticationError: Request failed: Session is not logged in.
Any help?
↧
For Application Cleanup, what are the best practices for moving default objects back to local?
During an upgrade last summer, Splunk PS (Professional Services) had our admin move all of the local assets into default... which left us with a bunch of objects that we can't edit or delete. I will be leaving my current company soon and am looking to do a cleanup for my successor before I go.
Looking for best practices for moving default back to local without overwriting local.
I am a power user and work exclusively from Splunk Web. We have a separate team of Splunk admins who manage the environment. We use search head clustering. Because we don't have access to the back end, we would rather just have full access to all of the objects in the app.
The plan is to:
1. Get a copy of the app.
2. Merge the .conf file entries we wish to retain from default into local.
3. Do the same for the views: move them to local.
4. Overwrite the app on the search head captain with the cleaned-up copy.
5. Restart Splunk or do a destructive sync??? <--- **MAIN QUESTION!!!**
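For context, the layout we're aiming for looks roughly like this (app and file names are placeholders). Settings in an app's local directory take precedence over the same stanzas in its default directory, which is why we want the editable copies to end up in local:

    our_app/
        default/savedsearches.conf    # shipped/pushed content; not editable from Splunk Web
        local/savedsearches.conf      # editable from Splunk Web; wins over default on conflicts
        default/data/ui/views/        # dashboards shipped in default
        local/data/ui/views/          # dashboards we can edit and delete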
Thanks in advance for help/feedback.
↧
Can CIM be installed on a search head cluster, or only on a standalone search head?
Can the Common Information Model (CIM) add-on be installed on search head clusters, or only on a standalone search head?
↧
↧
Using Deployment Server as Search Head Deployer
Hi,
We currently have a distributed setup with a Deployment Server, Indexer Cluster Master, Peer Indexers and a single Search Head. We are trying to migrate to Search head clustering.
Currently, we distribute apps to the search head with the Deployment Server as well. However, the SH clustering documentation says the Deployer should be used to distribute apps to the SH cluster. That would mean moving the search-head-related apps to a new location, `/etc/shcluster/apps/`, which complicates our distributed deployment. We tested app distribution to the new SH cluster using the existing Deployment Server configuration and it appears to work fine.
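For reference, the deployer workflow the documentation describes looks roughly like this (paths, host name, and credentials are placeholders):

    # on the deployer: stage the app, then push the bundle to the cluster
    cp -r my_app $SPLUNK_HOME/etc/shcluster/apps/
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle \
        -target https://shc-member1.example.com:8089 -auth admin:changeme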
I would like to know the reason why the use of Deployment Server is discouraged in SH clustering.
Thanks in advance,
Keerthana
↧
How to resolve error "Error pulling configurations from the search head cluster captain"?
I am getting the error "Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member"
I tried to run the following command
# splunk resync shcluster-replicated-config
but I am getting the error "Cannot resync_destructive: this instance is the captain"
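(For context, which member currently holds the captain role can be confirmed with something like the following; credentials are placeholders.)

    # splunk show shcluster-status -auth admin:changeme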
I then tried to perform a rolling restart of the search head cluster by running the following command:
# splunk rolling-restart shcluster-members
But still I am getting the error "Error pulling configurations from the search head cluster captain"
I also ran `splunk resync shcluster-replicated-config` after the rolling restart.
But that still did not fix it, and I am getting the above errors.
Please suggest a fix.
↧
What is the recommended upgrade order for search heads, indexers, heavy forwarders, deployment server, etc.?
I am currently planning on upgrading our Splunk Enterprise to version 6.5.2. I know I need to upgrade the Search Heads prior to the Indexers but I'm not sure what order everything else belongs in and am looking for a recommendation.
We have 18 indexers, running version 6.4.1.
We have 8 search heads in a cluster, running version 6.4.1.
We have a deployer (Cluster Master), running version 6.4.1.
We have a deployment server, running version 6.3.1.
We have 4 heavy forwarders that we use as syslog-ng and snmptrapd servers, running version 6.3.1.
We have several standalone search heads, not in the cluster, that do our alerting and run Splunk DB Connect and/or the Splunk App for CEF, running either 6.3.1 or 6.4.1.
We have a mixed bag of Universal Forwarders running 5.x and 6.x versions.
↧
PagerDuty "Setup PagerDuty Incidents" button is missing from UI. (App requires special instructions for SHC)
I'm stuck at step 4 in the Integration because the button referenced in the Integration Guide is missing from my UI. I'm running Splunk 6.5.
![alt text][1]
![alt text][2]
Any ideas?
[1]: /storage/temp/193324-screen-shot-2017-04-24-at-103229-pm.png
[2]: /storage/temp/193323-screen-shot-2017-04-24-at-103043-pm.png
↧
↧
Problems with URL Toolbox App Installed on a Search Head Cluster
I have installed the URL Toolbox app on a search head cluster, but the app is not working properly. When I try to use the macros associated with the app, I get these errors:
Could not find 'ut_countset.py'. It is required for lookup 'ut_countset_lookup'.
Streamed search execute failed because: Error in 'lookup' command: The lookup table 'ut_countset_lookup' does not exist or is not available.
I have double-checked that those objects are in the app and that the app is installed on the search heads. According to the documentation, the app does not need to be installed on the indexers, but to try to fix this error I installed it on the indexers anyway; that did not resolve the errors. I have verified the file permissions and used btool to confirm the stanzas are showing up. The command below displayed lines from the transforms.conf in the utbox folder:
/opt/splunk/bin/splunk btool transforms list --debug | grep ut_
I was also able to run the following command, which indicates that the Python files are working and do not have permission issues:
/opt/splunk/bin/splunk cmd python /opt/splunk/etc/slave-apps/APP_utbox/bin/ut_countset.py
↧
Why am I getting a warning from my search head cluster captain stating "unable to distribute to peer"?
I'm attempting to convert from a search head (SH) pool to a search head cluster. All instances (cluster master, index peers, heavy forwarders, and the original SH pool) are at v6.5.3 on Linux. I've followed the steps in the pool-to-cluster migration documentation, carefully I think, a couple of times now. I've missed "something" but I don't know how to find what it is.
I turned on DEBUG for DistributedBundleReplicationManager but didn't find any extra useful information. Same thing for SearchPeerBundlesSetup on one of the peers. To me, it looks like the bundle replication process is working from the SH cluster to the search peer(s), but whatever response is expected from the peer is not happening. Just a guess, though. Any thoughts you have on the subject are much appreciated.
- **Sending done. uploaded_bytes=82954240, elapsed_ms=5594. Waiting for peer.uri=https://xx.xx.xx.xx:8089 to respond**
- **got non-200 response from peer. uri=https://xx.xx.xx.xx:8089, reply="HTTP/1.1 204 No Content" response_code=204**
- **Unable to upload bundle to peer named xxxxx**
↧
Why is a Search Head Cluster Member not replicating all changes?
We recently added a new member to our search head cluster, and after changing the captain once the new member was added, we have been experiencing replication issues with one of the members in the cluster.
One member is not publishing its changes to the rest of the cluster; this shows up as a dashboard created on that member not appearing on the others. The strange part is that reports will replicate. It seems like the configuration push from the problem member to the captain is taking so long that by the time it gets there it is out of date. This appears in the logs as:
05-19-2017 11:49:30.853 -0400 WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=118946! Consider a lower value of conf_replication_max_push_count in server.conf on all members
05-19-2017 11:49:30.853 -0400 WARN ConfReplicationThread - Error pushing configurations to captain=, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=52b08cafbfb11ce9d453f78003f3449bb74d4829; current_baseline_op_id=36a8837153caf8be7e1ca7604851fa75dc9b4e06"
--
05-19-2017 11:51:50.296 -0400 WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=118399! Consider a lower value of conf_replication_max_push_count in server.conf on all members
05-19-2017 11:51:50.296 -0400 WARN ConfReplicationThread - Error pushing configurations to captain=, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=36a8837153caf8be7e1ca7604851fa75dc9b4e06; current_baseline_op_id=f662e069cf5cafa23d57fda3281422c33fe03b46"
--
05-19-2017 11:54:03.011 -0400 WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=117277! Consider a lower value of conf_replication_max_push_count in server.conf on all members
05-19-2017 11:54:03.011 -0400 WARN ConfReplicationThread - Error pushing configurations to captain=, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=1936b8e36a94adc7f8321bfa46889d05fd70476b; current_baseline_op_id=40f8f8a3c05895d2f295bc4b4d58c8be9d7dbe82"
--
05-19-2017 11:56:13.752 -0400 WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=115828! Consider a lower value of conf_replication_max_push_count in server.conf on all members
05-19-2017 11:56:13.752 -0400 WARN ConfReplicationThread - Error pushing configurations to captain=https://, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=9071097ce08bbc4988be45b8f5bc9ae5d61569e3; current_baseline_op_id=038728b4743c774e8d3a3a89d3c47f4a3be5a59d"
The problem member is a previously working member of the cluster and is not the newly added one. It was previously the captain, and after we switched to another captain it began running into this issue. Trying to move the captain back to this member almost crashed the cluster. The count of unpublished changes on this member is not very high yet, and consecutiveErrors doesn't exceed 1, so based on the documentation it would seem a destructive resync is not needed yet. I would like to avoid that if possible and allow the existing changes on the problem member to replicate.
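For what it's worth, the warning in the logs points at conf_replication_max_push_count in server.conf on the members. A minimal sketch of what that change would look like (the value below is only illustrative; I believe the shipped default is 100, so the log is suggesting something lower):

    # server.conf on each SHC member
    [shclustering]
    conf_replication_max_push_count = 50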
↧
Best way to move half of a SHC?
We have a multi-site (2 sites) environment with two 6-member SHCs. Each site is in a different physical location, and each site has 3 members of each SHC. I know I should probably have a majority in one site for each cluster, but I don't.
Next year, one site is being physically moved to a new location, which could mean a 3-day outage. I'm trying to determine how best to handle that for my SHCs. If I just move them, I'll lose a majority of members and won't be able to elect a captain.
Some ideas I have:
1. Statically set the captain before the outage and make it dynamic again once the boxes are back up.
2. Remove one or more of the members at the moving site first, leaving a majority in the site that stays up, and then add them back after the migration.
3. Add a temporary search head to the cluster at the site that stays up, giving it a majority.
I'm leaning toward 1 or 2. Any thoughts on the best approach? Does it matter?
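If we go with option 1, my understanding is that switching to a static captain looks roughly like this (URIs and credentials are placeholders, and I'd double-check the exact syntax against the docs for our version):

    # on the member that should become the static captain
    splunk edit shcluster-config -election false -mode captain -captain_uri https://shc1.example.com:8089 -auth admin:changeme
    # on every other member
    splunk edit shcluster-config -election false -mode member -captain_uri https://shc1.example.com:8089 -auth admin:changeme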
Thanks,
↧
↧
Has anyone seen search returning different numbers of events after upgrading to 6.6.0?
I upgraded our DMC (Distributed Management Console) to 6.6.0 last week, but everything else in our environment is still 6.5.3.
This search returns different results on the 6.6 DMC than on the 6.5.3 SHC (Search Head Cluster):
index=_* earliest=-2h@h latest=-1h@h
| stats count by index
| sort index
6.6.0:

    index            count
    _audit           49747
    _internal        16173711
    _introspection   67630

6.5.3:

    index            count
    _audit           33771
    _internal        7392283
    _introspection   47820
↧
Why does Splunk DB Connect not forward any event since being in a Search Head cluster?
Hi,
I had one search head (SH) on which I installed Splunk DB Connect, and everything was working fine.
Recently, I added 2 more SHs and put them into a cluster.
I used the deployer to install Splunk DB Connect on the 2 other SHs, but since then DB Connect doesn't forward any data to the indexer cluster. The last event I have is the one sent by the stand-alone SH.
I checked that my index is created and that the connection is fine.
Here is the log that I have:
2017-05-24T05:01:29+0200 [INFO] [mi_base.py], line 188: action=caught_exception_in_modular_input_with_retries modular_input=mi_input://answers-oab retrying="6 of 6" error=Request failed: Session is not logged in.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/mi_base.py", line 177, in run
    should_execute = runner.pre_run()
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/mi_base.py", line 107, in pre_run
    should_execute = self.clustering_precheck()
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/mi_base.py", line 92, in clustering_precheck
    is_clustering_enabled = shc_cluster_config.is_clustering_enabled()
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/splunk_client/shc_cluster_config.py", line 17, in is_clustering_enabled
    mode = self.content['mode']
I added an outputs.conf on the SH but it doesn't work.
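For reference, the stanza I added is roughly of this shape (indexer host names and port are placeholders):

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997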
I'm really stuck with this!
Thanks for your help
↧
Search affinity for non-multisite cluster
I have 2 locations and not a ton of resources. Multisite clustering required too much -- it seems like I need at least 3 indexers (or maybe it was 2 per site). But I only have 2 indexers, so I decided a multisite cluster was more than I needed. Instead, I set up a basic indexer cluster that I was hoping to span both locations. **Main goal = data safety**. Two copies of the active Splunk indexes, plus backups at each location, looks to be exactly what I need.
![alt text][1]
[1]: /storage/temp/204582-arch.png
But my pipe between sites is pretty limited. Ideally, my search head would be tied to a specific indexer so I am not pulling data across sites. I looked at search affinity (but that is multisite only) and distributed search (but that is non-clustered only). Is it possible to restrict my SearchHead1 to only search Indexer1?
↧
Splunk_TA_stream and search head clusters (shc)
While trying to deploy both `Splunk_TA_stream` and `splunk_app_stream` to an SHC, you see the following error and the deployer push fails:
Error while deploying apps to target=https://burch:splunkd-port with members=3: Error while updating app=Splunk_TA_stream on target=https://burch-ip:splunkd-port: Non-200/201 status_code=500; {"messages":[{"type":"ERROR","text":"\n In handler 'localapps': Error installing application: Failed to copy: /opt/splunk/var/run/splunk/bundle_tmp/010fb5c688614565/Splunk_TA_stream to /opt/splunk/etc/apps/Splunk_TA_stream. Error occurred while copying source to destination error=\"Text file busy\" src=\"/opt/splunk/var/run/splunk/bundle_tmp/010fb5c688614565/Splunk_TA_stream/linux_x86_64/bin/streamfwd\" dest=\"/opt/splunk/etc/apps/Splunk_TA_stream/linux_x86_64/bin/streamfwd\""}]}
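"Text file busy" on Linux generally means the destination file is currently being executed, so it looks like a running streamfwd process on the member is blocking the copy. A quick generic check (plain shell, nothing Splunk-specific):

    ps -ef | grep '[s]treamfwd'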
↧
↧
Is there a way to share a Data Model across 2 Search Head Clusters?
Hi,
We would like to use the same Data Model (same field extractions, same events, same acceleration window, etc.) in two different SH Clusters. Is it possible to do it without having to compute and store the acceleration files twice on the indexers?
Thank you!
↧
How to increase the default srchDiskQuota setting?
I am looking to increase the default srchDiskQuota for my users' role. How do I determine the maximum value this setting can safely be increased to?
I currently have both a search head cluster (4 search heads) and an indexer cluster (5 peer indexers). Searches are performed at the indexer peer level.
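For reference, the change I have in mind is the srchDiskQuota setting in authorize.conf on the role, roughly like this (the role name and value are placeholders; the value is in MB):

    [role_user]
    srchDiskQuota = 500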
↧
Search Head Deployer create .splunk folder in home directory
When pushing an shcluster bundle as our splunk user (via sudo), I got the following message:
Can't create directory "/home//.splunk": Permission denied
I was able to change the directory permissions so it could create .splunk, but my question is: why is it creating that in my home folder?
(I was not in my home folder when the push command was run, and I was using the absolute path to the bundle push command.)
Thanks, all!
↧