We recently moved our production search heads to a search head cluster (Splunk 6.2.6). Since last week I have noticed that ad-hoc jobs (run via the REST API or the web UI) are not expiring and quickly stack up.
I've checked limits.conf and savedsearches.conf and confirmed that the TTLs are set to 600 seconds or less.
This only happens in a clustered environment. We have dev servers running the exact same searches without issue.
In the job inspector output below, I can see that the job was created yesterday. It completed successfully and has TTLs of 600 seconds, so why is it still there? The expiration time just updates to "now" whenever I refresh the jobs list.
Is there some config specific to SHC that sets the TTL for completed jobs?
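For reference, these are the kinds of settings I have been checking. The stanzas below are just an illustration of where I expect the TTL to come from, not our exact config (the saved search name is a placeholder):

    # limits.conf -- dispatch TTL for ad-hoc search artifacts (illustrative values)
    [search]
    ttl = 600
    remote_ttl = 600

    # savedsearches.conf -- per-search TTL (search name is a placeholder)
    [example_saved_search]
    dispatch.ttl = 600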
This is an example from the job inspector
Search job inspector
This search has completed and has returned 1,192 results by scanning 4,986 events in 8.404 seconds.
The following messages were returned by the search subsystem:
INFO: Your timerange was substituted based on your search string
(SID: 1443331931.9_ECBEC051-E014-4F98-95CC-90307C8D43D7) search.log
Execution costs
Duration (seconds) Component Invocations Input count Output count
0.16 command.addinfo 158 4,808 4,808
0.02 command.eval 5 19,877 19,877
0.07 command.fields 158 4,808 4,808
0.00 command.presort 1 1,192 1,192
0.91 command.prestats 158 4,808 4,742
21.69 command.search 317 8,347 10,808
6.22 command.search.rawdata 149 - -
0.74 command.search.kv 149 - -
0.46 command.search.typer 149 4,808 4,808
0.32 command.search.filter 309 - -
0.15 command.search.calcfields 149 4,986 4,986
0.15 command.search.fieldalias 149 4,986 4,986
0.09 command.search.lookups 149 4,986 4,986
0.08 command.search.tags 149 4,808 4,808
0.05 command.search.summary 157 - -
0.00 command.search.index.usec_1_8 38,306 - -
0.00 command.search.index.usec_32768_262144 2 - -
0.00 command.search.index.usec_4096_32768 838 - -
0.00 command.search.index.usec_512_4096 68 - -
0.00 command.search.index.usec_64_512 153 - -
0.00 command.search.index.usec_8_64 1,053 - -
0.00 command.sort 1 1,192 1,192
0.12 command.stats 1 - 3,539
0.61 command.stats.execute_input 159 - -
0.15 command.stats.execute_output 1 - -
0.00 command.table 1 1,192 2,384
0.00 dispatch.check_disk_usage 1 - -
0.08 dispatch.createdSearchResultInfrastructure 1 - -
0.06 dispatch.evaluate 1 - -
0.06 dispatch.evaluate.search 2 - -
0.00 dispatch.evaluate.eval 5 - -
0.00 dispatch.evaluate.stats 2 - -
0.00 dispatch.evaluate.sort 1 - -
0.00 dispatch.evaluate.table 1 - -
7.32 dispatch.fetch 159 - -
0.00 dispatch.localSearch 1 - -
0.32 dispatch.parserThread 157 - -
0.00 dispatch.stream.local 1 - -
22.39 dispatch.stream.remote 157 - 32,716,802
0.03 dispatch.writeStatus 12 - -
0.26 startup.configuration 9 - -
3.49 startup.handoff 9 - -
Search job properties
bundleVersion 4206439116757466412
canSummarize 1
**createTime 2015-09-27T15:32:11.000+10:00**
cursorTime 1970-01-01T10:00:00.000+10:00
defaultSaveTTL 604800
**defaultTTL 600**
delegate None
diskUsage 188416
**dispatchState DONE**
doneProgress 1.0
dropCount 0
eai:acl
{
"app": "apm_snpm",
"can_write": "1",
"modifiable": "1",
"owner": "username",
"perms": {
"read": [
"username"
],
"write": [
"username"
]
},
"sharing": "global",
**"ttl": "600"**
}
earliestTime 2015-09-13T00:00:00.000+10:00
eventAvailableCount 0
eventCount 4808
eventFieldCount 0
eventIsStreaming True
eventIsTruncated True
eventSearch search (eventtype="summary_cvc_util") eventtype=summary_sanitized earliest=1442066400 latest=1443276000 CVC_ID="CVC000000123456"
eventSorting none
isBatchModeSearch True
isDone True
isFailed False
isFinalized False
isGoodSummarizationCandidate 1
isPaused False
isPreviewEnabled False
isRealTimeSearch False
isRemoteTimeline False
isSaved False
isSavedSearch False
isTimeCursored 1
isZombie False
keywords cvc_id::cvc000000123456 earliest::1442066400 eventtype::summary_cvc_util eventtype::summary_sanitized latest::1443276000 tclass::4
label None
latestTime 2015-09-27T00:00:00.000+10:00
modifiedTime 2015-09-28T10:10:59.478+10:00
normalizedSearch litsearch foo bar
numPreviews 0
pid 19020
priority 5
reduceSearch foo bar
request
{
"namespace": "apm_snpm",
"search": "| savedsearch cvc_util_up_down_green cvcid=\"CVC000000123456\" startdate=\"1442066400\" enddate=\"1443276000\" | search tclass=4 | sort 0 date | table date, ACCESS_SEEKER_ID, CSA_ID, POI_CODE, POI_STATE, CVC_ID, tclass, bandwidth, inboundUtilizationPcnt, inboundThroughputMbps, inboundDroppedPcnt, inboundDroppedMbps, outboundUtilizationPcnt, outboundThroughputMbps, outboundDroppedPcnt, outboundDroppedMbps"
}
resultCount 1192
resultIsStreaming False
resultPreviewCount 1192
runDuration 8.404
runtime
{
"auto_cancel": "0",
"auto_pause": "0"
}
scanCount 4986
search | savedsearch cvc_util_up_down_green cvcid="CVC000000123456" startdate="1442066400" enddate="1443276000" | search tclass=4 | sort 0 date | table date, ACCESS_SEEKER_ID, CSA_ID, POI_CODE, POI_STATE, CVC_ID, tclass, bandwidth, inboundUtilizationPcnt, inboundThroughputMbps, inboundDroppedPcnt, inboundDroppedMbps, outboundUtilizationPcnt, outboundThroughputMbps, outboundDroppedPcnt, outboundDroppedMbps
searchCanBeEventType 0
searchEarliestTime 1442066400.000000000
searchLatestTime 1443276000.000000000
searchProviders
[
"indexer1-heavy",
"indexer2-heavy",
"indexer3-heavy",
"indexer4-heavy",
"indexer5-heavy",
"indexer6-heavy",
"indexer7-heavy",
"indexer8-heavy",
"searchead1-heavy"
]
sid 1443331931.9_ECBEC051-E014-4F98-95CC-90307C8D43D7
statusBuckets 0
ttl 600
Additional info search.log
Server info: Splunk 6.2.6, foo.bar.local:8000, Mon Sep 28 10:10:59 2015 User: keithmuggleton
↧
Why are ad-hoc jobs not expiring in a Splunk 6.2.6 search head cluster?
↧
Search Head Clustering: How to push config bundles from a deployer to SHC members without a restart?
We have an environment where restart processes are controlled and monitored via a third party tool.
How do we push config bundles from a deployer to search head cluster members without a mandatory restart?
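For context, today we push with the standard CLI from the deployer, roughly as below (target host and credentials are placeholders); this is the step that triggers the rolling restart we want to avoid:

    # Run on the deployer; -target points at any SHC member
    # (host and credentials are placeholders)
    splunk apply shcluster-bundle -target https://sh-member.example.com:8089 -auth admin:changeme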
↧
After upgrading a search head cluster to Splunk 6.3, why are all our launcher app icons missing?
After upgrading to 6.3 (search head clustering), all our launcher app icons have disappeared, even for default, untouched apps such as Search & Reporting.
![alt text][1]
The path to the icon shows an "undefined" path.
The upgrade was performed as follows:
1. Stop all instances.
2. Upgrade the deployer.
3. Restart the deployer.
4. Upgrade all search head cluster member instances.
5. Start the member instances.
6. Wait 5 minutes.
7. Push a new deploy from the deployer.
[1]: http://i.imgur.com/54g3WwB.png

↧
After upgrading a search head cluster to Splunk 6.3, why do all menus on the top right of Splunk Web no longer function?
All the menus in the top right corner no longer function after upgrading to 6.3 (search head clustering).
The logged in user also doesn't show.
![alt text][1]
We upgraded our members and deployer in the standard documented way (deployer, members, push a deployment).
[1]: http://i.imgur.com/xanaBaW.png
↧
Is there a REST API call to apply shcluster-bundle with the deployer?
We need a fast and easy way to push changes to our three search head clusters, i.e. a way to deploy updated configuration bundles with a curl command.
For example:
curl -k -u admin:changeme https://:8089/services/shcluster/member/consensus
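For context, what we script today is the deployer CLI, roughly as below (deployer hosts, target members, and credentials are placeholders); we are hoping for a pure-REST equivalent we can call with curl instead:

    # Current approach: wrap the deployer CLI, once per search head cluster
    # (deployer hosts, target members, and credentials are placeholders)
    for deployer in deployer1 deployer2 deployer3; do
        ssh "$deployer" "/opt/splunk/bin/splunk apply shcluster-bundle --answer-yes \
            -target https://sh-member.example.com:8089 -auth admin:changeme"
    done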
↧
Splunk Add-on for Unix and Linux: Is it required to install the supporting add-on (SA-nix) on search head and indexer clusters?
Dear Splunk Community,
According to the documentation: http://docs.splunk.com/Documentation/UnixAddOn/5.2.0/User/DeploytheSplunkAdd-onforUnixandLinuxinadistributedSplunkenvironment
we need to install the Supporting Add-on (SA-nix) on the Search Head and Indexer clusters.
I have already installed the Splunk Add-on for Unix and Linux on the search head and indexer clusters, and I forward all data from the SHs to the indexer cluster.
I would like to know what would happen if I do not install SA-nix there.
Thank you!
Ishaan
↧
Anyone else having problems with a 6.3.0 search head cluster talking to a 6.2.3 indexer cluster?
The moment I upgraded I started getting:
The searchhead is unable to update the peer information. Error = 'failed method=POST path=/services/cluster/master/generation/..../?output_mode=json master=...:8089 rv=0 actual_response_code=400 expected_response_code=200 status_line=Bad Request error=No error' for master=https://...:8089
The cluster master is giving the following:
09-29-2015 19:33:23.050 +0000 ERROR AdminManager - Argument "host" is not supported by this handler.
Anyone else seeing this?
↧
How to migrate dashboards in the Search app from a standalone dev search head to a search head cluster?
Dear Splunk Community,
I have one development search head and a search head cluster in my environment. I am using the default Search app to create dashboards on the dev SH.
I create dashboards with the admin role and share them with all users within the app. I also use custom CSS, JS, and images in the dashboards.
Could you please help me understand the best way to migrate the dashboards I develop on the dev SH to the SHC? (I do have a deployer as well.)
I read somewhere that since Search is a default app, I should not push it to the SHC with the deployer, as it may override the app version.
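For illustration, this is the layout I am considering on the deployer: package the dashboards into a separate custom app instead of touching the default search app, then push it (app, file, host, and credential names are placeholders):

    # On the deployer: package the dashboards in a dedicated app rather than 'search'
    # (app, file, host, and credential names are placeholders)
    mkdir -p $SPLUNK_HOME/etc/shcluster/apps/my_dashboards/default/data/ui/views
    mkdir -p $SPLUNK_HOME/etc/shcluster/apps/my_dashboards/appserver/static
    cp my_dashboard.xml $SPLUNK_HOME/etc/shcluster/apps/my_dashboards/default/data/ui/views/
    cp my_styles.css my_logic.js $SPLUNK_HOME/etc/shcluster/apps/my_dashboards/appserver/static/

    # Push the bundle to the SHC members
    splunk apply shcluster-bundle -target https://sh-member.example.com:8089 -auth admin:changeme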
Thank you so much!
Ishaan
↧
How to remove an app from a search head cluster and cluster peers?
I wish to uninstall an app from my search head cluster and cluster peers. Is the following the way to go about it?
On each search head and cluster peer:
1. Run the following command in the CLI:
./splunk remove app [appname] -auth :
2. Remove user-specific directories created for your app or add-on by deleting the files found under:
$SPLUNK_HOME/etc/users/*/
3. If the instance is a search peer, also delete the relevant index.
4. Restart Splunk.
Then:
- Delete the app directory from $SPLUNK_HOME/etc/master-apps on the master node.
- Delete the app directory from $SPLUNK_HOME/etc/shcluster/apps on the deployer.
=====
Or, is there a central way to do this from the Deployer/Master?
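Or, put another way, would something like the following from the deployer and master be enough on its own? (Paths, hosts, and credentials below are placeholders, and I'm not sure deletions actually propagate this way.)

    # On the deployer: remove the app from the configuration bundle, then re-push
    # (app name, host, and credentials are placeholders)
    rm -r $SPLUNK_HOME/etc/shcluster/apps/appname
    splunk apply shcluster-bundle -target https://sh-member.example.com:8089 -auth admin:changeme

    # On the cluster master: remove the app from the master bundle, then re-apply
    rm -r $SPLUNK_HOME/etc/master-apps/appname
    splunk apply cluster-bundle -auth admin:changeme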
↧
In search head clustering, what splunkd.log entries show whether an instance has been the captain or a member?
I know I can run the following to get the current SHC captain,
splunk show shcluster-status -auth :
but for debugging, what text can I search for to trace the sequence of dynamic captain changes over time across my SHC instances?
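For example, is grepping for something like the lines below the right idea? (The component names here are my guess, not taken from the docs.)

    # Look for captain election / transfer activity in splunkd.log on each member
    # (the component names are a guess on my part)
    grep -iE "captain|SHCRaftConsensus|SHCMaster" $SPLUNK_HOME/var/log/splunk/splunkd.log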
↧
Best practice for restoring a dashboard on a Search Head Cluster
I recently had to pull a dashboard's raw XML file off of an archive. What is the process for actually putting it back in? I copied the file to the original directory, `$SPLUNK_HOME/etc/users/username/search/local/data/ui/views`, then executed a debug/refresh, which made the dashboard visible in the UI on that cluster member, but it didn't result in the dashboard being replicated to the other cluster members. Then I used the UI to edit the dashboard, made a trivial change, and at that point it was pushed.
What I'm really looking for is the best practice for restoring something in an SHC, be it a dashboard or a lookup table.
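For instance, would re-creating the view through REST be the supported route, so the change goes through the SHC replication layer? Something like this (user, app, view name, host, and credentials are placeholders):

    # POST the archived XML back as a view so the SHC replicates it
    # (user, app, view name, host, and credentials are placeholders)
    curl -k -u admin:changeme \
        https://sh-member.example.com:8089/servicesNS/username/search/data/ui/views \
        --data-urlencode name=my_dashboard \
        --data-urlencode "eai:data=$(cat my_dashboard.xml)"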
↧
Are there recommendations for upgrading a search head and indexer clustering environment from Splunk 6.2 to 6.3?
Trying to work through building our first cluster. I really don't have any data that is that "important", but given the labor time spent building it to this stage, I'm a bit hesitant to fire off a mass upgrade from 6.2 to 6.3. Just want a pulse from the community: has anyone done this yet?
Questions:
1) Has anyone done an `rpm -Uvh splunk-6.3.0-aa7d4b1ccb80-linux-2.6-x86_64.rpm` on a cluster (SH cluster, indexer cluster, deployment server, cluster master)?
2) I know there are more robust automation tools for larger Splunk deployments (Chef, Puppet, etc.), but as my total cluster is only 12 VMs, a p-shell update (sketched below) would be just about as easy if there are no gotchas with the update.
Looking for recommendations.
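By a p-shell update I just mean something as naive as the loop below (host names and the package path are placeholders), run in the documented order with Splunk stopped on each host first:

    # Naive per-host upgrade loop (hosts and package path are placeholders)
    for host in cm1 deployer1 sh1 sh2 sh3 idx1 idx2 idx3; do
        ssh "$host" "/opt/splunk/bin/splunk stop && \
            rpm -Uvh /tmp/splunk-6.3.0-aa7d4b1ccb80-linux-2.6-x86_64.rpm && \
            /opt/splunk/bin/splunk start --accept-license --answer-yes"
    done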
↧
How can I design my search head and indexer clustering architecture?
Hi everyone,
I have a design with four Splunk instances (two search heads and two indexers). I want an indexer cluster (for replication and fault tolerance) and a search head cluster (for search efficiency). I'll send only syslog to the indexers (no forwarders).
I need two searchable data copies and I have some questions:
1.- Do I need more Splunk instances?
2.- Do I need to send syslog to only one indexer, or the same syslog to two indexers?
3.- If I send data to only one indexer, with replication, will I have the same data on two indexers?
4.- If I send the same data to two indexers, with replication, will I have the data copied twice on both indexers?
5.- If one indexer is down, will the other one be enough for service continuity?
6.- If I have a traffic balancer, used only for sending syslog data, can I send data to any indexer, and do I need any special considerations?
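For reference, this is the kind of cluster master and peer configuration I assume I would need for two searchable copies (values, host names, and the key are my guess from the docs):

    # server.conf on the cluster master (my assumption for two searchable copies)
    [clustering]
    mode = master
    replication_factor = 2
    search_factor = 2

    # server.conf on each indexer (peer); host and key are placeholders
    [replication_port://9887]

    [clustering]
    mode = slave
    master_uri = https://cluster-master.example.com:8089
    pass4SymmKey = changeme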
Any help would be greatly appreciated.
Thank you.
↧
Is KVStore supported in search head clustering?
I am trying to build a dashboard based on certain time series data for monthly and yearly trends. We have been using CSV lookups via inputlookup for that, but came to learn they are not meant for storage. So I was wondering whether KV Store would be a better choice and whether it is supported on clustered search heads.
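If KV Store is the way to go, I assume the definition would look roughly like this (collection, lookup, and field names are placeholders):

    # collections.conf -- define the collection (names and fields are placeholders)
    [trend_metrics]
    field.metric = string
    field.value = number
    field.event_time = time

    # transforms.conf -- expose the collection as a lookup
    [trend_metrics_lookup]
    external_type = kvstore
    collection = trend_metrics
    fields_list = _key, metric, value, event_time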
↧
Why are new report accelerations showing "Summarization not started Updated: Never" in our search head clustering environment?
We're running a large Splunk cluster with search head clustering. We currently have 30 reports with acceleration turned on. I recently added a new report and turned on acceleration for the past 7 days. The next day, the acceleration was listed as `Summarization not started, Updated: Never`. Even after telling Splunk to rebuild the summary, the status didn't change. I've added a new simple report that definitely should summarize, and it too never leaves the `Summarization not started` status. Any ideas what is going on here?
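For reference, the acceleration on the new report is just the standard settings, roughly this (the report name is a placeholder):

    # savedsearches.conf -- the new report's acceleration settings (name is a placeholder)
    [my_new_report]
    auto_summarize = 1
    auto_summarize.dispatch.earliest_time = -7d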
↧