This repository has been archived by the owner on Jun 28, 2024. It is now read-only.

core during computing join #31

Closed
bertol83 opened this issue May 20, 2016 · 13 comments

Comments

@bertol83

screenshot from 2016-05-20 09 44 08
To improve performance, would it be possible to make join computation multi-core? If you look at my screenshot, only one core is running, even though there are 7 joins in this case.

@rendel
Contributor

rendel commented May 20, 2016

The join is already multi-threaded. However, this depends on the query and the query plan: if the joins are organised in a sequence, there is no possibility for the query optimizer to execute them in parallel.
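As a rough illustration of what "organised in a sequence" means (the index and field names below are made up, and the structure is only a sketch of the filterjoin clause format that appears in the plugin's debug logs, not guaranteed optimizer behaviour): two filterjoin clauses that sit side by side in a bool query are independent of each other, so they are candidates for parallel execution.

```json
{
  "bool": {
    "must": [
      { "filterjoin": { "car_id":   { "indices": ["cars"],   "path": "car_id",
                                      "query": { "match_all": {} } } } },
      { "filterjoin": { "owner_id": { "indices": ["owners"], "path": "owner_id",
                                      "query": { "match_all": {} } } } }
    ]
  }
}
```

If instead the `owners` filterjoin were nested inside the `query` of the `cars` filterjoin, the two joins would form a sequence: the inner one must complete before the outer one can run.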

@rendel
Contributor

rendel commented May 20, 2016

To be more precise, the query optimizer could apply some additional optimisations to try to execute more joins in parallel, but this is not something that will be available immediately. We are also working on a new distributed computing platform for joins that will better leverage the available computing nodes.
But, as I said, there is already some optimisation to compute joins in parallel when possible. What you are observing is probably one big join that demands more computing resources than the others and is executed on a single core. Some of the more trivial joins may have already been computed, and are harder to observe since the overhead is dominated by the larger join.
If you can share the join query that is executed and the schema, I can tell you whether some of the joins are executed in parallel.

@bertol83
Author

Here are the logs from when I refresh my dashboard Veicoli (Italian for "Cars"), which has many relations.
Is this enough, or do you need some other details?
logs.txt

@rendel
Contributor

rendel commented May 20, 2016

@bertol83 I am not able to see the entire query in the given log; the best would be the query generated by Kibi.
@szydan, is there an easy way for the user to get this query?

@scampi
Contributor

scampi commented May 20, 2016

You can set the log level to DEBUG in kibi.yml.
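For reference, a minimal sketch of the relevant kibi.yml settings (Kibi is built on Kibana, so this assumes Kibana-style logging options; check your version's documentation for the exact supported keys):

```yaml
# kibi.yml - assumed Kibana-style logging options
logging.verbose: true             # log everything, including debug output
#logging.dest: /var/log/kibi.log  # optionally write logs to a file instead of stdout
```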


@rendel
Contributor

rendel commented May 20, 2016

The Elasticsearch query response also includes some statistics about each join (execution time, size, etc.), which might indicate which one is taking the time.
From the log you sent, I can see that one join is based on 10M terms, which takes a few seconds to compute. It also seems that this join is not cached (or is being evicted), as there is always a cache miss:

[2016-05-20 12:04:16,668][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812

If you have enough memory on your node, you might want to tune the node-level cache [1] of the siren-join plugin.

[1] https://github.com/sirensolutions/siren-join/blob/master/src/main/asciidoc/filter-join-cache.asciidoc
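Putting the settings mentioned in this thread together, here is a sketch of the relevant elasticsearch.yml block (the values are examples only; see the filter-join-cache documentation in [1] for the authoritative list of settings and their units):

```yaml
# elasticsearch.yml - siren-join filter join cache (example values)
siren.filterjoin.cache.enabled: true
siren.filterjoin.cache.size: 536870912   # maximum cache size (check [1] for the unit)
siren.filterjoin.cache.expire: 3600      # entry expiration time (check [1] for the unit)
```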

@bertol83
Author

@scampi in kibi.yml can I only set logging.verbose: true, or do I have to set something else?

@rendel here is the stats response:

{
  "cluster_name": "aci-roma",
  "nodes": {
    "Midnight Man": {
      "timestamp": 1463749200534,
      "stats": {
        "size": 0,
        "requestCount": 24,
        "hitCount": 0,
        "hitRate": 0.0,
        "missCount": 24,
        "missRate": 1.0,
        "loadCount": 0,
        "loadSuccessCount": 0,
        "loadExceptionCount": 0,
        "loadExceptionRate": 0.0,
        "totalLoadTime": 0,
        "evictionCount": 24
      }
    }
  }
}

I have also enabled caching (size set to 512 MB) as you suggested, but I can't see any improvement...

@szydan
Contributor

szydan commented May 20, 2016

@scampi I don't think debug logging works in Kibi.
I've seen a TODO comment in the code about enabling it again;
I think it was lost during some refactoring.

@rendel
Contributor

rendel commented May 23, 2016

@bertol83 the cache is empty for some reason (size = 0) and all the entries have been evicted (evictionCount = 24). Can you try increasing the time before an entry expires (siren.filterjoin.cache.expire) and see if the cache stats change (immediately after executing the join query a few times)?

@bertol83
Author

bertol83 commented May 24, 2016

Here is my configuration in elasticsearch.yml:
siren.filterjoin.cache.enabled: true
siren.filterjoin.cache.size: 4512000
siren.filterjoin.cache.expire: 3600

Here are the stats after some requests:

{
  "cluster_name": "aci-roma",
  "nodes": {
    "Free Spirit": {
      "timestamp": 1464074644278,
      "stats": {
        "size": 0,
        "requestCount": 24,
        "hitCount": 0,
        "hitRate": 0,
        "missCount": 24,
        "missRate": 1,
        "loadCount": 0,
        "loadSuccessCount": 0,
        "loadExceptionCount": 0,
        "loadExceptionRate": 0,
        "totalLoadTime": 0,
        "evictionCount": 24
      }
    }
  }
}

Here are the Kibi logs when I press F5:

respons [09:21:36.823]  GET /app/kibana 200 3ms - 9.0B
respons [09:21:36.840]  GET /bundles/commons.style.css?v=8971 304 2ms - 9.0B
respons [09:21:36.841]  GET /bundles/kibana.style.css?v=8971 304 3ms - 9.0B
respons [09:21:36.842]  GET /bundles/commons.bundle.js?v=8971 304 3ms - 9.0B
  log   [09:21:36.876] [debug][plugin] Checking Elasticsearch version
respons [09:21:36.932]  GET /bundles/kibana.bundle.js?v=8971 304 2ms - 9.0B
respons [09:21:38.257]  GET /bundles/src/ui/public/kibi/images/no_border@2x.png 304 1ms - 9.0B
respons [09:21:38.258]  GET /bundles/node_modules/font-awesome/fonts/fontawesome-webfont.woff2 304 2ms - 9.0B
respons [09:21:38.304]  GET /bundles/src/ui/public/kibi/images/favicon.ico 304 1ms - 9.0B
respons [09:21:38.324]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 5ms - 9.0B
respons [09:21:38.326]  POST /elasticsearch/.kibi/dashboard/_search?size=100 200 8ms - 9.0B
respons [09:21:38.411]  POST /elasticsearch/.kibi/index-pattern/_search?fields= 200 5ms - 9.0B
respons [09:21:38.440]  GET /elasticsearch/.kibi/_mapping/*/field/_source?_=1464074498424 200 4ms - 9.0B
respons [09:21:38.500]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 4ms - 9.0B
  ops   [09:21:38.515]  memory: 100.3MB uptime: 0:09:52 load: [1.61 1.28 0.76] delay: 0.549
respons [09:21:38.661]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 5ms - 9.0B
respons [09:21:39.069]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 12ms - 9.0B
respons [09:21:39.069]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 11ms - 9.0B
respons [09:21:39.070]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 15ms - 9.0B
respons [09:21:39.070]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 13ms - 9.0B
respons [09:21:39.071]  POST /elasticsearch/.kibi/dashboardgroup/_search?size=100 200 24ms - 9.0B
respons [09:21:39.071]  POST /elasticsearch/.kibi/dashboardgroup/_search?size=100 200 17ms - 9.0B
respons [09:21:39.083]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 10ms - 9.0B
respons [09:21:39.084]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 11ms - 9.0B
respons [09:21:39.086]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 12ms - 9.0B
respons [09:21:39.087]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 15ms - 9.0B
respons [09:21:39.090]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 5ms - 9.0B
respons [09:21:39.272]  GET /bundles/installedPlugins/kibi_timeline_vis-0.1.4/node_modules/kibi-icons/lib/fonts/FontAwesomeKibi_d9197fc0e304109c6d609065d21d2fa3.woff 304 2ms - 9.0B
respons [09:21:39.389]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 13ms - 9.0B
respons [09:21:39.389]  POST /elasticsearch/.kibi/dashboard/_search?size=100 200 16ms - 9.0B
respons [09:21:39.390]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 13ms - 9.0B
respons [09:21:39.390]  POST /elasticsearch/.kibi/search/_search?size=100 200 14ms - 9.0B
respons [09:21:39.391]  POST /elasticsearch/.kibi/dashboard/_search?size=100 200 16ms - 9.0B
respons [09:21:39.394]  POST /elasticsearch/.kibi/dashboard/_search?size=100 200 14ms - 9.0B
respons [09:21:39.399]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 6ms - 9.0B
respons [09:21:39.400]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 7ms - 9.0B
  log   [09:21:39.415] [debug][plugin] Checking Elasticsearch version
respons [09:21:39.764]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 18ms - 9.0B
respons [09:21:39.764]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 14ms - 9.0B
respons [09:21:39.765]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 16ms - 9.0B
respons [09:21:39.769]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 19ms - 9.0B
respons [09:21:39.779]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 28ms - 9.0B
respons [09:21:39.780]  POST /elasticsearch/.kibi/search/_search?size=100 200 23ms - 9.0B
respons [09:21:39.794]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 24ms - 9.0B
respons [09:21:39.794]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 24ms - 9.0B
respons [09:21:39.795]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 23ms - 9.0B
respons [09:21:39.797]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 12ms - 9.0B
respons [09:21:39.797]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 17ms - 9.0B
respons [09:21:39.798]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 13ms - 9.0B
respons [09:21:39.859]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 28ms - 9.0B
respons [09:21:39.860]  POST /elasticsearch/.kibi/query/_search?size=100 200 27ms - 9.0B
respons [09:21:39.860]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 28ms - 9.0B
respons [09:21:39.861]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 8ms - 9.0B
respons [09:21:39.898]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 5ms - 9.0B
respons [09:21:39.911]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 11ms - 9.0B
respons [09:21:39.912]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 13ms - 9.0B
respons [09:21:39.942]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 5ms - 9.0B
respons [09:21:40.268]  POST /elasticsearch/_msearch?getSourceCountForJoinSeqFilter= 200 365ms - 9.0B
respons [09:21:40.540]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 8ms - 9.0B
respons [09:21:40.541]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 11ms - 9.0B
respons [09:21:40.543]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 10ms - 9.0B
respons [09:21:40.543]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 8ms - 9.0B
respons [09:21:40.596]  POST /elasticsearch/_mget?timeout=0&ignore_unavailable=true&preference=1464074497965 200 20ms - 9.0B
respons [09:21:41.597]  POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1464074497965 200 1759ms - 9.0B
  log   [09:21:41.940] [debug][plugin] Checking Elasticsearch version
respons [09:21:42.096]  POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1464074497965 200 2260ms - 9.0B
respons [09:21:42.907]  POST /elasticsearch/_msearch?getCountsOnTabs= 200 2243ms - 9.0B
respons [09:21:43.108]  POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1464074497965 200 2540ms - 9.0B
  ops   [09:21:43.514]  memory: 103.5MB uptime: 0:09:57 load: [1.64 1.29 0.76] delay: 0.340
respons [09:21:43.788]  POST /elasticsearch/_msearch?getSourceCountForJoinSeqFilter= 200 94ms - 9.0B
respons [09:21:44.542]  POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1464074497965 200 423ms - 9.0B
  log   [09:21:45.092] [debug][plugin] Checking Elasticsearch version
  log   [09:21:47.612] [debug][plugin] Checking Elasticsearch version
  ops   [09:21:48.515]  memory: 106.0MB uptime: 0:10:02 load: [1.67 1.30 0.77] delay: 0.326
  log   [09:21:50.134] [debug][plugin] Checking Elasticsearch version
  ops   [09:21:53.519]  memory: 106.2MB uptime: 0:10:07 load: [1.62 1.30 0.77] delay: 0.390
  ops   [09:21:58.524]  memory: 106.3MB uptime: 0:10:12 load: [1.65 1.31 0.78] delay: 0.464
  ops   [09:22:03.528]  memory: 106.3MB uptime: 0:10:17 load: [1.60 1.31 0.78] delay: 0.338
  ops   [09:22:08.533]  memory: 106.4MB uptime: 0:10:22 load: [1.63 1.32 0.79] delay: 0.340
  ops   [09:22:13.538]  memory: 106.4MB uptime: 0:10:27 load: [1.58 1.31 0.79] delay: 0.331
  ops   [09:22:18.542]  memory: 106.5MB uptime: 0:10:32 load: [1.53 1.31 0.79] delay: 0.379
  log   [09:22:20.138] [error][status][plugin:elasticsearch] Status changed from green to red - Request Timeout after 30000ms
  log   [09:22:22.642] [debug][plugin] Checking Elasticsearch version
  log   [09:22:22.646] [info][status][plugin:elasticsearch] Status changed from red to green - Kibana index ready
  log   [09:22:22.654] [info][query_engine] Found kibi index
  log   [09:22:22.654] [info][query_engine] Loading templates
  log   [09:22:22.660] [info][query_engine] Loading datasources
  log   [09:22:22.666] [warning][query_engine] Template [kibi-json-jade] already exists
  log   [09:22:22.666] [warning][query_engine] Template [kibi-table-jade] already exists
  log   [09:22:22.667] [warning][query_engine] Template [kibi-table-handlebars] already exists
  ops   [09:22:23.542]  memory: 107.8MB uptime: 0:10:37 load: [1.57 1.32 0.79] delay: 0.347
  log   [09:22:25.173] [debug][plugin] Checking Elasticsearch version
  log   [09:22:27.692] [debug][plugin] Checking Elasticsearch version
  ops   [09:22:28.543]  memory: 108.9MB uptime: 0:10:42 load: [1.68 1.35 0.81] delay: 0.321
  log   [09:22:30.209] [debug][plugin] Checking Elasticsearch version
  ops   [09:22:33.546]  memory: 109.2MB uptime: 0:10:47 load: [1.63 1.34 0.81] delay: 0.311
  ops   [09:22:38.547]  memory: 109.2MB uptime: 0:10:52 load: [1.58 1.33 0.81] delay: 0.369
  ops   [09:22:43.550]  memory: 109.3MB uptime: 0:10:57 load: [1.53 1.33 0.81] delay: 0.582
  ops   [09:22:48.559]  memory: 109.4MB uptime: 0:11:02 load: [1.65 1.36 0.82] delay: 0.760
  log   [09:22:51.000] [warning][query_engine] Datasource [Kibi-Gremlin-Server] already exists
  log   [09:22:51.000] [info][query_engine] Loading queries
  log   [09:22:51.007] [warning][query_engine] Query [1Kibi-Graph-Query] already exists
respons [09:22:51.236]  POST /elasticsearch/_msearch?getCountsOnButton= 200 67538ms - 9.0B
  log   [09:22:53.508] [debug][plugin] Checking Elasticsearch version
  ops   [09:22:53.559]  memory: 110.6MB uptime: 0:11:07 load: [1.60 1.35 0.82] delay: 0.454
  log   [09:22:56.036] [debug][plugin] Checking Elasticsearch version
  ops   [09:22:58.563]  memory: 111.2MB uptime: 0:11:12 load: [1.47 1.33 0.82] delay: 1.505
  log   [09:22:58.564] [debug][plugin] Checking Elasticsearch version
  log   [09:23:01.092] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:03.564]  memory: 100.0MB uptime: 0:11:17 load: [1.35 1.31 0.81] delay: 0.521
  log   [09:23:03.626] [debug][plugin] Checking Elasticsearch version
  log   [09:23:06.153] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:08.567]  memory: 101.2MB uptime: 0:11:22 load: [1.32 1.30 0.81] delay: 0.500
  log   [09:23:08.682] [debug][plugin] Checking Elasticsearch version
  log   [09:23:11.205] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:13.570]  memory: 102.3MB uptime: 0:11:27 load: [1.22 1.28 0.81] delay: 0.491
  log   [09:23:13.728] [debug][plugin] Checking Elasticsearch version
  log   [09:23:16.256] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:18.572]  memory: 103.4MB uptime: 0:11:32 load: [1.12 1.26 0.81] delay: 0.518
  log   [09:23:18.782] [debug][plugin] Checking Elasticsearch version
  log   [09:23:21.310] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:23.575]  memory: 104.5MB uptime: 0:11:37 load: [1.03 1.24 0.80] delay: 0.532
  log   [09:23:23.835] [debug][plugin] Checking Elasticsearch version
  log   [09:23:26.363] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:28.578]  memory: 105.7MB uptime: 0:11:42 load: [0.95 1.22 0.80] delay: 0.585
  log   [09:23:28.890] [debug][plugin] Checking Elasticsearch version
  log   [09:23:31.420] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:33.580]  memory: 106.8MB uptime: 0:11:47 load: [0.87 1.20 0.79] delay: 0.535
  log   [09:23:33.941] [debug][plugin] Checking Elasticsearch version
  log   [09:23:36.471] [debug][plugin] Checking Elasticsearch version
  ops   [09:23:38.582]  memory: 108.0MB uptime: 0:11:52 load: [1.36 1.29 0.83] delay: 0.646
  log   [09:23:38.998] [debug][plugin] Checking Elasticsearch version
  log   [09:23:41.518] [debug][plugin] Checking Elasticsearch version
respons [09:23:42.093]  POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1464074497965 200 600ms - 9.0B
respons [09:23:42.783]  POST /elasticsearch/_msearch?getSourceCountForJoinSeqFilter= 200 16ms - 9.0B
respons [09:23:43.005]  POST /elasticsearch/_msearch?getCountsOnTabs= 200 7ms - 9.0B
respons [09:23:43.349]  POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1464074497965 200 328ms - 9.0B
  ops   [09:23:43.581]  memory: 112.2MB uptime: 0:11:57 load: [1.25 1.27 0.82] delay: 0.302

Here are the Elasticsearch logs:

[2016-05-24 09:26:54,787][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Execute coordinated multi-search action
[2016-05-24 09:26:54,787][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={}}]
[2016-05-24 09:26:54,787][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={}}]
[2016-05-24 09:26:54,787][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={2={terms={field=fabbrica, size=20, order={_count=desc}}}}}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={2={terms={field=annoDiCircolazione, size=5, order={_count=desc}}}}}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={2={terms={field=annoTassaNonPagata, size=7, order={_count=desc}}}}}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={2={terms={field=modelloVeicolo, size=20, order={_count=desc}}}}}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={2={histogram={field=annoDiCircolazione, interval=1}}}}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=0, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, aggs={}}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=10, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataVersamento, dataScadenza, dataDecorrenza], script_fields={}, fields=[*, _source]}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=10, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataFormalita, dataAtto], script_fields={}, fields=[*, _source]}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=10, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataFineProprieta, dataInizioProprieta], script_fields={}, fields=[*, _source]}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=10, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataFineEsenzione, dataInizioEsenzione], script_fields={}, fields=[*, _source]}]
[2016-05-24 09:26:54,788][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=100, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataFineMancatoPagamento, dataInizioMancatoPagamento], script_fields={evasione 30 cents={lang=expression, script=doc['numeroDiGiorniSenzaBollo'].value*0.3}, evasione 50 cents={lang=expression, script=doc['numeroDiGiorniSenzaBollo'].value*0.5}, evasione 40 cents={lang=expression, script=doc['numeroDiGiorniSenzaBollo'].value*0.4}}, fields=[*, _source]}]
[2016-05-24 09:26:54,789][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=500, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataImmatricolazione], script_fields={}, sort=[{_score={unmapped_type=boolean, order=desc}}], fields=[*, _source], aggs={}}]
[2016-05-24 09:26:54,791][DEBUG][index.fielddata.plain    ] Global-ordinals[fabbrica][460] took 2 ms
[2016-05-24 09:26:54,805][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#11]{New I/O worker #45}: Coordinated multi-search action completed
[2016-05-24 09:26:54,825][DEBUG][index.fielddata.plain    ] Global-ordinals[modelloVeicolo][4946] took 19 ms
[2016-05-24 09:26:55,588][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#12]{New I/O worker #46}: Execute coordinated multi-search action
[2016-05-24 09:26:55,588][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#12]{New I/O worker #46}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:26:55,588][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#12]{New I/O worker #46}: Coordinated multi-search action completed
[2016-05-24 09:26:57,007][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#13]{New I/O worker #47}: Execute coordinated multi-search action
[2016-05-24 09:26:57,008][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#13]{New I/O worker #47}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:26:57,008][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#13]{New I/O worker #47}: Coordinated multi-search action completed
[2016-05-24 09:26:57,013][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Execute coordinated multi-search action
[2016-05-24 09:26:57,013][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:26:57,013][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:26:57,013][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:26:57,013][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:26:57,013][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:26:57,308][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#5]{New I/O worker #39}: Execute coordinated multi-search action
[2016-05-24 09:26:57,308][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#5]{New I/O worker #39}: Parsed source: Tuple [v1=JSON, v2={highlight={pre_tags=[@kibana-highlighted-field@], post_tags=[@/kibana-highlighted-field@], fields={*={}}, require_field_match=false, fragment_size=2147483647}, size=500, query={filtered={filter={bool={must_not=[], must=[{query={query_string={query=*, analyze_wildcard=true}}}]}}, query={query_string={query=*, analyze_wildcard=true}}}}, fielddata_fields=[dataImmatricolazione], script_fields={}, sort=[{_score={unmapped_type=boolean, order=desc}}], fields=[*, _source], aggs={}}]
[2016-05-24 09:26:57,308][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#5]{New I/O worker #39}: Coordinated multi-search action completed
[2016-05-24 09:26:58,998][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:26:59,000][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:05,630][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:05,640][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 5 ms
[2016-05-24 09:27:05,641][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:05,641][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:05,641][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:05,644][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:05,644][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:05,644][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:05,644][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:05,645][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:27:07,690][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:27:07,691][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:08,042][DEBUG][indices.memory           ] [Free Spirit] recalculating shard indexing buffer, total is [1.5gb] with [3] active shards, each shard set to indexing=[512mb], translog=[64kb]
[2016-05-24 09:27:14,387][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:14,399][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 7 ms
[2016-05-24 09:27:14,399][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:14,399][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:14,399][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:14,404][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:14,404][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:14,404][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:14,404][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:14,405][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:27:16,401][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:27:16,401][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:23,015][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:23,024][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 5 ms
[2016-05-24 09:27:23,024][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:23,024][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:23,024][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:23,029][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:23,029][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:23,029][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:23,029][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:23,029][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:27:25,133][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:27:25,133][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:32,109][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:32,121][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 8 ms
[2016-05-24 09:27:32,121][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:32,121][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:32,121][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:32,124][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:32,124][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:32,124][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:32,124][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:32,124][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:27:34,152][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:27:34,153][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:34,833][DEBUG][cluster.service          ] [Free Spirit] processing [put-mapping [template]]: execute
[2016-05-24 09:27:34,836][DEBUG][cluster.service          ] [Free Spirit] processing [put-mapping [template]]: took 1ms no change in cluster_state
[2016-05-24 09:27:34,840][INFO ][rest.suppressed          ] /.kibi/template/kibi-table-handlebars Params: {index=.kibi, op_type=create, id=kibi-table-handlebars, type=template, timeout=1000ms}
RemoteTransportException[[Free Spirit][127.0.0.1:9330][indices:data/write/index[p]]]; nested: DocumentAlreadyExistsException[[template][kibi-table-handlebars]: document already exists];
Caused by: [.kibi][[.kibi][0]] DocumentAlreadyExistsException[[template][kibi-table-handlebars]: document already exists]
    at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:432)
    at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:390)
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:362)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:515)
    at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:65)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:27:34,840][INFO ][rest.suppressed          ] /.kibi/template/kibi-table-jade Params: {index=.kibi, op_type=create, id=kibi-table-jade, type=template, timeout=1000ms}
RemoteTransportException[[Free Spirit][127.0.0.1:9330][indices:data/write/index[p]]]; nested: DocumentAlreadyExistsException[[template][kibi-table-jade]: document already exists];
Caused by: [.kibi][[.kibi][0]] DocumentAlreadyExistsException[[template][kibi-table-jade]: document already exists]
    at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:432)
    at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:390)
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:362)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:515)
    at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:65)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:27:34,840][INFO ][rest.suppressed          ] /.kibi/template/kibi-json-jade Params: {index=.kibi, op_type=create, id=kibi-json-jade, type=template, timeout=1000ms}
RemoteTransportException[[Free Spirit][127.0.0.1:9330][indices:data/write/index[p]]]; nested: DocumentAlreadyExistsException[[template][kibi-json-jade]: document already exists];
Caused by: [.kibi][[.kibi][0]] DocumentAlreadyExistsException[[template][kibi-json-jade]: document already exists]
    at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:432)
    at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:390)
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:362)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:515)
    at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:65)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:27:34,840][INFO ][rest.suppressed          ] /.kibi/datasource/Kibi-Gremlin-Server Params: {index=.kibi, op_type=create, id=Kibi-Gremlin-Server, type=datasource, timeout=1000ms}
RemoteTransportException[[Free Spirit][127.0.0.1:9330][indices:data/write/index[p]]]; nested: DocumentAlreadyExistsException[[datasource][Kibi-Gremlin-Server]: document already exists];
Caused by: [.kibi][[.kibi][0]] DocumentAlreadyExistsException[[datasource][Kibi-Gremlin-Server]: document already exists]
    at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:432)
    at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:390)
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:362)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:515)
    at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:65)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:27:38,043][DEBUG][indices.memory           ] [Free Spirit] recalculating shard indexing buffer, total is [1.5gb] with [3] active shards, each shard set to indexing=[512mb], translog=[64kb]
[2016-05-24 09:27:40,749][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:40,758][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 5 ms
[2016-05-24 09:27:40,758][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:40,758][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:40,758][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:40,762][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:40,762][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:40,763][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:40,763][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:40,763][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:27:42,733][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:27:42,734][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:49,361][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:49,370][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 5 ms
[2016-05-24 09:27:49,370][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:49,370][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:49,370][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:49,374][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:49,375][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:49,375][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:49,375][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:49,375][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:27:51,383][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:27:51,384][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:27:58,182][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:27:58,191][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 5 ms
[2016-05-24 09:27:58,191][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:27:58,191][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:27:58,192][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:27:58,196][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Parsed source: Tuple [v1=JSON, v2={size=0, query={bool={filter={bool={must=[{filterjoin={targa={path=targa, indices=[aci-veicoli], query={bool={filter={bool={must=[]}}, must=[{match_all={}}, {bool={filter={bool={must=[]}}, must_not=[], must={query_string={query=*, analyze_wildcard=true}}}}]}}}}}]}}, must_not=[], must={match_all={}}}}}]
[2016-05-24 09:27:58,196][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Cache miss for terms by query action: -478837812
[2016-05-24 09:27:58,196][DEBUG][solutions.siren.join.action.coordinate.CachedFilterJoinVisitor] Executing async actions
[2016-05-24 09:27:58,196][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async cardinality action
[2016-05-24 09:27:58,197][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread block - blocking queue size: 0
[2016-05-24 09:28:00,190][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor$AsyncFilterJoinVisitorAction] Executing async terms by query action
[2016-05-24 09:28:00,191][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Executes search for collecting terms [aci-veicoli][0]
[2016-05-24 09:28:06,843][DEBUG][solutions.siren.join.action.terms.TransportTermsByQueryAction] [Free Spirit] elasticsearch[Free Spirit][generic][T#1]: Returns terms response with 10434420 terms for shard [aci-veicoli][0]
[2016-05-24 09:28:06,852][DEBUG][solutions.siren.join.action.terms.collector.BloomFilterTermsSet] Serialized 10434420 terms - took 5 ms
[2016-05-24 09:28:06,853][DEBUG][solutions.siren.join.action.coordinate.FilterJoinCache] elasticsearch[Free Spirit][generic][T#1]: New cache entry -478837812
[2016-05-24 09:28:06,853][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Received terms by query response with 10434420 terms
[2016-05-24 09:28:06,853][DEBUG][solutions.siren.join.action.coordinate.FilterJoinVisitor] Visitor thread unblock - blocking queue size: 1
[2016-05-24 09:28:06,889][DEBUG][solutions.siren.join.action.coordinate.TransportCoordinateMultiSearchAction] [Free Spirit] elasticsearch[Free Spirit][http_server_worker][T#14]{New I/O worker #48}: Coordinated multi-search action completed
[2016-05-24 09:28:06,893][DEBUG][http.netty               ] [Free Spirit] Caught exception while handling client http traffic, closing connection [id: 0xc73a02d9, /127.0.0.1:55218 :> /127.0.0.1:9200]
java.nio.channels.ClosedChannelException
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:373)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:28:06,897][INFO ][rest.suppressed          ] /.kibi/query/1Kibi-Graph-Query Params: {index=.kibi, op_type=create, id=1Kibi-Graph-Query, type=query, timeout=1000ms}
RemoteTransportException[[Free Spirit][127.0.0.1:9330][indices:data/write/index[p]]]; nested: DocumentAlreadyExistsException[[query][1Kibi-Graph-Query]: document already exists];
Caused by: [.kibi][[.kibi][0]] DocumentAlreadyExistsException[[query][1Kibi-Graph-Query]: document already exists]
    at org.elasticsearch.index.engine.InternalEngine.innerCreateNoLock(InternalEngine.java:432)
    at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:390)
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:362)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:515)
    at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
    at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:65)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:595)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:28:06,913][DEBUG][http.netty               ] [Free Spirit] Caught exception while handling client http traffic, closing connection [id: 0xf1a99037, /127.0.0.1:55343 :> /127.0.0.1:9200]
java.nio.channels.ClosedChannelException
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128)
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99)
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
    at org.jboss.netty.channel.Channels.write(Channels.java:725)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)
    at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
    at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:174)
    at org.elasticsearch.rest.action.support.RestActionListener.onFailure(RestActionListener.java:60)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:528)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$1.handleException(TransportReplicationAction.java:490)
    at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:821)
    at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:799)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.finishAsFailed(TransportReplicationAction.java:684)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:611)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:263)
    at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-05-24 09:28:08,043][DEBUG][indices.memory           ] [Free Spirit] recalculating shard indexing buffer, total is [1.5gb] with [3] active shards, each shard set to indexing=[512mb], translog=[64kb]

@bertol83
Author

The cache is always empty, and the time to compute the joins is always around 70 seconds. Maybe this is the limit, because the joins are very big. The Cars index (10M documents) is joined with:
-Payments (12M)
-Events (33M)
-Ownerships (15M)
-Change Ownerships (17M)
-Evasion (7M)

Maybe I could try adding a second Elasticsearch node, what do you think?
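As a sketch of what adding a second node could look like (assuming Elasticsearch 2.x, which the stack traces above suggest; the cluster name, node name, and host below are placeholders, not from this thread):

```shell
# Start a second Elasticsearch 2.x node and point it at the existing cluster.
# "my-cluster" and the unicast host are hypothetical values for illustration.
bin/elasticsearch \
  -Des.cluster.name=my-cluster \
  -Des.node.name=node-2 \
  -Des.discovery.zen.ping.unicast.hosts=127.0.0.1
```

Both nodes must share the same cluster name so that shards (and join computation load) can be distributed between them.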

@rendel
Contributor

rendel commented May 24, 2016

Adding a second node would probably help distribute the load. Is the server using SSDs or spinning disks?
The cache being empty does not look normal. It would be best to work on this together offline; I'll contact you privately.

@rendel
Contributor

rendel commented May 26, 2016

After investigation, we discovered that the performance issue was caused by the cache size setting: siren-join sets an incorrect default cache size of 256kb instead of 256mb. An issue has been opened to fix this in the next siren-join release.
Adding the following line to elasticsearch.yml

siren.filterjoin.cache.size: 268435456

solved the issue.
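For reference, 268435456 is simply 256 MiB expressed in bytes (the value the plugin was meant to default to); a quick shell sanity check:

```shell
# 256 MiB in bytes, the intended default for siren.filterjoin.cache.size
echo $((256 * 1024 * 1024))
```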

@rendel rendel closed this as completed May 26, 2016