Add a track to test nested / parent child performance #8

Closed
danielmitterdorfer opened this issue Aug 17, 2016 · 13 comments

Comments

@danielmitterdorfer
Member

  • Index + search
  • We should also force a high update rate (to see the cost of updating nested docs)
@danielmitterdorfer
Member Author

We should implement elastic/rally#155 before adding such a track.

@jpountz
Contributor

jpountz commented Feb 14, 2017

We discussed this in the search meeting as we had a significant regression with nested queries in 2.0 which is only being addressed now. We would like to catch such regressions earlier in the future and were thinking about writing a track that would use the StackOverflow dataset and index comments and answers as nested documents of the questions, which would be the top-level documents.

I agree the update rate would be an interesting thing to benchmark, but for now I think pure indexing speed + simple queries (both nested and non-nested, since the use of nested mappings forces ES to apply filters internally to exclude nested documents from e.g. match_all queries) would be a great start?
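
Just to illustrate the two query shapes, here's a minimal sketch, assuming an answers nested field like the one proposed later in this thread (field and user values are made up):

// Nested query: only matches questions that have at least one answer by the given user
{
  "query": {
    "nested": {
      "path": "answers",
      "query": {
        "term": { "answers.user": "some user (12345)" }
      }
    }
  }
}

// Non-nested query: even a plain match_all has to filter out the hidden nested
// answer documents internally so that only root question documents are returned
{
  "query": { "match_all": {} }
}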

cc @markharwood

@markharwood
Contributor

The question I have is: do we want to test an artificial scenario where all Q&As are pre-fused as nested docs to be queried, or a more real-world scenario where new answers continually revise existing question docs while searches are also being serviced? I'm not sure if Rally would support these search-while-reindexing scenarios.

@jpountz
Contributor

jpountz commented Feb 14, 2017

I think we should start with the pre-fused scenario for now, which should be simpler to implement and would have caught the 2.0 regression. I'm all for making benchmarking as realistic as possible but let's get there step by step?

@danielmitterdorfer
Member Author

do we want to test an artificial scenario where all Q&As are pre-fused as nested docs to be queried

We usually implement our tracks this way to see effects in isolation.

The latter case could be implemented in a second step as a separate challenge. Just for reference, here's an implementation hint: you can index and search concurrently by defining the schedule as follows:

"schedule": [
  {
    "parallel": {
      "tasks": [
        {
          "operation": "bulk",
          "warmup-time-period": 240,
          "clients": 8,
          "target-throughput": 50
        },
        {
          "operation": "some-simple-query",
          "clients": 2,
          "warmup-iterations": 500,
          "iterations": 1000,
          "target-throughput": 50
        },
        {
          "operation": "some-complex-query",
          "clients": 2,
          "warmup-iterations": 500,
          "iterations": 1000,
          "target-throughput": 2
        }
      ]
    }
  }
]

@markharwood
Contributor

Cool.
I can pre-fuse some data on my laptop, or we might want to benchmark that one-off fusion process.
There are typically two ways that process can be done with Elasticsearch:

  1. Bulk load using scripted updates to append Answers to Question docs (sketched below).
  2. Index questions and answers separately, then use the scroll API on the two indices sorted on a common key, with a Python client assembling the fused docs and feeding them to the bulk index API.

Do you want to benchmark either of these?
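
A minimal sketch of option 1 (index name, type, doc id and the 5.x Painless syntax are illustrative assumptions, not a final implementation): each answer from the dump becomes one scripted update appended to its parent question.

// Hypothetical append of a single answer to an existing question doc (5.x / Painless)
POST /so/question/1000000/_update
{
  "script": {
    "lang": "painless",
    "inline": "ctx._source.answers.add(params.answer)",
    "params": {
      "answer": {
        "date": "2009-06-16T09:55:57.320",
        "user": "Michał Niklas (22595)"
      }
    }
  }
}

The same script body can also be used in update actions of the bulk API, which is presumably what a bulk load would use.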

@danielmitterdorfer
Member Author

danielmitterdorfer commented Feb 14, 2017

I would tend towards option 1. If you want me to run this benchmark for our comparison charts against older releases (2.x, 1.7(?)), then we just need to make sure it's implemented in a way that allows an apples-to-apples comparison across releases (i.e. I guess it's Groovy before 5.0 and Painless afterwards, but I think that's fine).
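
For comparison, a rough sketch of what the pre-5.0 counterpart of the same append could look like, assuming a Groovy update script where params are exposed as plain variables:

// Hypothetical pre-5.0 (Groovy) version of the same scripted update body
{
  "script": "ctx._source.answers.add(answer)",
  "params": {
    "answer": { "date": "2009-06-17T12:34:22.643", "user": "Jack Njiri (77153)" }
  }
}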

@markharwood
Contributor

@danielmitterdorfer @jpountz Can you review the data/mapping below before I kick off an upload of the JSON data?

I'm proposing we have this basic data for each StackOverflow question:

// Example doc
{
   "title": "Display Progress Bar at the Time of Processing",
   "qid": "1000000",
   "answers": [
      {
         "date": "2009-06-16T09:55:57.320",
         "user": "Michał Niklas (22595)"
      },
      {
         "date": "2009-06-17T12:34:22.643",
         "user": "Jack Njiri (77153)"
      }
   ],
   "tag": [
      "vb6",
      "progress-bar"
   ],
   "user": "Jash",
   "creationDate": "2009-06-16T07:28:42.770"
}

That gives us a little free text and structured data in the root doc, and just who/when data in the nested answer objects. I have a full StackOverflow dump as of June 2016; converted to the above format, the JSON is 3.64 GB unzipped and 700 MB zipped. The mapping I suggest is pretty basic:

{
   "question": {
      "properties": {
         "answers": {
            "type": "nested",
            "properties": {
               "date": {
                  "type": "date"
               },
               "user": {
                  "type": "keyword"
               }
            }
         },
         "creationDate": {
            "type": "date"
         },
         "date": {
            "type": "date"
         },
         "qid": {
            "type": "keyword"
         },
         "tag": {
            "type": "keyword"
         },
         "title": {
            "type": "text"
         },
         "user": {
            "type": "keyword"
         }
      }
   }
}

If it looks OK to you, I'll kick off an upload to the S3 benchmarks corpora store.

@jpountz
Contributor

jpountz commented Feb 15, 2017

I think it is a good idea to have some minimal metadata, otherwise the indexing time that is specific to nested docs might be drowned out by full-text analysis + indexing. Maybe one minor suggestion would be to make the user field consistent between the question and answer objects (both in terms of mapping and format). Otherwise +1!

@danielmitterdorfer
Member Author

Looks great! I guess you're showing the master version of the track. For 5.x, you should also turn off _all. Thanks for tackling this!
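
As a small sketch of that suggestion, and assuming the 5.x track keeps the mapping shown above, turning off _all would just be an extra entry at the top of the question mapping:

{
   "question": {
      "_all": { "enabled": false },
      "properties": { ... }
   }
}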

@markharwood
Contributor

markharwood commented Feb 15, 2017

Thanks, both.

Maybe one minor suggestion would be to make the user field consistent between the question and answer objects

Good spot - that's a quirk of that particular example doc. In some questions the ownerID is missing and we only have the display name instead.

markharwood added a commit to markharwood/rally-tracks that referenced this issue Feb 20, 2017
Uses StackOverflow questions+answers nested docs with just title text, tags, authors and dates for fields.

Closes elastic#8
markharwood added a commit to markharwood/rally-tracks that referenced this issue Feb 21, 2017
Uses StackOverflow questions+answers nested docs with just title text, tags, authors and dates for fields.

Closes elastic#8
@markharwood
Contributor

@danielmitterdorfer I was getting into tangles with git/Rally because I'd initially followed my usual practice of creating a local dev branch ("fix/8") to create my PR, but then realised Rally manages branch switching, so to test master I have to do the dev work on a local master branch.

But... I just tried moving my changes to a local master branch and it tested OK, so I pushed to master on my public repo here, but I cannot create a PR from master.

What's the best way forward here?

@danielmitterdorfer
Member Author

I suggest you do:

git checkout master
git checkout -b fix/8
git push name_of_your_clones_remote_here fix/8

This should work?

markharwood added a commit that referenced this issue Feb 21, 2017
* Nested docs querying benchmark.
Uses StackOverflow questions+answers nested docs with just title text, tags, authors and dates for fields.

Closes #8
This issue was closed.