
Dynamic Partitioning support #4197

Closed

ddorian opened this issue Nov 18, 2017 · 3 comments

ddorian commented Nov 18, 2017

Hey,

I searched the docs and didn't find anything. Think about the logs use case in Elasticsearch, where you dynamically create a new index for whatever you want to partition on. Looking at #4092, it doesn't seem possible to partition dynamically.

While I can use a "special field" to filter, it would still have overhead when grouping, faceting, and sorting.

Is my best option just to pre-create about 250 search definitions? (That's roughly what I think I'll need in total.)

@bratseth
Member

Just write all the data and let Vespa deal with partitioning it for you.

Yes, it may be slightly more efficient to pre-partition when your queries only search precisely one partition, but this is a special case, and I don't think the potential benefit outweighs the additional complexity and interface surface. It's not much faster than filtering on an attribute with fast-search, and once you need to search across multiple partitions, it becomes slower.
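
Roughly something like this: a single search definition with the partition key as a fast-search attribute (an untested sketch; the logentry type and tenant field are made-up names for illustration):

```
# logentry.sd -- one search definition instead of ~250 pre-created ones
search logentry {
    document logentry {
        # the "partition" is just an attribute you filter on
        field tenant type string {
            indexing: attribute | summary
            attribute: fast-search
        }
        field message type string {
            indexing: index | summary
        }
    }
}
```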

ddorian commented Nov 20, 2017

Note that I want double partitioning (so each 'partition' would itself be distributed).
And while it is a special case for Vespa, it's a normal case for this app to search 1/100 of the data.
And the complexity on the app side would be small, I think.
And I wouldn't need to keep all data (attributes) in memory, at least not the old ones, while fast-search would require even more memory.

I don't understand how fast-search would make things faster, though? You still have to merge bitsets from the other filters (and partitioning would just make all the bitsets smaller).
I think the docs need some more explanation here (e.g. are doc-types stored separately? What does fast-search do? Are attributes in memory all the time or loaded per request? Does each node have one inverted index, or one per core?)

Maybe by using different document types? Assuming they live in separate inverted indexes.

@bratseth
Member

Each document type is a separate instance of everything, yes.
Few people have the technical expertise to appreciate more implementation details in the docs, so I'm not sure it is worth it...
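
For example, each document type you list in the content cluster in services.xml gets its own index structures, attribute store and document store (a sketch with made-up type names):

```
<content id="logs" version="1.0">
    <redundancy>2</redundancy>
    <documents>
        <!-- each type below is indexed and stored independently -->
        <document type="logentry" mode="index"/>
        <document type="metrics" mode="index"/>
    </documents>
    <nodes>
        <node hostalias="node1" distribution-key="0"/>
    </nodes>
</content>
```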

Fast-search adds a B-tree over the attribute. Since (presumably) this attribute will be a strong filter, it will be used to skip most of the document space without further work.
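
So in a query like this (using the made-up names from the sketch above), the engine can use the B-tree over tenant to skip most of the document space before evaluating the rest of the filters:

```
select * from sources logentry where tenant contains "customer-42" and message contains "error";
```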

If you use partitions to put more data on a node than you can search, then there needs to be protection against searching too many of them at the same time, because one such query would then kill all the nodes. And you'd need support for unloading a partition when another is needed (or some more complicated eviction strategy where N can be kept loaded at the same time). But at what point do you unload, given that many queries run in parallel? And even if you can make that work, one query against an old partition will make the response time of all subsequent queries go through the roof... this is the kind of complexity I mean.

At the very least this would be premature optimization. I suggest you try the straightforward approach first.

ddorian closed this as completed Nov 20, 2017