Dynamic Partitioning support #4197
Just write all the data and let Vespa deal with partitioning it for you. Yes, it may be slightly more efficient to pre-partition when your queries only search precisely one partition, but this is a special case, and I don't think the potential benefit outweighs the additional complexity and interface surface. It's not much faster than filtering on an attribute with fast-search, and once you need to search across multiple partitions, it becomes slower.
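A minimal sketch of what the fast-search filter approach suggested above could look like in a Vespa schema. The schema and field names (`logs`, `tenant`, `message`) are hypothetical, chosen for illustration:

```
schema logs {
    document logs {
        # The "partition key" becomes an ordinary attribute.
        # fast-search builds a dictionary over the attribute values,
        # so filtering on it skips most of the document space.
        field tenant type string {
            indexing: attribute | summary
            attribute: fast-search
            rank: filter
        }
        field message type string {
            indexing: index | summary
        }
    }
}
```

All data lives in one schema; queries that would have targeted one "partition" instead add a `tenant` filter term.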
Note that I want double partitioning (so each 'partition' will itself be distributed). I don't understand how fast-search would make things faster, though: you still have to merge bitsets from the other filters (and partitioning would just make all the bitsets smaller). Maybe by using different document types, assuming they live in separate inverted indexes?
Each document type is a separate instance of everything, yes. Fast-search adds a B-tree over the attribute. Since (presumably) this attribute will be a strong filter, it will be used to skip most of the document space without further work.

If you use partitioning to put more data on a node than it can search, you need protection from searching too many partitions at the same time, because one such query would kill all the nodes. You would also need support for unloading a partition when another is needed (or some more complicated eviction strategy where N partitions can be kept loaded at once). But at what point do you unload, given that many queries run in parallel? And even if you can make that work, one query against an old partition will make the response time of all subsequent queries go through the roof... this is the kind of complexity I mean.

At the very least this would be premature optimization. I suggest you try the straightforward approach first.
Hey,
I searched the docs and didn't find anything. Think about the logs use case in Elasticsearch, where you create a new index dynamically for whatever you want to partition on. Looking at #4092, it doesn't appear possible to partition dynamically.
While I can use a "special field" to filter, it would still add overhead when grouping, faceting, and sorting.
Is my best option to just pre-create about 250 search definitions? (That's what I think I'll need in total.)
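For comparison, the single-schema approach discussed in this thread would express a "partitioned" query as an ordinary filter term plus a grouping expression in YQL. The schema, field names, and values below are hypothetical:

```
select * from logs where tenant contains "acme" and message contains "timeout" |
    all(group(tenant) each(output(count())))
```

Here grouping and faceting run over the documents that survive the `tenant` filter, which is the overhead being weighed against maintaining hundreds of separate search definitions.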