more information about the dataset #27
Comments
Thanks for reaching out. Yes, the wiki dataset contains duplicates. @alexandervanrenen, can you provide some information about the dataset? I don't understand what you mean by record length. We extend each 64-bit unsigned integer key with a 64-bit value (payload). The payloads don't affect the mapping of keys to positions. However, it is true that with larger payloads the last-mile search (e.g., binary search) would become more expensive due to extra cache misses.
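To illustrate the point above, here is a minimal sketch (with made-up keys, payloads, and a hypothetical model prediction) of why the payload does not change the key-to-position mapping: the model predicts a position, and a bounded binary search over the keys finds the exact slot; a larger payload only makes each record bigger, not the mapping different.

```python
from bisect import bisect_left

# Hypothetical data: 64-bit keys, each extended with a fixed-size payload.
# Note the duplicate key (7), as in the wiki timestamps.
records = [(k, k * 10) for k in [3, 7, 7, 12, 20, 31]]  # (key, payload)
keys = [k for k, _ in records]

def last_mile_search(predicted_pos, err, key):
    """Binary-search only within [predicted_pos - err, predicted_pos + err]."""
    lo = max(0, predicted_pos - err)
    hi = min(len(keys), predicted_pos + err + 1)
    i = lo + bisect_left(keys, key, lo, hi)
    return i if i < len(keys) and keys[i] == key else -1

# A made-up model prediction for key 12 with error bound 2:
pos = last_mile_search(predicted_pos=2, err=2, key=12)  # -> 3
```

The payload values never enter `last_mile_search`; they only determine how many bytes (and cache lines) each probed record occupies.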
Sure! The wiki dataset is based on Wikipedia: we gathered the timestamps of all edits to pages. I think they have second or millisecond granularity, which explains the collisions. You can find the data and descriptions here. If I remember correctly, we only used a subset of the English Wikipedia, as that already contained more than enough timestamps.
Thank you for your help, I found the description.
Hi @chaohcc. No, the record length doesn't affect the positions, since each record is fixed size. You can just multiply the model's prediction by the record size to get the byte offset of the record. The position of the 5th record will always be 4 (0-indexed), and its byte offset will be 4 * record_size, so the model only needs to know that the 5th record is stored at position 4. With variable-sized records it's a different story, but even then you probably wouldn't train on the raw byte offsets; instead you'd use another structure for the indirection, e.g., store an array of pointers and index into that with the model. I hope that helps.
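The two cases described above can be sketched as follows (the record size and the variable lengths are made-up values for illustration):

```python
# Fixed-size records: the model learns key -> position only;
# the byte offset is derived by multiplication.
RECORD_SIZE = 16  # e.g., 8-byte key + 8-byte payload (assumed layout)

def byte_offset(position: int) -> int:
    return position * RECORD_SIZE

# The 5th record (0-indexed position 4) starts at 4 * RECORD_SIZE bytes.
fifth = byte_offset(4)

# Variable-sized records: keep an indirection array of starting offsets
# (or pointers) and let the model predict an index into it, rather than
# training on raw byte offsets.
record_lengths = [24, 8, 40, 16, 32]  # made-up variable sizes
offsets, acc = [], 0
for n in record_lengths:
    offsets.append(acc)
    acc += n
# offsets == [0, 24, 32, 72, 88]
```

In the variable-sized case the mapping the model learns is still key -> dense index (0, 1, 2, ...); only the final dereference goes through the `offsets` array.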
Thank you, that's very helpful. I will also try variable-sized records and build a mapping table with just the keys in an array instead of the raw byte offsets.
Hi! I have another question about the datasets: for 'books_200M', is each key a timestamp or some other ID? Thank you for your help! Best wishes!
@RyanMarcus If I remember correctly, the books dataset contains the popularity of each book, i.e., the number of times it was accessed or bought on Amazon. No timestamps.
Thank you so much! |
Thank you!
I have run the RMI on wiki_ts_200M and books_200M successfully. I found that there are duplicate values in the wiki_ts dataset; is that expected? I would also like more information about the datasets; where can I find their descriptions?
There is only the primary key, but I find that when the length of each record is extended, the performance of the learned index is greatly affected: the mapping between keys and positions becomes complicated. So I would like to try using the true record length and see how keys map to positions.
Best wishes!
Thank you for your help!