
Speeding up inserts in blob storage #1243

Closed
aravindsrinivasan opened this issue Apr 26, 2024 · 4 comments

Comments

@aravindsrinivasan

I'm finding that inserting data into a table is quite slow when it's backed by blob storage (Azure blob storage in my case). I've tried:

  • Single insert of entire dataset at table creation time
  • Batched insert (Table.add) post table creation (roughly the pattern sketched below)
  • Multi-threaded insert (is the Table object thread safe?)

All of them seem to be equally slow. What's the best way to speed this up?
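
For reference, a minimal sketch of the batched-insert pattern described above, assuming the Python client, 1024-dimensional float32 vectors, and a placeholder Azure URI and table name (credentials are assumed to be configured in the environment; none of these values come from this thread):

```python
# Hedged sketch: URI, table name, schema, and batch sizes are illustrative.
import numpy as np
import pyarrow as pa
import lancedb

URI, DIM = "az://my-container/lancedb", 1024  # placeholder path and vector size

def make_batch(n=10_000):
    # Stand-in for real data: n random float32 vectors as a fixed-size-list column.
    vecs = np.random.rand(n, DIM).astype("float32")
    return pa.table({"vector": pa.FixedSizeListArray.from_arrays(pa.array(vecs.ravel()), DIM)})

db = lancedb.connect(URI)
tbl = db.create_table("vectors", data=make_batch())  # single insert at table creation time
for _ in range(9):
    tbl.add(make_batch())  # subsequent batches via Table.add
```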

@wjones127
Contributor

Hi @aravindsrinivasan, thanks for trying out LanceDB.

The methods you are using to insert all seem reasonable. I don't think you are using LanceDB in a way that would make it slow.

I think the most productive thing you could do here is quantify how fast you are able to write to Azure with LanceDB versus some other library. For example, you can time how long it takes to insert 100,000 vectors. If your vectors are, say, 1024 dimensions, then the data you wrote is roughly 100,000 rows * 1024 dims * 4 bytes ~= 400 MB. (If there are other columns, you will have to account for their size as well.) Divide that by the number of seconds it took and you get an estimate of the write throughput you are getting with LanceDB. For comparison, it would be useful to know how that stacks up against using the Azure CLI to upload a file directly (like this).
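
A minimal sketch of that throughput estimate, assuming the Python client, a placeholder Azure URI and table name, and counting only the vector column toward bytes written:

```python
# Rough write-throughput estimate as described above; values are illustrative.
import time
import numpy as np
import pyarrow as pa
import lancedb

num_rows, dim = 100_000, 1024
vecs = np.random.rand(num_rows, dim).astype("float32")
data = pa.table({"vector": pa.FixedSizeListArray.from_arrays(pa.array(vecs.ravel()), dim)})

db = lancedb.connect("az://my-container/lancedb")  # placeholder Azure path
start = time.perf_counter()
db.create_table("throughput_test", data=data)
elapsed = time.perf_counter() - start

approx_mb = num_rows * dim * 4 / 1e6  # ~400 MB of float32 vector data
print(f"wrote ~{approx_mb:.0f} MB in {elapsed:.1f}s (~{approx_mb / elapsed:.1f} MB/s)")
```

Comparing that figure with the time the Azure CLI takes to upload a similarly sized file, as suggested above, separates LanceDB overhead from raw network throughput.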

@aravindsrinivasan
Author

@wjones127 thank you for the response. Turns out my internet was the culprit -- plugging in the ethernet cable made it 10x faster.

Generally speaking, what has your team found to be the fastest way to upload into an index? Parallelization didn't seem to work as well as sequential inserts with a large batch size. This feels counterintuitive to me, so I'm curious whether this is expected.

@wjones127
Contributor

> Parallelization didn't seem to work as well as sequential inserts with a large batch size. This feels counterintuitive to me, so I'm curious whether this is expected.

Writing in batches is much more efficient in LanceDB. You could write batches of 10k-100k rows in parallel, and that might work well. But writing batches of fewer than 1k rows in parallel will perform poorly and produce a bad table layout, which would then need to be fixed by calling compact_files.
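
A sketch of that suggestion, assuming the Python client; the URI, table name, batch size, and worker count are illustrative, and each worker opens its own table handle since the thread safety of a shared Table object isn't confirmed in this thread:

```python
# Parallel inserts of a few large (~50k row) batches, then compaction.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import pyarrow as pa
import lancedb

URI, TABLE, DIM = "az://my-container/lancedb", "vectors", 1024  # placeholders

def make_batch(n=50_000):
    # Stand-in for real data: n random float32 vectors.
    vecs = np.random.rand(n, DIM).astype("float32")
    return pa.table({"vector": pa.FixedSizeListArray.from_arrays(pa.array(vecs.ravel()), DIM)})

def insert_batch(batch):
    # Open a handle per worker rather than sharing one Table object across threads.
    lancedb.connect(URI).open_table(TABLE).add(batch)

batches = [make_batch() for _ in range(4)]
with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(insert_batch, batches))

# If many small fragments accumulate, compaction (compact_files, as noted above) cleans up the layout.
lancedb.connect(URI).open_table(TABLE).compact_files()
```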

@aravindsrinivasan
Author

aravindsrinivasan commented May 8, 2024

Thanks @wjones127.
