bounding number/size of requests #19

Closed
bakkot opened this issue Sep 16, 2021 · 3 comments · Fixed by #20

Comments

@bakkot
Contributor

bakkot commented Sep 16, 2021

It'd be nice to have the ability to bail out if a query is going to require too much data, rather than potentially fetching arbitrarily large amounts of data.

@phiresky
Owner

This is a good idea, but it would probably require some abstraction busting, since the file system layer doesn't know anything about which query it is currently answering. Otherwise it would be possible by sending a cancellation signal from the main thread, but that would probably require SharedArrayBuffer for signalling, since everything in the worker thread happens synchronously. Adding a dependency on SharedArrayBuffer isn't great since it's not available in many environments.
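Roughly, that signalling would have to look something like this (just a sketch; none of these names exist in the library):

```js
// Sketch only: cancelling a synchronous worker from the main thread needs a flag
// in shared memory, because the worker cannot receive messages while it is
// blocked inside a query.

// Main thread ("worker" is an already-created Worker instance)
const cancelBuffer = new SharedArrayBuffer(4);
const cancelFlag = new Int32Array(cancelBuffer);
worker.postMessage({ type: "init-cancel", cancelBuffer }); // hand the buffer to the worker

// later, to abort the running query:
Atomics.store(cancelFlag, 0, 1);

// Worker side: checked synchronously before each chunk is fetched
function checkCancelled(cancelFlag) {
  if (Atomics.load(cancelFlag, 0) === 1) {
    throw new Error("query cancelled from main thread");
  }
}
```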

@bakkot
Contributor Author

bakkot commented Sep 16, 2021

I'm happy to bust some abstractions!

https://github.com/bakkot/sql.js-httpvfs#a-fork-of-sqljs-httpvfs-which-bounds-the-number-of-requests

(See this commit. It just overrides XMLHttpRequest.prototype.send in the worker so that it throws once the bound is reached.)
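The core of it is just something like this (simplified sketch; the counter and limit names here are made up for illustration):

```js
// Count every XHR the worker issues and throw once the configured bound is hit.
let requestCount = 0;
const maxRequests = 100; // illustrative bound

const originalSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function (...args) {
  if (++requestCount > maxRequests) {
    throw new Error("exceeded maximum number of requests for this query");
  }
  return originalSend.apply(this, args);
};
```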

I imagine this is too gross to include in the actual repo (especially since it seems to put the database into an unrecoverable state - as would a network error, from what I can tell?), so I haven't PR'd it, but it does work. Happy to send a PR if you'd like, though.

@phiresky
Owner

phiresky commented Sep 17, 2021

I think the reason thrown errors "corrupt" the database is that emscripten doesn't handle exceptions at all by default, leaving SQLite in an undefined state. You should be able to avoid that by returning an I/O error instead. That should be possible with something similar to throw new FS.ErrnoError(FS.ERRNO_CODES.EIO). You should probably even be able to throw EAGAIN (see sqliteErrorFromPosixError in https://www.sqlite.org/src/doc/trunk/src/os_unix.c); then it should be reported as "database is busy" instead of "database is corrupted".
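Something along these lines (sketch only; the exact spelling of the errno constants depends on the emscripten version):

```js
// Sketch: when the request bound is hit inside the Emscripten filesystem layer,
// throw an FS.ErrnoError instead of a plain Error, so SQLite sees a normal error
// code instead of being left in an undefined state.
// EIO should surface as a disk I/O error, EAGAIN as "database is busy".
function failWithFsError(FS) {
  throw new FS.ErrnoError(FS.ERRNO_CODES.EAGAIN); // or FS.ERRNO_CODES.EIO
}
```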

As an aside, you don't need to add your own event handler to the worker etc.; you can add your functions to the const mod = { object and they will be automatically proxied by Comlink. Passing functions works with fn(Comlink.proxy(callback)). The HTTP request counting should probably also go into the doXHR function of the lazy file instead of modifying the prototype, and of course the counter should be reset when the error is thrown.
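Put together, the worker side could look roughly like this (sketch only; mod and doXHR here are simplified stand-ins for the real code):

```js
// Sketch of the suggested shape: expose the limit on the Comlink-proxied `mod`
// object and enforce it inside the lazy file's doXHR, resetting the counter
// when the limit error is thrown.
let maxRequests = Infinity;
let requestCount = 0;

const mod = {
  // automatically proxied to the main thread by Comlink.expose(mod)
  setRequestLimit(n) {
    maxRequests = n;
    requestCount = 0;
  },
  // ...existing exported functions...
};

// inside the lazy file implementation
function doXHR(from, to) {
  if (++requestCount > maxRequests) {
    requestCount = 0; // reset so the next query can run again
    throw new Error("request limit exceeded"); // or an FS.ErrnoError as above
  }
  // ...perform the actual (synchronous) range request...
}
```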

If all that works cleanly, I think this would be a good addition to the library itself.
