Lazy file chunks store #599
Comments
@oleiba Thanks for opening this issue. It's important that we focus more on performance going forward, so I care a lot about making WebTorrent perform better for you. Can you share more details about the kind of stress you're putting on WebTorrent?
Please provide as much detail as you can!
I can add another use case, which is almost the opposite of oleiba's. From the torrent-client perspective: I have a torrent with 8260 files inside of it that I'm trying to load with WebTorrent. I'm using WebTorrent inside of Electron, so as soon as I load the torrent, all the file descriptors for that process get used up, and Electron stops being able to open files, which makes it fail in a quite spectacular way. To answer your list of questions:
I'm currently using a native module that wraps libtorrent, but being able to drop the dependency on native modules would be amazing.
Hi, I am currently using WebTorrent to seed a large number of files server-side (Node.js).
One of the problems I've noticed when trying to scale up WebTorrent is the number of open files.
When each torrent is created, it eventually reaches Torrent._onMetadata, where it creates an instance of FSChunkStore (npm module fs-chunk-store). This opens a file for tracking and validating downloaded pieces.
If one assumes that most torrents will see little and infrequent activity, files could be opened on demand rather than up-front.
I want to suggest that the file's .open happen lazily, only once a peer is interested, or alternatively that the file be closed and re-opened upon request.
I know this is an important feature in libtorrent's codebase, and I believe it would allow WebTorrent to scale past its current limitations.
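The lazy-open idea could be sketched as a wrapper that exposes the same put/get/close shape as a chunk store, opens the real store only on first access, and closes it again after an idle period. LazyChunkStore, createStore, and idleMs are hypothetical names for illustration, not WebTorrent's actual API.

```javascript
// Sketch of a lazy wrapper around any chunk store (e.g. fs-chunk-store).
// The underlying store -- and its file descriptor -- only exists while
// the torrent is active; it is closed after `idleMs` of inactivity.
class LazyChunkStore {
  constructor (createStore, idleMs = 30000) {
    this.createStore = createStore // () => store, called on first access
    this.idleMs = idleMs
    this.store = null
    this.timer = null
  }

  _acquire () {
    if (!this.store) this.store = this.createStore() // open on demand
    clearTimeout(this.timer)
    // Release the underlying store after a quiet period
    this.timer = setTimeout(() => {
      const s = this.store
      this.store = null
      s.close(() => {})
    }, this.idleMs)
    return this.store
  }

  put (index, buf, cb) { this._acquire().put(index, buf, cb) }

  get (index, opts, cb) {
    if (typeof opts === 'function') { cb = opts; opts = {} }
    this._acquire().get(index, opts, cb)
  }

  close (cb) {
    clearTimeout(this.timer)
    if (this.store) this.store.close(cb)
    else process.nextTick(cb)
  }
}
```

With a wrapper like this, a process seeding thousands of mostly idle torrents would only hold descriptors for the handful that are actively transferring.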