Seed a torrent file without filling RAM using "Chunks" #1303
Comments
I was just trying something similar: #1293. In the case you are describing, a hybrid store might be the best match: use memory and gradually move data onto disk (idb) as in-memory storage fills up, similar to RAM/swap behavior. Sounds like a feature request to me... :smile: Although, in all seriousness @feross, a WebTorrent cache plus in-memory/disk features would likely solve several issues WebTorrent is having.

Cheers
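A minimal sketch of the hybrid idea described above, assuming a simple `put`/`get` store shape: keep chunks in RAM until a cap is reached, then spill new chunks to a slower backing store. The class and parameter names are illustrative, not a real webtorrent API.

```javascript
// Hypothetical hybrid store: RAM first, spill to a backing store when full.
class HybridStoreSketch {
  constructor (backing, maxMemChunks) {
    this.backing = backing         // slower persistent store (disk/idb)
    this.maxMemChunks = maxMemChunks
    this.mem = new Map()           // fast in-memory chunks
  }
  put (index, buf, cb = () => {}) {
    if (this.mem.size < this.maxMemChunks) {
      this.mem.set(index, buf)     // fast path: keep in RAM
      return process.nextTick(cb, null)
    }
    this.backing.put(index, buf, cb) // RAM cap reached: spill to backing
  }
  get (index, cb) {
    if (this.mem.has(index)) return process.nextTick(cb, null, this.mem.get(index))
    this.backing.get(index, cb)
  }
}
```

A real implementation would also need eviction (moving old chunks out of RAM, not just routing new ones), plus `close`/`destroy` to match the chunk-store interface.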
Hey @aalhama,

After looking through a memory snapshot, the memory fills up because of the use of immediate-chunk-store in torrent.js@492, which keeps each chunk in memory until it has been written to the chunk store set in opts.store. In the current implementation this seems unavoidable if the chosen chunk store is too slow to persist the chunks. It only appears to be an issue when seeding, since reads from disk are faster than writes to the chunk store. Would reading only chunks of the file at a time be possible, instead of processing the whole file at once? Hopefully this has been useful, but I would love to hear people's thoughts on how to deal with this.
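To make the buffering behavior described above concrete, here is a simplified sketch (not the real immediate-chunk-store source) of why chunks pile up in RAM: `put` holds each chunk in memory until the backing store's write completes, so if writes lag behind reads, the in-memory map grows without bound.

```javascript
// Simplified model of immediate-chunk-store's behavior (illustrative only).
class ImmediateStoreSketch {
  constructor (backing) {
    this.backing = backing
    this.mem = new Map() // chunks held in RAM until the backing write finishes
  }
  put (index, buf, cb) {
    this.mem.set(index, buf)             // held in memory immediately
    this.backing.put(index, buf, (err) => {
      this.mem.delete(index)             // only freed once the slow write completes
      if (cb) cb(err)
    })
  }
  get (index, cb) {
    if (this.mem.has(index)) return cb(null, this.mem.get(index))
    this.backing.get(index, cb)
  }
}
```

When seeding, the hasher can read chunks off disk far faster than, say, IndexedDB can persist them, so `mem` keeps accumulating entries, which matches the memory growth seen in the snapshot.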
I have confirmed the issue is caused by immediate-chunk-store, which I was able to avoid. All the best.
Can you say how I can change the store function, and is there a default?
Hey @GooG2e,

For further questions I would suggest creating a new issue rather than tagging onto a different one (you can close it after creation). I already included an example, which is linked in the thread above but can be seen directly here. As for a custom store without using npm, give this a look. If you need any more help, please create a new issue.
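For readers landing here, a minimal custom store follows the abstract-chunk-store interface (`put`, `get`, `close`, `destroy`) that WebTorrent's `store` option expects. This sketch is in-memory for brevity and is an assumption of how a custom store is shaped, not the example linked above; a real fix for this issue would write to disk or IndexedDB instead.

```javascript
// Minimal abstract-chunk-store-style implementation (in-memory, illustrative).
class MemoryChunkStore {
  constructor (chunkLength) {
    this.chunkLength = chunkLength
    this.chunks = []
  }
  put (index, buf, cb = () => {}) {
    this.chunks[index] = buf
    process.nextTick(cb, null)
  }
  get (index, opts, cb) {
    if (typeof opts === 'function') { cb = opts; opts = {} }
    const buf = this.chunks[index]
    if (!buf) return process.nextTick(cb, new Error('Chunk not found'))
    const offset = opts.offset || 0
    const len = opts.length || (buf.length - offset)
    process.nextTick(cb, null, buf.slice(offset, offset + len))
  }
  close (cb = () => {}) { process.nextTick(cb, null) }
  destroy (cb = () => {}) { this.chunks = []; process.nextTick(cb, null) }
}
```

Note that WebTorrent's `store` option takes the constructor itself (e.g. `client.seed(file, { store: MemoryChunkStore }, ontorrent)`); WebTorrent instantiates it per torrent with the piece length.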
Can you check whether this PR helps with this? #1456 I created a deployment of instant.io using it here: https://instant-io-idbkv.glitch.me/
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
What version of WebTorrent?
webtorrent@0.98.20
What operating system and Node.js version?
Linux Mint 18.2, Node.js v6.12.3
What browser and version? (if using WebTorrent in the browser)
Google Chrome 64.0.3282
Hello, the problem is that when using WebTorrent in the browser to view a multimedia file of around 500 MB or larger, RAM fills up until either the browser kills the process or the machine runs out of memory. Researching, I have seen that "chunk" stores can be used to save the content locally and seed from the hard drive. The most interesting modules have been the following:

- idb-chunk-store
- fs-chunk-store
- chunk-store-stream
- ls-chunk-store
I tried the first one (idb-chunk-store) using the `store` parameter that WebTorrent offers to point at IndexedDB, but it has not been successful: although everything stores the data correctly, after hashing the torrent the RAM keeps filling without stopping. I would like to solve this problem that many people experience. Thank you very much.