
Seed a torrent file without generating RAM using "Chunks" #1303

Closed

aalhama opened this issue Feb 19, 2018 · 7 comments

@aalhama aalhama commented Feb 19, 2018

What version of WebTorrent?
webtorrent@0.98.20

What operating system and Node.js version?
Linux Mint 18.2, Node.js v6.12.3

What browser and version? (if using WebTorrent in the browser)
Google Chrome 64.0.3282

Hello, the problem is that when using WebTorrent in the browser to watch a media file of around 500 MB or larger, RAM fills up until either the browser kills the process or the machine grinds to a halt. While researching, I found that "chunk" stores can be used to save the content locally and seed from the hard drive. The most promising modules were:
-idb-chunk-store
-fs-chunk-store
-chunk-store-stream
-ls-chunk-store

I tried the first one (idb-chunk-store), passing it through the "store" option that WebTorrent offers to point it at IndexedDB, but without success: even though the data is stored correctly, after hashing the torrent the RAM keeps filling up without stopping. I would like to solve this problem, which many people experience. Thank you very much.
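For context, WebTorrent's `store` option expects a constructor implementing the abstract-chunk-store interface (`put`/`get`/`close`/`destroy` with async callbacks); idb-chunk-store implements the same contract on top of IndexedDB. A minimal in-memory sketch of that interface, for illustration only:

```javascript
// Minimal in-memory store following the abstract-chunk-store
// interface. Illustration only: a real deployment would back this
// with IndexedDB (idb-chunk-store) or the filesystem (fs-chunk-store).
class MemoryChunkStore {
  constructor (chunkLength) {
    this.chunkLength = chunkLength // fixed size of every chunk but the last
    this.chunks = new Map()
  }

  put (index, buf, cb) {
    this.chunks.set(index, buf)
    process.nextTick(cb, null) // callbacks must fire asynchronously
  }

  get (index, opts, cb) {
    if (typeof opts === 'function') { cb = opts; opts = {} }
    const buf = this.chunks.get(index)
    if (!buf) return process.nextTick(cb, new Error('chunk not found'))
    const offset = (opts && opts.offset) || 0
    const length = (opts && opts.length) || buf.length - offset
    process.nextTick(cb, null, buf.slice(offset, offset + length))
  }

  close (cb) { process.nextTick(cb, null) }
  destroy (cb) { this.chunks.clear(); process.nextTick(cb, null) }
}
```

A store class like this is what gets passed as `{ store: MemoryChunkStore }` when seeding or adding a torrent.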

@feross feross added the question label Feb 20, 2018

@Weedshaker Weedshaker commented Feb 20, 2018

I was just trying something similar:

#1293
@SilentBot1 pointed out that the table gets overwritten because `name` is not set... that may be the issue here too.

In the case you are describing, a hybrid store might be the best match: use memory, and gradually move chunks onto disk (idb) as in-memory storage fills up. Something similar to RAM/SWAP behavior! Sounds like a feature request to me... smile
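The RAM/SWAP idea above could be sketched as a two-tier store: chunks live in memory up to a budget, and the oldest ones are evicted to a slower backing tier. The class and option names here are hypothetical, and the "disk" tier is simulated with a second Map where a real build would use idb-chunk-store:

```javascript
// Hybrid store sketch: a fast in-memory tier with a chunk budget,
// spilling the oldest chunks to a slow tier (stand-in for IndexedDB).
class HybridChunkStore {
  constructor (chunkLength, { maxMemoryChunks = 64 } = {}) {
    this.chunkLength = chunkLength
    this.maxMemoryChunks = maxMemoryChunks
    this.mem = new Map()  // fast tier ("RAM")
    this.disk = new Map() // slow tier stand-in ("SWAP")
  }

  put (index, buf, cb) {
    this.mem.set(index, buf)
    // Evict oldest in-memory chunks once over budget
    // (Map preserves insertion order).
    while (this.mem.size > this.maxMemoryChunks) {
      const [oldest, data] = this.mem.entries().next().value
      this.mem.delete(oldest)
      this.disk.set(oldest, data)
    }
    process.nextTick(cb, null)
  }

  get (index, opts, cb) {
    if (typeof opts === 'function') { cb = opts; opts = {} }
    const buf = this.mem.get(index) || this.disk.get(index)
    if (!buf) return process.nextTick(cb, new Error('chunk not found'))
    process.nextTick(cb, null, buf)
  }
}
```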

Although, in all seriousness @feross, a webtorrent cache plus in-memory/disk features would likely solve several issues webtorrents are having.

  1. Cache: webtorrents fluctuate (disappearing when all peers go offline); a cache would let peers boot torrents back up from their built-in cache === less fluctuation.

  2. RAM/SWAP: browsers die of overload pretty quickly. RAM/SWAP-like techniques would help browsers cope better with the amount of data.

Cheers


@SilentBot1 SilentBot1 commented Feb 24, 2018

Hey @aalhama,

After looking through a memory snapshot, the cause of the memory filling up is due to the usage of immediate-chunk-store in torrent.js@492, which stores the chunk in memory until it is written to the chunk store set in opts.store. In the current implementation of immediate-chunk-store in the webtorrent library, this seems unavoidable if the chosen chunk store is too slow to store the chunks.

This seems to only be an issue when seeding files as read speeds from disk are greater than write speeds to the chunk store. Would reading only chunks of the file at a time be possible instead of processing the whole file at once?
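The buffering pattern described above can be sketched in a few lines. This is not immediate-chunk-store's actual source, just an illustration of its mechanism: reads are answered from an in-memory map while the slow underlying `put()` is still pending, and if writes complete slower than new chunks arrive, that map grows without bound:

```javascript
// Sketch of the immediate-chunk-store pattern: chunks are held in
// `mem` so get() can serve them before the slow underlying put()
// completes. A slow backing store makes `mem` grow without bound,
// which is the memory blow-up described above.
class ImmediateStoreSketch {
  constructor (store) {
    this.store = store
    this.mem = new Map() // chunks with an in-flight write
  }

  put (index, buf, cb) {
    this.mem.set(index, buf)
    this.store.put(index, buf, (err) => {
      this.mem.delete(index) // freed only once the slow write finishes
      cb(err)
    })
  }

  get (index, cb) {
    if (this.mem.has(index)) {
      return process.nextTick(cb, null, this.mem.get(index))
    }
    this.store.get(index, cb)
  }
}
```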

Hopefully this has been useful; I would love to hear people's thoughts on how to deal with this.

@SilentBot1 SilentBot1 commented Mar 3, 2018

@aalhama @feross,

I have confirmed the issue is caused by the immediate-chunk-store `store.mem` array filling up with pending writes when a slow store, such as an IndexedDB store, is used. Because the read stream is faster than the store, pending chunks waiting to be written are held in `store.mem` until each `store.put` finishes. On large files (4 GB in my case) this eventually builds up and crashes the browser (tested in both Chrome 64.0.3282.186 and Firefox 58.0.2).

I was able to avoid `store.mem` filling up by applying a stream throttle, such as node-throttle, in the stream pump here. However, a universal throttle speed isn't a good solution because user environments vary; implementing some form of backpressure, such as a variable-bit-rate throttle based on the immediate-chunk-store pending writes, would be my two cents, but I would love to hear other ideas on how to deal with the situation.

All the best.


@GooG2e GooG2e commented May 26, 2018

Can you explain how to change the store function, and is there a default?
I ask because I tried setting the store to idb-chunk-store and it doesn't work.
Can I add this store without npm, as a plain js file in the HTML page?


@SilentBot1 SilentBot1 commented May 26, 2018

Hey @GooG2e,

For further questions, I would suggest creating a new issue with your question (and closing it after creation) instead of tagging onto a different issue. I already included an example, which is linked in the thread above, but it can be seen directly here. As for a custom store without using npm, give this a look.

If you need any more help, please create a new issue.


@KayleePop KayleePop commented Aug 2, 2018

Can you see if this PR helps with this? #1456

I created a deployment of instant.io using it here https://instant-io-idbkv.glitch.me/

@stale stale bot commented Oct 31, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale label Oct 31, 2018
@stale stale bot closed this Nov 7, 2018
@lock lock bot locked as resolved and limited conversation to collaborators Feb 5, 2019