Is 100GB sequential writes with a seeded non-IPFS host and 80 servers pulling possible? #105
Comments
Correct, or at least somewhat in order, once enough data has been downloaded in order for a write operation to occur. This would be better optimized for spinning media than jumping between sectors. I suppose the same would apply while streaming video.
Seems like this would be an easy way, in an Internet environment, for a user to move over to using IPFS as standard, if it supported an HTTP protocol. Example: start the local daemon, browse websites; the daemon notices it does not have some CDN content, starts downloading it, and then serves it to the user normally. Others in your swarm could then pull that CDN content from your daemon.
Oh interesting, is there a GitHub ticket I could follow for this functionality? Thanks for your help.
This issue has been moved to https://discuss.ipfs.io/t/is-100gb-sequential-writes-with-a-seeded-non-ipfs-host-on-80-servers-pulling-posible/299.
You can add files from a URL to IPFS.

Merkle-dag:

```
$ ipfs urlstore add https://discordapp.com/assets/9c38ca7c8efaed0c58149217515ea19f.png
zb2rhmKRyibA2tfdE1cFzcLSnhhXpUKMrZcaYPkq92ZB8rSrh
```

Trickle-dag:

```
$ ipfs urlstore add -t https://discordapp.com/assets/9c38ca7c8efaed0c58149217515ea19f.png
zdj7WcqTgCmCLRajzvT3HeS9jUUMDoS4zz5UAF6vVu9ceYtEd
```
gerrickw commented Mar 30, 2016
Attempting to see if my use case is possible, what methods would make it possible, or whether this would be a bad solution. This will be multiple questions, but all toward the same goal.
Use case: I would have 80 servers attempting to download a 100GB file as quickly as possible in an intranet environment.
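One way this use case is often approached with the IPFS CLI is to add the file on a single seeding node and have every other server fetch it by CID, letting peers serve each other blocks as they arrive. The sketch below is illustrative only and is not from the thread: the hostnames, paths, and the choice of the filestore `--nocopy` option (to avoid duplicating a 100GB file into the datastore) are my assumptions.

```shell
# Hypothetical sketch; paths and the --nocopy choice are assumptions.

# --- On the seeding host ---
# Enable the (experimental) filestore so large files can be referenced
# in place instead of being copied into the datastore.
ipfs config --json Experimental.FilestoreEnabled true

# Add the file; -Q prints only the final CID.
CID=$(ipfs add --nocopy -Q /data/bigfile.bin)
echo "seeded as $CID"

# --- On each of the 80 pulling servers ---
# Fetch by CID; servers that already hold some blocks will also
# serve them to their peers over Bitswap.
ipfs get "$CID" -o /data/bigfile.bin
```

Whether the resulting writes land sequentially on disk depends on the order blocks arrive, which is part of what this issue is asking about.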
Questions:
Thanks for any help people can supply -- even answering a single question would be helpful. Thanks!