This repository has been archived by the owner. It is now read-only.

What is better? Large containers or large sets of files? #250

Closed
ghost opened this issue Apr 16, 2017 · 2 comments

Comments


ghost commented Apr 16, 2017

Hello everyone! I am wondering whether it is better to share one big dump (e.g. the latest English Wikipedia dump as a single file) or the many small files that constitute the dump.

Arguments for big dumps: performance, since the whole dump is addressed by a single root hash in the network.

Arguments for small files: deduplication. If gnome-4.5.tar.gz is already in the network, it would not be deduplicated when a Linux ISO containing that same file is later added as one big file. With small files, only genuinely new additions need new storage space.
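To make the deduplication argument concrete, here is a minimal sketch of fixed-size chunking (the default strategy in go-ipfs, though at 256 KiB rather than the toy size used here). The data, the chunk size, and the `chunks` helper are all illustrative, not the real IPFS API. Note the caveat it exposes: an embedded file only shares chunks with its standalone copy when the chunk boundaries happen to align.

```python
# Hedged sketch: fixed-size chunk deduplication and its alignment caveat.
# Constants and byte strings are illustrative; go-ipfs defaults to 256 KiB
# chunks and uses a UnixFS DAG, not this simplified model.
import hashlib

CHUNK = 4  # toy chunk size for demonstration only


def chunks(data: bytes) -> set[str]:
    """Split data into fixed-size chunks and return the set of chunk hashes."""
    return {
        hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        for i in range(0, len(data), CHUNK)
    }


tarball = b"AAAABBBB"                  # stands in for gnome-4.5.tar.gz
aligned_iso = tarball + b"CCCCDDDD"    # tarball starts on a chunk boundary
shifted_iso = b"X" + tarball + b"CCC"  # tarball embedded at a 1-byte offset

print(len(chunks(tarball) & chunks(aligned_iso)))  # 2 chunks shared
print(len(chunks(tarball) & chunks(shifted_iso)))  # 0 chunks shared
```

So with fixed-size chunking, sharing the small files separately guarantees deduplication, while embedding them in a big dump only deduplicates if the offsets cooperate (content-defined chunkers like rabin, which ipfs also supports, mitigate this).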

This resembles the dichotomy between Debian's practice of linking shared packages where possible (reducing both disk space and RAM requirements) and Docker's self-contained images.

What do you think?

hsanjuan (Member) commented Apr 23, 2017

> one hash in the network.

ipfs will chunk big files, so you will have many hashes anyway.
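The point above can be sketched numerically. Assuming go-ipfs's default 256 KiB chunk size, a 1 MB "dump" already becomes several leaf hashes plus a root; the Merkle-root construction below is a simplification of the real UnixFS DAG, used here only to show that "one file" does not mean "one hash".

```python
# Sketch: even a single big file yields many hashes once chunked.
# The 256 KiB chunk size matches the go-ipfs default, but the root-hash
# construction is simplified, not the actual UnixFS DAG layout.
import hashlib

CHUNK = 256 * 1024  # go-ipfs default chunker size

data = bytes(1_000_000)  # a 1 MB stand-in for a dump file
chunk_hashes = [
    hashlib.sha256(data[i:i + CHUNK]).hexdigest()
    for i in range(0, len(data), CHUNK)
]
# Simplified root: hash of the concatenated leaf hashes.
root = hashlib.sha256("".join(chunk_hashes).encode()).hexdigest()

print(len(chunk_hashes))  # 4 leaf hashes, plus one root hash
```

A multi-gigabyte Wikipedia dump would therefore be tens of thousands of blocks either way; the real trade-off is whether chunk boundaries let those blocks be shared, not the raw hash count.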

flyingzumwalt (Contributor) commented May 23, 2017
