
avoid duplicating files added to ipfs #875

Open
anarcat opened this Issue · 32 comments
@anarcat

it would be very useful to have files that are passed through ipfs add not copied into the datastore. for example, i just added a 3.2GB file here, which means the disk usage for that file has now doubled!

Basically, it would be nice if the space usage for adding files were O(1) instead of O(n), where n is the total size of the files added...

@jbenet
IPFS member

Yep, this can be implemented as either (a) a different repo altogether, or (b) just a different datastore. It should certainly be an advanced feature, as moving or modifying the original file at all would render the objects useless, so users should definitely know what they're doing.

note it is impossible for ipfs to monitor changes constantly, as it may be shut down when the user modifies the files. this sort of thing requires an explicit intention to use it this way. An intermediate point might be to give ipfs a set of directories to watch/scan and make available locally. this may be cpu intensive (may require lots of hashing on each startup, etc).

@anarcat

the way git-annex deals with this is by moving the file to a hidden directory (.git/annex/objects/[hashtree]/[hash]), making it read-only, and symlinking it back from the original location.

it's freaking annoying to have all those symlinks there, but at least there's only one copy of the file.
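Roughly, in Go, that move-and-symlink flow could look like the sketch below (the hash choice, directory layout, and names are illustrative only, not git-annex's actual scheme):

package annexsketch

import (
    "crypto/sha256"
    "encoding/hex"
    "io"
    "os"
    "path/filepath"
)

// annexStyleAdd hashes a file, moves it into objectsDir under its hash,
// marks it read-only, and leaves a symlink at the original path.
// Layout and hash choice are illustrative, not git-annex's actual scheme.
func annexStyleAdd(objectsDir, path string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    h := sha256.New()
    _, err = io.Copy(h, f)
    f.Close()
    if err != nil {
        return err
    }

    sum := hex.EncodeToString(h.Sum(nil))
    dest := filepath.Join(objectsDir, sum[:2], sum) // shard by hash prefix
    if err := os.MkdirAll(filepath.Dir(dest), 0755); err != nil {
        return err
    }
    if err := os.Rename(path, dest); err != nil { // move, don't copy
        return err
    }
    if err := os.Chmod(dest, 0444); err != nil { // make it read-only
        return err
    }
    return os.Symlink(dest, path) // point the original path at the object
}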

@MichaelMure

ipfs could track files the same way a media player tracks its media collection:

  • track files in the background, possibly with a low OS priority
  • do a complete check (hash the file) on demand, when the file is requested by the network or the user
  • quickly invalidate already-shared files by checking whether they still exist on disk and whether the file size has changed (see the sketch below)
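A minimal sketch of that cheap invalidation check, assuming a made-up index entry type (the full rehash would only happen on demand):

package trackersketch

import "os"

// trackedFile is a hypothetical index entry; the field names are made up.
type trackedFile struct {
    Path string
    Size int64
    // An mtime could be recorded as well for a slightly stronger check.
}

// stillValid does the cheap invalidation: the file must still exist and
// have the recorded size. A full rehash only happens on demand, when the
// data is actually requested by the network or the user.
func stillValid(t trackedFile) bool {
    info, err := os.Stat(t.Path)
    if err != nil {
        return false // missing or unreadable: drop the reference
    }
    return info.Size() == t.Size
}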
@vitzli

Hello, my name is vitzli and I'm a dataholic
I am just a user and I operate several private/LAN repositories for open source projects; that gives me the ability to update my operating systems and speed up VM deployment when the Internet connection doesn't work very well. Right now the Debian repository is approximately 173 GB and 126,000 files; the Debian images are about 120 GB and I share them over BitTorrent (I'm using jigdo-lite to build them from the repository and download the difference between the current repo and the required template from the mirror). While I prefer to use public and official torrent trackers, some projects, like FreeBSD, do not offer torrent files and I get them over private trackers.
The same Debian/CentOS images are there too and I don't mind sharing them for some sweet ratio. There is no need for them to be writable, so I keep them owned by root with 644 permissions. Unfortunately, people combine several images into one torrent, which breaks the infohash (and splits the DHT swarms), so I have to keep two copies of the ISO images (I symlink/hardlink them to cope with that). As far as I understand, this won't be an issue with ipfs, but I would really like to keep those files where they are (as files on a read-only root:644 partition, not symlinks/hardlinks; potentially they could even be mounted over the local network). If ipfs were used to clone/copy/provide a CDN for the Internet Archive and the Archive Team, the problems would be similar. Here is my list of demands, dealbreakers and thoughts for an ipfs addref command (or whatever it may be called):

  • I assume that:
    1. files in storage external to ipfs are read much more often than they are written;
    2. probably nobody would like to break their existing storage for ipfs;
    3. I refer to both files and directories as "files";
    4. I use B for bytes and b for bits;
  • files should be referenced in ipfs, not the other way around; there are ipns/ipfs mount points, but I need to think/read/practice more about that.
  • files are stored as files in an ext4/ZFS/XFS filesystem with an arbitrary directory structure; it could (more likely) be read-only, or mounted on a read-only partition, or:
  • files are accessed over a mounted network directory (NAS/SMB, NFS, Ceph, something else); that box could (should, really) be accessed in read-only mode - being hit by a cryptolocker capable of encrypting network-attached storage is a hell of a day;
  • personally, I'm okay with 1 kB of ipfs storage on average per 256 kB of referenced data: this gives 1.1 GB of ipfs storage per 300 GB of referenced files, 39 GB of ipfs storage per 10 TB of files, and 273 GB per 70 TB - I could live with that, but it could be less;
  • ability to put files into a fast SSD-like cache (configurable per file, root node, or source directory? seems like a related feature, but it could/should be offloaded to the underlying filesystem)
  • I am sorry for the harsh words, but rehashing referenced files on startup is unacceptable - for 200k files and 400 GB of repository it may take tens of minutes (and I don't want to think about rehashing 60 TB of data); even trivial checking of size and modification/creation date would be slow-ish (maybe a 'check file's hash on request' flag for files?). Although I would agree with rehashing on demand and when a file is referenced;
  • I have no opinion on tracking files in background mode; it may be a feature, but I have no idea how it would look performance-wise. It could be a million text/small-ish binary files, so… providing a centos+fedora+epel+debian+ubuntu+freebsd mirror over ipfs would probably break the 1 million files barrier and 1 TB in size;
  • [very unlikely] ability to pull a file/directory from ipfs storage and reference it back. It could be split into get and addref tasks - this seems excessive, but somebody may ask for it.
@rubiojr

Here's a disk usage plot when adding a large (~3.4 GiB) file:

[1059][rubiojr@octox] ./df-monitor.sh ipfs add ~/mnt/octomac/Downloads/VMware-VIMSetup-all-5.5.0-1623099-20140201-update01.iso
1.51 GB / 3.34 GB [===============================>--------------------------------------] 45.13 % 5m6s
Killed

[storage_usage: plot of disk usage while adding the file]

~12 GiB used while adding the file.

Halfway through I had to kill ipfs because I was running out of space.

Somewhat related: is there a way to clean up the partially added stuff after killing ipfs add?

UPDATE: it seems that ipfs repo gc helps a bit with the cleanup, but does not recover all the space.

@rubiojr

A couple of extra notes about the disk usage:

  • The file was the first one added after ipfs init && ipfs daemon
  • If I re-add the file after killing the first ipfs add and running ipfs repo gc the file is added correctly using only the required disk space:
[1029][rubiojr@octox] du -sh ~/.go-ipfs/datastore/
3,4G    /home/rubiojr/.go-ipfs/datastore/
  • If I add another large file after the first one, the disk used during the operation is roughly the same as the file size added (which is expected I guess).

Anyway, I've heard you guys are working on a new repo backend, so I just added this for the sake of completeness.

@whyrusleeping
IPFS member

@rubiojr the disk space is being consumed by the eventlogs, which is on my short list for removing from ipfs. check ~/.go-ipfs/logs

@rubiojr

@whyrusleeping not in this case apparently:

[~/.go-ipfs]
[1106][rubiojr@octox] du -h --max-depth 1
12G ./datastore
5,8M    ./logs
12G .
[~/.go-ipfs/datastore]
[1109][rubiojr@octox] ls -latrS *.ldb|wc -l
6280
[~/.go-ipfs/datastore]
[1112][rubiojr@octox] ls -latrSh *.ldb|tail -n5
-rw-r--r-- 1 rubiojr rubiojr 3,8M abr  8 22:59 000650.ldb
-rw-r--r-- 1 rubiojr rubiojr 3,8M abr  8 23:00 002678.ldb
-rw-r--r-- 1 rubiojr rubiojr 3,8M abr  8 23:02 005705.ldb
-rw-r--r-- 1 rubiojr rubiojr 3,8M abr  8 23:01 004332.ldb
-rw-r--r-- 1 rubiojr rubiojr 3,8M abr  8 23:00 001662.ldb

6280 ldb files, averaging 3.8 MB each. This is while adding a 1.7 GiB file and killing the process before ipfs add finishes; first ipfs add after running ipfs daemon -init.

@rubiojr

The leveldb files did not average 3.8 MiB each, some of them were smaller in fact. My bad.

@whyrusleeping
IPFS member

wow. That sucks. But should be fixed quite soon, i just finished the migration tool to move block storage out of leveldb.

@jbenet
IPFS member

since this is a highly requested feature, can we get some proposals of how it would work with the present fsrepo ?

@cryptix
IPFS member

My proposal would be a shallow repo that acts like the index of a torrent file: it assumes it can serve a block until it actually tries to open the file from the underlying file system.

I'm not sure how to manage chunking. Saving (hash)->(file path, offset) should be fine, I guess?

@loadletter

Saving (hash)->(file path, offset) should be fine

Something like (hash)->(file path, mtime, offset) would help check whether the file has changed.

@whyrusleeping
IPFS member

something like (hash)->(path, offset, length) is what we would need, and we'd rehash the data upon read to ensure the hash matches.
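A hedged sketch of what that index entry and verify-on-read could look like; the types, the use of plain SHA-256, and the function names are assumptions for illustration (real go-ipfs blocks are addressed by multihash):

package blockrefsketch

import (
    "bytes"
    "crypto/sha256"
    "errors"
    "io"
    "os"
)

// blockRef is a hypothetical (hash)->(path, offset, length) index entry.
// An mtime field could be added for a cheap staleness check before paying
// for the rehash, as suggested above.
type blockRef struct {
    Hash   []byte
    Path   string
    Offset int64
    Length int64
}

// readBlock reads the referenced byte range from the original file and
// rehashes it, so a moved or modified file fails loudly instead of
// serving bad data.
func readBlock(ref blockRef) ([]byte, error) {
    f, err := os.Open(ref.Path)
    if err != nil {
        return nil, err
    }
    defer f.Close()

    buf := make([]byte, ref.Length)
    sr := io.NewSectionReader(f, ref.Offset, ref.Length)
    if _, err := io.ReadFull(sr, buf); err != nil {
        return nil, err // file truncated or unreadable
    }
    sum := sha256.Sum256(buf)
    if !bytes.Equal(sum[:], ref.Hash) {
        return nil, errors.New("block content changed on disk")
    }
    return buf, nil
}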

@jbenet
IPFS member

piecing it together with the repo is trickier. maybe it can be a special datastore that stores this index info in the flatfs, but delegates looking up the blocks on disk. something like

// in shallowfs
// stores things only under dataRoot. dataRoot could be `/`.
// stores paths, offsets, and a hash in metadataDS.
func New(dataRoot string, metadataDS ds.Datastore) { ... }

// use
fds := flatfs.New(...) 
sfs := shallowfs.New("/", fds)
@whyrusleeping
IPFS member

would be cool if linux supported symlinks to segments of a file...

@davidar
IPFS member

Perhaps separating out the indexing operation (updating the hash->file-segment map) from actually adding files to the repo might work? The indexing could be done mostly separately from ipfs, and you'd be able to manually control what needs to be (re-)indexed. The blockstore then checks if the block has been indexed already (or passes through to the regular datastore otherwise).

@striepan

Copy-on-write filesystems with native deduplication can be relevant here. For example https://btrfs.wiki.kernel.org

Copying files just adds a little metadata; the data extents are shared. I can use it with big torrents: I can edit files while still being a good citizen and seeding the originals. The additional disk space usage is only the size of the edits.

symlinks to segments of a file

are just files sharing extents

On adding a file that is already in the datastore you could trigger deduplication and save some space!

I am sure there are a lot of other more or less obvious ideas, and some crazier ones, like using union mounts (unionfs/aufs) with ipfs as a read-only fs and a read-write fs mounted over it for network live-distro installation, or combining it with the other VM ideas floating around here.

@jbenet
IPFS member

@striepan indeed! this all sounds good.

If anyone wants to look into making an fs-repo implementation patch, this could come sooner. (right now this is lower prio than other important protocol things.)

@lgierth lgierth added the repo label
@hmeine

I agree with @striepan; I even believe that copy-on-write filesystems are the solution to this problem. What needs to be done in ipfs, though, is to make sure that the right modern API (kernel ioctl) is used so the copy is efficient. Probably go-ipfs just uses the native Go APIs for copying, so we should eventually benefit from Go supporting recent Linux kernels, right? Can anybody here give a definite status report on that?
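For reference, the Linux interface in question is the FICLONE ioctl (what cp --reflink uses). A minimal Linux-only sketch, assuming the golang.org/x/sys/unix wrapper and a reflink-capable filesystem such as btrfs; this is not what go-ipfs currently does:

//go:build linux

package reflinksketch

import (
    "os"

    "golang.org/x/sys/unix"
)

// reflinkCopy makes dst share data extents with src via the FICLONE ioctl.
// It only works on filesystems that support reflinks (btrfs, XFS with
// reflink enabled) and fails with EOPNOTSUPP or EXDEV otherwise, in which
// case the caller would fall back to a plain copy.
func reflinkCopy(src, dst string) error {
    in, err := os.Open(src)
    if err != nil {
        return err
    }
    defer in.Close()

    out, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer out.Close()

    return unix.IoctlFileClone(int(out.Fd()), int(in.Fd()))
}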

@Mithgol

What would happen on Windows? (Are there any copy-on-write filesystems on Windows?)

@Kubuxu

I think Windows would keep working the way it does now.

@Mithgol

What would BitTorrent clients do in such situations? Do they check only modification time and filesize of shared files after they are restarted?

@lamarpavel

@Mithgol It depends on the client, but most of them have a session DB with binary files describing active torrents and their state. These are just descriptors and much smaller, so there is no duplication. Some torrent clients take a while to start up if you have many torrents active, which suggests a quick check of metadata for every file, but no complete hashing.

@anarcat

in my humble opinion, delegating this to a deduplicating filesystem just avoids the problem altogether: it doesn't fix the problem in ipfs, and assumes it will be fixed in the underlying filesystem.

@djdv

I feel like this is important given that IPFS may be run on platforms that do not have deduplicating filesystems, or on hardware that can't sustain a deduplication feature due to constraints (low memory, etc.). It seems likely that a system with storage constraints (one that would benefit from this feature) would also have memory limits, but that's just a broad assumption; I'm thinking of SoC systems.

*nix systems have ZFS and BTRFS, but to my knowledge Windows doesn't have any kind of standard/stable filesystem with copy-on-write support; even ReFS does not support deduplication itself but relies on external tools to handle it. I think relying on third-party filesystems via Dokan may not be the best option either, if IPFS can just handle this directly on all the platforms it runs on.

It would be unfortunate to reimplement a feature like this if the underlying filesystem already handles it, but IPFS is itself a filesystem, so it should probably have such a feature too.

Perhaps it's worth looking into Direct Connect clients and how they handle their hash lists; I believe it's similar to this.

transmission does a "quick" check (lstat, so size, mtime...) and rehashes if inconsistencies are found

I know of one client that (optionally) stores metadata inside an NTFS stream, so things like the hash, last hash time, size, mod time, etc. are stored with the file and can be read even if the file is moved around or modified. It does not modify the file itself, so it doesn't compromise integrity, and it doesn't rely on file names/paths. It's useful for keeping track of files even if the user messes around with them a lot: if a file in /a/ gets hashed and then moves from /a/ to /b/, it doesn't have to be rehashed; the client checks whether that data stream is there, runs checks on it, and knows the file hasn't changed, so it doesn't have to process it again, which can save a lot of time and processing depending on file size and overall file count. Likewise, if the file remains at the same path with the same name but was modified, there will be a modtime discrepancy, which triggers a rehash.

I don't know if other filesystems have a similar method of attaching sibling data like that, or if there's a more portable solution that does the same thing, but I feel it's worth mentioning.
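For illustration only, a sketch of that alternate-data-stream trick on Windows/NTFS; the stream name and the idea of storing an ipfs-specific blob there are invented for the example:

package adssketch

import "os"

// metaStream is a hypothetical alternate-data-stream name; "path:stream"
// only works on Windows/NTFS, and the stream is silently lost when the
// file is copied to a non-NTFS volume.
const metaStream = ":ipfs-filestore-meta"

// writeMeta stores a small metadata blob (hash, size, mtime at hash time,
// etc.) alongside the file without touching its main contents.
func writeMeta(path string, meta []byte) error {
    return os.WriteFile(path+metaStream, meta, 0644)
}

// readMeta reads the blob back; a missing stream means the file was never
// indexed on this volume.
func readMeta(path string) ([]byte, error) {
    return os.ReadFile(path + metaStream)
}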

@kevina

I might take a stab at this by creating a repo or datastore that simply uses files already in the filesystem. To me this seems an important step toward getting a large amount of data into IPFS. Disk space is cheap, but not free.

The best docs I can find are here https://github.com/ipfs/specs/tree/master/repo, if there is anything better please let me know.

Is there a way to have multiple repos or datastores with a single IPFS daemon? I am thinking one could be designated as the primary, where all cached data goes, and all the others secondary, requiring explicit actions to move data in or delete data from them. For the purposes of this issue I think a read-only repo or datastore will be sufficient. It can read a set of directories (or files) to make available from a config file, and can reread that file from time to time to pick up changes.

@MichaelMure

I'd like to work on that too.

You can see here how it is currently organized. There is:

  • a Datastore that stores arbitrary key/value pairs
  • a Blockstore that creates a content-addressed storage on top of this Datastore
  • a WriteCached that caches blocks for highly accessed data and bulk writes

On top of that Blockstore, a DagService allows working at a higher level on the MerkleDAG.

We could create a chain of responsibility that fetches blocks either from the regular Blockstore or from a new blockstore that provides blocks directly from regular on-disk files (see the sketch below).

The metadata that needs to be kept around could be stored either in the Datastore as key/value pairs or as a DAG, as the Pinner does.
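To make the chain-of-responsibility idea concrete, here is a minimal sketch using a simplified stand-in interface; the real go-ipfs Blockstore interface differs, so the names and signatures here are assumptions:

package chainsketch

import "errors"

var errNotFound = errors.New("block not found in any blockstore")

// blockstore is a deliberately simplified stand-in for the real interface.
type blockstore interface {
    Has(key string) (bool, error)
    Get(key string) ([]byte, error)
}

// chained asks each store in turn, e.g. a file-backed blockstore first and
// the regular flatfs-backed one as a fallback.
type chained struct {
    stores []blockstore
}

func (c *chained) Has(key string) (bool, error) {
    for _, s := range c.stores {
        if ok, _ := s.Has(key); ok {
            return true, nil
        }
    }
    return false, nil
}

func (c *chained) Get(key string) ([]byte, error) {
    for _, s := range c.stores {
        if b, err := s.Get(key); err == nil {
            return b, nil
        }
    }
    return nil, errNotFound
}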

@jefft0

In issue #2053, jbenet said "by extending IPLD to allow just raw data edge nodes, we can make #875 easy to implement". What is the status of using raw data edge nodes?

@Mithgol

To avoid duplicating files, two features are necessary:

  • publishing files to IPFS without duplication (i.e. such files become shared from their original locations, without any duplicate copies in the special IPFS storage),

  • saving files from IPFS without duplication (i.e. such files become shared from the locations chosen when saving them, and their original copies in the special IPFS storage are deleted).

@Kubuxu

The second of those features will be hard to achieve, as files might be sharded in many different ways, which means some smart system for storing the wrapping data and pointers to the raw data would have to be created.

Also, I don't see those features being feasible on non-CoW filesystems.

@jefft0

Is there an issue to discuss extending IPLD to allow just raw data edge nodes? This would be a basic part of the implementation.
