The other day I was looking into how IPFS works. Even though I don't care much about some of its use cases (e.g., storing pictures of apes on blockchains), the tooling around it seems pretty solid. The most interesting part is how much overlap there is between us and them in terms of technology. Like us, they are interested in storing file system contents. Also like us, they decided to use Merkle trees for that, which they store in a CAS.
What I think the IPFS folks did pretty well is that the data model of their CAS (called IPLD) treats links between objects as first-class citizens. This differs from our model, where an REv2 Directory object is indistinguishable from a file that happens to have the same contents. In our case the CAS is a flat namespace that only becomes DAG-like at the time of use, whereas in the case of IPFS the CAS is truly aware of the relationships between objects. This has made it fairly easy for them to implement generic tools for archiving and distributing contents. As a result, their `ipfs dag export` tool can be used to archive any hierarchy of objects into a CAR file, regardless of its schema.
Also good is that IPFS (though not necessarily IPLD itself) places a fairly strict limit on the maximum object size (1 MB). This means that facilities like (content-defined) chunking of files are already present there.
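The 1 MB cap is what forces IPFS to split large files into a tree of chunks before storing them. Purely as an illustration of the idea (this is not the algorithm IPFS actually ships, and the function name and parameters here are made up), a minimal content-defined chunker can look like this:

```python
def chunk_cdc(data: bytes, mask_bits: int = 12,
              min_size: int = 256, max_size: int = 1 << 20) -> list[bytes]:
    """Split data at content-defined boundaries.

    A toy stand-in for content-defined chunking: a hash is accumulated
    over the bytes seen since the last cut, and a cut is made whenever
    its low `mask_bits` bits are all zero (or `max_size` is reached).
    A production chunker would use a sliding-window rolling hash
    (Rabin fingerprints, Buzhash) so boundaries resynchronize after edits.
    """
    mask = (1 << mask_bits) - 1
    chunks: list[bytes] = []
    start = 0
    h = 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF
        length = i - start + 1
        if (length >= min_size and (h & mask) == 0) or length >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
            h = 0  # restart the hash at every cut
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

The point of cutting on content rather than at fixed offsets is that an insertion near the start of a file only perturbs the chunks around the edit, so most chunk hashes (and thus most CAS objects) stay stable.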
If we ever get to a point where we want to start working on REv3, I would strongly suggest that we at least spend some time browsing through the IPLD documentation and the related 'multiformats' project on GitHub to see if there are parts that we can repurpose. For example:
- Could our `Digest` message and `DigestFunction` enumeration be replaced with CIDs?
- Could our `Directory` message be embedded into DAG-PB?
- Or could we even go as far as dropping our `Directory` message in favour of the one used by UnixFS?
- Could the `ByteStream`/`ContentAddressableStorage` services be replaced by some known IPFS transport?
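On the first question: a CID is a self-describing hash, so the equivalent of our Digest/DigestFunction pair travels inline with every reference. A rough sketch of the CIDv1 byte layout (multicodec values hard-coded as single bytes, which only works because they are all below 0x80; a real implementation would use proper varints via a multiformats library):

```python
import base64
import hashlib

# Multicodec table entries.
SHA2_256 = 0x12   # multihash function code
RAW = 0x55        # content type: raw bytes
DAG_PB = 0x70     # content type: DAG-PB (what UnixFS uses)

def multihash_sha256(data: bytes) -> bytes:
    # multihash layout: <hash function code><digest length><digest>
    digest = hashlib.sha256(data).digest()
    return bytes([SHA2_256, len(digest)]) + digest

def cid_v1(data: bytes, codec: int = RAW) -> bytes:
    # CIDv1 layout: <version><content type codec><multihash>
    return bytes([0x01, codec]) + multihash_sha256(data)

def multibase_b32(cid: bytes) -> str:
    # Multibase prefix "b" selects lowercase base32 without padding,
    # giving the familiar "bafk..."/"bafy..." strings.
    return "b" + base64.b32encode(cid).decode("ascii").lower().rstrip("=")
```

Compared to an REv2 `Digest`, the hash function and the object's type live inside the identifier itself, which is exactly what lets generic IPFS tooling walk arbitrary DAGs without schema knowledge.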