
Sprint: (aka 300 TB Challenge) #87

flyingzumwalt opened this issue Jan 14, 2017 · 12 comments



Dates: 16-27 January 2017
Sprint Milestone:
Waffle Board:

Participants from IPFS Team:




During this sprint, we will work with collaborating institutions to load all of the data (350 TB of datasets) into IPFS, publish the hashes on the DHT, and replicate the data to nodes at participating institutions.

Main Issues & Boards for Tracking this Work

Sprint Milestone:
Waffle Board:


  • #104: Main Epic: Replicate 350 TB of Data Between 3 Peers (and then the World)
  • #107: Call for Participants/Collaborators for Sprint
  • #113: Download all of the datasets from


Top level objectives

  • Add (300+ TB) to IPFS so that people around the world can replicate authenticated copies of the data using IPFS
  • Provide advice to organizations who are adding large volumes of content to IPFS
  • Test IPFS performance at this scale and tune for performance, memory usage, stability, etc.
  • Improve User Experience for people adding and replicating large volumes of data
  • Identify possible next steps
  • Move towards making IPFS work at exabyte scale

What will be Downloaded

The website is a portal for searching through all the open data published by US federal agencies. It currently lists over 190,000 datasets. The goal is to download those datasets, back them up, and eventually publish them on IPFS, and replicate them across multiple institutions.

How is this different from the Internet Archive's EOT Harvest of

In short, the End of Term Presidential Harvest will capture the website but is not likely to capture the datasets it links to. We aim to capture and replicate the datasets.

From the Federal Depository Library Program website:

The Library of Congress, California Digital Library, University of North Texas Libraries, Internet Archive, George Washington University Libraries, Stanford University Libraries, and the U.S. Government Publishing Office (GPO) are leading a collaborative project to harvest and preserve public U.S. Government websites at the conclusion of the current Presidential administration ending on January 20, 2017.

This End of Term Presidential Harvest is focused on crawling websites (i.e. grabbing HTML & static files, following links). It is not focused on downloading whole datasets, which often have to be retrieved using tools and processes beyond the capabilities of regular web crawlers.

You can see the ongoing EOT Harvest work, and nominate sites for harvest, at the End of Term Presidential Harvest 2016 Website


  • Allocating Storage: It's hard to spin up 350 TB of storage on short notice. Ideally, we need at least 700 TB (one full copy backed up outside IPFS, another copy inside IPFS).
  • Time Constraints: We added this to the calendar at the last minute. The assigned IPFS team is only available for these 2 weeks. After that, they are booked on other sprints through the remainder of Q1. This means we need to make the most of this time and need to leave a trail for community members & collaborators to continue the work after the end of the sprint.


  • Isolating from the Main IPFS Network: We might do this on a separate "private" IPFS network to ensure stability while we load-test the system (we want to make sure it's all working smoothly before we flood the main public IPFS network with provide statements and additions to the DHT).

Possible Areas of Focus for Engineering Efforts

Areas that we might focus on in this sprint (needs prioritization):

  • ipfs-pack
  • Index-in-place (aka. IPFS "Filestore") -- allows you to serve content through IPFS without copying the content itself into the ipfs repo.
  • Providers UX
    • Providing only roots
    • Easy specification of blocks to autoprovide
    • Introspection of providers processes
  • Blockstore Perf
    • Analysis of different datastores
    • make datastores configurable (and tool for converting between them)
    • Run benchmarks on multi-TB datasets/repos
    • Investigate 'single file' blockstore
  • Delegated Content Routing
    • Supernode DHT
    • 'Trackers'
    • Multi-ContentRouting
    • DHT Record Signing
  • Memory Usage Improvements
    • Multiplex
    • Peerstore to disk
  • Deployment/Ops UX
    • see Operational Peace of Mind sprint

@flyingzumwalt flyingzumwalt self-assigned this Jan 14, 2017

@flyingzumwalt flyingzumwalt changed the title Sprint: Data.Gov (aka 300 TB Challenge) Sprint: (aka 300 TB Challenge) Jan 14, 2017

@flyingzumwalt flyingzumwalt added ready in progress and removed ready labels Jan 15, 2017


flyingzumwalt commented Jan 15, 2017

@jbenet suggested here:

Some things to do that would help, before Tuesday:

  • review existing filestore stuff
  • review ipfs-pack draft proposal -- ipfs/notes#205
  • make a short list of the "big bugs" and "big optimizations" relevant for this sprint. (i can think of a couple -- file attrs, bitswap supporting paths (kills so many RTTs) -- , but we'll want a good list to have in mind)
  • refine the concrete use case and UX we're shooting for
  • create the test workloads + scripts we'll work against:
  • review ipfs-s3 datastore options -- ipfs/notes#214

flyingzumwalt commented Jan 15, 2017

Sprint Prep Action Items



  • #104: refine the concrete use case and UX we're shooting for
  • #105: make a short list of the "big bugs" and "big optimizations" relevant for this sprint. (i can think of a couple -- file attrs, bitswap supporting paths (kills so many RTTs) -- , but we'll want a good list to have in mind)
  • #102: create the test workloads + scripts we'll work against


  • Set up a place to track sprint stories & milestones (probably an endeavor-specific repo)
  • Review the tools listed at, consider implications, write up conclusions
  • Generate List of Stories/Goals for the Sprint, associate them with a Milestone and a backlog
  • prioritize the objectives (because we won't get to them all)
  • plan for follow-up after the sprint is over
  • Get essential stories to "Ready"

kevina commented Jan 16, 2017

This sounds like it involves me. Is there a reason I am not mentioned?


flyingzumwalt commented Jan 16, 2017

@kevina this sprint arose very quickly based on sudden interest outside our org. I'm putting together the docs as quickly as I can so we can coordinate. I suspect that @jbenet and @whyrusleeping will pull you onto the sprint if you're available.


flyingzumwalt commented Jan 17, 2017

Sprint Planning: (aka 300 TB Challenge)

Date: 2017-01-17

Lead: @flyingzumwalt

Notetaker: @flyingzumwalt



Useful Links & Issues

Big Optimizations

TODO: dig up diagram @whyrusleeping created

  • Adding is still very slow. We can do way better.
    • adding large files is faster than adding lots of small files
    • need a way to test these things. See #102
    • @lgierth recently added ~3.2 TB for CCC. It took about a day to add. Performance dropped as the repo grew. Would have taken half a day if performance had stayed constant.
    • @Kubuxu ran some tests (see #105 (comment))
    • Path forward: design good tests. See #102
  • fetching from network is very slow (won't be able to fill the pipes)
    • tests should also address this
  • DHT with huge datasets might get oversaturated
    • DHT is not going to scale in time for this sprint, so we just won't use the DHT -- this means we need to find a way to do the routing. See #120
    • {@whyrusleeping mentioned something i didn't hear..}
  • Garbage Collection might not work with huge datasets
    • leaving GC out of scope for this sprint.
  • Bitswap hasn't really been tested yet
  • Private Networks -- do we need it in order to do #116 and #120? @Kubuxu will follow up with estimates (or merged code)

Test Suite

See #102

Currently not scaling well. We don't have good metrics, graphs, or reports about performance -- where, when, and how performance dipped under certain circumstances.

We need to know more than "Does it scale?" We need to know "How does it scale?" so we can identify the domain of problems.
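As a sketch of the kind of metric we're missing, even timing fixed-size add batches against a growing repo would show where throughput dips. This is a hypothetical harness, not part of the sprint tooling: the batch count and sizes are made up, and it only runs when a local, initialized go-ipfs repo is available.

```shell
#!/bin/sh
# Hypothetical probe: add fixed-size batches and time each one, so a drop in
# throughput as the repo grows shows up as rising per-batch times.
if command -v ipfs >/dev/null 2>&1 && ipfs repo stat >/dev/null 2>&1; then
  workdir=$(mktemp -d)
  for batch in 1 2 3 4 5; do
    # 64 MiB of random data per batch -- a small stand-in for multi-TB tests
    dd if=/dev/urandom of="$workdir/batch$batch" bs=1048576 count=64 2>/dev/null
    start=$(date +%s)
    ipfs add -q "$workdir/batch$batch" >/dev/null
    end=$(date +%s)
    echo "batch $batch: $((end - start))s"
  done
  rm -rf "$workdir"
else
  echo "no usable ipfs repo; skipping probe"
fi
```

Plotting per-batch times over many more batches is what would turn "performance dropped as the repo grew" into an actual curve we can work against.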


See filestore Stories & Epics

The current implementation mixes porcelain UX concerns with the underlying implementation/plumbing. This makes the interfaces confusing and complicated. It also makes the underlying plumbing more complicated and less robust than it should be.
Best approach: take the pieces of the code that we need and package them as an experimental feature with simple, straightforward interfaces.

@jbenet & @whyrusleeping need to sit down and figure out how they want to proceed with this. @flyingzumwalt will try to capture that info in the filestore Stories & Epics. Main things that need to be specified:

  • How to do the internals/plumbing
  • What the UX should look like


The case for ipfs-pack

Currently the way people use go-ipfs is with ipfs add, which creates a duplicate copy of the added data on the machine. With filestore we aim to build indexes of pointers to data/blocks in place. This solves the performance concerns, but creates a brittle situation -- if you move a file, ipfs won't be able to serve it any more. ipfs-pack aims to address this by building manifest files that hold the indexes matching ipfs hashes to the content. If you store those manifest files alongside the content they point to, it becomes a portable dataset.

Extending that idea, if you create little .ipfs repositories next to the manifest files, it becomes possible to

  • serve that dataset as its own little ipfs node
  • register the contents of that dataset with another ipfs node, serving the content directly from wherever you've stored/mounted it

Why implement ipfs-pack now?

  • Makes the UX much smoother for providers and their peers
  • packs make a lot of these concepts clear, straightforward & relatable
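As a sketch of that workflow (subcommand names here follow the draft proposal in ipfs/notes#205 and may change before the tool ships; the dataset path is a placeholder):

```shell
# Hypothetical ipfs-pack session over a downloaded dataset. The guard keeps
# the sketch from failing where the still-experimental tool isn't installed.
if command -v ipfs-pack >/dev/null 2>&1; then
  cd /path/to/dataset      # directory holding the downloaded data (placeholder)
  ipfs-pack make           # write a manifest of hashes next to the content
  ipfs-pack verify         # re-hash the files; detects moved or changed content
  ipfs-pack serve          # serve the pack as its own minimal ipfs node
else
  echo "ipfs-pack not installed; commands shown for reference only"
fi
```

Because the manifest lives next to the content, the whole directory can be rsynced or mailed on a drive and still verify and serve on the other end.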

commented Jan 18, 2017

I listened in from around 1:00 to 1:30.

I like the idea of ipfs-pack, but I see some potential problems. I have not had time to review the spec, so it would be premature to bring them up.

I too would like to be present for the meeting on the filestore core so I can give feedback before we try to implement anything. There are some tricky aspects regarding multiple files with the same hash that need to be addressed for this to be considered a stable format. Most likely, the existing code can be adopted.


flyingzumwalt commented Jan 24, 2017

@mejackreed so @whyrusleeping can do more realistic load testing in #126, can you please run `du -a {path-to-data}` on the datasets you've downloaded so far?


mejackreed commented Jan 25, 2017

currently 2053258584 /data/master/pairtree_root/


commented Jan 25, 2017

@mejackreed we are also interested in the whole output of this command; it will allow us to know the distribution of file sizes, directories, and so on.
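To give a sense of what the full listing yields, here is a small sketch that buckets a `du -a` listing by the digit count of the size column (roughly its order of magnitude). The sample input is made up, standing in for the real output from the downloaded datasets.

```shell
# Bucket each entry of a `du -a` listing by the number of digits in its size
# (KB blocks), giving a rough histogram of the file-size distribution.
printf '4\t./a/small.csv\n128\t./a/medium.csv\n2048\t./b/big.json\n1048576\t./c/huge.tar\n' \
  > /tmp/du-sample.txt
awk '{ bucket[length($1)]++ }
     END { for (b in bucket) printf "%d-digit KB sizes: %d file(s)\n", b, bucket[b] }' \
    /tmp/du-sample.txt | sort
```

On the real output, the same pipeline would tell us whether we're dealing mostly with many small files (slow to add) or a few huge ones (faster per byte).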


flyingzumwalt commented Jan 30, 2017

Report from Sprint

The IPFS team has reached the end of our sprint. Due to constraints on our very busy Q1 roadmap, we were only able to allocate a single sprint, 16-27 January 2017 (2 weeks), to work on this full-time. While we didn't reach all of the objectives, we have done our best to clear the path for our collaborators to finish the experiment. In the coming weeks, @flyingzumwalt will continue to participate in the project and the IPFS maintainers will provide information and advice when possible.

Within the IPFS team, we were excited to have the opportunity to help. This situation gets at one of the key reasons why we're building IPFS -- we want everyone to be able to hold and serve copies of the data they care about rather than relying on centralized services.

What we Accomplished

A number of collaborators have stepped up on very short notice to replicate these datasets. We're happy to tell them that the software is ready to use. Here's what @whyrusleeping, @Kubuxu, @kevina, @jbenet and @flyingzumwalt have done:

  • We've written instructions for you to follow when replicating the datasets (and any other datasets). They're titled Instructions for Replicating Large Amounts of Data with Minimal Overhead
  • We made version 0.4.5 of ipfs support adding content in-place. This means you can now add content to ipfs without ipfs creating a duplicate copy of the data. Using this approach cuts the storage overhead by nearly 50%.
  • We created ipfs-pack to simplify the process of adding downloaded datasets to the ipfs network.
  • We ran some preliminary tests to confirm that everything will work.
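For collaborators trying the in-place feature, the rough shape of the commands is below. The names used (`Experimental.FilestoreEnabled`, `--nocopy`) are the flags in current go-ipfs; the exact names in the 0.4.5 release may differ, and the dataset path is a placeholder.

```shell
# Sketch of adding content in place with the experimental filestore. Blocks
# reference the original files on disk instead of being copied into the repo,
# so moving or editing those files afterwards breaks the added hashes.
if command -v ipfs >/dev/null 2>&1 && ipfs repo stat >/dev/null 2>&1; then
  ipfs config --json Experimental.FilestoreEnabled true
  ipfs add --nocopy -r /path/to/dataset
else
  echo "no usable ipfs repo; commands shown for reference only"
fi
```

This is where the "nearly 50%" storage saving comes from: only the block index lives in the ipfs repo, not a second copy of the data.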

Next Steps

Next Steps with the Datasets

Specifically regarding the Datasets, next steps include:

@flyingzumwalt will remain the main point of contact on the ipfs side coordinating this work.

Next Steps for the IPFS Code Base

Relevant follow-up work on the IPFS code bases involves:

  • ipfs-cluster: IPFS Nodes Coordinating to Hold datasets
  • More Testing and Optimization
  • Deduplication of Datasets

Here's a breakdown of each:

ipfs-cluster: IPFS Nodes Coordinating to Hold datasets

The DataRescue effort has triggered multiple requests for tools that allow IPFS nodes to coordinate with each other in order to hold valuable content. We were already working on this functionality, under the name ipfs-cluster. The captain for ipfs-cluster is @hsanjuan. That code base will be moving forward throughout the quarter.

Some relevant discussions related to ipfs-cluster:

More Testing and Optimization to Come

We wish we could have done more testing and optimizing before the collaborators started replicating the actual datasets, but that work will have to wait for a few more weeks. We have two sprints scheduled later in the quarter that will be specifically focused on Improving our Testing & CI Infrastructure and Building a Proper Test Lab for Distributed Networks. We're confident that those tests will allow us to achieve major improvements in speed, stability, configurability, and security.

Deduplication of Datasets

In the aftermath of this high-speed effort to download datasets, people are now asking how to deduplicate datasets. This becomes especially relevant when we consider distributing and archiving datasets as they change over time -- if parts of the datasets stay the same between versions, we want to avoid storing & replicating them multiple times.

This has spurred interest in the different chunking algorithms IPFS supports. In particular, people are taking interest in rabin fingerprinting. Here are some Github issues where the discussion is happening:
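As an illustration, go-ipfs exposes the chunking strategy on `ipfs add`; with a content-defined (rabin) chunker, unchanged regions of an updated file tend to produce the same blocks, so only the changed chunks are stored and replicated again. The min/avg/max parameter values below are arbitrary examples, and the sample file is generated on the fly.

```shell
# Compare fixed-size chunking with rabin (content-defined) chunking on the
# same file. A later edited version of a rabin-chunked file would share most
# of its blocks with this one, deduplicating the unchanged regions.
if command -v ipfs >/dev/null 2>&1 && ipfs repo stat >/dev/null 2>&1; then
  head -c 1048576 /dev/urandom > /tmp/dataset-v1.bin   # 1 MiB sample file
  ipfs add -q --chunker=size-262144 /tmp/dataset-v1.bin
  ipfs add -q --chunker=rabin-16384-65536-131072 /tmp/dataset-v1.bin
else
  echo "no usable ipfs repo; chunker flags shown for reference only"
fi
```

Note that the two adds produce different root hashes for the same bytes, which is exactly why the choice of chunker matters for dataset versioning.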

Anything Else?

If we've missed anything important from our list of Next Steps, please let us know so we don't lose track of it.
