[DataCap Application] National Institutes of Health - 1000 Genomes #42

@yuyu2024110

Description

Data Owner Name

National Institutes of Health

Data Owner Country/Region

United States of America

Data Owner Industry

Life Science / Healthcare

Website

https://github.com/awslabs/open-data-docs/tree/main/docs/1000genomes

Social Media Handle

http://www.internationalgenome.org/contact

Social Media Type

Other

What is your role related to the dataset

Data Preparer

Total amount of DataCap being requested

7 PiB

Expected size of single dataset (one copy)

896 TiB

Number of replicas to store

8

Weekly allocation of DataCap requested

1 PiB

On-chain address for first allocation

f1yyvjru6ei4vuysexaxfa6kqskcp7zemtd7dalea

Data Type of Application

Public, Open Dataset (Research/Non-Profit)

Custom multisig

  • Use Custom Multisig

Identifier

No response

Share a brief history of your project and organization

The 1000 Genomes Project is an international collaboration which has established the most detailed catalogue of human genetic variation, including SNPs, structural variants, and their haplotype context. The final phase of the project sequenced more than 2500 individuals from 26 different populations around the world and produced an integrated set of phased haplotypes with more than 80 million variants for these individuals.

Is this project associated with other projects/ecosystem stakeholders?

No

If answered yes, what are the other projects/ecosystem stakeholders


Describe the data being stored onto Filecoin

The data to be stored are the publicly released outputs of the 1000 Genomes Project, hosted today in the AWS Open Data bucket s3://1000genomes/: sequence alignments (for example the .cram/.crai files shown in the sample below) and the integrated set of phased haplotypes covering more than 80 million variants across more than 2500 individuals from 26 populations.

Where was the data currently stored in this dataset sourced from

AWS Cloud

If you answered "Other" in the previous question, enter the details here


If you are a data preparer. What is your location (Country/Region)

Hong Kong

If you are a data preparer, how will the data be prepared? Please include tooling used and technical details?

We downloaded the data from https://registry.opendata.aws/1000-genomes/, classified the content, then used the built-in client tooling provided by Lotus (Filecoin's reference implementation) to split, compress, and package the data, and finally distributed the resulting pieces to SPs.
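
For illustration, the packaging step looks roughly like the following (the bucket prefix, local paths, and filenames are only examples, and exact command behaviour may vary between Lotus releases):

  # 1. Pull a slice of the open bucket (no AWS credentials are required)
  aws s3 cp --no-sign-request --recursive s3://1000genomes/1000G_2504_high_coverage/additional_698_related/data/ERR3988761/ ./staging/ERR3988761/

  # 2. Package the staged files into a CAR file with the Lotus client tooling
  lotus client generate-car ./staging/ERR3988761 ./cars/ERR3988761.car

  # 3. Compute the piece CID (CommP) and padded piece size used in the deal proposal
  lotus client commP ./cars/ERR3988761.car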

If you are not preparing the data, who will prepare the data? (Provide name and business)


Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.

No.

Please share a sample of the data

aws s3 ls --no-sign-request s3://1000genomes/
(total bucket size: 696.8063 TiB)

s3://1000genomes/1000G_2504_high_coverage/additional_698_related/20130606_g1k_3202_samples_ped_population.txt
s3://1000genomes/1000G_2504_high_coverage/additional_698_related/20200526_1000G_2504plus698_high_cov_data_reuse_README.txt
s3://1000genomes/1000G_2504_high_coverage/additional_698_related/data/ERR3988761/HG00405.final.cram.crai
s3://1000genomes/1000G_2504_high_coverage/additional_698_related/data/ERR3988762/HG00408.final.cram
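
The total size quoted above can be reproduced (approximately, since the bucket is still updated) with the summarize option of the same command:

  aws s3 ls --no-sign-request --recursive --summarize --human-readable s3://1000genomes/ | tail -n 2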

Confirm that this is a public dataset that can be retrieved by anyone on the Network

  • I confirm

If you chose not to confirm, what was the reason


What is the expected retrieval frequency for this data

Yearly

For how long do you plan to keep this dataset stored on Filecoin

2 to 3 years

In which geographies do you plan on making storage deals

Asia other than Greater China, Europe, North America, South America

How will you be distributing your data to storage providers

HTTP or FTP server, Shipping hard drives
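
As a minimal sketch of the HTTP option (hostname, port, and filenames are placeholders), the prepared CAR files can be exposed from a plain web server and pulled by each SP:

  # serve the CAR directory over HTTP
  cd ./cars && python3 -m http.server 8080

  # an SP (or the deal-making client) then fetches a piece by URL
  curl -O http://deals.example.org:8080/ERR3988761.car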

How did you find your storage providers

Partners

If you answered "Others" in the previous question, what is the tool or platform you used


Please list the provider IDs and location of the storage providers you will be working with.

f03649204 Hong Kong
f03649212 Hong Kong
f03649217 Singapore
f03649227 Singapore
f03637821 Germany
f03637813 Brazil

How do you plan to make deals to your storage providers

Boost client
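
For reference, a single verified deal proposal made with the Boost client looks roughly like the following (the provider ID is taken from the list above; the CIDs, sizes, URL, and duration are placeholders, and flag names should be checked against the installed Boost version):

  boost deal --verified=true \
             --provider=f03649204 \
             --http-url=http://deals.example.org:8080/ERR3988761.car \
             --commp=baga6ea4seaq... \
             --piece-size=34359738368 \
             --car-size=18000000000 \
             --payload-cid=bafybeig... \
             --duration=1051200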

If you answered "Others/custom tool" in the previous question, enter the details here


Can you confirm that you will follow the Fil+ guideline

Yes
