[DataCap Application] <Common Crawl> <2024-06-26T12:36:12.600Z> #43
Comments
Application is waiting for allocator review |
KYC has been requested. Please complete KYC at https://kyc.allocator.tech/?owner=fidlabs&repo=Open-Data-Pathway&client=f1ouxkkeacppvlyx2rjsrx4tuwr7udix6uu47gjiy&issue=43 |
I can confirm GitHub KYC was completed using the Togggle third-party check: https://filplus.storage/api/get-kyc-users |
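A quick way to double-check that a client address appears in the completed-KYC data is to query the endpoint above directly. This is a minimal sketch only: the response schema is not documented in this thread, so it falls back to a conservative substring check rather than assuming particular JSON fields.

```python
import requests

# Endpoint referenced in the comment above. The response format is not
# documented in this thread, so only a substring check is performed.
KYC_URL = "https://filplus.storage/api/get-kyc-users"
CLIENT_ADDRESS = "f1ouxkkeacppvlyx2rjsrx4tuwr7udix6uu47gjiy"

resp = requests.get(KYC_URL, timeout=30)
resp.raise_for_status()

found = CLIENT_ADDRESS in resp.text
print(f"KYC record found for {CLIENT_ADDRESS}: {found}")
```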
Storage provider list |
@lyjmry are these SPs enabling retrievals on Spark Dashboard? https://spacemeridian.grafana.net/public-dashboards/32c03ae0d89748e3b08e0f08121caa14?orgId=1 |
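For spot-checking individual miners without the Grafana UI, a script along these lines can be used. This is a rough sketch under stated assumptions: the `stats.filspark.com` endpoint, its query parameters, and the response fields are hypothetical stand-ins for whatever data source backs the dashboard above, and should be replaced with the real API before use. The miner IDs are the two discussed later in this thread.

```python
import requests

# Hypothetical Spark stats endpoint; path, parameters, and response shape
# are assumptions, not confirmed in this thread.
SPARK_STATS = "https://stats.filspark.com/miner/{miner_id}/retrieval-success-rate/summary"

def retrieval_success_rate(miner_id: str, day_from: str, day_to: str) -> float:
    url = SPARK_STATS.format(miner_id=miner_id)
    resp = requests.get(url, params={"from": day_from, "to": day_to}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    # Assumed shape: a list of daily records with "successful" and "total" counts.
    successful = sum(d.get("successful", 0) for d in data)
    total = sum(d.get("total", 0) for d in data)
    return successful / total if total else 0.0

for sp in ["f0122215", "f03035686"]:
    rate = retrieval_success_rate(sp, "2024-07-01", "2024-07-07")
    print(f"{sp}: {rate:.1%} retrieval success")
```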
"It can be found in the index and on cid.contact. Lassie and Boost retrievals are also normal, but Spark statistics cannot be found. Many people are reporting this problem." |
Some of the SPs on the list I submitted can be retrieved by Spark; the ones that do not show up or show a 0% retrieval rate are being prioritized. We can indeed retrieve the data through other methods. |
@lyjmry when ready, please provide an updated list of SPs (Miner ID, Entity, Location). We can start with a 50TiB first allocation for the matching SPs with working retrievals. |
Datacap Request Trigger
Total DataCap requested
Expected weekly DataCap usage rate
DataCap Amount - First Tranche
Client address |
DataCap Allocation requested
Multisig Notary address
Client address
DataCap allocation requested
Id |
Application is ready to sign |
Request Approved
Your Datacap Allocation Request has been approved by the Notary.
Message sent to Filecoin Network
Address
Datacap Allocated
Signer Address
Id
You can check the status of the message here: https://filfox.info/en/message/bafy2bzaceb5u3b7wbueon42qynvvbt6fbq4vrfimwp6vp346txtrmc4ckdcck |
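If you prefer to check the message status programmatically rather than through the Filfox web UI, something like the following can work. This is a sketch only: the `/api/v1/message/<cid>` path and the returned field names are assumptions about Filfox's public API and should be verified against the site before relying on the output.

```python
import requests

# Message CID from the allocation approval above.
MSG_CID = "bafy2bzaceb5u3b7wbueon42qynvvbt6fbq4vrfimwp6vp346txtrmc4ckdcck"

# Assumed Filfox API path; confirm against https://filfox.info before use.
url = f"https://filfox.info/api/v1/message/{MSG_CID}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
msg = resp.json()

# Field names are assumptions; print the raw payload if they are missing.
print("height:", msg.get("height"))
print("exit code:", msg.get("receipt", {}).get("exitCode"))
```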
Application is Granted |
checker:manualTrigger |
checker:manualTrigger |
Client used 75% of the allocated DataCap. Consider allocating next tranche. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
@lyjmry Why is the retrieval rate so low? |
Hi @martplo |
@lyjmry Yes, that's fine. What about low retrieval of US providers? Have you contacted them? |
@lyjmry, I've noticed that more SPs have dropped retrieval rates. I'm going to observe this before triggering the next tranche. When retrieval rates get better, I'll trigger the next round. |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
@lyjmry Okay, these SPs have indeed started working better. This is the result from the last 6 hours, so I hope the upward trend continues. However, I checked the other SPs: f0122215 dropped to 0, while f03035686 remains very low. As these two SPs have had higher retrieval rates in the past, I will give you DC, but with the caveat that you need to contact them to improve their retrieval. Unfortunately, most of our remaining DC has gone toward your second application, so for now I can offer you 50 TiB and the rest as soon as we get a new pool of DC, unless you prefer to wait for the full tranche. |
Will that take a long time? I will contact the other two SPs to fix their retrieval performance.
@lyjmry It's hard to say. We haven't had the review yet, but other allocators got their refills around 2-3 weeks after the review. |
checker:manualTrigger |
DataCap and CID Checker Report Summary
Storage Provider Distribution: ✔️ Storage provider distribution looks healthy.
Deal Data Replication: ✔️ Data replication looks healthy.
Deal Data Shared with other Clients: ✔️ No CID sharing has been observed.
Full report: Click here to view the CID Checker report. |
Version
2024-06-26T12:36:12.600Z
DataCap Applicant
@lyjmry
Data Owner Name
Common Crawl
Data Owner Country/Region
Not-for-Profit
Website
https://commoncrawl.org
Social Media Handle
https://x.com/commoncrawl
Social Media Type
Twitter
What is your role related to the dataset
Data Preparer
Total amount of DataCap being requested
5PiB
Expected size of single dataset (one copy)
500TiB
Number of replicas to store
10
Weekly allocation of DataCap requested
2000TiB
On-chain address for first allocation
f1ouxkkeacppvlyx2rjsrx4tuwr7udix6uu47gjiy
Data Type of Application
Public, Open Dataset (Research/Non-Profit)
Identifier
9547
Share a brief history of your project and organization
Project: This project is organized by the Common Crawl open-source organization, which regularly crawls web data, saves it to the AWS Open Data program, and studies how to clean it into AI training data. They have stored data from 2008 onwards, and the archive has grown by orders of magnitude since then.
About me: I joined the Filecoin ecosystem in 2019 and have been active in the community since the second half of 2020, contributing to the Filecoin Greater China community through resource information reports, resource matchmaking, and so on.
Is this project associated with other projects/ecosystem stakeholders?
No
If answered yes, what are the other projects/ecosystem stakeholders
Because the data is huge and valuable for AI training, I very much hope that Filecoin can demonstrate the value of its distributed, decentralized storage and prevent data loss in the event of a catastrophic failure of AWS storage nodes. I also hope that Filecoin's developer ecosystem can use this data for AI training to expand its value.
Where was the data currently stored in this dataset sourced from
AWS Cloud
If you answered "Other" in the previous question, enter the details here
If you are a data preparer. What is your location (Country/Region)
China
If you are a data preparer, how will the data be prepared? Please include tooling used and technical details?
I will coordinate closely with the storage providers. Options include downloading the data to my local hard disks for offline deals, or, where the distance is long and the provider's bandwidth is sufficient, having the storage provider download it from AWS directly. Tooling will include, but is not limited to, Boost.
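As an illustration of the AWS download step, here is a minimal sketch that pulls one Common Crawl object from the public `commoncrawl` S3 bucket using anonymous access, which is how Common Crawl publishes its data on AWS Open Data. The chosen key is just an example path; deal-making with Boost is a separate step not shown here.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Common Crawl's public bucket on AWS Open Data; no credentials are needed.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Example key taken from the sample links in this application.
bucket = "commoncrawl"
key = "crawl-data/CC-MAIN-2024-18/cc-index.paths.gz"

# Download to local disk; for a full crawl this would be looped over all
# objects listed in the *.paths.gz index files.
s3.download_file(bucket, key, "cc-index.paths.gz")
print("downloaded", key)
```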
If you are not preparing the data, who will prepare the data? (Provide name and business)
Me
Has this dataset been stored on the Filecoin network before? If so, please explain and make the case why you would like to store this dataset again to the network. Provide details on preparation and/or SP distribution.
N/A
Please share a sample of the data
https://data.commoncrawl.org/crawl-data/CC-NEWS/2016/09/warc.paths.gz
https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/cc-index-table.paths.gz
https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/cc-index.paths.gz
https://data.commoncrawl.org/crawl-data/CC-MAIN-2024-18/non200responses.paths.gz
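For reference, each of these `*.paths.gz` samples is a gzip-compressed, newline-separated list of object paths relative to https://data.commoncrawl.org/. A minimal sketch for inspecting one of them, using the first sample link above:

```python
import gzip
import urllib.request

# First sample link from the application above.
PATHS_URL = "https://data.commoncrawl.org/crawl-data/CC-NEWS/2016/09/warc.paths.gz"

with urllib.request.urlopen(PATHS_URL) as resp:
    compressed = resp.read()

# Each line is an object path relative to https://data.commoncrawl.org/
paths = gzip.decompress(compressed).decode("utf-8").splitlines()
print(f"{len(paths)} WARC files listed; first entry:")
print("https://data.commoncrawl.org/" + paths[0])
```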
Confirm that this is a public dataset that can be retrieved by anyone on the Network
Confirm
If you chose not to confirm, what was the reason
What is the expected retrieval frequency for this data
Yearly
For how long do you plan to keep this dataset stored on Filecoin
Permanently
In which geographies do you plan on making storage deals
Greater China, Asia other than Greater China, North America
How will you be distributing your data to storage providers
Cloud storage (i.e. S3), Shipping hard drives
How did you find your storage providers
Slack, Partners, Others
If you answered "Others" in the previous question, what is the tool or platform you used
Wechat
Please list the provider IDs and location of the storage providers you will be working with.
Hong Kong
China
Singapore
USA
As for the Miner IDs, they plan to use new nodes for sealing; I will update the list then.
How do you plan to make deals to your storage providers
Boost client, Lotus client
If you answered "Others/custom tool" in the previous question, enter the details here
Can you confirm that you will follow the Fil+ guideline
Yes