1 | 1.1.1 | Minimum Viable Product (MVP) for registering metadata in the repository and connecting the metadata to the data in the research computing remote storage (NESE), including Globus endpoints | 15 #13
This issue represents a deliverable funded by the NIH.

Aim 1: Support the sharing of very large datasets (>TBs) by integrating the metadata in the repository with the data in research computing storage.

An increasing number of research studies deal with very large datasets (TBs to PBs). When a study is completed or ready to be distributed, it is not always feasible or desirable to deposit the data in the repository. Instead, in this project we propose to publish the metadata to the repository for discoverability of the study and to access the data remotely from the research computing cluster or cloud storage. In this scenario, the data does not need to be downloaded to the user's computer but can be viewed, explored, and analyzed directly in the research computing environment.

The Harvard Dataverse repository will leverage the Northeast Storage Exchange (NESE) and the New England Research Cloud (NERC) to provide storage and compute for these very large datasets, making them findable and accessible through the repository and keeping the metadata and data connected via a persistent link. These two services, NESE and NERC, are large-scale multi-institutional infrastructure components of the Massachusetts Green High Performance Computing Center (MGHPCC), a five-member public-private partnership between Boston University, Harvard University, Massachusetts Institute of Technology, Northeastern University, and the University of Massachusetts. MGHPCC is a $90 million facility with the capacity to grow to 768 racks, 15 MW of power, and 1 terabit of network capacity in its current 95,000 sq. ft. data center.

One of the key integration points to support large data transfers is incorporating Globus endpoints. Globus is a distributed data transfer technology developed at the University of Chicago that is becoming ubiquitous in research computing services. It will make it realistic to transfer TBs of data in less than an hour. Globus will also be the front end of NESE Tape, a 100+ PB tape library within MGHPCC.

The integration of the repository with research computing is one component of a Data Commons that will facilitate collaboration, dissemination, preservation, and validation of data-centric research.

Related Deliverables: This work also represents a deliverable funded internally: Harvard Data Commons MVP, Objective 1.
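As a rough illustration of the endpoint-to-endpoint transfers described above, the sketch below submits a multi-TB copy with the Globus Python SDK (globus_sdk). The access token, endpoint UUIDs, and paths are all placeholders; a real deployment would obtain them from the Globus app registration and from the endpoint administrators (for example, the NESE-backed endpoint).

```python
import globus_sdk

# Placeholders: a real transfer token comes from a Globus OAuth2 flow, and the
# endpoint UUIDs are assigned when the endpoints/collections are registered.
TRANSFER_TOKEN = "REPLACE_WITH_GLOBUS_TRANSFER_TOKEN"
SOURCE_ENDPOINT = "11111111-1111-1111-1111-111111111111"  # e.g. a lab or cluster endpoint
DEST_ENDPOINT = "22222222-2222-2222-2222-222222222222"    # e.g. a NESE-backed endpoint

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Describe the transfer; checksum syncing makes multi-TB moves verifiable and
# restartable without re-copying data that already arrived intact.
tdata = globus_sdk.TransferData(
    tc,
    SOURCE_ENDPOINT,
    DEST_ENDPOINT,
    label="Very large dataset transfer",
    sync_level="checksum",
)
tdata.add_item("/project/raw/huge_file.h5", "/allocation/huge_file.h5")

task = tc.submit_transfer(tdata)
print("Submitted Globus transfer, task id:", task["task_id"])
```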
This picture shows how the Harvard Data Commons work maps to the Dataverse work. This is a closer look at the Harvard Data Commons work: GDCC DataCommons Objective 1 Task Tracking
Next step:
Summary: Prior to this work, Dataverse was capable of storing files up to about 1 TB in S3. The first part of this work was to integrate Globus into Dataverse as a large-file transfer mechanism. This was done by building on the work already done in Borealis, the Canadian Dataverse Repository (formerly the Scholars Portal fork of Dataverse), to allow Dataverse to integrate with Globus. This integration allows files larger than a terabyte to be transferred from within Dataverse to an S3 store via Globus. The second part addresses very large files: items upward of a petabyte or so are not realistic for Dataverse to store. This solution enables Dataverse to manage datasets where one or more of the files is referenced rather than directly stored within a Dataverse repository; the large file remains in its original location and is referenced from Dataverse.
What's left to complete the overall minimum viable process?
What's not in this deliverable?
Who:
Updated today. Met with Stefano and Len.
(1.1.1) Now that these changes have been released in Dataverse 5.12, the focus has shifted to setting up a production environment for researchers to use the remote large data support. Discussions are continuing over the coming weeks around allocating storage and establishing a Globus endpoint for Dataverse at the Northeast Storage Exchange (NESE).
Globus support needs to be revised.
Slow download is not a problem with the current implementation, which uses a Globus S3 connector over a file system. It will be a limitation when the underlying storage is a tape system, where the Globus S3 connector may also be less useful than their file connector. See IQSS/dataverse#9123 for details; scholarsportal/dataverse-globus#2 is also relevant.
November 2022: (1.1.1) Focus shifted to setting up a production environment for researchers to use the remote large data support. Discussions are continuing over the coming weeks around allocating storage and establishing a Globus endpoint for Dataverse at the Northeast Storage Exchange (NESE).
Discussions are proceeding around next steps. Summary from talking with Jim: there was a meeting Monday where Scott Yokel, Len, Stefano, Jim, and others went over the proposal/requirements/design options doc Jim put together. Planning to talk with Scholars Portal/Borealis about their interest in this as well. Next step:
Reviewed today. Closing out 2022. This work closes out at the end of February.
The following two deliverables are still being worked on.
We have an MVP representing a connection between HDV (Harvard Dataverse) and Globus, but it does not yet support large files. This requires additional design and implementation steps.
Last updated: Thu Dec 15, 2022, before I left for the holiday. Planning continues around supporting the Globus endpoint for Dataverse at the Northeast Storage Exchange (NESE) and moving beyond the MVP. The MVP enables a connection from Harvard Dataverse to the storage but does not support large files. 90%
Priority discussion with Stefano.
Last update: approx. Dec 20, 2022. 90%
Monthly report. 90%
March: (1.1.1) This activity was completed to an extent of 90% in year 1 and transferred to year 2.
This activity was completed to an extent of 90% in year 1. Year 2 work toward completion will be tracked as deliverable (2.1.1a). Draft summary: This activity was completed to an extent of 90% in year 1. The Dataverse integration required for the MVP was released as part of Dataverse 5.12. This involved integrating Dataverse with Globus endpoints to enable remote storage of large data files while maintaining the metadata in Dataverse. The focus then shifted to setting up a production environment using a Globus endpoint for Dataverse at the Northeast Storage Exchange (NESE). The challenge is that NESE cannot yet support real-time browsing for large files due to specific technological characteristics of tape support. This is where work will continue during year 2. Year 2 work toward completion will be tracked as aim:1 yr:2 task:1a (2.1.1A), starting at 90% complete.
References:
Problem Statement
Prior to this work, Dataverse was capable of storing files up to about 1 TB in S3.
Proposed Solution
The first part was to integrate Globus into Dataverse as a large-file transfer mechanism. This was done by building on the work already done in Borealis, the Canadian Dataverse Repository (formerly the Scholars Portal fork of Dataverse), to allow Dataverse to integrate with Globus. This integration allows files larger than a terabyte to be transferred from within Dataverse to an S3 store via Globus.
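Because transfers at this scale run asynchronously through Globus rather than over a single HTTP request, the client (whether Dataverse or a script) typically waits on the Globus task. A minimal sketch with the Globus Python SDK, using a placeholder token and the task id returned by a submitted transfer:

```python
import globus_sdk

TRANSFER_TOKEN = "REPLACE_WITH_GLOBUS_TRANSFER_TOKEN"  # placeholder token
TASK_ID = "33333333-3333-3333-3333-333333333333"       # id returned by submit_transfer

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Poll every 60 seconds for up to an hour; task_wait returns True once the task
# reaches a terminal state (e.g. SUCCEEDED or FAILED), False if the timeout expires.
if tc.task_wait(TASK_ID, timeout=3600, polling_interval=60):
    print("Final status:", tc.get_task(TASK_ID)["status"])
else:
    print("Transfer still running; check again later or in the Globus web app.")
```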
The second part addresses very large files: items upward of a petabyte or so are not realistic for Dataverse to store. This solution enables Dataverse to manage datasets where one or more of the files is referenced rather than directly stored within a Dataverse repository. In this solution, the large file remains in its original location and is referenced from Dataverse.
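To illustrate the by-reference approach, the sketch below registers a file that stays in remote storage against an existing dataset using Dataverse's native add-file API with a jsonData payload. The server URL, API token, dataset DOI, and especially the storageIdentifier/store label are placeholder assumptions for illustration, not the exact format a Globus-backed store uses.

```python
import json
import requests

DATAVERSE_URL = "https://dataverse.example.edu"     # hypothetical installation
API_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder API token
DATASET_PID = "doi:10.5072/FK2/EXAMPLE"             # placeholder dataset DOI

# Metadata for a file that remains on the remote store; the "neseglobus://"
# label and identifier format are assumptions -- the real value depends on how
# the remote store is configured in the Dataverse installation.
file_metadata = {
    "storageIdentifier": "neseglobus://allocation/huge_file.h5",
    "fileName": "huge_file.h5",
    "mimeType": "application/x-hdf5",
    "checksum": {"@type": "MD5", "@value": "d41d8cd98f00b204e9800998ecf8427e"},
    "description": "Very large file referenced in place rather than uploaded",
}

resp = requests.post(
    f"{DATAVERSE_URL}/api/datasets/:persistentId/add",
    params={"persistentId": DATASET_PID},
    headers={"X-Dataverse-key": API_TOKEN},
    data={"jsonData": json.dumps(file_metadata)},
)
resp.raise_for_status()
print(resp.json()["status"])  # "OK" if the file record was added
```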
Acceptance Criteria
Associated Issues:
See comments below for the latest update.