| sidebar | permalink | keywords | summary |
| --- | --- | --- | --- |
| sidebar | ai/aipod_nv_conclusion_add_info.html | NetApp AI, AI, Artificial Intelligence, ML, Machine Learning, NVIDIA, NVIDIA AI Enterprise, NVIDIA BasePOD, NVIDIA DGX | This section includes references for additional information for the NetApp AIPod with NVIDIA DGX systems. |

# NetApp AIPod with NVIDIA DGX Systems - Where to Find Additional Information

This section provides references to additional information about the NetApp AIPod with NVIDIA DGX systems.
The DGX BasePOD architecture is a next-generation deep learning platform that requires equally advanced storage and data-management capabilities. By combining DGX BasePOD with NetApp AFF systems, the NetApp AIPod with DGX systems architecture can be implemented at almost any scale, up to 48 DGX H100 systems on a 24-node AFF A900 cluster. Combined with the superior cloud integration and software-defined capabilities of NetApp ONTAP, AFF enables a full range of data pipelines spanning the edge, the core, and the cloud for successful deep learning projects.
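To make the scaling relationship above concrete, the sketch below is a rough illustration only: `aff_nodes_required` is a hypothetical helper, not an official NetApp tool, and it assumes linear scaling at the ratio implied by the stated maximum (48 DGX H100 systems on 24 AFF A900 controllers). Real sizing depends on workload and should follow NetApp sizing guidance.

```python
import math

# Hypothetical sizing helper (not an official NetApp tool) illustrating the
# linear scale-out described above: up to 48 DGX H100 systems on a 24-node
# AFF A900 cluster, i.e. an assumed ratio of 2 DGX systems per AFF controller.
DGX_PER_AFF_NODE = 48 / 24

def aff_nodes_required(num_dgx_systems: int) -> int:
    """Return the AFF A900 controller count for a given number of DGX systems,
    rounded up to an even number so controllers deploy as HA pairs."""
    nodes = math.ceil(num_dgx_systems / DGX_PER_AFF_NODE)
    return nodes + (nodes % 2)  # complete any partial HA pair

print(aff_nodes_required(8))   # 4 controllers (2 HA pairs)
print(aff_nodes_required(48))  # 24 controllers (12 HA pairs)
```

Rounding up to complete HA pairs reflects the fact that ONTAP controllers are deployed in high-availability pairs; a deployment never uses an odd number of controllers.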
To learn more about the topics described in this document, refer to the following resources:
- NetApp ONTAP data management software: ONTAP information library
- NetApp AFF A900 storage systems
- NetApp ONTAP RDMA information
- NetApp DataOps Toolkit
- NetApp Astra Trident
- NetApp GPUDirect Storage blog
- NVIDIA DGX BasePOD
- NVIDIA DGX H100 systems
- NVIDIA Networking
- NVIDIA Magnum IO GPUDirect Storage
- NVIDIA Base Command
- NVIDIA Base Command Manager
- NVIDIA AI Enterprise
This document is the work of the NetApp Solutions and ONTAP Engineering teams: David Arnette, Olga Kornievskaia, Dustin Fischer, Srikanth Kaligotla, Mohit Kumar, and Rajeev Badrinath. The authors would also like to thank NVIDIA and the NVIDIA DGX BasePOD engineering team for their continued support.