diff --git a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/README.md b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/README.md
new file mode 100644
index 000000000..2769c3355
--- /dev/null
+++ b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/README.md
@@ -0,0 +1,159 @@
# Building an NVIDIA blueprint on OCI: Digital twins for fluid simulation

This tutorial explains how to run the NVIDIA Omniverse Digital Twins for Fluid Simulation blueprint on OCI. The example shows how to study the aerodynamics (drag, downforce, etc.) of a car in a virtual wind tunnel.

## Prerequisites

To run this blueprint, you will need:
- an OCI tenancy with service limits that allow a BM.GPU.L40S-NC.4 shape
- an NVIDIA account for the NGC Catalog
- an NGC API key to download images from the NGC Catalog

## Instance configuration

### Compute part

In the OCI Console, create an instance using:
* a BM.GPU.L40S-NC.4 shape (a bare metal server with 4 x NVIDIA L40S GPUs)
* a stock Canonical Ubuntu 22.04 image (NVIDIA drivers will be installed afterwards)
* a 200 GB boot volume

### Network part

Running this blueprint requires opening several ports for different protocols so that the client machine (where the blueprint will be accessed through a web browser) can communicate with the instance where the blueprint is deployed.
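If you prefer scripting over the Console, the ingress rules listed in this section can also be applied with the OCI CLI. The sketch below is illustrative only: the security list OCID is a placeholder, only the web rules are shown (the kit rules follow the same pattern), and note that `oci network security-list update` replaces the entire ingress rule set, so the file must also contain any existing rules you want to keep.

```shell
# Build an ingress-rules file (web rules only; protocol "6" = TCP, "17" = UDP).
# Extend the JSON array with the kit rules following the same pattern.
cat > ingress.json <<'EOF'
[
  {
    "protocol": "6",
    "source": "0.0.0.0/0",
    "tcpOptions": { "destinationPortRange": { "min": 5273, "max": 5273 } }
  },
  {
    "protocol": "17",
    "source": "0.0.0.0/0",
    "udpOptions": { "destinationPortRange": { "min": 1024, "max": 1024 } }
  }
]
EOF

# Apply to the VCN's default security list (the OCID below is a placeholder).
# oci network security-list update \
#     --security-list-id ocid1.securitylist.oc1..aaaa_example \
#     --ingress-security-rules file://ingress.json
```

Where possible, restrict `source` to your client machine's CIDR rather than `0.0.0.0/0`.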
In the Virtual Cloud Network where the instance resides, go to the default security list and add the following ingress rules:
- web:
  - 5273/tcp
  - 1024/udp
- kit:
  - 8011/tcp
  - 8111/tcp
  - 47995-48012/tcp
  - 47995-48012/udp
  - 49000-49007/tcp
  - 49100/tcp
  - 49000-49007/udp
- other:
  - 1024/udp

### Installing NVIDIA drivers

When the instance is up, a specific version of the NVIDIA drivers can be installed, but beforehand we must install the packages required to build them:
```
sudo apt install -y build-essential
```
Then we can download NVIDIA driver version 535.161.07, available [here](https://www.nvidia.com/fr-fr/drivers/details/220428/), and install it:
```
wget https://fr.download.nvidia.com/XFree86/Linux-x86_64/535.161.07/NVIDIA-Linux-x86_64-535.161.07.run
chmod +x NVIDIA-Linux-x86_64-535.161.07.run
sudo ./NVIDIA-Linux-x86_64-535.161.07.run
```
The instance must be rebooted for the changes to take effect:
```
sudo reboot
```

### Installing additional packages

Since this is a stock Ubuntu image, a few additional packages must be installed to clone the repository and to install and configure Docker.
```
sudo apt install -y git-lfs
sudo apt install -y docker.io
sudo apt install -y docker-compose-v2
sudo apt install -y docker-buildx
```

### Installing and configuring NVIDIA Container Toolkit

First of all, we must add the NVIDIA Container Toolkit repository to the repository list:
```
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
Then we can update the package lists, install the `nvidia-container-toolkit` package, and configure Docker:
```
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

## Downloading and building the project

At this stage, set your NGC API key as an environment variable so that the right content can be downloaded from the NGC Catalog:
```
echo "export NGC_API_KEY=nvapi-xxx" >> ~/.bashrc
source ~/.bashrc
```
where `nvapi-xxx` is your own NGC API key.

Once done, we can clone the repository and build the images:
```
git clone https://github.com/NVIDIA-Omniverse-Blueprints/digital-twins-for-fluid-simulation $HOME/digital_twins_for_fluid_simulation
cd $HOME/digital_twins_for_fluid_simulation
./build-docker.sh
```
Two files then have to be modified, namely `.env` and `compose.yml`.

First, create a copy of the environment file template:
```
cp .env_template .env
```
and set `ZMQ_IP` to the instance private IP address:
```
ZMQ_IP=XXX.XXX.XXX.XXX
```

Then, modify the `compose.yml` file in three places:

1. In the `kit` section, replace the `network_mode: host` line with the following block:
```
networks:
  outside:
    ipv4_address: XXX.XXX.XXX.XXX
```
setting `ipv4_address` to the instance public IP address.

2. In the `aeronim` section, comment out the `network_mode: host` line.

3. At the bottom of the file, add the following block:
```
networks:
  outside:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: XXX.XXX.XXX.0/24
```
where the subnet is your public IP address with the last octet replaced by 0.

## Running the blueprint

To start the digital twin, simply run:
```
sudo docker compose up -d
```
The blueprint will take some time to initialize. Expect to wait at least 10 minutes before accessing the GUI in a web browser at `http://XXX.XXX.XXX.XXX:5273`, where `XXX.XXX.XXX.XXX` is the public IP address of the instance. When everything is ready, you should see the sports car in the wind tunnel as in the image below.

![NVIDIA Omniverse Digital Twin for Fluid Simulation Blueprint](assets/images/omniverse-blueprint-digital-twin-gui.png "NVIDIA Omniverse Digital Twin for Fluid Simulation Blueprint")

You can now interactively modify the car setup (rims, mirrors, spoilers, ride height, etc.) and visualize its impact on the airflow.

To stop the project, simply run `sudo docker compose down`.

## External links

* [Original NVIDIA GitHub repo](https://github.com/NVIDIA-Omniverse-blueprints/digital-twins-for-fluid-simulation)

## License

Copyright (c) 2025 Oracle and/or its affiliates.

Licensed under the Universal Permissive License (UPL), Version 1.0.

See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.
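As a closing tip, the `/24` subnet value needed in the `compose.yml` networks block can be derived from the instance public IP with plain shell parameter expansion; the address below is a documentation placeholder, not a real instance IP.

```shell
# Derive the /24 subnet for compose.yml from the instance public IP:
# keep the first three octets and replace the last one with 0.
PUBLIC_IP="203.0.113.57"        # placeholder; substitute your instance's public IP
SUBNET="${PUBLIC_IP%.*}.0/24"   # strip the last octet, then append .0/24
echo "$SUBNET"                  # prints 203.0.113.0/24
```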
\ No newline at end of file
diff --git a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/assets/images/LICENSE b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/assets/images/LICENSE
new file mode 100644
index 000000000..7b220f9de
--- /dev/null
+++ b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/assets/images/LICENSE
@@ -0,0 +1,35 @@
Copyright (c) 2024 Oracle and/or its affiliates.

The Universal Permissive License (UPL), Version 1.0

Subject to the condition set forth below, permission is hereby granted to any
person obtaining a copy of this software, associated documentation and/or data
(collectively the "Software"), free of charge and under any and all copyright
rights in the Software, and any and all patent rights owned or freely
licensable by each licensor hereunder covering either (i) the unmodified
Software as contributed to or provided by such licensor, or (ii) the Larger
Works (as defined below), to deal in both

(a) the Software, and
(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if
one is included with the Software (each a "Larger Work" to which the Software
is contributed by such licensors),

without restriction, including without limitation the rights to copy, create
derivative works of, display, perform, and distribute the Software and make,
use, sell, offer for sale, import, export, have made, and have sold the
Software and the Larger Work(s), and to sublicense the foregoing rights on
either these or other terms.

This license is subject to the following condition:
The above copyright notice and either this complete permission notice or at
a minimum a reference to the UPL must be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
\ No newline at end of file
diff --git a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/assets/images/omniverse-blueprint-digital-twin-gui.png b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/assets/images/omniverse-blueprint-digital-twin-gui.png
new file mode 100644
index 000000000..7f919c1fc
Binary files /dev/null and b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/omniverse-digital-twins-for-fluid-simulation/assets/images/omniverse-blueprint-digital-twin-gui.png differ