OpenFluid utilizes Gabriel, a platform designed for wearable cognitive assistance applications, to dynamically adjust fluid simulations in real-time on the server using acceleration sensor data from a mobile device. The simulation frames are then streamed to the device, enabling interactive 3D fluid simulations on mobile platforms.
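The loop just described can be pictured as follows. This is an illustrative sketch only, not OpenFluid's actual code: the server integrates the device's acceleration into particle state each timestep before rendering and streaming a frame.

```python
# Hypothetical sketch of the server-side loop: device acceleration drives
# a particle simulation, one step per streamed frame. All names are
# illustrative; OpenFluid's real simulation runs in NVIDIA FleX on the GPU.
DT = 1.0 / 60.0  # simulation timestep in seconds

def step(positions, velocities, accel):
    """Advance every particle one timestep under the device's acceleration."""
    new_pos, new_vel = [], []
    for (px, py, pz), (vx, vy, vz) in zip(positions, velocities):
        vx += accel[0] * DT
        vy += accel[1] * DT
        vz += accel[2] * DT
        new_pos.append((px + vx * DT, py + vy * DT, pz + vz * DT))
        new_vel.append((vx, vy, vz))
    return new_pos, new_vel

# One frame: holding the phone upright reports gravity along -y,
# so the fluid falls; tilting the phone reorients this vector.
positions = [(0.0, 1.0, 0.0)]
velocities = [(0.0, 0.0, 0.0)]
positions, velocities = step(positions, velocities, accel=(0.0, -9.8, 0.0))
```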
Demo on Samsung Galaxy Z Fold 4
Copyright © 2017-2023 Carnegie Mellon University
This project is under active development.
Unless otherwise stated, all source code and documentation are under the Apache License, Version 2.0. A copy of this license is reproduced in the LICENSE file.
Parts of this repository include modified content from third-party sources. Files located in server/FleX are copyrighted by their respective authors and are subject to their original licenses. Please refer to server/FleX/README.md for additional copyright information on the third-party code.
| Project | Modified | License |
|---|---|---|
| NVIDIAGameWorks/FleX | Yes | Nvidia Source Code License (1-Way Commercial) |
OpenFluid server has been tested on Ubuntu 22.04 LTS (Jammy Jellyfish) using several NVIDIA GPUs (Quadro P1000, GTX 1080 Ti, Tesla V100).
Based on our tests, the most stable configuration (compatible with most NVIDIA GPUs) is:
- Ubuntu 22.04
- nvidia-driver-535 (535.86.10) installed
- the 'test-driver535' Docker image
OpenFluid provides an Android client. We have tested it on the Nexus 6, Samsung Galaxy Z Fold 4, and Essential PH-1.
Easily set up an OpenFluid server by deploying our pre-configured Docker container. This build has been optimized for NVIDIA GPUs (compatibility with RTX GPUs still needs to be verified). Ensure you have root access to execute the following steps. We've validated these instructions using Docker version 24.0.5.
If Docker isn't already on your system, you can follow this Docker install guide. Alternatively, use the convenience script below:
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
Visit the NVIDIA CUDA downloads page to select your operating system and receive step-by-step installation guidance. Make sure you install driver version 535.86.10.
For Debian-based Linux distributions, you can use the following command:
sudo apt-get update && sudo apt-get install -y nvidia-driver-535
To verify an existing NVIDIA driver installation, execute nvidia-smi. The installed driver version will be displayed at the top of the output.
Step 3. Set Up the NVIDIA Container Toolkit
This toolkit is essential for NVIDIA GPU support. Follow the installation guide or use the command below:
sudo apt-get update \
&& sudo apt-get install -y nvidia-container-toolkit-base
Fetch the Docker image using:
docker pull ghcr.io/cmusatya/openfluid:version1.0
Or, if you're in the project's root directory:
make docker-pull
Execute:
docker run --gpus all --rm -it -p 9099:9099 ghcr.io/cmusatya/openfluid:version1.0
Alternatively, from the project root:
make docker-git-run
If you wish to compare between running the server on a cloudlet versus a cloud instance, you can launch the following instance type/image from your Amazon EC2 Dashboard:
Instance Type - p2.xlarge (can be found by filtering under GPU compute instance types)
Image - Deep Learning Base AMI (Ubuntu) - ami-041db87c
Ensure that port 9099 is open in your security group rules so that traffic to/from the mobile client will pass through to the server.
Once the instance is running in AWS, you can follow the steps above to set up the server.
Note: If using a vanilla Ubuntu Server 16.04 image, install the required NVIDIA driver (version 470) and reboot.
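Before connecting the mobile client to a cloud instance, it can be handy to confirm that port 9099 is actually reachable. The following is a generic TCP probe, not part of the project; the host name in the usage example is a placeholder.

```python
# Generic TCP reachability probe (not OpenFluid code): returns True if a
# connection to host:port succeeds within the timeout.
import socket

def port_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("your-instance-hostname", 9099)` should return True once the security group or inbound port rules are configured correctly and the server is running.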
If you wish to compare between running the server on a cloudlet versus a cloud instance, you can launch the following VM from your Azure portal:
Size - Select NC4as_T4_v3
Image - NVIDIA GPU-Optimized VMI with vGPU driver
Network Setting - Make sure to open port 9099 in your Inbound port rules. This ensures that traffic to/from the mobile client is directed to the server.
Extensions - NvidiaGpuDriverLinux
After successfully setting up the VM:
- Run the following command to downgrade the NVIDIA driver:
sudo apt-get update && sudo apt-get install -y nvidia-driver-470-server
- Proceed from step 4 of Server Installation using Docker. Note: Instead of the version1.0 Docker image mentioned in the previous steps, use the test-driver470 image.
We've provided a Docker image equipped with all necessary dependencies, enabling you to compile and run the server directly from the source within the given environment.
docker pull ghcr.io/cmusatya/openfluid:env
Alternatively, from the project root:
make docker-env-pull
OpenFluid employs the FleX SDK, a GPU-centric particle simulation library, which requires the CUDA 9.2.148 toolkit for compilation. Download CUDA toolkit 9.2 and place the resulting /usr/local/cuda-9.2 in project_root/server/cuda-9.2.
From the project root:
docker run --gpus all -it -p 9099:9099 --rm --name=openfluid-env --mount type=bind,source=${PWD}/server,target=/server ghcr.io/cmusatya/openfluid:env
Or:
make docker-env-git-run
Set up required libraries on your machine.
OpenFluid employs the FleX SDK, a GPU-centric particle simulation library, which requires the CUDA 9.2.148 toolkit for compilation. Download CUDA toolkit 9.2 and place the resulting /usr/local/cuda-9.2 in project_root/server/cuda-9.2.
For the CUDA runtime library and driver, visit the NVIDIA CUDA downloads page to select your operating system and receive step-by-step installation guidance. Make sure you install driver version 535.86.10.
To verify your NVIDIA driver, run nvidia-smi. The driver version appears at the top of the output.
Ensure you've set the path for the CUDA runtime libraries, for instance in your .bashrc:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
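To double-check that the export took effect, a small generic helper can inspect the colon-separated path. Nothing here is OpenFluid-specific; the sample value is hard-coded for illustration.

```python
# Generic check that a directory appears on a colon-separated search path
# such as LD_LIBRARY_PATH.
def on_search_path(env_value, directory):
    """True if `directory` is one of the colon-separated entries."""
    return directory in env_value.split(":")

# In a real session this value would come from os.environ["LD_LIBRARY_PATH"].
sample = "/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64"
assert on_search_path(sample, "/usr/local/cuda/lib64")
```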
OpenFluid relies on freeGLUT and EGL for headless fluid scene rendering, zeroMQ for IPCs, protobuf for packet serialization, and JPEG for image frame compression.
sudo apt-get -y install \
libzmq3-dev \
protobuf-compiler libprotobuf-dev \
libegl1-mesa-dev libgl1-mesa-dev \
freeglut3-dev \
libjpeg-dev \
software-properties-common
Match the OpenGL library version to your NVIDIA driver (use nvidia-smi to check the version):
sudo apt-get -y install libnvidia-gl-535
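The package name is derived from the driver's major version. As a hedged sketch (the version string is hard-coded here; on a real machine it would come from the nvidia-smi output mentioned above):

```python
# Derive the matching libnvidia-gl package name from a driver version string.
# The version is hard-coded for illustration; in practice it is read from
# the top of the `nvidia-smi` output.
driver_version = "535.86.10"
major = driver_version.split(".")[0]
package = f"libnvidia-gl-{major}"
print(package)  # -> libnvidia-gl-535, i.e. the package to apt-get install
```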
A portion of the server is Python-based. Ensure you use Python 3.8.
sudo add-apt-repository ppa:deadsnakes/ppa && sudo apt-get update && sudo apt-get install -y \
python3.8 \
python3.8-dev \
python3-pip \
&& sudo apt-get install -y --reinstall python3.8-distutils \
&& python3.8 -m pip install --upgrade pip \
&& python3.8 -m pip install --no-cache-dir \
-r server/requirements.txt
Navigate to /server and execute:
make release && make run
You can download the client from the Google Play Store, or install openfluid-v1.0.apk from this repo.
Alternatively, build the client using Android Studio. The source code for the client is located in the android-client
directory. You should use the standardDebug build variant.
Servers can be added by pressing the + sign button, then entering a server name and address. Selecting a server connects to the corresponding OpenFluid server. Swipe left or right to remove a server from the list.
After connecting, the application displays a simulation scene. Fluid motion is influenced by the mobile device's acceleration sensor. Various scene manipulation options are available. For guidance, tap the "?" icon at the top right.
The interface offers modes like camera view adjustments, full-screen mode, screen orientation changes, simulation pausing/resuming, scene or liquid type switching, auto-sensor input generation, and rendering style toggling (particle/liquid).
- Image Resolution: Adjust the frame resolution. Higher resolutions might reduce FPS and increase latency.
- Gabriel Token Limit: Allows configuration of the token-based flow control mechanism in the Gabriel platform. This indicates how many frames can be in flight before a response frame is received back from the server. The minimum of the token limit specified in the app and the number of tokens specified on the server will be used. With a stable network, "No limit" yields the highest FPS.
- Set FPS Limit (Vsync): Enables the server to cap the maximum simulation fps. If disabled, there's no FPS limit.
- Target FPS (Vsync) Limit: When Set FPS Limit is enabled, sets the desired FPS cap (60, 90, or 120).
- Autoplay Sequence Interval: Sets the interval at which the auto-generated sensor input changes.
Ensure you have saved your GitHub personal access token (classic) in the CR_PAT environment variable:
export CR_PAT=YOUR_TOKEN
Refer to Working with the Container registry for more information on how to use the GitHub Container Registry.
Download CUDA toolkit 9.2 from https://developer.nvidia.com/cuda-92-download-archive, then place the /usr/local/cuda-9.2 directory in server/cuda-9.2.
make docker-build [version] [username]
--> username is the GitHub username (probably cmusatyalab)
make docker-run [version] [username]
make docker-push <image-id> [version] [username]
--> find the image-id by running "docker image list" and reading the IMAGE ID column
make docker-pull [version] [username]
make docker-git-run [version] [username]
make docker-env-build [username]
make docker-env-run [username]
use docker-push with "env" as the version name
make docker-env-git-run [username]
The Extras proto is defined in android-client/app/src/main/proto/openfluid.proto and server/Flex/demo/proto.
As you make changes to this proto, make sure the two copies remain identical. After a change, the proto will be recompiled for the Android client the next time you launch it from Android Studio. Run the following command in the /server directory to recompile the proto file for the server:
make protoc
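Why the two copies must stay identical can be illustrated with a small sketch. It uses the standard struct module rather than protobuf itself, but shows the same failure mode: when sender and receiver disagree on the wire layout, decoding breaks.

```python
# Illustration of schema mismatch (using struct, not protobuf itself):
# sender and receiver must agree on the wire layout, just as the client
# and server copies of the proto must be identical.
import struct

# Sender packs (x, y, z) acceleration as three little-endian 32-bit floats.
payload = struct.pack("<fff", 0.0, -9.8, 0.0)

# A receiver with the matching layout recovers the values.
x, y, z = struct.unpack("<fff", payload)
assert abs(y + 9.8) < 1e-6

# A receiver built from a stale schema (expecting doubles) fails outright.
try:
    struct.unpack("<ddd", payload)
except struct.error:
    print("schema mismatch detected")
```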
Flex: Could not open CUDA driver, ensure you have the latest driver (nvcuda.dll) installed and an NVIDIA GPU present on the system
If you encounter the error message above while running the server, try following the steps below based on the option you chose for server installation:
- Install nvidia-driver-470 instead of version 535:
e.g. if you are using Ubuntu:
sudo apt-get update && sudo apt-get install -y nvidia-driver-470
- Use the docker image test-driver470:
docker pull ghcr.io/cmusatya/openfluid:test-driver470
- Run the docker container:
docker run --gpus all --rm -it -p 9099:9099 ghcr.io/cmusatya/openfluid:test-driver470
Keeping everything else the same:
- Install nvidia-driver version 470 (as described above)
- Match the OpenGL library version to your NVIDIA driver:
sudo apt-get -y install libnvidia-gl-470
Please see the CREDITS file for a list of acknowledgments.