Installation guide
Nguyễn Thế Huy edited this page Nov 16, 2023
Before installing the BioTuring Ecosystem, some pre-installation steps are required:
- The system has one or more NVIDIA GPUs (at least 16 GB of memory per GPU) with Turing architecture or newer.
- The system is running Ubuntu 20.04 or above.
- An SSL certificate and a domain name so users can securely access the platform in a web browser.
- Please contact support@bioturing.com to get the token for your company.
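Before going further, a quick pre-flight check can catch an unsupported OS release early. The sketch below uses standard tools (`lsb_release`, `lspci`) and is an assumption, not part of the BioTuring tooling; adjust it to your environment:

```shell
#!/bin/sh
# Pre-flight sketch: confirm the Ubuntu release is 20.04 or newer and that
# an NVIDIA GPU is visible on the PCI bus.
min_release="20.04"
release="$(lsb_release -rs 2>/dev/null || echo 0)"
# sort -V orders dotted version strings; if min_release sorts first (or is
# equal), the installed release is new enough.
if [ "$(printf '%s\n%s\n' "$min_release" "$release" | sort -V | head -n 1)" = "$min_release" ]; then
    echo "Ubuntu release OK: $release"
else
    echo "Ubuntu $release is older than $min_release"
fi
lspci 2>/dev/null | grep -qi nvidia && echo "NVIDIA GPU detected" || echo "no NVIDIA GPU found"
```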
Note: For most companies, we recommend an AWS g5.8xlarge instance.
Note: We suggest starting from scratch to avoid package/driver conflicts.
- Update the system:
```shell
sudo apt update && sudo apt upgrade -y
sudo apt install build-essential wget curl gnupg lsb-release ca-certificates xfsprogs -y
```
- Install NVIDIA CUDA Toolkit 11.7.
Run the commands below to install NVIDIA CUDA Toolkit 11.7 on Ubuntu 20.04.x:
```shell
wget https://developer.download.nvidia.com/compute/cuda/11.7.1/local_installers/cuda_11.7.1_515.65.01_linux.run
sudo sh cuda_11.7.1_515.65.01_linux.run
```
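After the installer finishes, it is worth putting the toolkit on the PATH and sanity-checking the install. The `/usr/local/cuda-11.7` prefix below is the runfile installer's default and is an assumption; append the `export` lines to `~/.bashrc` to make them permanent:

```shell
# Expose the toolkit on PATH (default runfile install prefix assumed).
export PATH=/usr/local/cuda-11.7/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64:$LD_LIBRARY_PATH
# Sanity checks: the driver should report 515.65.01, the toolkit release 11.7.
nvidia-smi || echo "driver not loaded (a reboot may be required)"
nvcc --version || echo "nvcc not found on PATH"
```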
- Install Docker.
```shell
curl https://get.docker.com | sh
sudo systemctl --now enable docker
```
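To confirm the daemon is up before moving on, a small helper like the one below can be handy. `check_docker` is a hypothetical name, not part of Docker or the BioTuring tooling; the optional argument only exists so the check is easy to stub or run as `sudo docker`:

```shell
# Hypothetical helper: report whether the Docker daemon is reachable.
# $1 optionally overrides the docker command (e.g. "sudo docker").
check_docker() {
    ${1:-docker} info >/dev/null 2>&1 \
        && echo "docker OK" \
        || echo "docker not reachable"
}
check_docker "sudo docker"
```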
- Install the NVIDIA Container Toolkit.
```shell
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
     sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
     sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install nvidia-docker2
sudo systemctl restart docker
```
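To verify that containers can actually see the GPU, run any CUDA-enabled image with `nvidia-smi`. The image tag below is an assumption; any CUDA base image compatible with your driver will do:

```shell
# Verify GPU access from inside a container (image tag is an assumption).
image="nvidia/cuda:11.7.1-base-ubuntu20.04"
sudo docker run --rm --gpus all "$image" nvidia-smi \
    || echo "GPU not visible in containers; check the nvidia-docker2 install"
```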
- Make sure that the /dev/shm size is at least half of physical memory.
To change the configuration for /dev/shm, add one line to /etc/fstab. For example, if the system has 128 GB of physical memory:
```
tmpfs /dev/shm tmpfs defaults,size=64g 0 0
```
Run the command below to make the change immediately:
```shell
sudo mount -o remount /dev/shm
```
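A quick way to confirm the setting took effect is to compare the mounted /dev/shm size against `MemTotal` from /proc/meminfo, as in this sketch:

```shell
# Check that /dev/shm is at least half of physical memory.
shm_kb=$(df --output=size -k /dev/shm | tail -n 1 | tr -d ' ')
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "${shm_kb:-0}" -ge $((mem_kb / 2)) ]; then
    echo "/dev/shm OK (${shm_kb} kB of ${mem_kb} kB RAM)"
else
    echo "/dev/shm too small (${shm_kb} kB); remount after editing /etc/fstab"
fi
```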
- Prepare an SSL certificate with two files named exactly tls.crt and tls.key, and put them in the /config/ssl folder. For example:
```shell
sudo mkdir -p /config/ssl
sudo mv tls.crt /config/ssl
sudo mv tls.key /config/ssl
```
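Before starting the container, it is worth checking that the certificate and key actually belong together. This sketch compares the SHA-256 digest of the public key extracted from each file with openssl:

```shell
# Check that tls.crt and tls.key form a matching pair by comparing the
# digest of the public key extracted from each.
crt=/config/ssl/tls.crt
key=/config/ssl/tls.key
if [ -f "$crt" ] && [ -f "$key" ]; then
    crt_hash=$(openssl x509 -in "$crt" -noout -pubkey | openssl sha256)
    key_hash=$(openssl pkey -in "$key" -pubout 2>/dev/null | openssl sha256)
    if [ "$crt_hash" = "$key_hash" ]; then
        echo "certificate and key match"
    else
        echo "MISMATCH: tls.key does not belong to tls.crt"
    fi
else
    echo "tls.crt / tls.key not found in /config/ssl"
fi
```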
- Create default directories to store user data. We highly recommend using persistent storage for these directories. In the commands below, we use an empty EBS volume.
```shell
sudo mkfs -t ext4 /dev/nvme2n1
sudo mkdir /data
sudo mount /dev/nvme2n1 /data
sudo mkdir /data/app_data
sudo mkdir /data/user_data
```
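The mount above does not survive a reboot. A UUID-based /etc/fstab entry makes it persistent (UUIDs are stable, while device names like /dev/nvme2n1 can change between boots). A sketch:

```shell
# Persist the /data mount across reboots with a UUID-based fstab entry
# (nofail keeps the boot from hanging if the volume is detached).
if uuid=$(sudo blkid -s UUID -o value /dev/nvme2n1 2>/dev/null) && [ -n "$uuid" ]; then
    echo "UUID=$uuid /data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
else
    echo "could not read the volume UUID; run on the target host as root"
fi
```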
- Pull the BioTuring Ecosystem image.
```shell
sudo docker pull bioturing/bioturing-ecosystem:2.0.6
```
- Run the Docker image.
```shell
sudo docker run -it -d \
  -e WEB_DOMAIN='<yourcompany.com>' \
  -e BIOTURING_TOKEN='<your token from BioTuring>' \
  -e SSO_DOMAINS='<your company email address, example: @bioturing.com>' \
  -e ON_BIOTURING_K8S='FALSE' \
  -e K8S_BUFFER_PATH='' \
  -e N_TQ_WORKERS='4' \
  -e K8S_TQ_ADDR='' \
  -e K8S_LENS_TQ_ADDR='' \
  -v /data/user_data:/data/user_data \
  -v /data/app_data:/data/app_data \
  -v /config/ssl:/config/ssl \
  --name bioturing-ecosystem \
  --gpus all \
  --shm-size=64gb \
  -p 443:443 \
  -p 80:80 \
  bioturing/bioturing-ecosystem:2.0.6
```
Wait a few minutes for the platform to download all of the required services. After that, the BioTuring Ecosystem is up and running.
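Startup progress can be followed with `sudo docker logs -f bioturing-ecosystem`. The hypothetical helper below (not part of the BioTuring tooling) polls the HTTPS endpoint until it answers; `-k` skips certificate verification for this readiness probe only:

```shell
# Hypothetical readiness probe: poll the platform over HTTPS until it answers
# or the attempt budget runs out.
wait_for_https() {
    url="${1:-https://localhost/}"
    tries="${2:-60}"                      # 60 tries x 10 s = 10 minutes
    while [ "$tries" -gt 0 ]; do
        if curl -ksf "$url" >/dev/null 2>&1; then
            echo "platform is up"
            return 0
        fi
        tries=$((tries - 1))
        sleep 10
    done
    echo "platform did not come up in time"
    return 1
}
# wait_for_https "https://yourcompany.com/"
```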
- The BioTuring Ecosystem uses HTTPS protocol to securely communicate over the network.
- All users must authenticate with a BioTuring account or the company's SSO to access the platform.
- We highly recommend setting up a private VPC network for IP restriction.
- The data stays behind the company firewall.
- Data can be uploaded to a Personal Workspace or a Data Sharing group.
- In a Personal Workspace, only the owner can see and manipulate the data they uploaded.
- In a Data Sharing group, only people in the group can see the data.
- In a Data Sharing group, only people with sufficient permissions can manipulate the data.