NVIDIA/tao_front_end_services

TAO Toolkit as a stand-alone service and TAO Client CLI package


NVIDIA Transfer Learning API

Overview

NVIDIA Transfer Learning API is a cloud service that enables building end-to-end AI models using custom datasets. In addition to exposing NVIDIA Transfer Learning functionality through APIs, the service also enables a client to build end-to-end workflows: creating datasets and models, obtaining pretrained models from NGC, obtaining default specs, training, evaluating, optimizing, and exporting models for deployment on the edge. NVTL jobs run on GPUs within a multi-node cloud cluster.

You can develop client applications on top of the provided API, or use the provided NVTL remote client CLI.

This repository includes the essential components for enabling clients to interact with NVTL DNN services through a well-defined set of RESTful endpoints. The NVTL API services can be deployed on any of the major cloud services like AWS, Azure, or Google Cloud. NVTL DNN services include a large pool of Deep Learning models across domains such as Object Detection, Image Classification, Image Segmentation, Auto-Labeling, and Data-Analytics, as well as models curated for particular use cases such as Action Recognition and Re-Identification.

NVTL Client

NVTL-Client provides a command-line interface for interacting with the NVIDIA Transfer Learning API server. It uses the click Python package and requests to format and make RESTful API calls, so you do not need to invoke the API directly.
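As an illustration of this pattern, a minimal sketch (not the actual NVTL client code) of wrapping a REST endpoint in a reusable helper might look like the following. The /api/v1/datasets path and Bearer-token authentication are assumptions for illustration only:

```python
# Hypothetical sketch of the CLI-wraps-REST pattern; the endpoint path
# and auth scheme are assumptions, not the documented NVTL API surface.
import urllib.request


def build_request(base_url: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for a dataset listing."""
    return urllib.request.Request(
        url=f"{base_url}/api/v1/datasets",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
        method="GET",
    )
```

A real client such as NVTL-Client would wrap calls like this in click commands and send them with requests; urllib.request is used here only to keep the sketch dependency-free.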

Pre-Requisites

Requirements

Hardware Requirements

  • 32 GB system RAM
  • 32 GB of GPU RAM
  • 8 core CPU
  • 1 NVIDIA GPU
  • 100 GB of SSD space

Software Requirements

Software     Version
Ubuntu LTS   >=18.04
Python       >=3.10.x
docker-ce    >19.03.5

The host machine needs containerization and orchestration tooling such as Kubernetes and Helm installed to deploy the different services. NVTL provides quick start scripts for deploying the NVTL API onto your machine here.

Getting Started

Once the repository is cloned and the NVTL API service is deployed, you can start working with the supported models via a Jupyter notebook interface. In your browser, go to http://<api_hosted_machine_ip>:31951/notebook/ and open the api or cli folder.

The notebooks in the api folder make REST API calls directly, whereas the notebooks in the cli folder abstract these REST API calls into CLI commands to execute the workflow.
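To make the distinction concrete, a direct REST call such as "create dataset" could be built as shown below. The endpoint and payload fields are illustrative assumptions, not the documented NVTL API; the cli notebooks would express the same step as a single CLI command:

```python
# Illustrative only: the /api/v1/datasets endpoint and the payload
# fields are assumptions about the API, for demonstration purposes.
import json
import urllib.request


def create_dataset_request(base_url: str, name: str, ds_type: str):
    """Build the POST request that a 'create dataset' notebook cell
    would ultimately send to the API server."""
    payload = json.dumps({"name": name, "type": ds_type}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/v1/datasets",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```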

Updating docker

If you would like to modify the versions of the third-party dependencies supported by default in the API docker container, follow the steps below:

Build docker

The dev docker is defined in $NV_NVTL_API_TOP/docker/Dockerfile. The Python packages required for NVTL development are defined in $NV_NVTL_API_TOP/docker/requirements.txt. Once you have made the required changes, update the docker using the build script in the same directory.

Take necessary backups of previous runs, as the existing pvc folder is removed before re-deploying the API service.

sudo rm -rf /mnt/nfs_share/* && make docker_build && make helm_install && make cli_install

The above step produces a digest file associated with the docker image. The digest is a unique identifier for the image, so note it down and update all references to the old digest in the repository with the new one. You can find the old digest in $NV_NVTL_API_TOP/docker/manifest.json.
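The digest-update step above could be scripted roughly as follows. This is a sketch: the assumption that manifest.json exposes the digest under a top-level "digest" key is mine, not taken from the repository:

```python
# Sketch of the digest-update step; the manifest schema (a top-level
# "digest" key) is an assumption about the file, not a documented fact.
import json


def read_digest(manifest_text: str) -> str:
    """Return the image digest recorded in a manifest.json document."""
    return json.loads(manifest_text)["digest"]


def replace_digest(file_text: str, old: str, new: str) -> str:
    """Swap every reference to the old digest for the new one."""
    return file_text.replace(old, new)
```

In practice you would run read_digest on the new manifest, then apply replace_digest across every file in the repository that mentions the old digest.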

Push your final updated changes to the repository so that other developers can leverage and sync with the new dev environment.

If for some reason you would like to force-build the docker without using the cache from the previous build, you may do so with the --force option:

bash $NV_NVTL_API_TOP/docker/build.sh --build --push --force

MONAI Service Deployment

MONAI service deployment requires a few extra steps in addition to the above. The following links show how to deploy the MONAI service on different cloud service providers step by step.

  1. Azure deployment

Develop with DEV_MODE

Please refer to this Confluence documentation to enable DEV_MODE during setup.

Run MONAI premerge test locally

To run the MONAI premerge test locally, first run sudo make docker_build, and then:

export NGC_KEY=<ngc key that has access to medical service ea NGC organization>
bash scripts/medical_premerge.sh

If you would like to use another image:

export IMAGE_API=<image to use>

If you would like to run the script outside of the root folder:

export NV_NVTL_API_TOP=<path to the root of this repo>
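The environment variables above can be resolved with sensible fallbacks; a small sketch of that resolution logic (the default values here are assumptions, not what medical_premerge.sh actually uses):

```python
# Sketch of resolving the premerge-test environment variables; the
# variable names come from the text, the fallback values are assumed.
import os


def resolve_config(environ=os.environ):
    """Resolve premerge-test settings, falling back to defaults."""
    return {
        "image_api": environ.get("IMAGE_API", "default-api-image"),
        "repo_root": environ.get("NV_NVTL_API_TOP", os.getcwd()),
    }
```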

Contribution Guidelines

NVIDIA Transfer Learning API is not accepting contributions as part of the NVTL 5.2 release, but will be open in the future.

License

This project is licensed under the Apache-2.0 License.
