Jaqpotpy Torch Inference

Jaqpotpy Torch Inference is a web server that runs pretrained PyTorch models and returns inference results through a simple prediction API.

Features

  • Load and run pretrained PyTorch models (a minimal loading sketch follows this list).
  • Expose a RESTful API for inference requests (see the example client call under Usage).
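
The first feature follows the standard PyTorch inference pattern. The sketch below illustrates it; the model file name and input shape are assumptions for illustration, not values taken from this repository.

import torch

# Illustrative sketch: the file name "model.pt" and the input shape are assumptions.
model = torch.load("model.pt", map_location="cpu")  # load a pretrained model object
model.eval()                                        # switch to inference mode

with torch.no_grad():                               # gradients are not needed for inference
    example_input = torch.randn(1, 10)              # dummy input with an assumed shape
    prediction = model(example_input)

print(prediction)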

Installation

Step 1: Clone the Repository

git clone https://github.com/ntua-unit-of-control-and-informatics/jaqpotpy-torch-inference.git
cd jaqpotpy-torch-inference

Step 2: Create a Virtual Environment and Activate It

python3 -m venv venv
source venv/bin/activate

Step 3: Install the Required Packages

pip install -r requirements.txt

Usage

Start the Server

python main.py
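
With the server running, predictions are requested over HTTP. The snippet below is a sketch of such a client call; the host, port, endpoint path, and payload format are assumptions for illustration and may differ from the actual API.

import requests

# Hypothetical example: the URL, the /predict endpoint, and the payload shape are assumptions.
response = requests.post(
    "http://localhost:8000/predict",
    json={"inputs": [[0.1, 0.2, 0.3]]},
)
print(response.status_code, response.json())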
