AutoTuneNLP

A comprehensive toolkit for seamless data generation and fine-tuning of NLP models, all conveniently packed into a single block.

Setup

Clone the repo and cd to the project root.
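For example, assuming the standard HTTPS clone URL for the ksgr5566/AutoTuneNLP repository (adjust if you clone over SSH):

git clone https://github.com/ksgr5566/AutoTuneNLP.git
cd AutoTuneNLP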

Environment

  1. Create and activate a venv. For example, on Windows:
python -m venv venv
.\venv\Scripts\activate
  2. This project uses poetry:
pip install poetry==1.5.1
poetry install
  3. For GPU support, install torch matching your CUDA version (see the PyTorch installation page); otherwise skip this step. A quick check follows this list. For example:
pip uninstall torch
pip install torch --index-url https://download.pytorch.org/whl/cu118
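To confirm the GPU build is picked up by the active environment, you can check CUDA availability from the interpreter; this should print True if the CUDA wheel and your driver are set up correctly:

python -c "import torch; print(torch.cuda.is_available())"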

API

  1. Start your Docker engine and run a Redis image on port 6379:
docker run --name autotunenlp-redis -p 6379:6379 -d redis
  2. Start a Celery worker:
celery -A worker worker --loglevel=info
  • If you are running on Windows, the above command won't work since Celery is not supported on Windows, but you can use the command below for testing (caveat: its concurrency is lost, as the solo pool runs every task in the worker process):
celery -A worker worker --loglevel=info --pool=solo
  3. Specify a port number and start the application. A quick smoke test follows this list.
uvicorn main:app --port PORT_NUMBER --reload
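Once Redis, the Celery worker, and the application are running, a minimal smoke test could look like the following. This assumes main:app is a FastAPI app serving its default interactive docs at /docs; replace PORT_NUMBER with the port you chose above:

docker exec autotunenlp-redis redis-cli ping    # should reply PONG
curl http://127.0.0.1:PORT_NUMBER/docs          # should return the interactive API docs page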
