bdiaz29/autotagger
Auto Tagger with TensorRT implementation

This is meant to work with SmilingWolf's trained booru taggers (WD taggers).

Inference can be run with the models in TensorFlow, but it can be much faster using TensorRT if you have an NVIDIA GPU.

An explanation of how to convert these models is here: https://github.com/bdiaz29/ConvertTagger2TensorRT. However, it is very important that the environments for conversion and inference be kept separate, since TensorFlow versions newer than 2.10 do not have GPU support on Windows.

Explanation of the parameters of autotagger.py:

"--image_dir" :the directory of the images to apply captions to
"--include_characters" :whether to include tags involving characters or not
'--tag_threshold': the threshold of wether to pass a tag or not
'--model_path' : the directory for the tagger model, will be a folder for WD tensorflow models and a file for TensorRT models
"--exclude_tags" : the tags you dont want to be applied to the captions even if they are above threshold
"--append_tags" : the tags you want to be applied to the front of the captions.
"--use_tensorrt" : set this flag if you intent to use tensorRT

There is also a GUI for ease of use.

Installation

git clone https://github.com/bdiaz29/autotagger
cd autotagger
pip install -r requirements.txt
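Once installed, a run might look like the following. The image directory and model path here are placeholders for your own files, not files shipped with the repository.

```shell
# Hypothetical example invocation; substitute your own paths.
python autotagger.py \
  --image_dir ./my_dataset \
  --model_path ./models/tagger.trt \
  --tag_threshold 0.35 \
  --use_tensorrt \
  --exclude_tags "lowres" \
  --append_tags "my_trigger_word"
```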
