
Kaiju

Asynchronous runner for AI models

Installation

Install from source using pip:

pip3 install git+https://github.com/ninja-96/kaiju
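
To quickly check that the package is importable after installation (this only verifies the import, not optional dependencies such as torch/torchvision):

python3 -c "import kaiju"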

Getting Started

  1. Write your own class to pass data through the Pipeline
import torch

from kaiju.item import BaseItem

class ImageItem(BaseItem):
    image: torch.Tensor = torch.tensor([])
    predict: torch.Tensor = torch.tensor([])
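
Since BaseItem is built on pydantic (see Built with below), an ImageItem can be constructed with keyword arguments. The tensor shape here is only an illustrative assumption:

item = ImageItem(image=torch.rand(1, 3, 224, 224))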
  2. Write your own handler class
import torchvision

from kaiju.handler import BaseHandler

class ModelHandler(BaseHandler):
    def __init__(self, device: str) -> None:
        super().__init__()
        # Load a pretrained ResNet-18 in eval mode on the target device
        self._model = torchvision.models.resnet18(weights='DEFAULT').eval().to(device)
        self._device = device

    def forward(self, data: ImageItem) -> ImageItem:
        # Run inference and move the prediction back to the CPU
        data.predict = self._model(data.image.to(self._device)).cpu()
        return data
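
A handler can also be exercised on its own before wiring it into a pipeline, which is a quick way to check that the model runs (reusing the ImageItem created above):

handler = ModelHandler('cpu')
result = handler.forward(item)
print(result.predict.shape)  # e.g. torch.Size([1, 1000]) for ResNet-18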
  3. Create a Pipeline instance
from kaiju.pipeline import Pipeline  # assumed import path; not shown in the original snippet
from kaiju.runner import Runner

pipeline = Pipeline(
    [
        Runner(ModelHandler('cpu'))
    ]
)

Note

  • You can set the number of workers for each Runner:
Runner(ModelHandler('cpu')).n_workers(4)
  • If your model uses an Nvidia GPU, you can mark your Runner as a critical section of the Pipeline. This helps prevent GPU memory overload:
Runner(ModelHandler('cuda')).n_workers(2).critical_section()
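
Putting these options together, a two-stage pipeline could look like the sketch below. PreprocessHandler is a hypothetical handler added only for illustration; the Runner options are the ones shown above, and the GPU stage assumes a CUDA device is available.

class PreprocessHandler(BaseHandler):
    def forward(self, data: ImageItem) -> ImageItem:
        # Hypothetical CPU-side preprocessing, e.g. scaling pixel values
        data.image = data.image / 255.0
        return data

pipeline = Pipeline(
    [
        # CPU-bound preprocessing stage with several parallel workers
        Runner(PreprocessHandler()).n_workers(4),
        # GPU inference stage, marked as a critical section to limit GPU memory pressure
        Runner(ModelHandler('cuda')).n_workers(2).critical_section()
    ]
)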

Built with

  • pydantic - Data validation using Python type hints

Versioning

For the versions available, see the tags on this repository.

Authors

  • Oleg Kachalov - Initial work - ninja-96

See also the list of contributors who participated in this project.

License

This project is licensed under the GPL-3.0 license - see the LICENSE.md file for details.