This repository has been archived by the owner on Oct 6, 2021. It is now read-only.

kemingy/batching

Dynamic Batching for Deep Learning Serving


Ventu already implements this protocol, so it can be used as a worker for deep learning inference.

Attention

This project is just a proof of concept. Check out MOSEC for production usage.

Features

  • dynamic batching controlled by max batch size and max latency
  • an invalid request won't affect others in the same batch
  • communicates with workers through Unix domain socket or TCP
  • load balancing

If you are interested in the design, check my blog Deep Learning Serving Framework.

Configs

go run service/app.go --help
Usage app:
  -address string
        socket file or host:port (default "batch.socket")
  -batch int
        max batch size (default 32)
  -capacity int
        max jobs in the queue (default 1024)
  -host string
        host address (default "0.0.0.0")
  -latency int
        max latency (millisecond) (default 10)
  -port int
        service port (default 8080)
  -protocol string
        unix or tcp (default "unix")
  -timeout int
        timeout for a job (millisecond) (default 5000)
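For example, to trade a little latency for larger batches and serve workers over TCP instead of a Unix socket, the flags above can be combined like this (the specific values are illustrative):

```shell
# Workers connect via TCP on 127.0.0.1:9090; HTTP service listens on :9000.
go run service/app.go \
  -protocol tcp \
  -address 127.0.0.1:9090 \
  -batch 64 \
  -latency 20 \
  -port 9000
```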

Demo

Run each command in a separate terminal: the first starts the batching service, the second starts an example Python worker, and the third sends test requests.

go run service/app.go
python examples/app.py
python examples/client.py