Important
This repository is no longer being maintained. Please use the new WAO instead.
Prediction service used in WAO-Scheduler and WAO-LB.
Author: kaz260
models/
: The power consumption model

clients/
: Clients

k8s/
: Kubernetes manifests
docker build -t wao-predict-konohana-pc:1.0 .
REGISTRY=localhost # set your registry here
IMAGE=$REGISTRY/wao-predict-konohana-pc:1.0
docker tag wao-predict-konohana-pc:1.0 $IMAGE
docker push $IMAGE
Make sure you set the correct image name in k8s/tensorflow-server-dep.yaml.
kubectl apply -f k8s
Note
You may need to expose the service with NodePort, Ingress, etc.
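As a sketch, a minimal NodePort Service could look like the following. The service name, selector label, and ports here are assumptions; match them to the Deployment in k8s/ (gRPC is served on 8500 and HTTP on 8501, as in the docker run example below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wao-predict            # assumed name
spec:
  type: NodePort
  selector:
    app: tensorflow-server     # must match your Deployment's pod labels
  ports:
    - name: grpc
      port: 8500
      targetPort: 8500
    - name: http
      port: 8501
      targetPort: 8501
```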
docker run -d --rm -p 8500:8500 -p 8501:8501 --name wao-pred $IMAGE
Usage: ./TOOL SERVER_IP CPU_USAGE AMBIENT_TEMP CPU1_TEMP CPU2_TEMP
Example: ./main.py 10.0.0.100 0.5 0.5 0.5 0.5
Note
CPU usage and temperatures are normalized to [0, 1].
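The exact normalization bounds are not documented here; as a minimal sketch, assuming CPU usage is a fraction of 100% load and temperatures are scaled linearly between bounds chosen to match the model's training data:

```python
def normalize(value, lo, hi):
    """Linearly map value from [lo, hi] onto [0, 1]."""
    return (value - lo) / (hi - lo)

# Hypothetical bounds -- adjust to whatever range the model was trained on.
cpu_usage    = normalize(50.0, 0.0, 100.0)  # 50% CPU load -> 0.5
ambient_temp = normalize(25.0, 0.0, 50.0)   # 25 degC      -> 0.5
print(cpu_usage, ambient_temp)
```

The normalized values are what you pass as CPU_USAGE, AMBIENT_TEMP, CPU1_TEMP, and CPU2_TEMP above.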
Local:
cd clients/pred_pc_grpc_py
python -m venv venv
. venv/bin/activate
pip install -r requirements.txt
./main.py -h
Docker:
cd clients/pred_pc_grpc_py
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow:2.9.1 /bin/bash
pip install tensorflow-serving-api==2.9.1
./main.py -h
You will get results like:
outputs {
key: "outputs"
value {
dtype: DT_FLOAT
tensor_shape {
dim {
size: 1
}
dim {
size: 1
}
}
float_val: 150.6072540283203
}
}
model_spec {
name: "konohana"
version {
value: 1218
}
signature_name: "konohana"
}
cd clients/pred_pc_http_py
python -m venv venv
. venv/bin/activate
pip install -r requirements.txt
./main.py -h
You will get results like:
{
"outputs": [
[
150.607254
]
]
}
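The HTTP client talks to TensorFlow Serving's REST predict endpoint on port 8501 (e.g. POST http://SERVER_IP:8501/v1/models/konohana:predict). A minimal sketch of building the request body and extracting the prediction, assuming the model takes a single [1, 4] input of the four normalized values:

```python
import json

def build_request(cpu_usage, ambient_temp, cpu1_temp, cpu2_temp):
    """JSON body for a TensorFlow Serving REST predict call (shape assumed [1, 4])."""
    return json.dumps({
        "signature_name": "konohana",
        "inputs": [[cpu_usage, ambient_temp, cpu1_temp, cpu2_temp]],
    })

def parse_watts(response_body):
    """Extract the predicted power draw from a {"outputs": [[...]]} response."""
    return json.loads(response_body)["outputs"][0][0]

print(build_request(0.5, 0.5, 0.5, 0.5))
print(parse_watts('{"outputs": [[150.607254]]}'))  # -> 150.607254
```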
cd clients/pred_pc_grpc_go
./prepare.sh
go run main.go -h
You will get results like:
name:"konohana" version:<value:1218 > signature_name:"konohana"
map[outputs:dtype:DT_FLOAT tensor_shape:<dim:<size:1 > dim:<size:1 > > float_val:150.60725 ]
150.607254
Note
You can rebuild the TensorFlow Serving gRPC client by running:
./prepare.sh
cd clients/pred_pc_http_go
go run main.go -h
You will get results like:
{
"outputs": [
[
150.607254
]
]
}