
Dockerfile fails to build #1

Closed
alexellis opened this Issue Mar 9, 2019 · 9 comments

alexellis commented Mar 9, 2019

The Dockerfile fails to build; please could you check it over?

You may also want to try a newer watchdog version.

Thanks,

Alex

update-alternatives: using /usr/bin/file-rename to provide /usr/bin/rename (rename) in auto mode
Setting up protobuf-compiler (2.6.1-1.3) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for systemd (229-4ubuntu21.15) ...
Processing triggers for ca-certificates (20170717~16.04.2) ...
Updating certificates in /etc/ssl/certs...
148 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Pulling watchdog binary from Github.
Cloning into 'models'...


object_detection/protos/calibration.proto:34:3: Expected "required", "optional", or "repeated".
object_detection/protos/calibration.proto:34:6: Expected field name.
object_detection/protos/calibration.proto:48:3: Expected "required", "optional", or "repeated".
object_detection/protos/calibration.proto:48:6: Expected field name.
The command '/bin/sh -c apt-get update && apt-get install -y     curl     git     protobuf-compiler     python-pip python-dev build-essential     python-tk     wget     && echo "Pulling watchdog binary from Github."     && curl -sSL https://github.com/openfaas/faas/releases/download/0.6.9/fwatchdog > /usr/bin/fwatchdog     && chmod +x /usr/bin/fwatchdog     && git clone https://github.com/tensorflow/models.git     && cd /models/research/     && protoc object_detection/protos/*.proto --python_out=.     && cd /     && wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz     && tar -zxvf ssd_mobilenet_v1_coco_11_06_2017.tar.gz' returned a non-zero code: 1
alexellis commented Mar 9, 2019

The error would appear to be coming from here:

    && git clone https://github.com/tensorflow/models.git \
    && cd /models/research/ \
    && protoc object_detection/protos/*.proto --python_out=.
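
For what it's worth, errors like 'Expected "required", "optional", or "repeated"' are the usual symptom of an old protoc: Ubuntu 16.04 ships protobuf-compiler 2.6.1, which predates the map<...> field syntax that calibration.proto appears to use, so a protoc 3.x is needed. A minimal sketch of one possible workaround (not necessarily the fix applied later) is to pull a modern protoc from PyPI via grpcio-tools and drive it from Python:

    # compile_protos.py -- sketch only; assumes grpcio-tools is installed
    # (pip install grpcio-tools), which bundles a modern protoc.
    import glob
    import subprocess
    import sys

    # Gather the object_detection protos, as the Dockerfile's protoc call does.
    proto_files = glob.glob("object_detection/protos/*.proto")

    # Invoke the PyPI-provided protoc instead of Ubuntu 16.04's protoc 2.6.1.
    subprocess.check_call(
        [sys.executable, "-m", "grpc_tools.protoc", "-I.", "--python_out=."]
        + proto_files
    )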
salekd commented Mar 14, 2019

I have fixed the Dockerfile and pinned the dependency versions, so hopefully it will stay future-proof that way.

Please note that this image is based on Ubuntu. If you intend to run on a Raspberry Pi Zero, we need to work on Dockerfile.rpizero.

salekd closed this Mar 14, 2019

alexellis commented Mar 14, 2019

What if we got it working on RPi3 B+ instead of RPi Zero?

alexellis commented Mar 14, 2019

I also wondered whether we could try the newer OpenFaaS Python template, to see if inference is faster when the model is preloaded into memory?

salekd commented Mar 14, 2019

I made one more update: take a look at the Dockerfile, which is now based on the newer python3 template. You can compare the new and old templates by deploying salekd/faas-mobilenet:1.1.0 and salekd/faas-mobilenet:1.0.0, respectively.

What exactly do you mean by preloading a model into memory? I do not think I am doing that; everything is implemented in the handle function.

salekd commented Mar 14, 2019

As for running on Raspberry Pi, it should be straightforward with the following modifications:

alexellis commented Mar 15, 2019

> What exactly do you mean by preloading a model into memory? I do not think I am doing that; everything is implemented in the handle function.

The python3-flask template, available via faas-cli template store pull, can preload the model and engine, keeping them ready so inferences run much, much quicker.

Alex
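
For context, a minimal sketch of what that preloading could look like in a python3-flask handler, assuming TensorFlow 1.x and the ssd_mobilenet_v1_coco_11_06_2017 path from the Dockerfile (the names here are illustrative, not the repo's actual handler):

    # handler.py (sketch) -- load the frozen graph once at import time so each
    # invocation reuses a warm graph and session instead of reloading the model.
    import tensorflow as tf

    # Assumed path, matching where the Dockerfile extracts the model tarball.
    GRAPH_PATH = "/ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb"

    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    # One long-lived session shared across requests.
    session = tf.Session(graph=graph)

    def handle(req):
        # Only per-request work happens here: decode the input image and run
        # session.run(...) against the already-loaded graph.
        ...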

salekd commented Mar 15, 2019

Hi Alex,

After a few hours of compiling, the image for Raspberry Pi 3 B+ is ready here: https://hub.docker.com/r/salekd/faas-mobilenet-rpi
It would be great if you could find time to test it!

For Raspberry Pi Zero, it is just a matter of running docker build for half a day. I have opened a new issue (#2) to track it until it is done.

Using Flask is a nice idea! I guess it is no longer purely serverless, but it should serve its purpose well in this case. It really depends on how much time is spent loading the model versus how much is spent on the inference itself. I opened an issue for this one too: #3

Cheers,

David

alexellis commented Mar 16, 2019

We could also try echoing to stderr when we know the inference is running, or emitting the timing as a field in the JSON output. I'd be curious to know how long this portion takes on the CPU.
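
A minimal sketch of both options in the handler, with run_inference standing in as a hypothetical name for the actual model call:

    # Sketch: time the inference and report it via stderr and the JSON body.
    import json
    import sys
    import time

    def run_inference(req):
        # Hypothetical stand-in for the real model call in handler.py.
        return []

    def handle(req):
        start = time.perf_counter()
        detections = run_inference(req)
        elapsed = time.perf_counter() - start

        # stderr shows up in the function's logs without polluting the response.
        sys.stderr.write("inference took %.3fs\n" % elapsed)

        # Alternatively (or additionally), return the timing in the JSON output.
        return json.dumps({"detections": detections, "inference_seconds": elapsed})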
