
Running Deepspeech on the Raspberry Pi 4 #22

Closed
flatsiedatsie opened this issue Jul 18, 2019 · 1 comment
@flatsiedatsie

A few questions:

  • Would the Raspberry Pi 4 have enough power/RAM to run this well?
  • Would the CPU or GPU option be best/available?
@MainRo
Owner

MainRo commented Jul 19, 2019

It should be possible to run it on a Raspberry Pi, but probably with significant latency in the answers. If you use the pre-trained models from Mozilla, try the pbmm one, which should use less memory. Mozilla also publishes tflite models, but these will not work with deepspeech-server because it does not support the tflite runtime.

You can only use the CPU, because the Raspberry Pi's GPU is not supported by TensorFlow (AFAIK).
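For reference, pointing deepspeech-server at the pbmm model is done through its JSON config file. The sketch below follows the configuration format from the deepspeech-server README of that era; the exact keys and the model file names (taken here from Mozilla's 0.5.1 release archive) are assumptions and may differ in your version:

```json
{
  "deepspeech": {
    "model": "deepspeech-0.5.1-models/output_graph.pbmm",
    "alphabet": "deepspeech-0.5.1-models/alphabet.txt",
    "lm": "deepspeech-0.5.1-models/lm.binary",
    "trie": "deepspeech-0.5.1-models/trie"
  },
  "server": {
    "http": {
      "host": "0.0.0.0",
      "port": 8080,
      "request_max_size": 1048576
    }
  },
  "log": {
    "level": [
      { "logger": "deepspeech_server", "level": "DEBUG" }
    ]
  }
}
```

Once the server is running, transcription requests are sent as WAV audio to its `/stt` endpoint, e.g. `curl -X POST --data-binary @audio.wav http://localhost:8080/stt` (assuming the default port above).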
