
Using OpenCL with lstmtraining #2901

Closed

victornguyen18 opened this issue Feb 27, 2020 · 1 comment
victornguyen18 commented Feb 27, 2020

Environment

  • Tesseract Version: 4.0.0
  • Commit Number:
  • Platform: Windows 10, 64-bit

Current Behavior:

Currently, lstmtraining runs on the CPU only. In some cases I worry that the resulting CPU load could disrupt other services running on the same system.
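
As a partial workaround for the resource concern, the CPU footprint of lstmtraining can usually be capped with the standard OpenMP environment variables, assuming Tesseract was built with OpenMP support; below is a minimal Python sketch of launching it that way, with placeholder paths and thread counts.

```python
import os
import subprocess

# Minimal sketch (assumption: lstmtraining was built with OpenMP, so the
# standard OMP_* environment variables limit how many CPU threads it uses).
# All file paths and values below are placeholders.
env = dict(os.environ, OMP_NUM_THREADS="2", OMP_THREAD_LIMIT="2")
subprocess.run(
    [
        "lstmtraining",
        "--traineddata", "eng/eng.traineddata",
        "--model_output", "output/base",
        "--train_listfile", "train/list.train",
        "--max_iterations", "10000",
    ],
    env=env,
    check=True,
)
```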

Expected Behavior:

Is there any GPU support for running lstmtraining or the other training tools? Would it improve training time or anything else?

Suggested Fix:

amitdo (Collaborator) commented Feb 28, 2020

IMO, this is very unlikely to happen.

You can use kraken or Calamari. They support GPU acceleration for inference and training.
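
For anyone landing here, a rough sketch of launching GPU-backed training with kraken from Python; the `ketos train` subcommand, its `-d`/`--device` option, and the `-o` output flag are assumed from kraken's CLI and should be verified with `ketos train --help` for the installed version, and the ground-truth paths are placeholders.

```python
import shutil
import subprocess

# Minimal sketch (assumptions): kraken's `ketos train` subcommand, with
# `-d`/`--device` selecting a CUDA device and `-o` naming the output model.
# Check `ketos train --help` against the installed kraken version.
def train_with_kraken(ground_truth_files, device="cuda:0", output="model"):
    if shutil.which("ketos") is None:
        raise RuntimeError("kraken (ketos) not found on PATH")
    cmd = ["ketos", "train", "-d", device, "-o", output, *ground_truth_files]
    subprocess.run(cmd, check=True)

# Hypothetical ground-truth files; replace with real training data.
train_with_kraken(["gt/page_0001.xml", "gt/page_0002.xml"])
```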

amitdo closed this as completed Feb 28, 2020
amitdo added the OpenCL label May 14, 2020