This repository has been archived by the owner on Mar 19, 2024. It is now read-only.
A further 'gotcha' is that autotuned training with the -autotune-modelsize option will invoke quantization, so the entire run will quietly switch to using a single core only. It took me a while to figure this out. Perhaps the documentation could be updated to warn of this.
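For context, the kind of autotune run described above looks something like the following. The flag names here match the fastText autotune documentation; the input/output file names are placeholders:

```shell
# Autotuned supervised training with a model-size target.
# Because -autotune-modelsize is set, each trial quantizes the model
# to measure its size -- and that quantization step appears to run
# on a single core, regardless of -thread.
./fasttext supervised \
    -input train.txt \
    -output model \
    -autotune-validation valid.txt \
    -autotune-modelsize 2M \
    -thread 12
```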
I'm using quantize with the `-thread 12` argument, and the process only seems to use one core (Ubuntu 16.04). Are there any other ways to speed up the process?

I'm guessing `thread` is an internal C thread, but shouldn't that be able to use more than one core?
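A quick way to confirm whether the quantize step is actually multi-threaded is to inspect its threads with standard Linux tools while it runs. This is a diagnostic sketch; it assumes the binary is named `fasttext` and is already running:

```shell
# List the per-thread CPU usage of the running fasttext process.
# If only one LWP shows non-trivial %CPU, the quantization step is
# effectively single-threaded even though -thread 12 was passed.
ps -L -o pid,lwp,pcpu,comm -p "$(pgrep -n fasttext)"

# Alternatively, watch threads live (press H in top to toggle threads):
top -H -p "$(pgrep -n fasttext)"
```

If the bottleneck really is a single-threaded product-quantization step, reducing the amount of work (for example with a smaller `-cutoff`, or skipping `-retrain`) may help more than adding threads; both flags are documented quantize options, but how much they help will depend on your model.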