Description
In the demos I’ve seen of Leon AI, it appeared rather slow. I don’t know whether this was a hardware limitation or whether there are inefficiencies that could be improved upon. GPT4All appears to be quite performant, even on systems without CUDA-compatible GPUs. I don’t know whether it is actually faster than the inference engine you’re already using, but it might be worth evaluating.
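
For reference, a rough CPU-only check with GPT4All’s Python bindings could look something like the sketch below. The model file name and prompt are just placeholders, and I haven’t compared this against Leon’s current inference path, so take the numbers as indicative only.

```python
import time
from gpt4all import GPT4All  # pip install gpt4all

# Placeholder model; any GGUF model supported by GPT4All would work here.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

prompt = "Summarize the benefits of local LLM inference in one sentence."

start = time.perf_counter()
response = model.generate(prompt, max_tokens=128)
elapsed = time.perf_counter() - start

print(response)
print(f"Generated in {elapsed:.2f}s on CPU")
```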