Feature Description
Setting --threads to a low value artificially slows down disk access during model loading by an order of magnitude, because the same value also caps the number of threads used by the loader.
Could a new option (for example --model-load-threads) be added so that model loading can use the full system thread count, rather than being artificially constrained by the inference setting?
Motivation
My CPU-based inference server generates tokens fastest with --threads 5, given my particular hardware setup. However, that setting also limits the number of threads used for model loading, which makes loading take about 10x longer than necessary. My system has 32 cores and 64 threads total.
- When I run with --threads 5, model loading proceeds at around 200 MB/s (visible in sudo iotop -o).
- When I run with --threads 64, model loading proceeds at around 2000 MB/s (2 GB/s), which is my system's maximum SSD read speed.
I need to run with --threads 5 because that is what optimizes inference speed, but it means waiting a very long time for large models to load on initial start.
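For anyone who wants to verify that this scaling comes from parallel disk reads rather than anything llama.cpp-specific, here is a minimal standalone sketch (not llama.cpp code; readbench is just an illustrative name) that reads a file with N threads and prints throughput. For the numbers to reflect the SSD rather than the page cache, drop the cache first (echo 3 | sudo tee /proc/sys/vm/drop_caches) or use a file larger than RAM.

```cpp
// Standalone throughput test: read <file> with <threads> parallel readers.
// Build: g++ -O2 -pthread readbench.cpp -o readbench
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

int main(int argc, char ** argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s <file> <threads>\n", argv[0]);
        return 1;
    }
    const char * path   = argv[1];
    const int n_threads = std::atoi(argv[2]);

    // Find the file size once so each thread can claim a disjoint slice.
    std::FILE * f = std::fopen(path, "rb");
    if (!f) { std::perror("fopen"); return 1; }
    std::fseek(f, 0, SEEK_END);
    const long size = std::ftell(f);
    std::fclose(f);

    const auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> workers;
    for (int i = 0; i < n_threads; ++i) {
        workers.emplace_back([=] {
            // Each worker opens its own handle and reads one slice, so the
            // kernel sees many outstanding requests and can keep the SSD busy.
            const long begin = size / n_threads * i;
            const long end   = (i == n_threads - 1) ? size : size / n_threads * (i + 1);
            std::FILE * fp = std::fopen(path, "rb");
            if (!fp) return;
            std::fseek(fp, begin, SEEK_SET);
            std::vector<char> buf(1 << 20); // 1 MiB per read
            for (long left = end - begin; left > 0; ) {
                const size_t got = std::fread(buf.data(), 1,
                    (size_t) std::min(left, (long) buf.size()), fp);
                if (got == 0) break;
                left -= (long) got;
            }
            std::fclose(fp);
        });
    }
    for (auto & w : workers) w.join();

    const double sec = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();
    std::printf("%.0f MB/s with %d threads\n", (double) size / 1e6 / sec, n_threads);
    return 0;
}
```

Running ./readbench model.gguf 5 versus ./readbench model.gguf 64 should show a gap similar to the 10x reported above on hardware like mine.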
Possible Implementation
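No concrete patch, but one possible shape for it, sketched below: give the loader its own thread count instead of reusing the inference value. Everything here is illustrative (tensor_range, load_range, load_tensors, and n_load_threads are hypothetical names, not existing llama.cpp internals), and it assumes the loader knows each tensor's byte offset and size within the model file.

```cpp
// Hypothetical sketch: the loader takes its own thread count instead of
// reusing the inference --threads value. All names here are illustrative,
// not existing llama.cpp internals.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct tensor_range {      // one tensor's byte span inside the model file
    size_t offset;
    size_t size;
};

// Read one contiguous span of the file into its destination buffer.
static void load_range(const std::string & path, const tensor_range & t, uint8_t * dst) {
    std::FILE * f = std::fopen(path.c_str(), "rb");
    if (!f) return;
    std::fseek(f, (long) t.offset, SEEK_SET);
    std::fread(dst, 1, t.size, f);
    std::fclose(f);
}

static void load_tensors(const std::string & path,
                         const std::vector<tensor_range> & tensors,
                         uint8_t * base,         // destination for file offset 0
                         int n_load_threads) {   // NEW: independent of --threads
    if (n_load_threads <= 0) {
        // Loading is I/O bound, not compute bound, so default to the whole machine.
        n_load_threads = (int) std::thread::hardware_concurrency();
    }
    std::atomic<size_t> next{0};
    std::vector<std::thread> pool;
    for (int i = 0; i < n_load_threads; ++i) {
        pool.emplace_back([&] {
            // Each worker repeatedly claims the next unread tensor, so many
            // reads stay in flight regardless of the inference thread count.
            for (size_t j; (j = next.fetch_add(1)) < tensors.size(); ) {
                load_range(path, tensors[j], base + tensors[j].offset);
            }
        });
    }
    for (auto & t : pool) t.join();
}
```

A --model-load-threads CLI argument would then just be plumbed through to n_load_threads; defaulting it to the current --threads value would preserve today's behavior, while 0 could mean "use all hardware threads".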