
Feature Request: Use all (or a configurable #) of threads for model loading, not constrained by --threads specified for inference #11873


Description

@VanceVagell

Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Specifying a small --threads value artificially slows down disk access during model loading by an order of magnitude, because the same thread count is also used to load the model from disk.

Could a new option (like --model-load-threads) be added so I can specify the full system limit, and not have model loading artificially constrained?
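
For example, a hypothetical invocation with the proposed flag could look like the line below (the flag name --model-load-threads and the model path are placeholders; this option does not exist yet):

    ./llama-server -m ./large-model.gguf --threads 5 --model-load-threads 64

This would keep inference on 5 threads while letting model loading use all 64 hardware threads.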

Motivation

My CPU-based inference server generates tokens most quickly with --threads 5, given my particular hardware setup. However, that setting also limits the number of threads used for model loading, which makes loading take about 10x longer than necessary. My system has 32 cores (64 threads total).

  • When I run with --threads 5, model loading proceeds at around 200 MB/s (visible in "sudo iotop -o").
  • When I run with --threads 64, model loading proceeds at around 2000 MB/s (2 GB/s), which is my system's maximum SSD speed.

I need to run with --threads 5 because that optimizes inference speed, but it means I have to wait a very long time for large models to load on initial start.

Possible Implementation

No response
