Slow model load and cache RAM does not free #6807

Open
pisoiu opened this issue Sep 14, 2024 · 12 comments
Labels: bug (Something isn't working), needs more info (More information is needed to assist)


pisoiu commented Sep 14, 2024

What is the issue?

Hi all. My system: AMD Threadripper PRO 3975WX CPU, 512 GB DDR4 ECC RAM, 3x RTX A4000 GPUs (48 GB VRAM total), 4 TB Corsair MP600 Core XT NVMe, Ubuntu 22.04.1 LTS.
I'm not a Linux specialist, so go easy on me.
Problem 1: According to various tests, DDR4 transfer speed can reach about 25 GB/s. According to a benchmark of my local NVMe disk, its read speed is around 6 GB/s. However, when I start 'ollama run llama3.1:70b' from the terminal, the system monitor shows constant disk activity during the model transfer and the read speed tops out at around 1.7 GB/s, no more. Why isn't the model loaded faster if both the disk and the RAM can do much more? The system isn't doing anything else. This isn't a big problem with 70b, but with 405b it is really annoying.
Problem 2: 48 GB of VRAM is enough to fit the :70b model. When I start 'ollama run llama3.1:70b', it is first loaded into RAM; in the system monitor window I see 'cache' jumping up. After the model is completely transferred to RAM, I see it pushed into the GPUs' VRAM for inference. The 'memory' section of the system monitor then shows '7.3GiB (1.5%) of 503.5GiB, cache 44.6 GiB'. When I'm done with the model and send '/bye' to ollama, I can see the VRAM stays filled for a few more minutes and is then freed. But the 'cache' in RAM is not: it stays at 44.6 GiB forever if I'm not doing anything else (I waited more than 30 minutes). This becomes a problem when I load a different model. The new model is added on top of the models already sitting in cache and increases its size. Loading more different models progressively fills it up to the top, and eventually data goes into swap. Old models are never removed from cache even when newer ones need the memory. Why?
Thank you.

LE (later edit): one detail which may or may not be important. One Ollama instance is installed directly and I run it from the terminal prompt; there is also another installation in a Docker container, installed via Open WebUI with built-in Ollama support, which serves inference over the network. Both behave the same with regard to cache memory.
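For reference, a minimal way to watch both symptoms live while a model loads. This is only a sketch: the device name nvme0n1 is assumed, and iostat comes from the sysstat package.

ollama run llama3.1:70b       # in one terminal
iostat -x nvme0n1 1           # in another: per-second NVMe throughput (rkB/s) and queue depth (aqu-sz)
free -h -s 1                  # watch 'buff/cache' grow as the model file is read into the page cache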

OS

Linux

GPU

Nvidia

CPU

AMD

Ollama version

0.3.8

pisoiu added the bug label on Sep 14, 2024
rick-github (Contributor) commented:

  1. PCI bus speed. The path is Disk -> RAM -> PCI -> VRAM.
  2. Cache is evicted when something else needs the memory. If nothing else needs it, the cache sticks around so that the data is readily available if it needs to be used again. If you load a new model, the old model in cache will be replaced by the new one.
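For reference, a minimal way to observe this behaviour with standard Linux tools (not Ollama-specific; forcing the drop is normally unnecessary, the kernel reclaims cache on demand):

free -h                                              # 'buff/cache' is reclaimable page cache, 'available' is what new allocations can use
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # explicitly drop clean page cache, dentries and inodes
free -h                                              # 'buff/cache' should now be much smaller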

pdevine (Contributor) commented Sep 15, 2024

@pisoiu you can check out the PCIe bus speed + lane width using lspci -s <BUS:DEVICE.FUNCTION> -vvv in Linux. Use lspci to figure out the bus/device/function.
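For example (a sketch; adjust the address for your own device), the relevant lines can be filtered straight out of the verbose output:

lspci | grep -i nvme                                                   # find the drive's BUS:DEVICE.FUNCTION
sudo lspci -vvv -s <BUS:DEVICE.FUNCTION> | grep -E 'LnkCap:|LnkSta:'   # LnkCap = supported link, LnkSta = negotiated link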

As @rick-github mentioned, the cached pages will stick around until that memory is reused for something else. Also, starting in Ollama 0.3.11 you'll be able to run ollama stop <model> to free up any VRAM being used by a model.
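Usage is just the model name, e.g. (once 0.3.11 is available):

ollama stop llama3.1:70b    # unloads the model and frees the VRAM it was holding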

LMK the bus speed / lane width data and I can either close the issue or see if it's something more serious.

pdevine added the needs more info label on Sep 15, 2024
pisoiu (Author) commented Sep 16, 2024

Hi all, thanks for the info.

  1. @rick-github, my problem is not Disk -> RAM -> PCI -> VRAM, it is just Disk -> RAM. The disk is capable of 6 GB/s and the RAM of way more. So why does the speed graph show only 1.6 GB/s during the Disk -> RAM transfer? Yesterday I worked with various models and even though most of them transfer at 1.6 GB/s, occasionally I saw transfers topping out at 700-800 MB/s.
  2. This is how I understood the cache should work, but I have the impression it does not in my case. Maybe I misunderstand the intricacies of how memory works. I expected that once the memory cache is filled with models, a new model would displace an older unused model from the cache and load without touching swap (which, by my understanding, lives on the NVMe and slows down access). In my case swap usage increases once the RAM cache is filled. I will run more tests to be sure (see the commands sketched below).
    @pdevine I will do what you suggested asap and post the results.
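For reference, a few standard commands that show whether a new model load is spilling into swap instead of evicting old page cache (nothing Ollama-specific; free/vmstat come from procps, swapon from util-linux):

free -h          # 'buff/cache' = reclaimable page cache, 'available' = what a new allocation can actually use
swapon --show    # how much swap is in use and on which device
vmstat 1         # the si/so columns show swap-in/swap-out per second while a model loads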

pisoiu (Author) commented Sep 16, 2024

Hi,
the output of lspci is:
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship Device 24; Function 7
01:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
02:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
03:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
03:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
20:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
20:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
20:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
20:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
20:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
20:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
20:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
21:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
21:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
22:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
22:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
23:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
24:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
24:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
24:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Starship USB 3.0 Host Controller
24:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
40:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
40:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
41:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
41:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
42:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
42:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
43:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
44:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
60:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
60:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
60:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
60:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
60:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
60:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
61:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
62:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
62:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
62:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
62:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
63:00.0 Network controller: Intel Corporation Wi-Fi 6E(802.11ax) AX210/AX1675* 2x2 [Typhoon Peak] (rev 1a)
64:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
64:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
64:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
65:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
66:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
67:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
68:00.0 Ethernet controller: Aquantia Corp. AQtion AQC113CS NBase-T/IEEE 802.3an Ethernet Controller [Antigua 10G] (rev 03)
69:00.0 Ethernet controller: Aquantia Corp. AQtion AQC113CS NBase-T/IEEE 802.3an Ethernet Controller [Antigua 10G] (rev 03)
6a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
6b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

The output of lspci -s 67:00.0 -vvv (the NVMe controller) is:

67:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01) (prog-if 02 [NVM Express])
Subsystem: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less)
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 141
IOMMU group: 2
Region 0: Memory at cdf00000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [80] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75W
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 512 bytes, MaxReadReq 512 bytes
DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
LnkCap: Port #1, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 unlimited
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 16GT/s, Width x4
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt+ EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS- TPHComp- ExtTPHComp-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR+ 10BitTagReq- OBFF Disabled,
AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: Upstream Port
Capabilities: [d0] MSI-X: Enable+ Count=33 Masked-
Vector table: BAR=0 offset=00002000
PBA: BAR=0 offset=00003000
Capabilities: [e0] MSI: Enable- Count=1/32 Maskable+ 64bit+
Address: 0000000000000000 Data: 0000
Masking: 00000000 Pending: 00000000
Capabilities: [f8] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Latency Tolerance Reporting
Max snoop latency: 1048576ns
Max no snoop latency: 1048576ns
Capabilities: [110 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
PortCommonModeRestoreTime=10us PortTPowerOnTime=50us
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
T_CommonMode=0us LTR1.2_Threshold=32768ns
L1SubCtl2: T_PwrOn=50us
Capabilities: [128 v1] Alternative Routing-ID Interpretation (ARI)
ARICap: MFVC- ACS-, Next Function: 0
ARICtl: MFVC- ACS-, Function Group: 0
Capabilities: [1e0 v1] Data Link Feature
Capabilities: [200 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
AERCap: First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap+ ECRCChkEn- MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 04000001 6000020f 67070000 aa4ead37
Capabilities: [300 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [340 v1] Physical Layer 16.0 GT/s
Capabilities: [378 v1] Lane Margining at the Receiver <?>
Kernel driver in use: nvme
Kernel modules: nvme

Does this contain the information you need?

pdevine (Contributor) commented Sep 17, 2024

@pisoiu I think it's saying the lane width is 4x so the theoretical maximum would be 1GB/s?

pisoiu (Author) commented Sep 17, 2024

I don't think it can be that low. The disk is this one:
https://www.corsair.com/us/en/p/data-storage/cssd-f4000gbmp600cxt/mp600-core-xt-4tb-pcie-4-0-gen4-x-4-nvme-m-2-ssd-cssd-f4000gbmp600cxt?srsltid=AfmBOorxQXu5A3fxBl46EWHM72o08Xqhf2l0yZO95LUeCgrSWmymvNUZ#tab-techspecs
It has 4 lanes of PCIe 4.0, so the lanes can do about 8 GB/s:
https://www.trentonsystems.com/en-gb/blog/pcie-gen4-vs-gen3-slots-speeds#:~:text=PCIe%204.0%20has%20a%2016,s%20with%20bidirectional%20travel%20considered.
I paid special attention when I installed it: on the mainboard (ASRock WRX80 Creator R2.0) it sits on CPU lanes, not chipset lanes. The spec on the product page is around 5 GB/s, and I benchmarked it under Ubuntu and confirmed that value.

pisoiu (Author) commented Sep 18, 2024

I have more information that may help. I asked about this issue on another forum and was told to capture iostat -x during both operations, the benchmark and the model transfer.
Results for the model transfer are:

Linux 6.8.0-44-generic (..-ai-server) 09/13/2024 x86_64 (64 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.52 0.00 0.96 0.19 0.00 97.32
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
nvme0n1 6046.99 756697.47 50.87 0.83 0.16 125.14 112.13 11833.27 23.60 17.39 1.72 105.53 0.00 0.00 0.00 0.00 0.00 0.00 2.78 0.47 1.16 34.30

Results for benchmark are:

Linux 6.8.0-44-generic (flo-ai-server) 09/13/2024 x86_64 (64 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.35 0.00 0.49 0.37 0.00 98.79
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
nvme0n1 16239.55 2053969.68 27.80 0.17 19.04 126.48 93.25 7287.69 28.47 23.39 1.30 78.15 0.00 0.00 0.00 0.00 0.00 0.00 5.29 0.40 309.40 39.44

This is a bit cryptic to me, but a user on that forum commented that the aqu-sz column points to the problem. During the benchmark there are on average 309.4 requests queued for the NVMe, while during the model transfer there are only 1.16. This is why I came here with the question, because to me this indicates that it is not a hardware-related issue; I hope I'm not wrong.
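For anyone who wants to reproduce the queue-depth difference directly, something like the following fio runs could be used. This is only a sketch: the --filename target, size and runtime are placeholders, and fio/libaio need to be installed.

# shallow queue: one outstanding 1 MiB read at a time
fio --name=qd1 --filename=/path/to/testfile --rw=read --bs=1M --size=8G --direct=1 --ioengine=libaio --iodepth=1 --runtime=30 --time_based

# deep queue: 32 outstanding reads, closer to what a disk benchmark issues
fio --name=qd32 --filename=/path/to/testfile --rw=read --bs=1M --size=8G --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based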

rick-github (Contributor) commented:

If your concern is model loading speed (as opposed to raw reading of data off disk), then PCI bandwidth may be the bottleneck. A block of model data is read from disk and cached in RAM. That block is then written to VRAM over the PCI bus. Model loading cannot be faster than the slowest link in that chain, so if your PCI bus or VRAM has less bandwidth than the NVMe can deliver, that lower bandwidth effectively becomes the observed read speed from the NVMe.

What's the PCI config of your GPU devices?

sudo lspci -vv -s 01:00.0
sudo lspci -vv -s 21:00.0
sudo lspci -vv -s 22:00.0
sudo lspci -vv -s 41:00.0
sudo lspci -vv -s 42:00.0
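If the full -vv output is too noisy, the link capability and status lines alone can be pulled out in one go (same device addresses as above):

for dev in 01:00.0 21:00.0 22:00.0 41:00.0 42:00.0; do
  echo "== $dev =="
  sudo lspci -vv -s "$dev" | grep -E 'LnkCap:|LnkSta:'   # supported vs negotiated link speed/width
done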

pisoiu (Author) commented Sep 18, 2024

@rick-github , this is the result of the first command:

01:00.0 VGA compatible controller: NVIDIA Corporation GA104GL [RTX A4000] (rev a1) (prog-if 00 [VGA controller])
Subsystem: NVIDIA Corporation GA104GL [RTX A4000]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 210
IOMMU group: 36
Region 0: Memory at f2000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at 20060000000 (64-bit, prefetchable) [size=256M]
Region 3: Memory at 20070000000 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at 3000 [size=128]
Expansion ROM at f3000000 [virtual] [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee00000 Data: 0000
Capabilities: [78] Express (v2) Legacy Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <16us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s (downgraded), Width x16
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range AB, TimeoutDis+ NROPrPrP- LTR+
10BitTagComp+ 10BitTagReq+ OBFF Via message, ExtFmt- EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR+ 10BitTagReq- OBFF Disabled,
AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: unsupported
Capabilities: [b4] Vendor Specific Information: Len=14
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
Status: NegoPending- InProgress-
Capabilities: [250 v1] Latency Tolerance Reporting
Max snoop latency: 34326183936ns
Max no snoop latency: 34326183936ns
Capabilities: [258 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
PortCommonModeRestoreTime=255us PortTPowerOnTime=10us
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
T_CommonMode=0us LTR1.2_Threshold=32768ns
L1SubCtl2: T_PwrOn=10us
Capabilities: [128 v1] Power Budgeting
Capabilities: [420 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO+ CmpltAbrt- UnxCmplt+ RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
AERCap: First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024
Capabilities: [900 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [bb0 v1] Physical Resizable BAR
BAR 0: current size: 16MB, supported: 16MB
BAR 1: current size: 256MB, supported: 64MB 128MB 256MB 512MB 1GB 2GB 4GB 8GB 16GB 32GB
BAR 3: current size: 32MB, supported: 32MB
Capabilities: [c1c v1] Physical Layer 16.0 GT/s
Capabilities: [d00 v1] Lane Margining at the Receiver
Capabilities: [e00 v1] Data Link Feature
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

All GPUs are identical (NVIDIA RTX A4000); there are 5 of them, and soon I will install another 2, at which point all the PCIe slots on the board will be filled. Two of them run at x8 lanes, the others at x16, all on PCIe 4.0.
I apologise for not being exact with the terminology. I am not concerned about model loading in the sense of the transfer from RAM to VRAM; that is very fast, usually seconds. I am concerned about the part where a block of the model is read from disk and transferred into the RAM cache. That usually tops out at 1.6 GB/s, even though the disk is capable of 5-6 GB/s. I've never seen it go faster than 1.6 GB/s, but occasionally I've seen it slower, around 700-800 MB/s, without any apparent reason. There is no other task on the server, just AI inference tests under identical hardware and software conditions.
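A way to take Ollama out of the picture entirely is a raw direct read from the block device (a sketch, assuming /dev/nvme0n1 is the Corsair drive; this only reads, nothing is written):

sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=8192 iflag=direct status=progress
# iflag=direct bypasses the page cache, so the reported rate reflects the drive
# and its PCIe link rather than RAM.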

pisoiu (Author) commented Sep 18, 2024

Screenshot from 2024-09-13 22-41-10

This is an example captured while a model was being read from disk into RAM.

rick-github (Contributor) commented:

Do you have a graph from when you are running the benchmark? What benchmark program are you using?

What's the output of

sudo hdparm -tT /dev/nvme0n1

pisoiu (Author) commented Sep 18, 2024

Result is:
/dev/nvme0n1:
Timing cached reads: 24618 MB in 1.99 seconds = 12345.65 MB/sec
Timing buffered disk reads: 2164 MB in 3.00 seconds = 721.31 MB/sec

I don't have a graph from the benchmark because the benchmark doesn't make the graph move; I don't know why. In the Ubuntu Disks utility there is a 'Benchmark Disk' option under the three-dot menu; that's the one I used to benchmark some time ago. What is strange is that when I repeat the test now it gives lower results, around 2.8 GB/s. When I got 5 GB/s I had only 3 GPUs installed; now I have 5, and that's the only difference, but I don't understand why this should matter for disk speed. AFAIK they're not on shared PCIe lanes.
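One thing worth checking (a guess, not a confirmed cause): whether the NVMe link renegotiated to a lower speed or narrower width after the extra GPUs went in. Comparing the capability line against the status line would show that:

sudo lspci -vv -s 67:00.0 | grep -E 'LnkCap:|LnkSta:'
# If LnkSta reports a lower speed or narrower width than LnkCap,
# the slot is running degraded (e.g. due to lane sharing).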
