
Continuous batching for single GPU LLM inference #3164

Triggered via pull request October 3, 2023 20:54
Status Success
Total duration 45m 45s
Artifacts

ci_cpu.yml

on: pull_request
Matrix: ci-cpu

Annotations

1 warning
ci-cpu (macOS-latest)
Attempt 1 failed. Reason: Child_process exited with error code 1