
[Feat] Use cache mount for genai docker #4954

Merged

Bobholamovic merged 1 commit into PaddlePaddle:develop from Bobholamovic:feat/use-cache-mount on Jan 29, 2026

Conversation

@Bobholamovic
Member

No description provided.
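No description was provided, but the title indicates the change adds a BuildKit cache mount to the genai Docker build. A minimal sketch of the technique follows; the base image, cache path, and package name are illustrative assumptions, not taken from this PR's Dockerfile:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10-slim

# A BuildKit cache mount keeps pip's download cache on the build host
# and shares it across builds, without baking it into the image layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install paddlepaddle
```

Cache mounts require BuildKit (`DOCKER_BUILDKIT=1` or `docker buildx build`); the legacy builder rejects the `--mount` flag on `RUN`.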

@paddle-bot

paddle-bot bot commented Jan 29, 2026

Thanks for your contribution!

@Bobholamovic Bobholamovic changed the base branch from release/3.4 to develop January 29, 2026 12:47
@Bobholamovic Bobholamovic merged commit f0ccfd4 into PaddlePaddle:develop Jan 29, 2026
1 check passed
@Bobholamovic Bobholamovic deleted the feat/use-cache-mount branch January 29, 2026 12:47
Bobholamovic added a commit that referenced this pull request Jan 29, 2026
Bobholamovic added a commit that referenced this pull request Feb 11, 2026
* Use cache mount for genai docker (#4954)

* Fix HPS order bug (#4955)

* Fix transformers version (#4956)

* Fix HPS and remove scipy from required deps (#4957)

* [Cherry-Pick]bugfix: unexpected change of the constant IMAGE_LABELS (#4961)

* bugfix: unexpected change of the constant IMAGE_LABELS

* update doc

* [METAX] add ppdoclayv3 to METAX_GPU_WHITELIST (#4959)

Co-authored-by: duqiemng <1640472053@qq.com>

* vllm 0.10.2 needs transformers 4.x (#4963)

* vllm 0.10.2 needs transformers 4.x

* update

* Bump version to 3.4.1

* Support setting PDF rendering scale factor (#4967)

* Fix/doc vlm async cancellation (#4969) (#4971)

* fix(doc_vlm): cancel pending futures on batch request failure

When a batch of requests is sent to the VLM service and one fails,
the remaining pending futures are now properly cancelled to avoid
wasting VLM service resources.

* chore: remove test file and documentation for async cancellation fix

* Fix typo (#4982)

* Revert "Fix typo (#4982)"

This reverts commit 0a936ba.

* feat(ROCm): Add ROCm 7.0 compatibility patches

* version

---------

Co-authored-by: Lin Manhui <bob1998425@hotmail.com>
Co-authored-by: changdazhou <142379845+changdazhou@users.noreply.github.com>
Co-authored-by: SuperNova <91192235+handsomecoderyang@users.noreply.github.com>
Co-authored-by: duqiemng <1640472053@qq.com>
Co-authored-by: zhang-prog <69562787+zhang-prog@users.noreply.github.com>
Co-authored-by: Bobholamovic <mhlin425@whu.edu.cn>
Co-authored-by: Bvicii <98971614+scyyh11@users.noreply.github.com>
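The future-cancellation fix described above (#4969/#4971) can be sketched with `concurrent.futures`; the function and parameter names here are hypothetical illustrations, not PaddleX APIs:

```python
import concurrent.futures

def run_batch(executor, requests, call_vlm):
    """Submit a batch of VLM requests; on the first failure, cancel
    every future that has not started yet so the service does not
    keep working on a batch that has already failed."""
    futures = [executor.submit(call_vlm, req) for req in requests]
    try:
        # Collect results in submission order; the first exception
        # propagates out of f.result().
        return [f.result() for f in futures]
    except Exception:
        for f in futures:
            f.cancel()  # no-op for futures already running or done
        raise
```

Cancelling only affects futures still sitting in the executor's queue; work that is already in flight runs to completion, which is the standard `concurrent.futures` semantics.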
M4jupitercannon added a commit to M4jupitercannon/PaddleX that referenced this pull request Feb 12, 2026
Bobholamovic added a commit that referenced this pull request Feb 12, 2026