Insights: deepjavalibrary/djl-serving
Overview
10 Pull requests merged by 4 people
- Fix Gradle dependency issues for TRT-LLM release (#2850, merged Jun 24, 2025)
- [trtllm] Upgrade TRT-LLM to version 0.21.0rc1 for djl-serving 0.33.0 (#2848, merged Jun 19, 2025)
- [0.33.0-dlc][docker] Pin cuda-compat-12-8 to 570.148.08-0ubuntu1 (#2842, merged Jun 10, 2025)
- [docs] Update docs to 0.33.0 (#2835, merged Jun 3, 2025)
- Promote SERVING_HEALTH_CHECK_OVERRIDE to mainline (#2840, merged Jun 3, 2025)
- [patch][port] forward port of SERVING_HEALTH_CHECK_OVERRIDE (#2839, merged Jun 2, 2025)
- [awscurl] properly handle missing -jq case (#2838, merged Jun 2, 2025)
- Uses new recommended way to build pip wheel (#2837, merged May 31, 2025)
- Upgrade gradle to 8.14 (#2836, merged May 30, 2025)
- [fix] Add vllm_async_service as entrypoint to vllm performance handle… (#2833, merged May 27, 2025)
4 Pull requests opened by 2 people
- Bump vllm from 0.8.4 to 0.9.0 in /serving/docker (#2834, opened May 28, 2025)
- build: vllm upgrade to 0.9.0.1 (#2841, opened Jun 6, 2025)
- Bump protobuf from 3.20.3 to 4.25.8 in /serving/docker (#2844, opened Jun 17, 2025)
- Bump torch from 2.6.0 to 2.7.1 in /serving/docker (#2846, opened Jun 18, 2025)
2 Issues closed by 2 people
- Llama 2 7b chat model output quality is low (#2093, closed Jun 3, 2025)
- awscurl not working with OpenAI completions schema (#2832, closed Jun 2, 2025)
2 Issues opened by 2 people
- Publish consistent information about the release versions and artifacts (#2845, opened Jun 17, 2025)
- Inference Fails with “model_name is required” Despite Successful Deployment (LMI v15 + vLLM + Qwen 3) (#2843, opened Jun 11, 2025)