Merged

92 commits
4b4d92b
docs: fix server documentation formatting (#10776)
CentricStorm Dec 11, 2024
484d2f3
bug-fix: snprintf prints NULL in place of the last character (#10419)
kallewoof Dec 11, 2024
92f77a6
ci : pin nodejs to 22.11.0 (#10779)
ngxson Dec 11, 2024
1a31d0d
Update README.md (#10772)
Dec 11, 2024
235f6e1
server : (UI) add tok/s, get rid of completion.js (#10786)
ngxson Dec 11, 2024
fb18934
gguf-py : bump version to 0.11.0
ggerganov Dec 11, 2024
973f328
Merge pull request #10788 from ggerganov/gg/gguf-py-0.11.0
ggerganov Dec 11, 2024
5555c0c
docs: update server streaming mode documentation (#9519)
CentricStorm Dec 11, 2024
9fdb124
common : add missing env var for speculative (#10801)
ngxson Dec 12, 2024
dc5301d
Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgr…
0cc4m Dec 12, 2024
4064c0e
Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (#…
0cc4m Dec 12, 2024
cb13ef8
remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)
slaren Dec 12, 2024
8faa1d4
CUDA: faster non-contiguous concat (#10760)
A3shTnT Dec 12, 2024
274ec65
contrib : add ngxson as codeowner (#10804)
ngxson Dec 12, 2024
adffa6f
common : improve -ctv -ctk CLI arguments (#10806)
ngxson Dec 12, 2024
d583cd0
ggml : Fix compilation issues on ARM platform when building without f…
kkontny Dec 13, 2024
83ed24a
SYCL: Reduce most of the compiler warnings (#10748)
qnixsynapse Dec 13, 2024
64ae065
vulkan: small mul_mat_vec optimizations (#10665)
netrunnereve Dec 13, 2024
9f35e44
Fix crash caused by ggml_backend_load_all when launching on Android A…
sienaiwun Dec 13, 2024
4601a8b
gguf-py : numpy 2 newbyteorder fix (#9772)
jettjaniak Dec 13, 2024
11e07fd
fix: graceful shutdown for Docker images (#10815)
co42 Dec 13, 2024
c27ac67
Opt class for positional argument handling (#10508)
ericcurtin Dec 13, 2024
a76c56f
Introducing experimental OpenCL backend with support for Qualcomm Adr…
lhez Dec 13, 2024
56eea07
Removes spurious \r in output that causes logging in journalctl to tr…
cduk Dec 13, 2024
ba1cb19
llama : add Qwen2VL support + multimodal RoPE (#10361)
HimariO Dec 14, 2024
e52aba5
nix: allow to override rocm gpu targets (#10794)
kurnevsky Dec 14, 2024
89d604f
server: Fix `has_next_line` in JSON response (#10818)
MichelleTanPY Dec 14, 2024
b5ae1dd
gguf-py : bump to v0.13.0
ggerganov Dec 15, 2024
5478bbc
server: (UI) add syntax highlighting and latex math rendering (#10808)
VJHack Dec 15, 2024
87cf323
scripts : change build path to "build-bench" for compare-commits.sh (…
ggerganov Dec 15, 2024
a097415
llama : add Deepseek MoE v1 & GigaChat models (#10827)
Inf1delis Dec 15, 2024
4ddd199
llava : Allow locally downloaded models for QwenVL (#10833)
bartowski1182 Dec 15, 2024
644fd71
sampling : refactor + optimize penalties sampler (#10803)
ggerganov Dec 16, 2024
08ea539
unicode : improve naming style (#10838)
ggerganov Dec 16, 2024
160bc03
rwkv6: add wkv6 support for Vulkan backend (#10829)
zhiyuan1i Dec 16, 2024
7b1ec53
vulkan: bugfixes for small subgroup size systems + llvmpipe test (#10…
netrunnereve Dec 17, 2024
227d7c5
server : (UI) fix missing async generator on safari (#10857)
ngxson Dec 17, 2024
4f51968
readme : update typos (#10863)
ruanych Dec 17, 2024
382bc7f
llama : add Falcon3 support (#10864)
mokeddembillel Dec 17, 2024
05c3a44
server : fill usage info in embeddings and rerank responses (#10852)
krystiancha Dec 17, 2024
0006f5a
ggml : update ggml_backend_cpu_device_supports_op (#10867)
ggerganov Dec 17, 2024
3919da8
ggml : add check for grad_accs (ggml/1046)
danbev Dec 13, 2024
130d0c9
ggml : remove return from ggml_gallocr_allocate_node (ggml/1048)
danbev Dec 14, 2024
8dd19a4
vulkan : fix soft_max.comp division by zero (whisper/2633)
gn64 Dec 16, 2024
78f7667
cmake : fix "amd64" processor string (whisper/2638)
ggerganov Dec 17, 2024
5437d4a
sync : ggml
ggerganov Dec 17, 2024
081b29b
tests: add tests for GGUF (#10830)
JohannesGaessler Dec 17, 2024
d62b532
Use model->gguf_kv for loading the template instead of using the C AP…
dranger003 Dec 17, 2024
4da69d1
Revert "llama : add Falcon3 support (#10864)" (#10876)
slaren Dec 18, 2024
6b064c9
docs: Fix HIP (née hipBLAS) in README (#10880)
brianredbeard Dec 18, 2024
4682887
server : (embeddings) using same format for "input" and "content" (#1…
ngxson Dec 18, 2024
0e70ba6
server : add "tokens" output (#10853)
ggerganov Dec 18, 2024
152610e
server : output embeddings for all tokens when pooling = none (#10861)
ggerganov Dec 18, 2024
7bbb5ac
server: avoid overwriting Authorization header (#10878)
vesath Dec 18, 2024
0bf2d10
tts : add OuteTTS support (#10784)
ggerganov Dec 18, 2024
9177484
ggml : fix arm build (#10890)
slaren Dec 18, 2024
7909e85
llama-run : improve progress bar (#10821)
ericcurtin Dec 19, 2024
cd920d0
tests: disable GGUF test for bad value size (#10886)
JohannesGaessler Dec 19, 2024
7585edb
convert : Add support for Microsoft Phi-4 model (#10817)
fairydreaming Dec 19, 2024
2fffc52
llama : fix Roberta embeddings (#10856)
Ssukriti Dec 19, 2024
a3c33b1
ggml: fix arm build with gcc (#10895)
angt Dec 19, 2024
57bb2c4
server : fix logprobs, make it OAI-compatible (#10783)
ngxson Dec 19, 2024
36319de
tts : small QoL for easy model fetch (#10903)
ggerganov Dec 19, 2024
5cab3e4
llama : minor grammar refactor (#10897)
ggerganov Dec 19, 2024
d408bb9
clip : disable GPU support (#10896)
ggerganov Dec 19, 2024
0a11f8b
convert : fix RWKV v6 model conversion (#10913)
MollySophia Dec 20, 2024
21ae3b9
ggml : add test for SVE and disable when it fails (#10906)
slaren Dec 20, 2024
0ca416c
server : (UI) fix copy to clipboard function (#10916)
ngxson Dec 20, 2024
eb5c3dc
SYCL: Migrate away from deprecated ggml_tensor->backend (#10840)
qnixsynapse Dec 20, 2024
e34c5af
ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0…
angt Dec 20, 2024
a91a413
vulkan: optimize coopmat2 dequant functions (#10855)
jeffbolznv Dec 21, 2024
5cd85b5
convert : add BertForMaskedLM (#10919)
ggerganov Dec 21, 2024
ebdee94
vulkan: build fixes for 32b (#10927)
jeffbolznv Dec 22, 2024
7ae33a6
llama : add Falcon3 support (#10883)
mokeddembillel Dec 22, 2024
7c0e285
devops : add docker-multi-stage builds (#10832)
rudiservo Dec 22, 2024
7024d59
ggml : fix run-time on FreeBSD in get_executable_path() (#10948)
yurivict Dec 23, 2024
dab76c9
llama-run : include temperature option (#10899)
ericcurtin Dec 23, 2024
6f0c9e0
llama : support for Llama-3_1-Nemotron-51B (#10669)
ymcki Dec 23, 2024
b92a14a
llama : support InfiniAI Megrez 3b (#10893)
dixyes Dec 23, 2024
86bf31c
rpc-server : add support for the SYCL backend (#10934)
rgerganov Dec 23, 2024
485dc01
server : add system_fingerprint to chat/completion (#10917)
ngxson Dec 23, 2024
14b699e
server : fix missing model id in /model endpoint (#10957)
ngxson Dec 23, 2024
32d6ee6
ggml : fix const usage in SSE path (#10962)
slaren Dec 23, 2024
3327bb0
ggml : fix arm enabled features check (#10961)
slaren Dec 24, 2024
60cfa72
ggml : use wstring for backend search paths (#10960)
slaren Dec 24, 2024
30caac3
llama : the WPM vocabs use the CLS token as BOS (#10930)
ggerganov Dec 24, 2024
09fe2e7
server: allow filtering llama server response fields (#10940)
nvrxq Dec 24, 2024
2cd43f4
ggml : more perfo with llamafile tinyblas on x86_64 (#10714)
Djip007 Dec 24, 2024
9ba399d
server : add support for "encoding_format": "base64" to the */embeddi…
elk-cloner Dec 24, 2024
d283d02
examples, ggml : fix GCC compiler warnings (#10983)
peter277 Dec 26, 2024
d79d8f3
vulkan: multi-row k quants (#10846)
netrunnereve Dec 26, 2024
5c8aa73
Merge branch 'layla-build' into merge
l3utterfly Dec 28, 2024
81 changes: 81 additions & 0 deletions .devops/cpu.Dockerfile
@@ -0,0 +1,81 @@
ARG UBUNTU_VERSION=22.04

FROM ubuntu:$UBUNTU_VERSION AS build

RUN apt-get update && \
    apt-get install -y build-essential git cmake libcurl4-openssl-dev

WORKDIR /app

COPY . .

RUN cmake -S . -B build -DGGML_BACKEND_DL=ON -DGGML_NATIVE=OFF -DGGML_CPU_ALL_VARIANTS=ON -DLLAMA_CURL=ON -DCMAKE_BUILD_TYPE=Release && \
    cmake --build build -j $(nproc)

RUN mkdir -p /app/lib && \
    find build -name "*.so" -exec cp {} /app/lib \;

RUN mkdir -p /app/full \
    && cp build/bin/* /app/full \
    && cp *.py /app/full \
    && cp -r gguf-py /app/full \
    && cp -r requirements /app/full \
    && cp requirements.txt /app/full \
    && cp .devops/tools.sh /app/full/tools.sh

## Base image
FROM ubuntu:$UBUNTU_VERSION AS base

RUN apt-get update \
    && apt-get install -y libgomp1 curl \
    && apt autoremove -y \
    && apt clean -y \
    && rm -rf /tmp/* /var/tmp/* \
    && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete \
    && find /var/cache -type f -delete

COPY --from=build /app/lib/ /app

### Full
FROM base AS full

COPY --from=build /app/full /app

WORKDIR /app

RUN apt-get update \
    && apt-get install -y \
        git \
        python3 \
        python3-pip \
    && pip install --upgrade pip setuptools wheel \
    && pip install -r requirements.txt \
    && apt autoremove -y \
    && apt clean -y \
    && rm -rf /tmp/* /var/tmp/* \
    && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete \
    && find /var/cache -type f -delete

ENTRYPOINT ["/app/tools.sh"]

### Light, CLI only
FROM base AS light

COPY --from=build /app/full/llama-cli /app

WORKDIR /app

ENTRYPOINT [ "/app/llama-cli" ]

### Server, Server only
FROM base AS server

ENV LLAMA_ARG_HOST=0.0.0.0

COPY --from=build /app/full/llama-server /app

WORKDIR /app

HEALTHCHECK CMD [ "curl", "-f", "http://localhost:8080/health" ]

ENTRYPOINT [ "/app/llama-server" ]
94 changes: 94 additions & 0 deletions .devops/cuda.Dockerfile
@@ -0,0 +1,94 @@
ARG UBUNTU_VERSION=22.04
# This needs to generally match the container host's environment.
ARG CUDA_VERSION=12.6.0
# Target the CUDA build image
ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}

ARG BASE_CUDA_RUN_CONTAINER=nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}

FROM ${BASE_CUDA_DEV_CONTAINER} AS build

# CUDA architecture to build for (defaults to all supported archs)
ARG CUDA_DOCKER_ARCH=default

RUN apt-get update && \
    apt-get install -y build-essential cmake python3 python3-pip git libcurl4-openssl-dev libgomp1

WORKDIR /app

COPY . .

RUN if [ "${CUDA_DOCKER_ARCH}" != "default" ]; then \
export CMAKE_ARGS="-DCMAKE_CUDA_ARCHITECTURES=${CUDA_DOCKER_ARCH}"; \
fi && \
cmake -B build -DGGML_NATIVE=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON ${CMAKE_ARGS} -DCMAKE_EXE_LINKER_FLAGS=-Wl,--allow-shlib-undefined . && \
cmake --build build --config Release -j$(nproc)

RUN mkdir -p /app/lib && \
    find build -name "*.so" -exec cp {} /app/lib \;

RUN mkdir -p /app/full \
    && cp build/bin/* /app/full \
    && cp *.py /app/full \
    && cp -r gguf-py /app/full \
    && cp -r requirements /app/full \
    && cp requirements.txt /app/full \
    && cp .devops/tools.sh /app/full/tools.sh

## Base image
FROM ${BASE_CUDA_RUN_CONTAINER} AS base

RUN apt-get update \
    && apt-get install -y libgomp1 curl \
    && apt autoremove -y \
    && apt clean -y \
    && rm -rf /tmp/* /var/tmp/* \
    && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete \
    && find /var/cache -type f -delete

COPY --from=build /app/lib/ /app

### Full
FROM base AS full

COPY --from=build /app/full /app

WORKDIR /app

RUN apt-get update \
    && apt-get install -y \
        git \
        python3 \
        python3-pip \
    && pip install --upgrade pip setuptools wheel \
    && pip install -r requirements.txt \
    && apt autoremove -y \
    && apt clean -y \
    && rm -rf /tmp/* /var/tmp/* \
    && find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete \
    && find /var/cache -type f -delete


ENTRYPOINT ["/app/tools.sh"]

### Light, CLI only
FROM base AS light

COPY --from=build /app/full/llama-cli /app

WORKDIR /app

ENTRYPOINT [ "/app/llama-cli" ]

### Server, Server only
FROM base AS server

ENV LLAMA_ARG_HOST=0.0.0.0

COPY --from=build /app/full/llama-server /app

WORKDIR /app

HEALTHCHECK CMD [ "curl", "-f", "http://localhost:8080/health" ]

ENTRYPOINT [ "/app/llama-server" ]
33 changes: 0 additions & 33 deletions .devops/full-cuda.Dockerfile

This file was deleted.

33 changes: 0 additions & 33 deletions .devops/full-musa.Dockerfile

This file was deleted.

50 changes: 0 additions & 50 deletions .devops/full-rocm.Dockerfile

This file was deleted.

38 changes: 0 additions & 38 deletions .devops/full.Dockerfile

This file was deleted.
