Description
Git commit
Operating systems
Linux
GGML backends
CUDA, CPU
Problem description & steps to reproduce
Since #17216 was merged, compilation of llama-server fails.
First Bad Commit
Compile command
rm -rf build
time cmake -B build -DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES="89" -DLLAMA_OPENSSL=1 -DGGML_CUDA_FA_ALL_QUANTS=1 # -DGGML_CCACHE=0 -DCMAKE_BUILD_TYPE=Debug
time cmake --build build --config Release -j 16

Relevant log output
/home/dylan/llama.cpp/tools/server/server-http.cpp: In member function ‘bool server_http_context::init(const common_params&)’:
/home/dylan/llama.cpp/tools/server/server-http.cpp:52:9: error: ‘svr’ was not declared in this scope
52 | svr.reset(
| ^~~
/home/dylan/llama.cpp/tools/server/server-http.cpp:57:9: error: ‘svr’ was not declared in this scope
57 | svr.reset(new httplib::Server());
| ^~~
gmake[2]: *** [tools/server/CMakeFiles/llama-server.dir/build.make:98: tools/server/CMakeFiles/llama-server.dir/server-http.cpp.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
[100%] Linking CXX executable ../bin/test-backend-ops
[100%] Built target test-backend-ops
gmake[1]: *** [CMakeFiles/Makefile2:4019: tools/server/CMakeFiles/llama-server.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2