
talk-llama no longer builds #1256

Closed
przemoc opened this issue Sep 6, 2023 · 0 comments

Comments

przemoc (Contributor) commented Sep 6, 2023

It got broken in commit 59a3d0c.

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -msse3 -mssse3   -c ggml.c -o ggml.o
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -msse3 -mssse3 -c whisper.cpp -o whisper.o
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -msse3 -mssse3 examples/talk-llama/talk-llama.cpp examples/talk-llama/llama.cpp examples/common.cpp examples/common-ggml.cpp examples/common-sdl.cpp ggml.o whisper.o -o talk-llama `sdl2-config --cflags --libs`
examples/talk-llama/llama.cpp: In function 'bool llama_eval_internal(llama_context&, const llama_token*, int, int, int)':
examples/talk-llama/llama.cpp:1207:8: error: 'struct ggml_cgraph' has no member named 'n_threads'
 1207 |     gf.n_threads = N >= 32 && ggml_cpu_has_blas() && !ggml_cpu_has_gpublas() ? 1 : n_threads;
      |        ^~~~~~~~~
examples/talk-llama/llama.cpp:1224:32: error: too few arguments to function 'ggml_tensor* ggml_rms_norm(ggml_context*, ggml_tensor*, float)'
 1224 |             cur = ggml_rms_norm(ctx0, inpL);
      |                   ~~~~~~~~~~~~~^~~~~~~~~~~~
In file included from examples/talk-llama/llama.cpp:12:
./ggml.h:933:35: note: declared here
  933 |     GGML_API struct ggml_tensor * ggml_rms_norm(
      |                                   ^~~~~~~~~~~~~
examples/talk-llama/llama.cpp:1332:36: error: too few arguments to function 'ggml_tensor* ggml_rms_norm(ggml_context*, ggml_tensor*, float)'
 1332 |                 cur = ggml_rms_norm(ctx0, inpFF);
      |                       ~~~~~~~~~~~~~^~~~~~~~~~~~~
./ggml.h:933:35: note: declared here
  933 |     GGML_API struct ggml_tensor * ggml_rms_norm(
      |                                   ^~~~~~~~~~~~~
examples/talk-llama/llama.cpp:1370:29: error: too few arguments to function 'ggml_tensor* ggml_rms_norm(ggml_context*, ggml_tensor*, float)'
 1370 |         inpL = ggml_rms_norm(ctx0, inpL);
      |                ~~~~~~~~~~~~~^~~~~~~~~~~~
./ggml.h:933:35: note: declared here
  933 |     GGML_API struct ggml_tensor * ggml_rms_norm(
      |                                   ^~~~~~~~~~~~~
examples/talk-llama/llama.cpp:1388:31: error: cannot convert 'ggml_context*' to 'ggml_cgraph*'
 1388 |     ggml_graph_compute       (ctx0, &gf);
      |                               ^~~~
      |                               |
      |                               ggml_context*
./ggml.h:1632:72: note:   initializing argument 1 of 'int ggml_graph_compute(ggml_cgraph*, ggml_cplan*)'
 1632 |     GGML_API               int ggml_graph_compute(struct ggml_cgraph * cgraph, struct ggml_cplan * cplan);
      |                                                   ~~~~~~~~~~~~~~~~~~~~~^~~~~~
./ggml.h:287:12: note: class type 'ggml_context' is incomplete
  287 |     struct ggml_context;
      |            ^~~~~~~~~~~~
examples/talk-llama/llama.cpp: In function 'int llama_apply_lora_from_file_internal(llama_context*, const char*, const char*, int)':
examples/talk-llama/llama.cpp:2491:16: error: 'struct ggml_cgraph' has no member named 'n_threads'
 2491 |             gf.n_threads = n_threads;
      |                ^~~~~~~~~
examples/talk-llama/llama.cpp:2492:32: error: cannot convert 'ggml_context*' to 'ggml_cgraph*'
 2492 |             ggml_graph_compute(lora_ctx, &gf);
      |                                ^~~~~~~~
      |                                |
      |                                ggml_context*
./ggml.h:1632:72: note:   initializing argument 1 of 'int ggml_graph_compute(ggml_cgraph*, ggml_cplan*)'
 1632 |     GGML_API               int ggml_graph_compute(struct ggml_cgraph * cgraph, struct ggml_cplan * cplan);
      |                                                   ~~~~~~~~~~~~~~~~~~~~~^~~~~~
./ggml.h:287:12: note: class type 'ggml_context' is incomplete
  287 |     struct ggml_context;
      |            ^~~~~~~~~~~~
examples/talk-llama/llama.cpp: In function 'size_t llama_copy_state_data(llama_context*, uint8_t*)':
examples/talk-llama/llama.cpp:2638:16: error: 'struct ggml_cgraph' has no member named 'n_threads'
 2638 |             gf.n_threads = 1;
      |                ^~~~~~~~~
examples/talk-llama/llama.cpp:2658:32: error: cannot convert 'ggml_context*' to 'ggml_cgraph*'
 2658 |             ggml_graph_compute(cpy_ctx, &gf);
      |                                ^~~~~~~
      |                                |
      |                                ggml_context*
./ggml.h:1632:72: note:   initializing argument 1 of 'int ggml_graph_compute(ggml_cgraph*, ggml_cplan*)'
 1632 |     GGML_API               int ggml_graph_compute(struct ggml_cgraph * cgraph, struct ggml_cplan * cplan);
      |                                                   ~~~~~~~~~~~~~~~~~~~~~^~~~~~
./ggml.h:287:12: note: class type 'ggml_context' is incomplete
  287 |     struct ggml_context;
      |            ^~~~~~~~~~~~
examples/talk-llama/llama.cpp: In function 'size_t llama_set_state_data(llama_context*, uint8_t*)':
examples/talk-llama/llama.cpp:2746:16: error: 'struct ggml_cgraph' has no member named 'n_threads'
 2746 |             gf.n_threads = 1;
      |                ^~~~~~~~~
examples/talk-llama/llama.cpp:2766:32: error: cannot convert 'ggml_context*' to 'ggml_cgraph*'
 2766 |             ggml_graph_compute(cpy_ctx, &gf);
      |                                ^~~~~~~
      |                                |
      |                                ggml_context*
./ggml.h:1632:72: note:   initializing argument 1 of 'int ggml_graph_compute(ggml_cgraph*, ggml_cplan*)'
 1632 |     GGML_API               int ggml_graph_compute(struct ggml_cgraph * cgraph, struct ggml_cplan * cplan);
      |                                                   ~~~~~~~~~~~~~~~~~~~~~^~~~~~
./ggml.h:287:12: note: class type 'ggml_context' is incomplete
  287 |     struct ggml_context;
      |            ^~~~~~~~~~~~
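All of these errors trace back to upstream ggml API changes that the bundled llama.cpp copy in examples/talk-llama was not updated for: the `n_threads` field was removed from `struct ggml_cgraph` in favor of a separate `struct ggml_cplan`, `ggml_graph_compute` now takes a graph and a plan instead of a context and a graph, and `ggml_rms_norm` gained an explicit epsilon argument. A minimal sketch of the migration (the epsilon value and the work-buffer handling here are illustrative assumptions, not taken from the actual fix commit):

    // Old API:
    //     gf.n_threads = n_threads;
    //     ggml_graph_compute(ctx0, &gf);
    //
    // New API: build a compute plan for the graph, give it the scratch
    // buffer it requests, then execute the plan.
    struct ggml_cplan plan = ggml_graph_plan(&gf, n_threads);
    std::vector<uint8_t> work(plan.work_size);  // scratch memory for the workers
    plan.work_data = work.data();
    ggml_graph_compute(&gf, &plan);

    // ggml_rms_norm now takes an explicit epsilon (value here is illustrative):
    cur = ggml_rms_norm(ctx0, inpL, 1e-6f);

The same pattern applies to the other call sites flagged above (llama_apply_lora_from_file_internal, llama_copy_state_data, llama_set_state_data).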
bdonkey added a commit to bdonkey/whisper.cpp that referenced this issue Sep 13, 2023
* master: (96 commits)
  whisper : fix bench regression + fix performance when using CPU BLAS (ggerganov#1275)
  whisper : faster beam_search sampling via reduced KV cache copies (ggerganov#1243)
  java : fixed signing of java artifact using gradle (ggerganov#1267)
  ci : try to fix gradle action (ggerganov#1265)
  gitignore : update
  sync : ggml (HBM + Metal + style) (ggerganov#1264)
  ci : upgrade gradle to 2.4.2 (ggerganov#1263)
  sync : ggml (CUDA faster rope)
  cmake : noramlize case (ggerganov#1129)
  build : do not use _GNU_SOURCE gratuitously (ggerganov#1129)
  examples : fix build + compile warnings (close ggerganov#1256)
  models : add quantum models to download-ggml-model.sh (ggerganov#1235)
  whisper.android : bump gradle plugin and dependencies + a lint pass (ggerganov#1255)
  sign jar for Maven Central repo
  whisper.android : address ARM's big.LITTLE arch by checking cpu info (ggerganov#1254)
  make : fix detection of AVX2 on macOS (ggerganov#1250)
  ggml : posixify pagesize (ggerganov#1251)
  configured publishing.repositories
  ggml : sync latest llama.cpp (view_src + alloc improvements) (ggerganov#1247)
  make : improve cpuinfo handling on x86 hosts (ggerganov#1238)
  ...
jacobwu-b pushed a commit to jacobwu-b/Transcriptify-by-whisper.cpp that referenced this issue Oct 24, 2023
vonstring pushed a commit to vonstring/whisper.cpp that referenced this issue Nov 7, 2023
landtanin pushed a commit to landtanin/whisper.cpp that referenced this issue Dec 16, 2023