sync : ggml (ggml_scale, ggml_row_size, etc.) #1677

Merged: 7 commits merged into master from sync on Dec 22, 2023

Conversation

ggerganov (Owner) commented Dec 22, 2023

No description provided.

@@ -449,11 +449,10 @@ static void init_view(ggml_gallocr_t galloc, struct ggml_tensor * view, bool update_backend)
     if (update_backend) {
         view->backend = view->view_src->backend;
     }
-    view->buffer = view->view_src->buffer;
+    // views are initialized in the alloc buffer rather than the view_src buffer
+    view->buffer = alloc->buffer;
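
For context on what this hunk touches: a ggml "view" is a tensor that aliases part of another tensor's data instead of owning storage of its own. A brief sketch (an editor's illustration against the public ggml.h API, not part of this PR):

#include "ggml.h"

// a view has no storage of its own: it aliases its view_src tensor,
// which is why init_view must decide which buffer the view is recorded in
void view_example(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 1024);
    // view of the first 256 elements of a, starting at byte offset 0
    struct ggml_tensor * v = ggml_view_1d(ctx, a, 256, 0);

    GGML_ASSERT(v->view_src == a); // v->data points into a's storage

    ggml_free(ctx);
}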
ggerganov (Owner, Author) commented:
@slaren After the sync, the following command produces incorrect transcription:

make samples
make -j && ./main -m ./models/ggml-medium.en.bin -f ./samples/gb0.wav

Reverting just this line fixes the issue. Any guess what this could be related to? Maybe something is incompatible with the whisper_allocr_graph_realloc logic.

slaren (Collaborator) commented Dec 22, 2023:

I think it is because this change interferes with the auto-inline logic. This should fix it:

diff --git a/ggml-alloc.c b/ggml-alloc.c
index a97436b..a27dd54 100644
--- a/ggml-alloc.c
+++ b/ggml-alloc.c
@@ -72,7 +72,7 @@ static void remove_allocated_tensor(ggml_tallocr_t alloc, struct ggml_tensor * t

 // check if a tensor is allocated by this buffer
 static bool ggml_tallocr_is_own(ggml_tallocr_t alloc, const struct ggml_tensor * tensor) {
-    return tensor->buffer == alloc->buffer;
+    return tensor->buffer == alloc->buffer && (!tensor->view_src || tensor->view_src->buffer == alloc->buffer);
 }

 static bool ggml_is_view(struct ggml_tensor * t) {
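
Editor's note on the fix: with the extra condition, a tensor placed in this allocator's buffer no longer counts as owned when its view_src lives in a different buffer. A minimal standalone sketch of the fixed test (simplified stand-in structs, not the real ggml types):

#include <stdbool.h>
#include <stdio.h>

struct tensor {
    void          *buffer;    // buffer this tensor was placed in
    struct tensor *view_src;  // non-NULL if the tensor is a view
};

struct tallocr {
    void *buffer;
};

// mirrors the fixed ggml_tallocr_is_own: a view counts as owned only if
// its source tensor also lives in this allocator's buffer
static bool is_own(const struct tallocr *alloc, const struct tensor *t) {
    return t->buffer == alloc->buffer
        && (!t->view_src || t->view_src->buffer == alloc->buffer);
}

int main(void) {
    int buf_a, buf_b; // stand-ins for two distinct backend buffers
    struct tallocr alloc = { &buf_a };

    struct tensor src  = { &buf_b, NULL }; // source allocated elsewhere
    struct tensor view = { &buf_a, &src }; // view recorded in alloc's buffer

    // before the fix this returned true, so the allocator could treat the
    // view as its own even though the data it aliases belongs to buf_b
    printf("view owned: %s\n", is_own(&alloc, &view) ? "yes" : "no"); // prints "no"
    return 0;
}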

@ggerganov ggerganov marked this pull request as ready for review December 22, 2023 11:06
@ggerganov ggerganov changed the title from sync : ggml to sync : ggml (ggml_scale, ggml_row_size, etc.) Dec 22, 2023
@ggerganov ggerganov merged commit 3a53021 into master Dec 22, 2023
74 checks passed
@ggerganov ggerganov deleted the sync branch December 22, 2023 15:54
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Dec 25, 2023
* ggerganov/master:
  whisper : Replace WHISPER_PRINT_DEBUG with WHISPER_LOG_DEBUG (ggerganov#1681)
  sync : ggml (ggml_scale, ggml_row_size, etc.) (ggerganov#1677)
  docker :  Dockerize whisper.cpp (ggerganov#1674)
  CI : Add coverage for talk-llama when WHISPER_CUBLAS=1 (ggerganov#1672)
  examples : Revert CMakeLists.txt for talk-llama (ggerganov#1669)
  cmake : set default CUDA architectures (ggerganov#1667)
viktor-silakov pushed a commit to viktor-silakov/whisper_node_mic.cpp that referenced this pull request May 11, 2024
* sync : ggml

* sync : llama.cpp

* talk-llama : fix obsolete param

* ggml-alloc : fix ggml_tallocr_is_own

* talk.wasm : update to new ggml

* ggml : fix type punning in ggml_scale

* ggml : cuda jetson + arm quants warnings