
ruby : add VAD::Context#segments_from_samples, allow Pathname, etc. #3633

Merged
KitaitiMakoto merged 44 commits into ggml-org:master from KitaitiMakoto:ruby-dev
Jan 30, 2026
Conversation

@KitaitiMakoto
Collaborator

Hello,

I added some features to the Ruby bindings:

  • Whisper::VAD::Context#segments_from_samples, which runs VAD on a Ruby array or C array of samples
  • Whisper::Context#transcribe and Whisper::VAD::Context#detect now accept Pathname in addition to String
  • Refined MemoryView management
  • Fixed a memory leak
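The two user-visible changes above rest on standard Ruby idioms, which can be sketched in plain Ruby without whisper.cpp installed: packing a Ruby Array of floats into the 32-bit float buffer a C-backed VAD consumes, and the common `#to_path` duck-typing idiom for accepting either a String or a Pathname. The `coerce_path` helper below is a hypothetical illustration, not the gem's actual internals.

```ruby
require "pathname"

# How a Ruby Array of audio samples maps onto a C float[] buffer:
# the "e*" directive packs 32-bit little-endian IEEE 754 floats,
# the sample layout whisper.cpp expects for mono PCM audio.
samples = [0.0, 0.25, -0.5, 1.0]
buffer  = samples.pack("e*")
raise unless buffer.bytesize == samples.size * 4
raise unless buffer.unpack("e*") == samples  # values are float32-exact

# The usual idiom for accepting String or Pathname arguments
# (a sketch of the pattern, not the bindings' real code):
def coerce_path(arg)
  arg.respond_to?(:to_path) ? arg.to_path : arg.to_s
end

coerce_path(Pathname.new("speech.wav")) # => "speech.wav"
coerce_path("speech.wav")               # => "speech.wav"
```

Accepting `#to_path` rather than testing `is_a?(Pathname)` keeps the API open to any path-like object.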

Thank you.

@KitaitiMakoto KitaitiMakoto merged commit aa1bc0d into ggml-org:master Jan 30, 2026
56 of 66 checks passed
@KitaitiMakoto
Collaborator Author

Thanks for the approval!

NBaLogn pushed a commit to NBaLogn/whisper.cpp that referenced this pull request Feb 2, 2026
…ggml-org#3633)

* ruby : Bump version to 1.3.6

* Fix code in example

* Add sample code to transcribe from MemoryView

* Define GetVADContext macro

* Use GetVADContext

* Extract parse_full_args function

* Use parse_full_args in ruby_whisper_full_parallel

* Free samples after use

* Check return value of parse_full_args()

* Define GetVADParams macro

* Add VAD::Context#segments_from_samples

* Add tests for VAD::Context#segments_from_samples

* Add signature for VAD::Context#segments_from_samples

* Add sample code for VAD::Context#segments_from_samples

* Add test for Whisper::Context#transcribe with Pathname

* Make Whisper::Context#transcribe and Whisper::VAD::Context#detect accept Pathname

* Update signature of Whisper::Context#transcribe

* Fix variable name

* Don't free memory view

* Make parse_full_args return struct

* Fallback when failed to get MemoryView

* Add num of samples when too long

* Check members of MemoryView

* Fix a typo

* Remove unnecessary include

* Fix a typo

* Fix a typo

* Care the case of MemoryView doesn't fit spec

* Add TODO comment

* Add optimization option to compiler flags

* Use ALLOC_N instead of malloc

* Add description to sample code

* Rename and change args: parse_full_args -> parse_samples

* Free samples when exception raised

* Assign type check result to a variable

* Define wrapper function of whisper_full

* Change signature of parse_samples for rb_ensure

* Ensure release MemoryView

* Extract fill_samples function

* Free samples memory when filling it failed

* Free samples memory when transcription failed

* Prepare transcription in wrapper function

* Change function name

* Simplify function boundary

bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Feb 2, 2026
* ggerganov/master: (73 commits)
  ruby : add `VAD::Context#segments_from_samples`, allow Pathname, etc. (ggml-org#3633)
  scripts : Fix dSYMs path case for macOS xcframework build (ggml-org#3630)
  cuda : fix compile warnings (#0)
  talk-llama : sync llama.cpp
  sync : ggml
  add tensor type checking as part of cuda graph properties (llama/19186)
  sycl: implement GGML_UNARY_OP_SOFTPLUS (llama/19114)
  sycl: implement GGML_OP_TRI (llama/19089)
  ggml-webgpu: improve flashAttention performance by software pipelining (llama/19151)
  hexagon: enable offloading to Hexagon on Windows on Snapdragon (llama/19150)
  cuda : fix nkvo, offload and cuda graph node properties matching (llama/19165)
  HIP: add mmf for CDNA (llama/18896)
  ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (llama/19159)
  CUDA: refactor topk-moe to enable more models (GLM 4.7, Nemotron etc.) (llama/19126)
  sycl: fix norm kernels: l2_norm, group_norm, rms_norm by remove assert to support more cases (llama/19154)
  Vulkan Flash Attention Coopmat1 Refactor (llama/19075)
  ggml-sycl: remove unused syclcompat header (llama/19140)
  vulkan: handle device dedup on MacOS + Vega II Duo cards (llama/19058)
  ggml: new backend for Virglrenderer API Remoting acceleration (v2) (llama/18718)
  ggml-cpu: arm64: Q4_K scale unroll and vectorization (llama/19108)
  ...