Conversation

@pwilkin pwilkin commented Dec 1, 2025

Fixes #17556

I believe we should at least support the source files in the llama.cpp codebase ;)
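For illustration, here is a minimal sketch of what an extension allowlist for treating uploads as text could look like. The extension list and the `isTextFile` helper are hypothetical, written only to convey the idea; the actual webui code may be structured differently:

```typescript
// Hypothetical sketch: treat common llama.cpp source-file extensions as
// plain text so they can be attached as text rather than rejected or
// handled as opaque binaries. The set contents are assumptions.
const textFileExtensions = new Set([
  'c', 'h', 'cpp', 'hpp', 'cu', 'cuh', 'metal', 'comp', 'glsl',
  'cmake', 'txt', 'md', 'py', 'sh',
]);

function isTextFile(filename: string): boolean {
  const dot = filename.lastIndexOf('.');
  if (dot < 0) return false; // no extension -> not on the allowlist
  return textFileExtensions.has(filename.slice(dot + 1).toLowerCase());
}
```

Usage: `isTextFile('ggml.c')` accepts a C source file, while `isTextFile('model.gguf')` rejects a binary model file.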

@allozaur allozaur left a comment


@pwilkin looking good in general.

Please run npm run format for linting CI to pass + please update the static build :)

pwilkin commented Dec 2, 2025

> @pwilkin looking good in general.
>
> Please run npm run format for linting CI to pass + please update the static build :)

Just when I was getting used to running editorconfig-checker on each run, another linter... 😆

allozaur commented Dec 2, 2025

> Just when I was getting used to running editorconfig-checker on each run, another linter... 😆

Did you run npm run dev locally? It should install git hooks where we have a pre-commit hook checking the linting, types etc.
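As a rough sketch of the hook-installing dev script described above: the script names (`format`, `lint`, `check`) and the install mechanics here are assumptions for illustration, not the actual llama.cpp webui setup:

```typescript
// Hypothetical sketch: a dev/prepare script that writes a pre-commit
// git hook running the formatter, linter, and type checks.
import { writeFileSync, chmodSync } from 'node:fs';
import { join } from 'node:path';

// Build the hook script text; the npm script names are assumed.
function buildPreCommitHook(): string {
  return [
    '#!/bin/sh',
    '# Auto-installed by the dev script: run checks before each commit',
    'npm run format -- --check && npm run lint && npm run check',
    '',
  ].join('\n');
}

function installPreCommitHook(repoRoot: string): string {
  const hookPath = join(repoRoot, '.git', 'hooks', 'pre-commit');
  writeFileSync(hookPath, buildPreCommitHook());
  chmodSync(hookPath, 0o755); // git only runs executable hooks
  return hookPath;
}
```

With a hook like this in place, a commit that fails formatting or type checks is rejected locally instead of failing in CI.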

pwilkin commented Dec 2, 2025

> Did you run npm run dev locally? It should install git hooks where we have a pre-commit hook checking the linting, types etc.

Nope, just npm run build :>

allozaur commented Dec 2, 2025

>> Did you run npm run dev locally? It should install git hooks where we have a pre-commit hook checking the linting, types etc.
>
> Nope, just npm run build :>

Busted! 😝 lemme know when you have formatted the code and maybe add a short test video to the PR description? 😊

pwilkin commented Dec 2, 2025

@allozaur looks good 😄

https://youtu.be/Zy2-AdXqNMU

@allozaur allozaur left a comment


@pwilkin just please do one more rebase and static build and let's merge it

@pwilkin pwilkin merged commit c6d1a00 into ggml-org:master Dec 3, 2025
7 checks passed
khemchand-zetta pushed a commit to khemchand-zetta/llama.cpp that referenced this pull request Dec 4, 2025
* Add a couple of file types to the text section

* Format + regenerate index

* Rebuild after rebase
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Dec 4, 2025
* origin/master:
server: strip content-length header on proxy (ggml-org#17734)
server: move msg diffs tracking to HTTP thread (ggml-org#17740)
examples : add missing code block end marker [no ci] (ggml-org#17756)
common : skip model validation when --help is requested (ggml-org#17755)
ggml-cpu : remove asserts always evaluating to false (ggml-org#17728)
convert: use existing local chat_template if mistral-format model has one. (ggml-org#17749)
cmake : simplify build info detection using standard variables (ggml-org#17423)
ci : disable ggml-ci-x64-amd-* (ggml-org#17753)
common: use native MultiByteToWideChar (ggml-org#17738)
metal : use params per pipeline instance (ggml-org#17739)
llama : fix sanity checks during quantization (ggml-org#17721)
build : move _WIN32_WINNT definition to headers (ggml-org#17736)
build: enable parallel builds in msbuild using MTT (ggml-org#17708)
ggml-cpu: remove duplicate conditional check 'iid' (ggml-org#17650)
Add a couple of file types to the text section (ggml-org#17670)
convert : support latest mistral-common (fix conversion with --mistral-format) (ggml-org#17712)
Use OpenAI-compatible `/v1/models` endpoint by default (ggml-org#17689)
webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden (ggml-org#17445)

Successfully merging this pull request may close these issues.

Misc. bug: Regression on file uploads