
Vulnerable release version #294

Closed

d-z-m opened this issue Mar 18, 2024 · 5 comments

d-z-m commented Mar 18, 2024

From commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9 up until commit a4b07c057a553b1ac253051efc3f040351e2eae1, llama.cpp upstream was vulnerable to the heap-based buffer overflow described in the linked CVE.
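For readers unfamiliar with this class of bug, a minimal sketch of the pattern is below. It is not the actual llama.cpp code; the function and field names are hypothetical. The idea is that a length field read from an untrusted model file must be bounded by both the remaining file size and the destination capacity before it is used in a copy, otherwise a crafted file can overflow a heap buffer.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration of the vulnerability class: a size field
 * parsed from an untrusted file drives a memcpy. Without the bounds
 * check, a crafted file claiming a huge length overflows `out`. */
static int read_section(const uint8_t *file, size_t file_len,
                        uint8_t *out, size_t out_cap) {
    if (file_len < 4) return -1;

    uint32_t claimed;                      /* attacker-controlled length */
    memcpy(&claimed, file, 4);             /* assumes little-endian input */

    /* The fix: reject lengths that exceed either the bytes actually
     * present in the file or the capacity of the destination buffer. */
    if (claimed > file_len - 4 || claimed > out_cap) return -1;

    memcpy(out, file + 4, claimed);
    return (int)claimed;
}
```

A file whose header claims more payload than exists is rejected instead of triggering an out-of-bounds write.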

I realize this is fixed on main, but the most recent llamafile release is still vulnerable.

I propose cutting a new release with this patched, so that newcomers to llamafile aren't downloading a version that is vulnerable to exploitation.

Collaborator

jart commented Mar 19, 2024

We anticipated things like this would happen. llamafile supports sandboxing when running in CPU mode on Linux and OpenBSD, and recently started supporting sandboxing on macOS as well. See https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#security for details. We'll be doing an upstream synchronization soon.

upost commented Mar 22, 2024

I'm developing a game using llamafile, and on one tester's machine, McAfee quarantined the exe as a threat (of course making the game unusable). This might be related to this vulnerability, so a fix would be greatly appreciated.

Collaborator

jart commented Mar 22, 2024

@upost Windows Defender is the only virus scanner we support. It should never flag the release binaries on our GitHub releases page; if it does, please file an issue so I can fix it. With others like McAfee, I can't help you.

@jart jart added the bug label Mar 22, 2024
@jart jart closed this as completed in c0208c1 Mar 22, 2024
Author

d-z-m commented Mar 23, 2024

Just wanted to clarify: you already had the fix on main (although I appreciate the update nonetheless, as I think it contains support for IQ quants). I was suggesting cutting a new release with the fix compiled in, since the current release was built from a llama.cpp upstream that was vulnerable.

Collaborator

jart commented Mar 31, 2024

We have a new llamafile 0.7 release out which includes a sync incorporating the upstream fix, so I believe this issue is settled. Thank you for bringing the CVE to my attention. Enjoy using llamafile!
