
'./quantize' is not recognized as the name of a cmdlet, function, script file, or operable program #241

Open
taaalha opened this issue Mar 23, 2023 · 25 comments · May be fixed by #296

taaalha commented Mar 23, 2023

[System.Console]::OutputEncoding=[System.Console]::InputEncoding=[System.Text.Encoding]::UTF8; ./quantize C:\Users\dalai\llama\models\7B\ggml-model-f16.bin C:\Users\dalai\llama\models\7B\ggml-model-q4_0.bin 2

./quantize : The term './quantize' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:96
+ ... sole]::InputEncoding=[System.Text.Encoding]::UTF8; ./quantize C:\User ...
+                                                        ~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (./quantize:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

What could be the solution to this?

I was trying to install the 7B model via npx dalai llama install 7B on Windows 10.

@dennis-gonzales

same error here.

MarPan commented Mar 23, 2023

In my case, there was an earlier error while running CMake in llama. For some reason it was expecting Visual Studio 15 2017 (and couldn't find it). So I cleared CMakeCache and CMakeFiles and manually ran cmake -G "Visual Studio 16 2019" .

Then I reran the npx command, but I got stuck on ./quantize anyway, this time with the error: Cannot create process, error code: 267
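
For reference, the cache reset described above as a PowerShell sketch (it assumes you run it from the llama directory that holds the CMake files, and the generator name must match your installed Visual Studio):

Remove-Item .\CMakeCache.txt -ErrorAction SilentlyContinue              # drop CMake's cached generator choice
Remove-Item .\CMakeFiles -Recurse -Force -ErrorAction SilentlyContinue
cmake -G "Visual Studio 16 2019" .                                      # re-run with an explicit generator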

lszoszk commented Mar 23, 2023

Same problem here (Windows 10). I had no issues with installing alpaca though.

@changquan

Same problem here

@Moralizing

I just fixed this on Windows Server 2019 (it also works on Windows 11); I had to quantize the model manually.

In a command line as administrator, cd to C:\Users\YOURUSER\dalai\llama\build\bin\Release,

then run ./quantize C:\Users\YOURUSER\dalai\llama\models\7B\ggml-model-f16.bin C:\Users\YOURUSER\dalai\llama\models\7B\ggml-model-q4_0.bin 2

This will quantize the llama model. I now have the model showing in the dropdown for both llama and alpaca on Windows.

taaalha (Author) commented Mar 23, 2023

@Moralizing Thanks. This worked like a charm.

However, now there's another issue, #245: basically nothing happens when you give it a prompt.

@taaalha taaalha closed this as completed Mar 23, 2023
aliasfoxkde commented Mar 24, 2023

I had the same issue on both Windows 10 and 11 (different machines, both VS 2022). I think it's an environment variable issue, or it should have changed directories to where the binary is. Basically, it appears to have had an issue finding quantize.exe (also, I believe "./quantize" is invalid syntax for Windows). I just searched for "quantize" in the "%userprofile%\dalai" directory and replaced ./quantize with the full path to the EXE, and everything worked.
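
A minimal PowerShell sketch of that search step, assuming the default dalai home directory:

Get-ChildItem "$env:USERPROFILE\dalai" -Recurse -Filter quantize.exe | Select-Object FullName   # prints the full path to substitute for ./quantize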

@bradharms

@taaalha this sounds like a bug in the install scripts, so I think the issue should remain open until it's resolved; just because there are workarounds doesn't mean there isn't a problem.

@m4r71n85

> I just searched for "quantize" in the "%userprofile%\dalai" directory and replaced ./quantize with the full path to the EXE and everything worked.

I found it only under "%userprofile%\dalai\llama.devops\tools.sh".
After removing the "./" before the quantize command, it is still called the same way "./quantized".
(Also, why is this issue closed?)

@taaalha taaalha reopened this Mar 24, 2023
mruthes commented Mar 24, 2023

same error here.

PS C:\dalai\llama\build\Release> [System.Console]::OutputEncoding=[System.Console]::InputEncoding=[System.Text.Encoding]::UTF8; ./quantize c:\dalai\llama\models\7B\ggml-model-f16.bin c:\dalai\llama\models\7B\ggml-model-q4_0.bin 2
./quantize : The term './quantize' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:96

+ ... sole]::InputEncoding=[System.Text.Encoding]::UTF8; ./quantize c:\dala ...
+                                                        ~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (./quantize:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

@aliasfoxkde

> I found it only under "%userprofile%\dalai\llama.devops\tools.sh". After removing the "./" before the quantize command, it is still called the same way "./quantized". (Also, why is this issue closed?)

I tried just 'quantized' and it couldn't find the binary, but providing the full path worked. But now I'm having a new (related) issue with the WebUI. With "Debug" enabled, it appears that the backslashes in the Windows paths are being removed when PowerShell is called. I looked at the code but can't track down where this is happening; the regex and whatnot seem fine. I've had this issue on two machines, and I'm going to check whether it happens on a friend's PC.

I might just move everything to Linux, which is fine and probably for the best.

RzNmKX commented Mar 24, 2023

> (also I believe "./quantize" is invalid syntax for Windows)

./command is actually how PowerShell prefers commands; it will not run programs from the current directory otherwise.
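
To illustrate (a sketch; the path assumes quantize.exe landed in the default build output folder), PowerShell, unlike cmd, will not run a program from the current directory by bare name:

PS> Set-Location C:\Users\YOURUSER\dalai\llama\build\bin\Release
PS> quantize.exe      # CommandNotFoundException: the current directory is not searched
PS> .\quantize.exe    # works: PowerShell requires an explicit relative path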

@RaposaRale

Navigate to the 'build/bin/Release' folder and then either open the terminal there or copy the necessary files to the 'build/Release' folder.

@RIAZAHAMMED

> Navigate to the 'build/bin/Release' folder and then either open the terminal there or copy the necessary files to the 'build/Release' folder.

This will fix the issue.

@chrismark

Had this problem with llama; alpaca worked fine for me and I was able to play with it.
I did mine using the Docker route. I had tried the non-Docker way first, but it uses the C drive and there was no free space left there; I learned about the --home path later.
With llama the Docker way fails while converting to ggml. I searched here and found the --home /path option for the non-Docker way. With that it successfully converted to ggml, but it failed when calling quantize: no quantize command and no /bin/releases directory. So I went into the Docker container, found quantize there, ran it from there, and then copied the resulting file to /models/llama/models/7B.

@dgasparri

It's a bug in the install script.

exec: ./quantize C:\Users\XXX\dalai\llama\models\7B\ggml-model-f16.bin C:\Users\XXX\dalai\llama\models\7B\ggml-model-q4_0.bin 2 in C:\Users\XXX\dalai\llama\build\Release

The install script is running the command from C:\Users\XXX\dalai\llama\build\Release, but quantize.exe was built in C:\Users\XXX\dalai\llama\build\bin\Release (Microsoft Visual Studio 2022).

I opened a PowerShell in C:\Users\XXX\dalai and ran the command

.\llama\build\bin\Release\quantize.exe C:\Users\XXX\dalai\llama\models\7B\ggml-model-f16.bin C:\Users\XXX\dalai\llama\models\7B\ggml-model-q4_0.bin 2

It worked.

FoxPopBR commented Apr 5, 2023

For me, what solved the problem was using cmd.
The command used the expression ./quantize to launch the quantize.exe application, but inside cmd you run an .exe program by typing just its name. So removing ./ and typing only the app name ran it successfully in cmd:
quantize C:\dalai\llama\models\7B\ggml-model-f16.bin C:\dalai\llama\models\7B\ggml-model-q4_0.bin 2

akjoshi commented Apr 9, 2023

Right after the build process, while the model is being downloaded, do these two steps:

  1. copy all files from .\llama\build\bin\Release\ to .\llama\build\Release\
  2. copy .\llama\build\Release\llama.exe to .\llama\build\Release\main.exe
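
In PowerShell, those two steps might look like this (a sketch, assuming it is run from the dalai home directory and that the target folder may not exist yet):

New-Item -ItemType Directory -Force .\llama\build\Release | Out-Null              # make sure the expected folder exists
Copy-Item .\llama\build\bin\Release\* .\llama\build\Release\ -Force               # step 1: mirror the bin output
Copy-Item .\llama\build\Release\llama.exe .\llama\build\Release\main.exe -Force   # step 2: provide main.exe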

The reason is that some versions of VC++ write the build output to .\llama\build\bin\Release\ instead of .\llama\build\Release\

Once the model is downloaded, quantize.exe and the subsequent steps will run without issues.

final: $> npx dalai serve --home .
(runs on localhost:3000)

@Mathuzala

Can't quantize the ggml-model-q4_0.bin file for the 13B version:

llama_model_quantize: loading model from 'ggml-model-q4_0.bin'
llama_model_quantize: failed to open 'ggml-model-q4_0.bin' for reading
main: failed to quantize model from 'ggml-model-q4_0.bin'

But the others, ggml-model-f16.bin and ggml-model-f16.bin.1, work when quantized.

I'm trying to quantize the models in this folder:
C:\Users\MYUSER\dalai\llama\build\Release>

@Anthony-Breneliere

In my case I have Visual Studio 2022 Professional installed.

There is an issue to correct in the vcxproj generation: the output folder for binaries (quantize.exe, llama.exe, ...) has a 'bin' directory added, which is not expected by the dalai installation script.


anom35 commented May 4, 2023

I also have VS 2022, but I don't have quantize.

@Anthony-Breneliere

To fix the issue, in CMakeLists.txt:

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin) => set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})

anom35 commented May 5, 2023

I found a very simple way using just the file manager: once you have launched npx ...... and it is downloading 7B, copy/paste the files from C:\Users\Utilisateur\dalai\llama\build\bin\Release into C:\Users\Utilisateur\dalai\llama\build\Release, and the rest of the installation will continue without problems ;)

@syedmaaz9905

After the quantization runs, the model is still not quantized:
llama_model_quantize: loading model from 'F:\Dalai\llama\models\7B\ggml-model-f16.bin'
llama_model_quantize: n_vocab = 32000
llama_model_quantize: n_ctx = 512
llama_model_quantize: n_embd = 4096
llama_model_quantize: n_mult = 256
llama_model_quantize: n_head = 32
llama_model_quantize: n_layer = 32
llama_model_quantize: f16 = 1
This output appears and then it finishes without quantizing; only the file named ggml-model-q4_0.bin exists, at 0 KB.

@LizsDing

> The install script is running the command from C:\Users\XXX\dalai\llama\build\Release, but quantize.exe was built in C:\Users\XXX\dalai\llama\build\bin\Release (Microsoft Visual Studio 2022).

Found the bug and fixed it on my end on a Windows machine:
Line 120 of node_modules\dalai\llama.js

change from
const bin_path = platform === "win32" ? path.resolve(this.home, "build", "Release") : this.home

to
const bin_path = platform === "win32" ? path.resolve(this.home, "build", "bin", "Release") : this.home
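
Before patching, a quick check (a sketch, assuming the default dalai home) of which layout your Visual Studio actually produced:

Test-Path "$env:USERPROFILE\dalai\llama\build\Release\quantize.exe"       # the path dalai expects
Test-Path "$env:USERPROFILE\dalai\llama\build\bin\Release\quantize.exe"   # the path some VS 2019/2022 builds produce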
