cmake .. error #1

Closed
fritol opened this issue Apr 29, 2023 · 7 comments
@fritol commented Apr 29, 2023

On Windows (I had no problems compiling llama.cpp), but here, upon running cmake .., I get:

```
H:\LlamaGPTJ-chat\build>cmake ..
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19045.
-- The C compiler identification is MSVC 19.29.30037.0
-- The CXX compiler identification is MSVC 19.29.30037.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30037/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30037/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
CMake Error at CMakeLists.txt:94 (add_subdirectory):
  The source directory

    H:/LlamaGPTJ-chat/llmodel/llama.cpp

  does not contain a CMakeLists.txt file.

-- Configuring incomplete, errors occurred!
```

@kuvaus (Owner) commented Apr 30, 2023

Thank you for this!

I am on Mac, so I haven't done much testing on Windows. Reporting Windows issues is really helpful!

I used to include Unix headers like <unistd.h> because I was building on Mac, which meant you had to install MinGW to be able to compile. I removed the Unix headers this weekend and now only use the normal <windows.h> on Windows.

Now it should compile without extra steps! At least the most current build, v0.1.2, should work.

Since you got llama.cpp working I doubt this is the cause, but in case the error was because you did not have the whole llama.cpp subfolder, check that you get the llama.cpp submodule when downloading the repo:

```
git clone --recurse-submodules https://github.com/kuvaus/LlamaGPTJ-chat
```

Then there should be a folder called LlamaGPTJ-chat/llmodel/llama.cpp. It needs to be in that subfolder for the build to work.
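If the repo was already cloned without --recurse-submodules, the standard git way to fetch the submodule afterwards (plain git, nothing project-specific) is:

```
# Run from the repository root: pulls llmodel/llama.cpp (and any nested
# submodules) so CMake's add_subdirectory() can find its CMakeLists.txt.
git submodule update --init --recursive
```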

Do let me know if you still have issues building.

Thanks again!

@wrh commented May 11, 2023

Hi, I've been trying to follow this path as well, but got a different error.
-> Building for Visual Studio 2022 (17)
-> Made sure to use git clone --recurse-submodules https://github.com/kuvaus/LlamaGPTJ-chat

Running cmake --build . --parallel results in two errors:
```
[...]
C:\WRDATA\Code\LlamaGPTJ-chat\llmodel\llama.cpp\llama.cpp(2703,41): warning C4267: 'return': conversion from 'size_t' to 'int', possible loss of data [C:\WRDATA\Code\LlamaGPTJ-chat\build\llmodel\llama.cpp\llama.vcxproj]
llama.vcxproj -> C:\WRDATA\Code\LlamaGPTJ-chat\build\llmodel\llama.cpp\Debug\llama.lib
Building Custom Rule C:/WRDATA/Code/LlamaGPTJ-chat/llmodel/CMakeLists.txt
gptj.cpp
C:\WRDATA\Code\LlamaGPTJ-chat\llmodel\gptj.cpp(15,10): fatal error C1083: Cannot open include file: 'unistd.h': No such file or directory [C:\WRDATA\Code\LlamaGPTJ-chat\build\llmodel\llmodel.vcxproj]
llamamodel.cpp
C:\WRDATA\Code\LlamaGPTJ-chat\llmodel\llamamodel.cpp(16,10): fatal error C1083: Cannot open include file: 'unistd.h': No such file or directory [C:\WRDATA\Code\LlamaGPTJ-chat\build\llmodel\llmodel.vcxproj]
common.cpp
C:\WRDATA\Code\LlamaGPTJ-chat\llmodel\llama.cpp\examples\common.cpp(279,44): warning C4101: 'e': unreferenced local variable [C:\WRDATA\Code\LlamaGPTJ-chat\build\llmodel\llmodel.vcxproj]
[...]
```

Any idea what's going on here?

@kuvaus (Owner) commented May 12, 2023

Hi,

That is my fault. I should have updated this issue!

The llmodel backend and the upcoming gpt4all-backend (which I hope to switch to soonish) use <unistd.h>, so I recently switched back to MinGW on Windows. Otherwise I'd have to change #include <unistd.h> to #include <windows.h> and add some #ifdefs every time there's an update to the backend.

I hope this works on Windows:

```
mkdir build
cd build
cmake .. -G "MinGW Makefiles"
cmake --build . --parallel
```

You can get MinGW from MSYS2, but I think Visual Studio has support too; at least these docs show a MinGW-w64 target.
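For reference, in an MSYS2 shell the MinGW-w64 toolchain and CMake can be installed with pacman; the package names below assume the 64-bit MINGW64 environment:

```
# MSYS2: install the MinGW-w64 GCC toolchain and a matching CMake
pacman -S mingw-w64-x86_64-toolchain mingw-w64-x86_64-cmake
```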

Let me know if this is too much of a hassle. Also let me know if you really need the MSVC target; I could try to switch back from MinGW... I know it would be easier for Windows users.

@wrh commented May 15, 2023

FYI, I tried to get a Cygwin + MinGW/MSYS2 toolchain working on Windows this weekend but got stuck. Despite trying all three versions of cmake I'm aware of (standalone, inside VS, or inside MSYS2), different MinGW installs, and both VS 2022 and VS Code, haha, I never got beyond either cmake telling me that there's no "MinGW Makefiles" build target, or VS 2022 telling me it doesn't know about unistd.h. :)

This was a lesson, and I'm not asking you to fix it. Instead I think I'll wait for the REST API; that will also make it easier to switch between different LLM providers and between local/hosted setups in the future. Thanks anyway for your help, man. Sorry I wasn't able to improve things!

@kuvaus (Owner) commented May 15, 2023

Oh wow!

I didn't know it was such a hurdle. I'm sorry. :(

Let me see if I can make the backend Windows MSVC compatible and submit a pull request to the backend. If that gets approved, I can then once again switch back to using <windows.h>.

This is a bit of back-and-forth, but I do think it would be generally useful to have it easily compile with Visual Studio. I just wanted to get the new backend working first, because that actually improved things a LOT.

Will update this issue if I can get it working again...

I agree that if you're going to need different LLM providers, then a REST API will be a better solution.

@kuvaus (Owner) commented May 15, 2023

EDIT:
This should now just work from the main source.

@kuvaus (Owner) commented May 17, 2023

v0.2.0 comes with big changes. There have also been updates during the past few versions:

  • You can save chat logs with --save_log (see the sketch after this list).
  • The first response is a tiny bit faster if you turn on the no-animation flag.
  • Prints whether your computer supports AVX1/AVX2 at startup.
  • Backend updated to v0.1.1. Thanks to the GPT4All people; they were super fast on this!
  • Slightly better README.
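A quick usage sketch of those flags (the binary name, model file, and exact flag spellings here are assumptions based on this thread; check the README or --help for the real options):

```
# Hypothetical invocation: load a model, save the chat log, skip the animation
./chat -m ggml-gpt4all-j.bin --save_log chat.log --no-animation
```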

Big thanks to everyone so far! You have been hugely helpful. :)
