
Code Llama 7b does not work on MacBook M1 #95

Open
AbhiPawar5 opened this issue Sep 6, 2023 · 6 comments

Comments

@AbhiPawar5 commented Sep 6, 2023

Hi team, awesome work making these models run locally :)

I see the following connection-refused error when I try to run Code Llama 7b on a MacBook M1 Pro.
Command:
./run-mac.sh --model code-7b

Output:
Xcode is installed at /Library/Developer/CommandLineTools
Conda is installed.
Conda environment 'llama-gpt' already exists.
Activating the conda environment 'llama-gpt'...
ERROR: Pipe to stdout was broken
Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='utf-8'>
BrokenPipeError: [Errno 32] Broken pipe
llama-cpp-python is installed.
llama-cpp-python version is 0.1.80.
./models/code-llama-7b-chat.gguf model found.
Initializing server with:
Batch size: 2096
Number of CPU threads: 8
Number of GPU layers: 1
Context window: 8192
GQA:
/Users/abhishek.pawar/miniconda3/envs/llama-gpt/lib/python3.11/site-packages/pyd
[+] Building 1.0s (17/17) FINISHED docker:desktop-linux
=> [llama-gpt-ui-mac internal] load .dockerignore 0.0s
=> => transferring context: 82B 0.0s
=> [llama-gpt-ui-mac internal] load build definition from no-wait.Docker 0.0s
=> => transferring dockerfile: 753B 0.0s
=> [llama-gpt-ui-mac internal] load metadata for docker.io/library/node: 0.9s
=> [llama-gpt-ui-mac base 1/3] FROM docker.io/library/node:19-alpine@sha 0.0s
=> [llama-gpt-ui-mac internal] load build context 0.0s
=> => transferring context: 13.73kB 0.0s
=> CACHED [llama-gpt-ui-mac base 2/3] WORKDIR /app 0.0s
=> CACHED [llama-gpt-ui-mac base 3/3] COPY package*.json ./ 0.0s
=> CACHED [llama-gpt-ui-mac dependencies 1/1] RUN npm ci 0.0s
=> CACHED [llama-gpt-ui-mac production 3/8] COPY --from=dependencies /ap 0.0s
=> CACHED [llama-gpt-ui-mac build 1/2] COPY . . 0.0s
=> CACHED [llama-gpt-ui-mac build 2/2] RUN npm run build 0.0s
=> CACHED [llama-gpt-ui-mac production 4/8] COPY --from=build /app/.next 0.0s
=> CACHED [llama-gpt-ui-mac production 5/8] COPY --from=build /app/publi 0.0s
=> CACHED [llama-gpt-ui-mac production 6/8] COPY --from=build /app/packa 0.0s
=> CACHED [llama-gpt-ui-mac production 7/8] COPY --from=build /app/next. 0.0s
=> CACHED [llama-gpt-ui-mac production 8/8] COPY --from=build /app/next- 0.0s
=> [llama-gpt-ui-mac] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:4187e01c6c862f9db9160be158ee74bb0151edec0a84a 0.0s
=> => naming to docker.io/library/llama-gpt-llama-gpt-ui-mac 0.0s
[+] Running 2/1
✔ Network llama-gpt_default Created 0.0s
✔ Container llama-gpt-llama-gpt-ui-mac-1 Created 0.0s
Attaching to llama-gpt-llama-gpt-ui-mac-1
llama-gpt-llama-gpt-ui-mac-1 |
llama-gpt-llama-gpt-ui-mac-1 | > ai-chatbot-starter@0.1.0 start
llama-gpt-llama-gpt-ui-mac-1 | > next start
llama-gpt-llama-gpt-ui-mac-1 |
llama-gpt-llama-gpt-ui-mac-1 |
llama-gpt-llama-gpt-ui-mac-1 | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
llama-gpt-llama-gpt-ui-mac-1 | making request to http://host.docker.internal:3001/v1/models
llama-gpt-llama-gpt-ui-mac-1 | [TypeError: fetch failed] {
llama-gpt-llama-gpt-ui-mac-1 | cause: [Error: connect ECONNREFUSED 192.168.65.254:3001] {
llama-gpt-llama-gpt-ui-mac-1 | errno: -111,
llama-gpt-llama-gpt-ui-mac-1 | code: 'ECONNREFUSED',
llama-gpt-llama-gpt-ui-mac-1 | syscall: 'connect',
llama-gpt-llama-gpt-ui-mac-1 | address: '192.168.65.254',
llama-gpt-llama-gpt-ui-mac-1 | port: 3001
llama-gpt-llama-gpt-ui-mac-1 | }
llama-gpt-llama-gpt-ui-mac-1 | }
[The same "fetch failed" / ECONNREFUSED block repeats several more times as the UI retries.]
^C./run-mac.sh: line 250: 32853 Segmentation fault: 11 python3 -m llama_cpp.server --n_ctx $n_ctx --n_threads $n_threads --n_gpu_layers $n_gpu_layers --n_batch $n_batch --model $MODEL --port 3001
Stopping docker-compose...
Gracefully stopping... (press Ctrl+C again to force)
Aborting on container exit...
[+] Stopping 1/1
✔ Container llama-gpt-llama-gpt-ui-mac-1 Stopped 0.1s
canceled
[+] Running 2/0
✔ Container llama-gpt-llama-gpt-ui-mac-1 Removed 0.0s
✔ Network llama-gpt_default Removed 0.0s
Stopping python server...
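For what it's worth, the ECONNREFUSED above only means nothing is listening on port 3001: the `python3 -m llama_cpp.server` process segfaulted, so the UI container's fetch has nothing to connect to. A minimal, repo-agnostic way to confirm whether the backend ever came up (the helper name and defaults here are mine, not part of run-mac.sh):

```python
import socket

def backend_is_up(host: str = "127.0.0.1", port: int = 3001, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers ECONNREFUSED, timeouts, unreachable hosts, etc.
        return False

if __name__ == "__main__":
    # In this issue's logs the llama_cpp.server process crashed,
    # so a check like this would report False until the backend is fixed.
    print("backend listening:", backend_is_up())
```

If this reports False right after the script claims the server is initializing, the problem is the Python backend crashing, not the Docker networking between the UI container and the host.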

@Nisse123 commented Sep 6, 2023

I think it is the same error; the repo is not working right now because of this issue:
abetlen/llama-cpp-python#520

@Daniel1989

same issue

@fangdajiang

same issue

@100tomer commented Nov 4, 2023

Any solution?

@automatumkilian

same issue here

@dylan-sh

same

7 participants