Getting gibberish output when running on llama.cpp #24

Closed
luungoc2005 opened this issue Sep 15, 2023 · 33 comments

@luungoc2005

luungoc2005 commented Sep 15, 2023

Hi, I see the mention of running this model on llama.cpp in the README. Did you manage to get it to run and quantize with good output? I'm trying to evaluate whether this model can be used for speculative decoding for Llama 2 7B.

With the first checkpoint, https://huggingface.co/PY007/TinyLlama-1.1B-step-50K-105b, there seems to be some issue converting to GGUF:

python convert.py ../TinyLlama-1.1B-step-50K-105b/

./main -m ../TinyLlama-1.1B-step-50K-105b/ggml-model-f32.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -ngl 0 --temp 0

This results in the following. Both f16 and f32 produce the same thing, and adding a <s> token at the beginning didn't help either:

(...)
Building a website can be done in 10 simple steps:\nStep 1:12000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
(...)

Running with huggingface/torch gives a more reasonable result, although it quickly becomes repetitive:

<s> Building a website can be done in 10 simple steps:
Step 1: Create a website.
Step 2: Add a logo.
Step 3: Add a contact form.
Step 4: Add a blog.
Step 5: Add a social media links.
Step 6: Add a contact page.
Step 7: Add a contact form.
Step 8: Add a contact form.
Step 9: Add a contact form.

Not sure where this mismatch is coming from

Thanks

@VatsaDev

VatsaDev commented Sep 15, 2023

@luungoc2005 you can find GGUF files at https://huggingface.co/Green-Sky/TinyLlama-1.1B-step-50K-105b-GGUF.
It's pretty garbage on GGUF right now, but it's also an incomplete model, same for vLLM. Wait for a better checkpoint or the final release, at least the 1T-token checkpoint, so it has seen all the data once. There should be a checkpoint on the 16th, I think.

@Green-Sky
Contributor

Green-Sky commented Sep 15, 2023

> at least the 1T-token checkpoint, so it has seen all the data once

the learning rate at that point is still relatively high, so it won't have learned finer details and parameter values are still shifting a lot. but you are not entirely wrong here.

> It's pretty garbage on GGUF right now

the model itself is not done. can't do anything about that but wait and see :)

edit: very much anticipating the 500B token checkpoint tomorrow, will see where the journey takes us :)

@Green-Sky
Contributor

@luungoc2005 It looks like you want the \n to be interpreted as a newline. Use -e to do that, or use an actual newline.
Also, --temp 0 sets the sampling temperature to its minimum, which makes it always choose the "most likely" token (equivalent to --top-k 1).
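
For reference, a minimal sketch of why --temp 0 behaves like greedy / --top-k 1 sampling, assuming a plain softmax sampler (illustrative only, not llama.cpp's actual sampling code):

```python
# Illustrative sketch only (NOT llama.cpp's sampler): temperature 0 degenerates to greedy argmax.
import numpy as np

def sample_token(logits: np.ndarray, temp: float) -> int:
    if temp <= 0.0:
        return int(np.argmax(logits))               # greedy: always the most likely token
    probs = np.exp((logits - logits.max()) / temp)  # temperature-scaled softmax
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

print(sample_token(np.array([1.0, 3.2, 0.5]), temp=0.0))  # always prints 1
```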

running your command i see the same behavior, but with supplying -e, i can see:

$ bin/main -m ../models/TinyLlama-1.1B-step-50K-105b-GGUF/ggml-model-f32.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 100 -e --seed 1694
......
 Building a website can be done in 10 simple steps:
Step 1: What we do you workforce is a person in it's doing an all the building a little about me to the other to well at it can't that.
What kind of a whole, what and you'lls what your home, I like my owners how many things with them as the way the I have? You know... 9s are:s or.
he asked is the other people know is I don's is to. . A well, and

which probably means there is indeed some issue with the tokenizer/model.

@VatsaDev

VatsaDev commented Sep 15, 2023

> anticipating the 500B token checkpoint tomorrow

same :)

Yep, I see the point about the learning rate and the incomplete model.
@Green-Sky, this issue with massive number generation has affected all the GGUF models; I've seen the same issue on the 4-bit and 8-bit quants. However, it hasn't happened on GPU.

Is it a tokenizer issue or a GGUF issue?

@jzhang38
Owner

Stay tuned for today's 500B release!

@luungoc2005
Author

> @luungoc2005 It looks like you want the \n to be interpreted as a newline. Use -e to do that, or use an actual newline. Also, --temp 0 sets the sampling temperature to its minimum, which makes it always choose the "most likely" token (equivalent to --top-k 1).
>
> running your command i see the same behavior, but with supplying -e, i can see: [command and output quoted above]
>
> which probably means there is indeed some issue with the tokenizer/model.

Hmm, that's odd; I still get pretty bad output even with the exact same params.

Building a website can be done in 10 simple steps:
Step 1: 3145 to start to 2, if you know that all the one-2718867themes from what's it's all with the most, the same things
1500
25184.
910324115
5661s3.6247 208s (at which point I ctrl-C'ed)

My overall assumptions would be:

  • top-1 (temperature 0) shouldn't be a problem here, if anything it makes the output deterministic
  • gguf f32 shouldn't be different from huggingface (since no quantization)

And since huggingface gives a reasonable output (more reasonable than even @Green-Sky's gguf output), it's probably an issue with llama.cpp (could be because llama.cpp itself has a lot of hardcoding for the base Llama 7B model).

Looking forward to the new checkpoint to see if the issue gets any better

@jzhang38
Owner

jzhang38 commented Sep 16, 2023

"gguf f32 shouldn't be different from huggingface (since no quantization)"
Exactly. We are getting the same gibberish output on llama.cpp with our 104B checkpoint... Still trying to figure out the reason.

@Green-Sky
Contributor

@jzhang38 you mean 503B? Also, the links in the readme 404.

@jzhang38
Owner

I mean 105B. Haven't tried 503B yet. I have updated the link. Thanks for pointing that out.

@VatsaDev

The 503B chat model on vLLM is much better. Haven't seen the new GGUFs yet.

### Human: Give me a hello world in python? 
### Assistant:' 'Sure! Here is a simple "hello, world" program in Python:\n\n```python\nprint("Hello, World!")\n```\n\nSave this code in a file with a .py extension (for example, hello.py). Then, run the code by typing python hello.py into your terminal. The output should be "Hello, World!"
### Human: Now, create a function that returns a hello world using the print function. You can call this function from the terminal using python and pass it the name of the .py file that you just created.
### Assistant: Sure! Here\'s how you can create a function in Python to print a string and return its value:\n\n```python\ndef print_hello():\n    return "Hello, World!"\n\nprint_hello()\n```\n\nNow, when you run the code above, it will print "Hello, World!" to the console and also return that value to the'

@Green-Sky
Contributor

Green-Sky commented Sep 16, 2023

I did some more testing, and it looks like the first couple of tokens seem to be correct, but your prompt is already too long.
e.g.

$ main -m models/TinyLlama-1.1B-step-50K-105b/ggml-model-f16.gguf -p "Hi, my name" --temp 0
...
 Hi, my name is a very important.
Askin the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the name of the^C

edit: which means it is probably not a tokenizer issue. Also, the fact that it is the (exact) same tokenizer model as llama2 basically removes the tokenizer from the suspect list.

edit2: I remembered I tested llama.cpp LoRA finetuning with tinyllama, and the output of the model with the LoRA applied seems more coherent, but still quickly descends into chaos.

thoughts3: since f16 and q8_0, generated from f32 (or quantized in convert.py), produce the same kind of bugged output, I think the initial conversion might be bugged.

@tic-top

tic-top commented Sep 20, 2023

There must be something wrong with wk. I have checked the intermediate output of the key/value cache and found that the v cache is the same, but the k cache is totally different.

hf kv cache

pos:0 layer:0 k_head: 0
[-0.025054931640625, -0.16552734375, -0.01611328125, -0.017608642578125, -0.1715087890625]
pos:0 layer:0 v_head: 0
[-0.0006232261657714844, 0.0031871795654296875, -0.001255035400390625, -0.00691986083984375, 0.0015745162963867188]
pos:0 layer:0 k_head: 1
[-0.10040283203125, -0.058258056640625, -0.04949951171875, 0.01540374755859375, 0.0694580078125]
pos:0 layer:0 v_head: 1
[-0.0012483596801757812, -0.003940582275390625, -0.0029850006103515625, -0.0014734268188476562, 0.007114410400390625]
pos:0 layer:0 k_head: 2
[0.301513671875, 0.283935546875, 0.25244140625, 0.385009765625, 0.391845703125]
pos:0 layer:0 v_head: 2
[0.0005435943603515625, 0.00131988525390625, -0.0009245872497558594, 0.002666473388671875, 0.000408172607421875]

llama.c kv_cache (llama.c has almost the same converter as llama.cpp)

pos:0 layer:0 k_head:0
[-0.025109, -0.118275, -0.165686, -0.231919, -0.016185, ]
pos:0 layer:0 v_head:0
[-0.000623, 0.003188, -0.001255, -0.006919, 0.001574, ]
pos:0 layer:0 k_head:1
[-0.100462, -0.029795, -0.058325, -0.031617, -0.049527, ]
pos:0 layer:0 v_head:1
[-0.001248, -0.003941, -0.002986, -0.001474, 0.007119, ]
pos:0 layer:0 k_head:2
[0.301691, -0.345232, 0.284151, 0.111524, 0.252517, ]
pos:0 layer:0 v_head:2
//forget to copy
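
For context, here is a rough sketch of how the HF-side numbers above could be reproduced with transformers; the checkpoint id, prompt, and slicing are assumptions (this is not tic-top's actual script):

```python
# Hypothetical reproduction sketch (assumed checkpoint and prompt, not the original script).
# transformers' LLaMA cache stores per-layer (key, value) tensors of shape
# [batch, n_kv_heads, seq_len, head_dim]; TinyLlama has 4 KV heads with head_dim 64.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

ids = tok("Hi, my name", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, use_cache=True)

k, v = out.past_key_values[0]  # layer 0 cache
for head in range(3):
    print(f"pos:0 layer:0 k_head: {head}", k[0, head, 0, :5].tolist())
    print(f"pos:0 layer:0 v_head: {head}", v[0, head, 0, :5].tolist())
```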

@magician-blue

I have found a way to fix it.
The convert part and the rope part in llama.c need changing.
[screenshots of the two changes]

@magician-blue

magician-blue commented Sep 20, 2023

tinyllama's rope is different from llama.
Besides, the export.py of llama.c is actually wrong, but nobody realized it because we never used it to convert a huggingface GQA model.
I will try to make it work in llama.cpp.

@magician-blue

My test is based on PY007/TinyLlama-1.1B-intermediate-step-240k-503b.

@magician-blue

> There must be something wrong with wk. I have checked the intermediate output of the key/value cache and found that the v cache is the same, but the k cache is totally different. [kv cache dumps quoted above]

Yes, you're right. The huggingface converter parts of llama.c and llama.cpp are almost the same.

@Green-Sky
Contributor

@magician-blue

I removed the permute.

@magician-blue

I'll make a pull request this weekend.

@jzhang38
Owner

> tinyllama's rope is different from llama.

@magician-blue This is something that I do not know. May I ask where the difference is?

@magician-blue

> tinyllama's rope is different from llama.
>
> @magician-blue This is something that I do not know. May I ask where the difference is?

Sorry for the confusion. What I mean is that TinyLlama-1.1's RoPE is different from that of llama2.c and llama2.mojo.
[screenshot comparing the two RoPE implementations]
I guess llama.cpp's implementation is the same as TinyLlama's.

So far I have only found a way to make it work on llama2.c and llama2.mojo (by removing the permutation and modifying the RoPE).
Since I'm not familiar with the details of llama.cpp, I'll check it later.

@magician-blue

magician-blue commented Sep 23, 2023

HaHa!
Only two things need to change. First, remove the permute part at lines 983 and 987 of convert.py.

Second, change the RoPE part at lines 2568 and 2572 of llama.cpp from mode 0 to mode 2, i.e. from

struct ggml_tensor * Kcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpk, n_embd_head, n_head_kv, N), n_past, n_embd_head, 0, 0, freq_base, freq_scale);
struct ggml_tensor * Qcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpq, n_embd_head, n_head, N),    n_past, n_embd_head, 0, 0, freq_base, freq_scale);

to

struct ggml_tensor * Kcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpk, n_embd_head, n_head_kv, N), n_past, n_embd_head, 2, 0, freq_base, freq_scale);
struct ggml_tensor * Qcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, tmpq, n_embd_head, n_head, N),    n_past, n_embd_head, 2, 0, freq_base, freq_scale);

With these two changes, the chat model produces sensible output:

 ./main -m ./models/chat/ggml-model-q4_0.gguf \
        -n 512 --color --temp 0 -e \
        -p "<|im_start|>user\nExplain huggingface?<|im_end|>\n<|im_start|>assistant\n"
Log start
main: build = 1262 (7eb4117)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1695483291
llama_model_loader: loaded meta data with 20 key-value pairs and 201 tensors from ./models/chat/ggml-model-q4_0.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  2048, 32003,     1,     1 ]
llama_model_loader: - tensor    1:              blk.0.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor    2:              blk.0.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor    3:              blk.0.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor    4:         blk.0.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor    5:            blk.0.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor    6:              blk.0.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor    7:            blk.0.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor    8:           blk.0.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor    9:            blk.0.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   10:              blk.1.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   11:              blk.1.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   12:              blk.1.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   13:         blk.1.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   14:            blk.1.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   15:              blk.1.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   16:            blk.1.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   17:           blk.1.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   18:            blk.1.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   19:              blk.2.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   20:              blk.2.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   21:              blk.2.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   22:         blk.2.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   23:            blk.2.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   24:              blk.2.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   25:            blk.2.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   26:           blk.2.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   27:            blk.2.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   28:              blk.3.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   29:              blk.3.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   30:              blk.3.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   31:         blk.3.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   32:            blk.3.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   33:              blk.3.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   34:            blk.3.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   35:           blk.3.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   36:            blk.3.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   37:              blk.4.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   38:              blk.4.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   39:              blk.4.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   40:         blk.4.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   41:            blk.4.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   42:              blk.4.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   43:            blk.4.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   44:           blk.4.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   45:            blk.4.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   46:              blk.5.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   47:              blk.5.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   48:              blk.5.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   49:         blk.5.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   50:            blk.5.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   51:              blk.5.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   52:            blk.5.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   53:           blk.5.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   54:            blk.5.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   55:              blk.6.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   56:              blk.6.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   57:              blk.6.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   58:         blk.6.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   59:            blk.6.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   60:              blk.6.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   61:            blk.6.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   62:           blk.6.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   63:            blk.6.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   64:              blk.7.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   65:              blk.7.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   66:              blk.7.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   67:         blk.7.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   68:            blk.7.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   69:              blk.7.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   70:            blk.7.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   71:           blk.7.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   72:            blk.7.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   73:              blk.8.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   74:              blk.8.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   75:              blk.8.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   76:         blk.8.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   77:            blk.8.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   78:              blk.8.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   79:            blk.8.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   80:           blk.8.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   81:            blk.8.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   82:              blk.9.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   83:              blk.9.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   84:              blk.9.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   85:         blk.9.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   86:            blk.9.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   87:              blk.9.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   88:            blk.9.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   89:           blk.9.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   90:            blk.9.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   91:             blk.10.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   92:             blk.10.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   93:             blk.10.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor   94:        blk.10.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor   95:           blk.10.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   96:             blk.10.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor   97:           blk.10.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor   98:          blk.10.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor   99:           blk.10.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  100:             blk.11.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  101:             blk.11.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  102:             blk.11.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  103:        blk.11.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  104:           blk.11.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  105:             blk.11.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  106:           blk.11.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  107:          blk.11.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  108:           blk.11.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  109:             blk.12.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  110:             blk.12.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  111:             blk.12.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  112:        blk.12.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  113:           blk.12.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  114:             blk.12.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  115:           blk.12.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  116:          blk.12.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  117:           blk.12.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  118:             blk.13.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  119:             blk.13.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  120:             blk.13.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  121:        blk.13.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  122:           blk.13.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  123:             blk.13.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  124:           blk.13.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  125:          blk.13.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  126:           blk.13.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  127:             blk.14.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  128:             blk.14.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  129:             blk.14.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  130:        blk.14.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  131:           blk.14.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  132:             blk.14.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  133:           blk.14.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  134:          blk.14.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  135:           blk.14.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  136:             blk.15.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  137:             blk.15.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  138:             blk.15.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  139:        blk.15.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  140:           blk.15.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  141:             blk.15.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  142:           blk.15.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  143:          blk.15.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  144:           blk.15.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  145:             blk.16.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  146:             blk.16.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  147:             blk.16.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  148:        blk.16.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  149:           blk.16.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  150:             blk.16.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  151:           blk.16.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  152:          blk.16.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  153:           blk.16.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  154:             blk.17.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  155:             blk.17.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  156:             blk.17.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  157:        blk.17.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  158:           blk.17.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  159:             blk.17.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  160:           blk.17.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  161:          blk.17.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  162:           blk.17.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  163:             blk.18.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  164:             blk.18.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  165:             blk.18.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  166:        blk.18.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  167:           blk.18.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  168:             blk.18.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  169:           blk.18.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  170:          blk.18.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  171:           blk.18.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  172:             blk.19.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  173:             blk.19.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  174:             blk.19.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  175:        blk.19.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  176:           blk.19.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  177:             blk.19.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  178:           blk.19.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  179:          blk.19.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  180:           blk.19.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  181:             blk.20.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  182:             blk.20.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  183:             blk.20.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  184:        blk.20.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  185:           blk.20.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  186:             blk.20.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  187:           blk.20.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  188:          blk.20.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  189:           blk.20.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  190:             blk.21.attn_q.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  191:             blk.21.attn_k.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  192:             blk.21.attn_v.weight q4_0     [  2048,   256,     1,     1 ]
llama_model_loader: - tensor  193:        blk.21.attn_output.weight q4_0     [  2048,  2048,     1,     1 ]
llama_model_loader: - tensor  194:           blk.21.ffn_gate.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  195:             blk.21.ffn_up.weight q4_0     [  2048,  5632,     1,     1 ]
llama_model_loader: - tensor  196:           blk.21.ffn_down.weight q4_0     [  5632,  2048,     1,     1 ]
llama_model_loader: - tensor  197:          blk.21.attn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  198:           blk.21.ffn_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  199:               output_norm.weight f32      [  2048,     1,     1,     1 ]
llama_model_loader: - tensor  200:                    output.weight q6_K     [  2048, 32003,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str     
llama_model_loader: - kv   1:                               general.name str     
llama_model_loader: - kv   2:                       llama.context_length u32     
llama_model_loader: - kv   3:                     llama.embedding_length u32     
llama_model_loader: - kv   4:                          llama.block_count u32     
llama_model_loader: - kv   5:                  llama.feed_forward_length u32     
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32     
llama_model_loader: - kv   7:                 llama.attention.head_count u32     
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32     
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32     
llama_model_loader: - kv  10:                       llama.rope.freq_base f32     
llama_model_loader: - kv  11:                          general.file_type u32     
llama_model_loader: - kv  12:                       tokenizer.ggml.model str     
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr     
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr     
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr     
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32     
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32     
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32     
llama_model_loader: - kv  19:               general.quantization_version u32     
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_0:  155 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_print_meta: format         = GGUF V2 (latest)
llm_load_print_meta: arch           = llama
llm_load_print_meta: vocab type     = SPM
llm_load_print_meta: n_vocab        = 32003
llm_load_print_meta: n_merges       = 0
llm_load_print_meta: n_ctx_train    = 2048
llm_load_print_meta: n_ctx          = 512
llm_load_print_meta: n_embd         = 2048
llm_load_print_meta: n_head         = 32
llm_load_print_meta: n_head_kv      = 4
llm_load_print_meta: n_layer        = 22
llm_load_print_meta: n_rot          = 64
llm_load_print_meta: n_gqa          = 8
llm_load_print_meta: f_norm_eps     = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff           = 5632
llm_load_print_meta: freq_base      = 10000.0
llm_load_print_meta: freq_scale     = 1
llm_load_print_meta: model type     = ?B
llm_load_print_meta: model ftype    = mostly Q4_0
llm_load_print_meta: model params   = 1.10 B
llm_load_print_meta: model size     = 606.54 MiB (4.63 BPW) 
llm_load_print_meta: general.name   = models
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.06 MB
llm_load_tensors: mem required  =  606.60 MB (+   11.00 MB per state)
.......................................................................................
llama_new_context_with_model: kv self size  =   11.00 MB
llama_new_context_with_model: compute buffer total size =   67.98 MB

system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.000000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 0


 <|im_start|>user
Explain huggingface?<|im_end|>
<|im_start|>assistant
Huggingface is a software company that provides APIs, frameworks, and tools for developers to build, train, and deploy models using AI. It offers a suite of products, including:

1. **Bash autopilot**: This is an open-source framework for building automation systems. You can use it to write scripts that run on various platforms, such as AWS Lambda or Google Cloud Platform, and interact with the system using natural language commands.
2. **Text-based autopilot**: Huggingface's Text-Based Autopilot is a simpler version of Bash autopilot that can be used to write simple scripts for tasks such as sentiment analysis or entity recognition.
3. **Transformers**: Huggingface's Transformers are a family of large language models that can be used for various tasks, including machine translation, summarization, and question-answering.
4. **Text generation**: Huggingface's Text Generation is a suite of tools for generating text on a wide range of topics, from poetry to news articles to code snippets.
5. **Open assistant**: Huggingface's Open Assistant is an open-source framework for building conversational AI systems that can be used to train and deploy chatbots that can understand natural language queries and provide relevant information or assistance.

These are just a few examples of the products and tools that you can use with Huggingface's APIs and tools. The company offers a wide range of resources, including documentation, tutorials, and example code, to help you get started with building and training your own models using their APIs.<|im_end|>
 [end of text]

llama_print_timings:        load time =    80.37 ms
llama_print_timings:      sample time =   225.77 ms /   364 runs   (    0.62 ms per token,  1612.27 tokens per second)
llama_print_timings: prompt eval time =   755.45 ms /    35 tokens (   21.58 ms per token,    46.33 tokens per second)
llama_print_timings:        eval time = 15450.13 ms /   363 runs   (   42.56 ms per token,    23.49 tokens per second)
llama_print_timings:       total time = 16638.44 ms
Log end

@magician-blue

The reason it generates terrible output is that the default llama RoPE rotates pairs of even and odd dimensions (GPT-J style), whereas TinyLlama-1.1 rotates the 1st half against the 2nd half (GPT-NeoX style).
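
To make the difference concrete, here is a rough numpy sketch of the two pairings (illustrative only, not the actual llama.cpp or HF kernels; the head_dim and position are just example values):

```python
# Illustrative sketch of the two RoPE layouts discussed in this thread.
import numpy as np

def rope_gptj(x, pos, base=10000.0):
    # GPT-J style: rotate interleaved pairs (x[0], x[1]), (x[2], x[3]), ...
    d = x.shape[-1]
    ang = pos * base ** (-np.arange(0, d, 2) / d)
    cos, sin = np.cos(ang), np.sin(ang)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

def rope_neox(x, pos, base=10000.0):
    # GPT-NeoX style: rotate the 1st half against the 2nd half, pairing x[i] with x[i + d/2]
    d = x.shape[-1]
    half = d // 2
    ang = pos * base ** (-np.arange(0, d, 2) / d)
    cos, sin = np.cos(ang), np.sin(ang)
    out = np.empty_like(x)
    out[:half] = x[:half] * cos - x[half:] * sin
    out[half:] = x[:half] * sin + x[half:] * cos
    return out

x = np.random.randn(64)  # one attention head (TinyLlama's head_dim is 64)
print(np.allclose(rope_gptj(x, 3), rope_neox(x, 3)))  # False: same frequencies, different pairing
```

Feeding weights laid out for one pairing through a kernel that implements the other scrambles the key projections, which matches the garbled k cache shown earlier in this thread.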

@magician-blue

magician-blue commented Sep 23, 2023

Now I don't understand why removing the permute makes the model work.

I remember that converting Meta's llama models requires permuting.
Maybe converting HF models doesn't need permuting at all. I'll check this later.

@magician-blue

magician-blue commented Sep 23, 2023

@jzhang38 I have quantized the model and uploaded it to https://huggingface.co/kirp/TinyLlama-1.1B-Chat-v0.2-gguf/.
Remember to patch llama.cpp and rebuild with make first.

@magician-blue

I have created a small demo on hf. https://huggingface.co/spaces/kirp/tinyllama-chat

@Green-Sky
Contributor

cool. also very weird... I can't find any indicator in the config that it has to be different. Is the GPT-NeoX style rope just a different layout, or do they actually perform different calculations?

@magician-blue

magician-blue commented Sep 23, 2023

> cool. also very weird... I can't find any indicator in the config that it has to be different. Is the GPT-NeoX style rope just a different layout, or do they actually perform different calculations?

These two types of RoPE are shown above. They perform totally different calculations.
I found the difference when checking the training code of TinyLlama-1.1 against llama.cpp.

@jzhang38
Owner

jzhang38 commented Sep 27, 2023

@magician-blue Thanks a million! I managed to make it work on llama.cpp following your guide.

However, I don't get it. OpenLlama (actually all HF-format llama models) also follows the GPT-NeoX style RoPE. Why does convert.py work fine for them?
(The original Llama weights released by Meta follow the GPT-J style. HF permutes the weights so that the HF weights actually follow the GPT-NeoX style.)

@jzhang38
Owner

jzhang38 commented Sep 27, 2023

I think I have found a bug in the permute function of llama.cpp's convert.py:

https://github.com/ggerganov/llama.cpp/blob/99115f3fa654b593099c6719ad30e3f54ce231e1/convert.py#L442

changing this line from

n_head //= n_head_kv

to

n_head = n_head_kv

completely eliminates the issue. There is no need to change RoPE of llama.cpp.

This bug is only triggered by HF GQA models, but nobody realized it before because nobody had used convert.py to convert the HF Llama 2 70B model. Llama 2 70B has 64 heads and 8 num_key_value_heads, and 64 / 8 = 8, which happens to equal n_head_kv, so the buggy division is harmless there.
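
A quick sketch of that head-count arithmetic (the permute helper here is paraphrased, not the actual convert.py code; the TinyLlama numbers come from the loader log above, n_head = 32 and n_head_kv = 4):

```python
# Why the buggy `n_head //= n_head_kv` goes unnoticed for Llama 2 70B but breaks TinyLlama.
def permute_head_count(n_head: int, n_head_kv: int, buggy: bool) -> int:
    # the k-projection permute should be performed over the KV head count
    return n_head // n_head_kv if buggy else n_head_kv

print(permute_head_count(64, 8, buggy=True), permute_head_count(64, 8, buggy=False))  # 8 8 -> harmless
print(permute_head_count(32, 4, buggy=True), permute_head_count(32, 4, buggy=False))  # 8 4 -> wrong permute
```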

@magician-blue's solution works because he bypasses the permute function and instead modifies the RoPE behavior in llama.cpp.

I have made a pull request to llama.cpp: ggerganov/llama.cpp#3364
Correctly converted model weight: https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.2-GGUF

@magician-blue

magician-blue commented Sep 27, 2023

@jzhang38 Do you mean all HF models are GPT-NeoX style (which rotates the 1st and 2nd halves), and the implementation in llama.cpp is GPT-J style (which rotates pairs of even and odd dimensions)? So we need to permute all HF models?

BTW, is there any evidence showing that HF models only perform GPT-NeoX RoPE?

@jzhang38
Owner

jzhang38 commented Sep 27, 2023

> So we need to permute all HF models?

My bad. All HF llama models.

> BTW, is there any evidence showing that HF models only perform GPT-NeoX RoPE?

You can check https://github.com/huggingface/transformers/blob/3ca18d6d09ee0d1610a400ead6f6041394f66421/src/transformers/models/llama/modeling_llama.py#L207
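
The function behind that link is the half-split rotation, roughly as follows (paraphrased; see the linked file for the exact code). The first and second halves of the head dimension are paired, i.e. GPT-NeoX-style RoPE:

```python
import torch

# Paraphrase of transformers' modeling_llama.py rotate_half (check the link above for the exact source).
def rotate_half(x: torch.Tensor) -> torch.Tensor:
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)
```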

@Green-Sky
Contributor

fix merged, i think this can be closed now 🥳
