I am getting the following error for Llama-LLM:
```
2024-06-28 21:57:20 INFO openai - message='OpenAI API response' path=https://api.openai.com/v1/embeddings processing_ms=15 request_id=req_5533598d1e5469fd213c359055ce074d response_code=200
llama_model_loader: loaded meta data with 23 key-value pairs and 543 tensors from models/llava-v1.6-34b.Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = llama
llama_model_loader: - kv   1: general.name str = LLaMA v2
llama_model_loader: - kv   2: llama.context_length u32 = 4096
llama_model_loader: - kv   3: llama.embedding_length u32 = 7168
llama_model_loader: - kv   4: llama.block_count u32 = 60
llama_model_loader: - kv   5: llama.feed_forward_length u32 = 20480
llama_model_loader: - kv   6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv   7: llama.attention.head_count u32 = 56
llama_model_loader: - kv   8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv   9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv  10: llama.rope.freq_base f32 = 5000000.000000
llama_model_loader: - kv  11: general.file_type u32 = 18
llama_model_loader: - kv  12: tokenizer.ggml.model str = llama
llama_model_loader: - kv  13: tokenizer.ggml.tokens arr[str,64000] = ["<unk>", "<|startoftext|>", "<|endof...
llama_model_loader: - kv  14: tokenizer.ggml.scores arr[f32,64000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15: tokenizer.ggml.token_type arr[i32,64000] = [2, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, ...
llama_model_loader: - kv  16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv  17: tokenizer.ggml.eos_token_id u32 = 7
llama_model_loader: - kv  18: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv  19: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv  20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv  21: tokenizer.chat_template str = {% for message in messages %}{{'<|im_...
llama_model_loader: - kv  22: general.quantization_version u32 = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q6_K:  422 tensors
llm_load_vocab: special tokens cache size = 267
llm_load_vocab: token to piece cache size = 0.3834 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 64000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 7168
llm_load_print_meta: n_head           = 56
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 60
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 20480
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 5000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 30B
llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 34.39 B
llm_load_print_meta: model size       = 26.27 GiB (6.56 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<|startoftext|>'
llm_load_print_meta: EOS token        = 7 '<|im_end|>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 315 '<0x0A>'
llm_load_print_meta: EOT token        = 2 '<|endoftext|>'
llm_load_tensors: ggml ctx size = 0.28 MiB
llm_load_tensors: CPU buffer size = 26905.46 MiB
....................................................................................................
llama_new_context_with_model: n_batch is less than GGML_KQ_MASK_PAD - increasing to 32
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 32
llama_new_context_with_model: n_ubatch   = 32
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 120.00 MiB
llama_new_context_with_model: KV self size = 120.00 MiB, K (f16): 60.00 MiB, V (f16): 60.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.24 MiB
llama_new_context_with_model: CPU compute buffer size = 8.69 MiB
llama_new_context_with_model: graph nodes  = 1926
llama_new_context_with_model: graph splits = 1
AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Model metadata: {'tokenizer.chat_template': "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '0', 'tokenizer.ggml.eos_token_id': '7', 'general.architecture': 'llama', 'llama.rope.freq_base': '5000000.000000', 'llama.context_length': '4096', 'general.name': 'LLaMA v2', 'tokenizer.ggml.add_bos_token': 'false', 'llama.embedding_length': '7168', 'llama.feed_forward_length': '20480', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '56', 'llama.block_count': '60', 'llama.attention.head_count_kv': '8', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '18'}
Available chat formats from metadata: chat_template.default
Guessed chat format: chatml
2024-06-28 21:57:56 ERROR rasa.dialogue_understanding.generator.llm_command_generator - [error ] llm_command_generator.llm.error error=ValueError('Requested tokens (804) exceed context window of 512')
/home/hamza/PycharmProjects/G6-Voice-Assistant/.venv/lib/python3.10/site-packages/sanic/server/websockets/impl.py:521: DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.
```
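From the log, the model was trained with a 4096-token context (`n_ctx_train = 4096`), but the context is created with the llama.cpp default of `n_ctx = 512`, so the 804-token prompt built by `LLMCommandGenerator` no longer fits. My guess is that the context size needs to be raised in the `llm` config; a sketch of that follows my `config.yml` below.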
My `config.yml` is as follows:
```yaml
recipe: default.v1
language: en
pipeline:
  - name: LLMCommandGenerator
    llm:
      type: llamacpp
      model_path: "models/llava-v1.6-34b.Q6_K.gguf"
      chunk_size: 16
      model_kwargs:
        device: "gpu"
#    llm:
#      model_name: gpt-4
policies:
  - name: FlowPolicy
#  - name: EnterpriseSearchPolicy
#  - name: RulePolicy
assistant_id: 20240627-152245-khaki-isotope
```
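If the extra keys in the `llm` section are forwarded to the underlying llama-cpp-python model (I have not confirmed this in the Rasa docs, so treat the `n_ctx` key here as an assumption), raising the context window to the model's training context would look roughly like this:

```yaml
pipeline:
  - name: LLMCommandGenerator
    llm:
      type: llamacpp
      model_path: "models/llava-v1.6-34b.Q6_K.gguf"
      # Assumption: n_ctx is passed through to llama-cpp-python.
      # 4096 matches the model's training context (n_ctx_train in the log above).
      n_ctx: 4096
      chunk_size: 16
      model_kwargs:
        device: "gpu"
```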