
Transform a string into llama2-specific and llama3-specific input? #11026

JamieVC opened this issue May 15, 2024 · 1 comment


JamieVC commented May 15, 2024

I am trying to transform a string into llama2-specific or llama3-specific input in the function completion_to_prompt().
Is there a way to pass the parameter model_option as an input? Otherwise, should I use a global variable to let the function know which model is in use?

def completion_to_prompt(completion):
    # Llama-2 chat format (commented out) vs. Llama-3 instruct format
    #return f"<s>[INST] <<SYS>>\n    \n<</SYS>>\n\n{completion} [/INST]"
    return f"<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n{completion}<|eot_id|><|start_header_id|>    <|end_header_id|>"

@st.cache_resource
def create_model(model_name):
    llm_model = IpexLLM.from_model_id(
        model_name=model_name,
        tokenizer_name=tokenizer_name,
        context_window=512,
        max_new_tokens=64,
        completion_to_prompt=completion_to_prompt,
        generate_kwargs={"do_sample": True, 'temperature': 0.1},
        #messages_to_prompt=messages_to_prompt,
        device_map="cpu",
    )
    return llm_model


with st.sidebar:
    st.markdown("## Configuration")
    model_option = st.selectbox(
        "model name",
        ("meta-llama/Llama-2-7b-chat-hf", "meta-llama/Meta-Llama-3-8B-Instruct"),
        index=None,
        placeholder="Select LLM model...",
        disabled=st.session_state.disabled,
    )

    st.session_state.disabled = True
    st.write("You selected:", model_option)

Refer to:
https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/
https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-2/

@ivy-lv11
Contributor

Since the completion_to_prompt function only accepts the completion and an optional system_prompt as parameters (like completion_to_prompt in llama_index.llms.llama_cpp.llama_utils), you can specify the model type by using a global variable.
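
For reference, here is a minimal sketch of that global-variable approach, adapted to the Streamlit snippet above. MODEL_OPTION is a hypothetical module-level variable that you would set right after the selectbox; the template strings follow the Llama-2 and Llama-3 prompt-format pages linked in the issue.

    MODEL_OPTION = None  # hypothetical global, updated from the sidebar selectbox

    def completion_to_prompt(completion):
        # Pick the prompt template based on the globally selected model name.
        if MODEL_OPTION == "meta-llama/Meta-Llama-3-8B-Instruct":
            return (
                "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
                f"{completion}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
            )
        # Default: Llama-2 chat format with an empty system prompt.
        return f"<s>[INST] <<SYS>>\n\n<</SYS>>\n\n{completion} [/INST]"

    # In the sidebar, after st.selectbox(...):
    #     MODEL_OPTION = model_option

Set MODEL_OPTION = model_option right after the selectbox (and before calling create_model) so the template matches the model being loaded.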
