
A demo without gradio #140

Open
liboliba opened this issue Jan 21, 2024 · 1 comment

Comments

@liboliba

Hello,
Thanks for the gradio example. Is there an example of reading in a video file and then doing Q&A from the command line, without gradio? My GPUs are offline, so I do not need gradio, and the demo is also a bit confusing for people who are unfamiliar with it or do not want to use it.

Thank you.

@llx-08

llx-08 commented Mar 8, 2024

Hi, you can try extracting gradio's inference operations manually, as in the following code:

# Pick the conversation template for the chosen model type
if args.model_type == 'vicuna':
    chat_state = default_conversation.copy()
else:
    chat_state = conv_llava_llama_2.copy()

# Upload the video once; its features are stored in img_list
video_path = "your_path"
chat_state.system = ""
img_list = []
llm_message = chat.upload_video(video_path, chat_state, img_list)

# Command-line Q&A loop
while True:
    user_message = input("User: ")

    # Append the user's question to the conversation state
    chat.ask(user_message, chat_state)

    num_beams = 2
    temperature = 1.0

    # Generate the model's reply
    llm_message = chat.answer(conv=chat_state,
                              img_list=img_list,
                              num_beams=num_beams,
                              temperature=temperature,
                              max_new_tokens=300,
                              max_length=2000)[0]
    print(chat_state.get_prompt())
    print(chat_state)
    print(llm_message)
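If it helps to see the loop structure in isolation, here is a minimal, model-free sketch of the same command-line Q&A pattern. The names `qa_loop` and `answer_fn` are hypothetical; `answer_fn` stands in for the `chat.ask`/`chat.answer` calls above, and an exit command is added so the loop can be left cleanly:

```python
# Minimal gradio-free Q&A loop. `answer_fn` is a stand-in for the
# model inference (chat.ask + chat.answer in the snippet above).
def qa_loop(answer_fn, read_input=input, write_output=print):
    while True:
        user_message = read_input("User: ")
        # Leave the loop on "exit" or "quit"
        if user_message.strip().lower() in {"exit", "quit"}:
            break
        # Ask the model and print its reply
        write_output(answer_fn(user_message))
```

The `read_input`/`write_output` parameters default to the built-ins, but injecting them makes the loop easy to script or test without a terminal.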
