
fix: investigate why color-related questions are not answerable in the LLM chat #387

Open
jooyoungseo opened this issue Feb 2, 2024 · 1 comment
Bug Report

Description

Steps to Reproduce

  1. Open user_study_pilot/tutorial7_stacked.html.
  2. Press Tab to focus the plot.
  3. Press Ctrl+Shift+/ to open the LLM chat.
  4. Ask color-related questions.

Actual Behavior

The model responds as follows:

### User
explain their color

### OpenAI
Unfortunately, due to my capabilities, I'm unable to provide the specific color details from the image. However, the chart uses a color-coding system to
help distinguish between the different drive types within each car class bar. The segments of each horizontal bar would be filled with different colors
based on the drive type they represent, as per the following "fill" labels from the JSON data: - "r" - "f" - "4" In a typical color-coded chart, a legend
is provided to help users interpret which color corresponds to which drive type, but as I cannot see the colors, I cannot describe them to you. You would
normally look for a key or legend within the chart that would illustrate this information. For instance, drive type "4" might be represented by one color,
"f" by another, and "r" by a third, unique color. These colors help viewers quickly differentiate the proportion of each drive type within a given car

Expected Behavior

It should answer color-related questions. If you keep asking about colors, it just falls back to the JSON data we provide to the model.
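One possible workaround (a sketch only, not the project's actual schema): include an explicit fill-to-color mapping in the prompt context, so the model can answer color questions from text alone instead of inferring colors from the image. The `legend` map, the segment shape, and the `describeColors` helper below are all hypothetical, invented for illustration.

```javascript
// Assumed fill-label → color mapping; the real legend would come from
// the chart spec, not be hard-coded like this.
const legend = { r: "red", f: "blue", 4: "green" };

// Build a plain-text color description to append to the LLM prompt.
// `segments` is assumed to be an array of objects with a `fill` label,
// matching the "fill" labels seen in the JSON data ("r", "f", "4").
function describeColors(segments, legend) {
  const fills = [...new Set(segments.map((s) => s.fill))]; // de-duplicate labels
  return fills
    .map((f) => `drive type "${f}" is shown in ${legend[f] ?? "an unknown color"}`)
    .join("; ");
}

const segments = [{ fill: "r" }, { fill: "f" }, { fill: "4" }];
console.log(describeColors(segments, legend));
// → drive type "r" is shown in red; drive type "f" is shown in blue; drive type "4" is shown in green
```

Prepending a line like this to the chat context would let the model answer "explain their color" deterministically, without relying on the vision model's unreliable color extraction.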

Screenshots

Environment

  • Operating System: [e.g. Windows 10]
  • Browser: Chrome
  • Version: latest

Additional Information

@jooyoungseo (Member, Author):

Hmm... this is very odd. Sometimes I get correct results. I have only tested with the OpenAI GPT-4 Vision API; I think we need to test this further across multiple scenarios.
