When asking the AI a question in the browser, the LLM's "thinking" output is visually identical to the actual response it delivers at the end, making it hard to tell where the final response begins. Would suggest some sort of visual differentiator, such as styling or collapsing the thinking portion.