Hi authors, the current code runs successfully. However, I encountered a consistent problem across multiple machines when using the default configuration: the trained model doesn't adopt the same reasoning paradigm as the provided checkpoint. Instead of generating the expected chain of visual tokens, it tends to output the final answer directly, skipping the intermediate visual reasoning step.
Could you provide some help?