[doc] update baml example instruction on user roles #1316
Conversation
> **Why does this matter?**
> - For Gemini and Anthropic models, BAML can sometimes infer or adjust the prompt structure, so extraction works either way.
> - For **OpenAI models** (e.g., GPT-4/GPT-4o), if the PDF is **not** in the user role, the model doesn't see the file content — so extractions will fail or return empty fields.
> - This can easily trip you up.If you're using BAML, **always double-check your prompt roles when adding file inputs**—especially for OpenAI backends.
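To make the guidance above concrete, here is a minimal BAML sketch of a function that keeps the PDF in the user role. The function, class, and client names are illustrative, not from this PR, and the shorthand `client` string and `pdf` input type assume a recent BAML version that supports them:

```baml
class Invoice {
  vendor string
  total float
}

function ExtractInvoice(doc: pdf) -> Invoice {
  // Illustrative client name; any OpenAI-backed client applies here.
  client "openai/gpt-4o"
  prompt #"
    {{ _.role("system") }}
    You extract structured fields from documents.

    {{ _.role("user") }}
    Extract the invoice fields from this file:
    {{ doc }}

    {{ ctx.output_format }}
  "#
}
```

The key detail is that `{{ doc }}` appears after `{{ _.role("user") }}`, so the file content is sent as part of the user message rather than the system prompt, which is what OpenAI backends require.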
nit: `up.If`
missing a whitespace
> > This ensures the **PDF content is explicitly included as part of the user message**, rather than the system prompt.
>
> **Why does this matter?**
> - For Gemini and Anthropic models, BAML can sometimes infer or adjust the prompt structure, so extraction works either way.
"BAML can sometimes infer or adjust the prompt structure"

Is that how it works? (I'm not sure; I may have missed something if this is specified somewhere.)
I think it's just a subtle request-format difference between OpenAI and the others: the other providers tolerate the file in the system prompt, while OpenAI doesn't.