Conversation

@rolandtannous
Collaborator

Problem

Running the inference cell in the original Magistral notebook resulted in:

ValueError: Incorrect image source. Must be a valid URL starting with `http://` or `https://`, 
a valid path to an image file, or a base64 encoded string. Got <s>[SYSTEM_PROMPT]First draft 
your thinking process (inner monologue) until you arrive at a response...

The failing code was:

messages = [
    {"role": "user", "content": "Solve (x + 2)^2 = 0."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize = False,
    add_generation_prompt = True,
)

# This line causes the error
inputs = tokenizer(text, return_tensors="pt").to("cuda")
_ = model.generate(**inputs, max_new_tokens=1024)

The Magistral-Small-2509 model uses a PixtralProcessor (a multimodal processor), not a standard tokenizer. When you call tokenizer(text, ...) directly, the processor tries to parse the input for both text and images. Since text is a long string, the processor interprets it as a potential image URL/path and attempts to load it as an image, which raises the error above.
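A quick way to confirm this is to inspect what was actually loaded. Below is a minimal diagnostic sketch, assuming tokenizer is the object loaded earlier in the notebook:

from transformers import ProcessorMixin, PreTrainedTokenizerBase

print(type(tokenizer).__name__)                        # e.g. "PixtralProcessor"
print(isinstance(tokenizer, ProcessorMixin))           # True  -> multimodal processor
print(isinstance(tokenizer, PreTrainedTokenizerBase))  # False -> not a plain text tokenizer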

Solution

We don't call the processor directly on the formatted text string. Instead, we pass the messages directly to apply_chat_template with tokenize=True.
Also, the PixtralProcessor is designed for multimodal inputs (text + images), so it expects message["content"] to be a list of content blocks, where each block is a dictionary with a "type" key. When we pass a plain string like "content": "Solve (x + 2)^2 = 0.", the processor iterates over the string's characters and tries to access content["type"], which fails because string indices must be integers. So we should also structure the content as a list of content-block dictionaries (a multimodal example follows the revised code below).
The final revised code is as follows:

messages = [
    {"role" : "user", "content" : [{"type": "text", "text": "Solve (x + 2)^2 = 0."}]}
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt", 
    return_dict = True,
).to("cuda")

from transformers import TextStreamer
_ = model.generate(
    **inputs,
    max_new_tokens = 1024, # Increase for longer outputs!
    temperature = 0.7, top_p = 0.95,
    streamer = TextStreamer(tokenizer, skip_prompt = True),
)
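
Because the processor is multimodal, the same content-block format also accepts image blocks alongside text. The snippet below is an illustrative sketch, not part of the notebook: the URL is a placeholder, and depending on your transformers version the image block may take a "url", "path", or "image" key.

messages = [
    {"role" : "user", "content" : [
        {"type": "image", "url": "https://example.com/figure.png"},  # placeholder URL
        {"type": "text",  "text": "Describe this image."},
    ]}
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
    return_dict = True,
).to("cuda")

Note that return_dict = True is what makes apply_chat_template return a mapping (input_ids, attention_mask, and pixel values when images are present) that can be unpacked directly into model.generate(**inputs).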

@gemini-code-assist
Contributor

Summary of Changes

Hello @rolandtannous, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue preventing successful inference in the Magistral notebook. The changes ensure that the multimodal processor correctly interprets chat messages, resolving a ValueError and enabling the model to generate responses as intended. The fix involves adjusting the message content structure and the tokenizer's application method to align with the processor's expectations.

Highlights

  • Inference Fix: Corrected a ValueError that occurred during inference in the Magistral notebook due to incorrect handling of multimodal inputs by the PixtralProcessor.
  • Message Formatting: Updated the messages structure to explicitly define content as a list of dictionaries with a 'type' key (e.g., {"type": "text", "text": "..."}), as required by the PixtralProcessor.
  • Tokenizer Usage: Modified the tokenizer.apply_chat_template call to directly tokenize messages (tokenize=True) and return tensors as a dictionary (return_tensors="pt", return_dict=True), streamlining the input preparation for model generation.

@gemini-code-assist
Contributor

Code Review

This pull request correctly fixes an issue with Magistral model inference by updating how the tokenizer is called. The changes in the Python scripts are correct. However, the same changes in the Jupyter Notebook files (.ipynb) introduce a syntax error due to missing commas between function arguments. I've added critical comments to address this in each of the affected notebook files.

@danielhanchen merged commit facbd80 into main on Nov 23, 2025