This repository was archived by the owner on Sep 10, 2025. It is now read-only.
  
  
Unify Input Generation for CLI and Openai API #1219
          
Merged
          Conversation
  
    
    
  
  
    
    
          
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1219
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 5f08656 with merge base e4b36f9.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
    
              
Jack-Khuu approved these changes on Sep 27, 2024
          
torchchat/usages/openai_api.py (Outdated)
    torchtune_contents = []
    if isinstance(message["content"], list):
        for content_dict in message["content"]:
            converted_content = []
Unused arg
    print("Starting Interactive Chat")

    def _gen_model_input(
        self,
        prompt: str,
        image_prompts: Optional[List[str | Image.Image]] = None,
        max_new_tokens: Optional[int] = None,
    ) -> Tuple:
        assert image_prompts is None or len(image_prompts) == 1, "At most one image is supported at the moment"
Suggested change:

    - assert image_prompts is None or len(image_prompts) == 1, "At most one image is supported at the moment"
    + assert image_prompts is None or len(image_prompts) <= 1, "At most one image is supported at the moment"
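The relaxation from `== 1` to `<= 1` matters when a caller passes an empty list rather than `None`: the stricter check rejects `[]` even though no image was actually supplied. A minimal standalone illustration (the `validate_image_prompts` helper is hypothetical, written only to isolate the assertion's condition):

```python
from typing import List, Optional


def validate_image_prompts(image_prompts: Optional[List[str]]) -> bool:
    # Mirrors the suggested assertion: None or at most one image is allowed.
    return image_prompts is None or len(image_prompts) <= 1


print(validate_image_prompts(None))          # True
print(validate_image_prompts(["cat.png"]))   # True
print(validate_image_prompts([]))            # True; would fail under `== 1`
print(validate_image_prompts(["a", "b"]))    # False
```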
    
metascroy pushed a commit that referenced this pull request on Sep 30, 2024:

* support text-only input with llama3.2-11b
* unify model generation between openai api and cli
* update typos
* remove unused arg

Co-authored-by: Jack-Khuu <jack.khuu.7@gmail.com>
  
  
    
  
    
Before this diff, the OpenAI API and the CLI each had their own pipeline for converting the user's input into the model's input, and both had to be maintained to stay up to date.
This PR unifies the model input generation pipeline so that the OpenAI API supports text-only input, just like the CLI does today, and makes the pipeline more stable and maintainable.