Conversation
Summary of Changes
Hello @gushiqiao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request refines the model loading process by introducing more robust handling for lazy-loaded Safetensors files, accommodating different file structures. It also improves the maintainability of the Gradio user interface by restructuring how CSS is applied, and adjusts the default quantization operation priority for specific GPU configurations.
Code Review
This pull request updates the disk offloading mechanism to support lazy loading from a single file, in addition to the existing directory-based loading. This is achieved by using pathlib.Path to check if lazy_load_file is a file. The changes are consistently applied across several modules, including mm_weight.py, layer_norm_weight.py, rms_norm_weight.py, tensor.py, and model.py. While the new functionality is correct, it has introduced significant code duplication. My review primarily focuses on suggesting refactoring to centralize the path resolution logic into helper methods, which would improve code maintainability. The pull request also includes some minor but positive refactoring in the Gradio UI code.
```python
if Path(self.lazy_load_file).is_file():
    lazy_load_file_path = self.lazy_load_file
else:
    lazy_load_file_path = os.path.join(self.lazy_load_file, f"block_{source_name.split('.')[1]}.safetensors")
```
This logic to determine lazy_load_file_path is duplicated in several places in this file (e.g., lines 157-160, 220-223, 306-309, 349-352, 371-374, 694-697) and across other files in this PR. To improve maintainability, consider refactoring this into a helper method that takes a block identifier and returns the correct path.
```python
if Path(self.lazy_load_file).is_file():
    lazy_load_file_path = self.lazy_load_file
else:
    lazy_load_file_path = os.path.join(self.lazy_load_file, f"block_{name.split('.')[1]}.safetensors")
```
This logic to determine lazy_load_file_path is duplicated in load_state_dict_from_disk within this file (lines 162-165 and 177-180). To improve maintainability and reduce code duplication, consider extracting this logic into a helper method. This pattern is also repeated in other files in this pull request.
```python
if Path(self.lazy_load_file).is_file():
    lazy_load_file_path = self.lazy_load_file
else:
    lazy_load_file_path = os.path.join(self.lazy_load_file, f"block_{self.weight_name.split('.')[1]}.safetensors")
```
```python
if Path(self.lazy_load_file).is_file():
    lazy_load_file_path = self.lazy_load_file
else:
    lazy_load_file_path = os.path.join(self.lazy_load_file, f"block_{self.tensor_name.split('.')[1]}.safetensors")
```
```python
if os.path.isdir(safetensors_path):
    if self.lazy_load:
        self.lazy_load_path = safetensors_path
        non_block_file = os.path.join(safetensors_path, "non_block.safetensors")
        if os.path.exists(non_block_file):
            safetensors_files = [non_block_file]
        else:
            raise ValueError(f"Non-block file not found in {safetensors_path}. Please check the model path.")
    else:
        safetensors_files = glob.glob(os.path.join(safetensors_path, "*.safetensors"))
else:
    if self.lazy_load:
        self.lazy_load_path = safetensors_path
    safetensors_files = [safetensors_path]
```
This logic for determining safetensors_files is nearly identical to the logic in _load_quant_ckpt (lines 214-228). To improve maintainability and reduce code duplication, consider extracting this into a helper method. The helper could take safetensors_path and a boolean flag indicating if it's for a quantized checkpoint to handle the minor difference in logic (i.e., safetensors_path = os.path.dirname(safetensors_path)).
No description provided.