
Propose to print error information when import models #123

Open

SDaoer opened this issue Jun 20, 2024 · 1 comment

SDaoer commented Jun 20, 2024

In /root/lmms-eval/lmms_eval/models/__init__.py, nothing is printed when an ImportError occurs. This confuses users, who instead hit a ValueError stating 'Attempted to load model '...', but no model for this name found!', raised at line 32 of /root/lmms-eval/lmms_eval/api/registry.py inside the get_model function. That ValueError is often just the downstream symptom of an earlier ImportError from {model}.py, but because the import failure is swallowed silently, users never see the underlying cause.
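
For context, the misleading ValueError arises because the registry lookup only sees models whose import succeeded. A minimal sketch of that failure mode (assumed structure, not lmms-eval's real code; MODEL_REGISTRY and register_model are hypothetical stand-ins, only the error message is taken from registry.py):

# Sketch: only models whose import succeeded ever reach the registry, so a
# silently swallowed ImportError later surfaces as "no model for this name".

MODEL_REGISTRY = {}  # hypothetical stand-in for the mapping that registry.py consults

def register_model(name, model_class):
    MODEL_REGISTRY[name] = model_class

def get_model(model_name):
    try:
        return MODEL_REGISTRY[model_name]
    except KeyError:
        # This is the error users actually see, even when the real root cause
        # was an ImportError swallowed in models/__init__.py.
        raise ValueError(f"Attempted to load model '{model_name}', but no model for this name found!")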

Recommended change:

for model_name, model_class in AVAILABLE_MODELS.items():
    try:
        exec(f"from .{model_name} import {model_class}")
    except ImportError as e:
        # Surface the underlying import failure instead of swallowing it silently.
        print(f"Failed to import {model_class} from {model_name}: {e}")
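
If plain print is considered too noisy, the same idea works with Python's standard logging module; whether lmms-eval already has a preferred logger is left aside here, so this is only a sketch under that assumption:

import logging

logger = logging.getLogger(__name__)

for model_name, model_class in AVAILABLE_MODELS.items():
    try:
        exec(f"from .{model_name} import {model_class}")
    except ImportError as e:
        # A warning keeps the root cause visible in the logs without aborting
        # collection of the remaining, importable models.
        logger.warning("Failed to import %s from %s: %s", model_class, model_name, e)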

Luodian (Contributor) commented Jun 20, 2024

Thanks for this proposal, I will integrate it in a new PR.

lorenzomammana pushed a commit to lorenzomammana/lmms-eval that referenced this issue Jun 20, 2024
* Resolve conflict when merging the kr_ego with internal_main_dev

* fix the bug of file overwrite

* Optimize the inference of videochatgpt dataset

* Resolve conflict

* delete repeated line

* reformat the code

* rename the file name for inference results

* group the same task together for cvrr and videochatgpt

* group the same task together for videochatgpt and cvrr

* reformat the code

* fix the bug of videochatgpt_consistency multiprocessing

* Rename the metric from submission to subtask

* fix the bug of consistency where different answers are generated in pred2

* add accuracy into the evaluation of cvrr

* add accuracy metric to cvrr dataset

* remove duplicate rows when merging from main branch

* Refactor videochatgpt_gen and videochatgpt_temporal for correct score parsing

* enable the webm video loader for llavavid as required in cvrr dataset

* Refactor process_results function to handle full_docs in videochatgpt task

* add tqdm to consistency gpt_eval

* Refactor the cvrr for correct aggregate logic

* change backend to decord for videochatgpt eval

* Fix for mkv video path

* add data feature

* update vatex

* add vatex task

* update

* update VATEX and evaluation matrices

* new

* update utils

* update vatex

* new

* update load vatex from url

* update vatex from url

* Delete lmms_eval/models/gpt4v.py

* update vatex from url

* update videomme

* add videomme

* update vatex_val_zh

* modified task vatex_test

* chore: Update dataset paths and dependencies

* update yaml

* update yaml

* update several typos

* chore: Fix typo in makecvrr.ipynb

---------

Co-authored-by: KairuiHu <kairuih12@gmail.com>
Co-authored-by: Bo Li <drluodian@gmail.com>
Co-authored-by: kcz358 <kaichenzhang358@outlook.com>
Co-authored-by: SHUAI LIU <choiszt@SHUAIdeMacBook-Pro.local>
Co-authored-by: choiszt <shuai005@e.ntu.edu.sg>