Error: ModuleNotFoundError: No module named 'openai'. #175

Open
PhilipMay opened this issue Apr 26, 2024 · 5 comments

Comments

@PhilipMay
Contributor

When I install lighteval from main and run this command:

accelerate launch --num_processes=1 run_evals_accelerate.py \
    --model_args "vonjack/Phi-3-mini-4k-instruct-LLaMAfied" \
    --tasks tasks_examples/open_llm_leaderboard_tasks.txt \
    --override_batch_size 1 \
    --use_chat_template \
    --output_dir="./evals/"

I get this error:

Traceback (most recent call last):
  File "/users/philip/code/git/lighteval_main/run_evals_accelerate.py", line 29, in <module>
    from lighteval.main_accelerate import CACHE_DIR, main
  File "/users/philip/code/git/lighteval_main/src/lighteval/main_accelerate.py", line 31, in <module>
    from lighteval.evaluator import evaluate, make_results_table
  File "/users/philip/code/git/lighteval_main/src/lighteval/evaluator.py", line 32, in <module>
    from lighteval.logging.evaluation_tracker import EvaluationTracker
  File "/users/philip/code/git/lighteval_main/src/lighteval/logging/evaluation_tracker.py", line 37, in <module>
    from lighteval.logging.info_loggers import (
  File "/users/philip/code/git/lighteval_main/src/lighteval/logging/info_loggers.py", line 34, in <module>
    from lighteval.metrics import MetricCategory
  File "/users/philip/code/git/lighteval_main/src/lighteval/metrics/__init__.py", line 25, in <module>
    from lighteval.metrics.metrics import MetricCategory, Metrics
  File "/users/philip/code/git/lighteval_main/src/lighteval/metrics/metrics.py", line 34, in <module>
    from lighteval.metrics.metrics_sample import (
  File "/users/philip/code/git/lighteval_main/src/lighteval/metrics/metrics_sample.py", line 42, in <module>
    from lighteval.metrics.llm_as_judge import JudgeOpenAI
  File "/users/philip/code/git/lighteval_main/src/lighteval/metrics/llm_as_judge.py", line 30, in <module>
    from openai import OpenAI
ModuleNotFoundError: No module named 'openai'
Traceback (most recent call last):
  File "/users/philip/miniconda3/envs/lighteval_main/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/users/philip/miniconda3/envs/lighteval_main/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/users/philip/miniconda3/envs/lighteval_main/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1075, in launch_command
    simple_launcher(args)
  File "/users/philip/miniconda3/envs/lighteval_main/lib/python3.10/site-packages/accelerate/commands/launch.py", line 681, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/users/philip/miniconda3/envs/lighteval_main/bin/python', 'run_evals_accelerate.py', '--model_args', 'vonjack/Phi-3-mini-4k-instruct-LLaMAfied', '--tasks', 'tasks_examples/open_llm_leaderboard_tasks.txt', '--override_batch_size', '1', '--use_chat_template', '--output_dir=./evals/']' returned non-zero exit status 1.

This should not happen since I do not want to use anything from OpenAI.

@dgolchin

A quick workaround: comment out from openai import OpenAI
in lighteval/src/lighteval/metrics/llm_as_judge.py and run pip install -e . again :)

@PhilipMay
Contributor Author

PhilipMay commented Apr 27, 2024

A quick workaround: comment out from openai import OpenAI in lighteval/src/lighteval/metrics/llm_as_judge.py and run pip install -e . again :)

Yes, sure. Thanks.
This is a workaround, and I applied it as well.
Nevertheless, we still have a bug here that should be fixed.

Either the openai package should be installed by default, or it should not be imported by default.
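For illustration, the "not imported by default" option could be a guarded import along these lines (just a sketch of the general pattern, not the actual lighteval code; OPENAI_AVAILABLE and the constructor parameters are made up for the example):

try:
    from openai import OpenAI
    OPENAI_AVAILABLE = True
except ImportError:
    OPENAI_AVAILABLE = False


class JudgeOpenAI:
    def __init__(self, model="gpt-4", **openai_kwargs):  # hypothetical signature
        # Only fail when the judge metric is actually instantiated,
        # not when lighteval itself is imported.
        if not OPENAI_AVAILABLE:
            raise ImportError(
                "The llm-as-judge metric needs the openai package; install it with: pip install openai"
            )
        self.model = model
        self.client = OpenAI(**openai_kwargs)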

@clefourrier
Member

Hi!
Yes, we need to add it to our optional dependencies with a check; this is already in the works in #173.
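For context, such a check is usually a small availability helper, something like the sketch below (a generic pattern, not necessarily what #173 implements; is_openai_available is a made-up name):

import importlib.util


def is_openai_available() -> bool:
    # Reports whether the optional openai package is installed, without importing it.
    return importlib.util.find_spec("openai") is not None


# At the point where the judge metric is requested:
if not is_openai_available():
    raise ImportError("The llm-as-judge metric requires the openai package (pip install openai).")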

@NathanHB
Member

Hi! Since the llm-as-judge metric is an official metric, we will be adding openai as a required dependency. Like Clémentine said, a PR has already been opened :)
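In the meantime, manually running pip install openai in the environment also makes the import error go away.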

@Bachstelze

This error is annoying. Who is even using closedai when there is Prometheus for evaluation?
