bug: CalledProcessError: Unable to download and use models using OpenLLM #121

Closed
cmazzoni87 opened this issue Jul 18, 2023 · 4 comments

Comments

@cmazzoni87

Describe the bug

Cannot use OpenLLM locally at all; initializing a model fails with a CalledProcessError during the model download step.

To reproduce

from langchain.llms import OpenLLM

llm = OpenLLM(model_name='falcon', model_id='tiiuae/falcon-40b-instruct', temperature=0.0)
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")

Logs

---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
<ipython-input-2-9e706d7fe198> in <module>
      3 import os
      4 
----> 5 llm = OpenLLM(model_name='falcon', model_id='tiiuae/falcon-40b-instruct', temperature=0.0)
      6 
      7 llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")

~/.local/lib/python3.8/site-packages/langchain/llms/openllm.py in __init__(self, model_name, model_id, server_url, server_type, embedded, **llm_kwargs)
    168             # in-process. Wrt to BentoML users, setting embedded=False is the expected
    169             # behaviour to invoke the runners remotely
--> 170             runner = openllm.Runner(
    171                 model_name=model_name,
    172                 model_id=model_id,

~/.local/lib/python3.8/site-packages/openllm/_llm.py in Runner(model_name, ensure_available, init_local, implementation, **attrs)
   1404                 behaviour
   1405     """
-> 1406     runner = t.cast(
   1407         "_BaseAutoLLMClass",
   1408         openllm[implementation if implementation is not None else EnvVarMixin(model_name)["framework_value"]],  # type: ignore (internal API)

~/.local/lib/python3.8/site-packages/openllm/models/auto/factory.py in create_runner(cls, model_name, model_id, **attrs)
    155             A LLM instance.
    156         """
--> 157         llm, runner_attrs = cls.for_model(model_name, model_id, return_runner_kwargs=True, **attrs)
    158         return llm.to_runner(**runner_attrs)
    159 

~/.local/lib/python3.8/site-packages/openllm/models/auto/factory.py in for_model(cls, model_name, model_id, return_runner_kwargs, llm_config, ensure_available, **attrs)
    133                     llm.model_id,
    134                 )
--> 135                 llm.ensure_model_id_exists()
    136             if not return_runner_kwargs:
    137                 return llm

~/.local/lib/python3.8/site-packages/openllm/_llm.py in ensure_model_id_exists(self)
    898         Auto LLM initialisation.
    899         """
--> 900         output = subprocess.check_output(
    901             [
    902                 sys.executable,

/usr/lib/python3.8/subprocess.py in check_output(timeout, *popenargs, **kwargs)
    413         kwargs['input'] = empty
    414 
--> 415     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
    416                **kwargs).stdout
    417 

/usr/lib/python3.8/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
    514         retcode = process.poll()
    515         if check and retcode:
--> 516             raise CalledProcessError(retcode, process.args,
    517                                      output=stdout, stderr=stderr)
    518     return CompletedProcess(process.args, retcode, stdout, stderr)

CalledProcessError: Command '['/usr/bin/python3', '-m', 'openllm', 'download', 'falcon', '--model-id', 'tiiuae/falcon-40b-instruct', '--machine', '--implementation', 'pt']' returned non-zero exit status 1.
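(Note: the CalledProcessError only reports the exit status, not the subprocess's stderr, so the root cause is hidden. Re-running the command from the traceback with stderr captured usually surfaces the underlying failure. A minimal sketch; the command is copied verbatim from the error above:)

import subprocess
import sys

# Re-run the download command that failed inside OpenLLM, capturing
# stderr so the underlying error message becomes visible.
result = subprocess.run(
    [sys.executable, "-m", "openllm", "download", "falcon",
     "--model-id", "tiiuae/falcon-40b-instruct",
     "--machine", "--implementation", "pt"],
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stderr)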

Environment

latest

System information (Optional)

1x H100 (80 GB PCIe)
26 vCPUs, 200 GiB RAM, 1 TiB SSD

@cmazzoni87 cmazzoni87 changed the title bug: Unable to download and use models using OpenLLM bug: CalledProcessError: Unable to download and use models using OpenLLM Jul 18, 2023
@aarnphm
Member

aarnphm commented Jul 18, 2023

Can you try installing openllm from HEAD?

pip install -U "git+https://github.com/bentoml/openllm.git@main"
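(To confirm the development install is the one actually being imported, a minimal check; this assumes openllm exposes __version__, as most packages do:)

import openllm
print(openllm.__version__)  # should report the HEAD build after reinstalling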

@cmazzoni87
Author

> Can you try installing openllm from HEAD?
>
> pip install -U "git+https://github.com/bentoml/openllm.git@main"

That did solve the issue, but now cloudpickle is failing:

~/.local/lib/python3.8/site-packages/bentoml/_internal/models/model.py in enter_cloudpickle_context(cls, external_modules, imported_modules)
    263             return []
    264 
--> 265         registed_before: set[str] = cloudpickle.list_registry_pickle_by_value()
    266         for mod in external_modules:
    267             if mod.__name__ in registed_before:
AttributeError: module 'cloudpickle' has no attribute 'list_registry_pickle_by_value'
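(A quick way to confirm the version mismatch, as a minimal sketch: list_registry_pickle_by_value appears to have been added in cloudpickle 2.0, so anything older raises this AttributeError.)

import cloudpickle

# An old cloudpickle (pre-2.0) lacks the registry API that bentoml calls.
print(cloudpickle.__version__)
print(hasattr(cloudpickle, "list_registry_pickle_by_value"))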

@parano
Member

parano commented Aug 8, 2023

@cmazzoni87 could you try upgrading cloudpickle? pip install -U cloudpickle

@aarnphm
Member

aarnphm commented Aug 18, 2023

This is fixed by upgrading cloudpickle; it is not an issue with OpenLLM.

@aarnphm aarnphm closed this as completed Aug 18, 2023