
multimodal/Multimodal.ipynb example doesn't include vital details #763

Closed
pleabargain opened this issue Mar 16, 2023 · 2 comments

@pleabargain
There are a number of libs/imports not included, e.g.:

# ! pip install sentencepiece
# ! pip install transformers
! pip install gpt_index

And the code insists on openai_api_key, but no matter how I try to set this variable, the code complains that openai_api_key doesn't exist.

The receipt reader code also complains for a long time:
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.

I tried with
OPENAI_API_KEY="sk-XXX"
and
openai_api_key="sk-XXX"
but both failed with:

Did not find openai_api_key, please add an environment variable OPENAI_API_KEY which contains it, or pass openai_api_key as a named parameter. (type=value_error)

@Disiok
Collaborator

Disiok commented Mar 16, 2023

How do you currently set your API key? Do other notebook examples work?

@Disiok Disiok added the discord label Mar 18, 2023
@logan-markewich
Collaborator

Going to close this issue, as it is stale.

For reference, there are several ways to set the key:

In a bash terminal:
export OPENAI_API_KEY="my key"

Directly in code:
import os
os.environ['OPENAI_API_KEY'] = "my key"

Directly in the LLM (imports assumed: from llama_index import LLMPredictor and from langchain import OpenAI):
llm_predictor = LLMPredictor(llm=OpenAI(model_name='text-davinci-003', temperature=0, openai_api_key="my key"))
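A likely pitfall in the original report is assigning a plain Python variable (openai_api_key = "sk-XXX") rather than setting an environment variable, which the library cannot see. A minimal sketch to verify the key is visible to the notebook's process before building any index (the placeholder key is hypothetical):

```python
import os

# Set the key for the current process only.
# Replace the hypothetical placeholder with your real key.
os.environ["OPENAI_API_KEY"] = "sk-..."

# Confirm the variable is actually in the process environment;
# a bare Python assignment like openai_api_key = "sk-XXX" would
# NOT show up here, which reproduces the reported error.
key = os.environ.get("OPENAI_API_KEY")
assert key, "OPENAI_API_KEY is not set in this process"
print("OPENAI_API_KEY is visible to this process")
```

Note that export in a shell only affects child processes started from that shell, so a Jupyter kernel launched elsewhere (e.g. from a desktop launcher) will not inherit it.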

3 participants