
Update README.md #338

Merged
@3coins merged 1 commit into main from 3coins-patch-1 on Aug 12, 2023
Conversation

@3coins (Collaborator) commented Aug 12, 2023

Updated instructions to set API keys in the notebook cell.

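For context, the kind of notebook-cell setup the updated instructions describe looks roughly like this (a sketch, assuming the OpenAI provider; the placeholder key and exact README wording are assumptions):

```python
# Set the provider's API key in the kernel's environment before using the magics
import os
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; substitute your real key

# Then load the Jupyter AI magics
%load_ext jupyter_ai_magics
```

The `%%ai chatgpt -f code` invocation seen later in this thread assumes the key is already present in the kernel's environment.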
@3coins added the documentation label Aug 12, 2023
@3coins marked this pull request as ready for review August 12, 2023 19:22
@3coins merged commit f0e60f8 into main Aug 12, 2023
6 of 7 checks passed
@bjornjorgensen (Contributor) commented Aug 13, 2023

The reason for #330, which prompted this PR, was that `%env OPENAI_API_KEY='sk-...'` doesn't work. I'm using a Docker build from jupyter/docker-stacks, but `os.environ["OPENAI_API_KEY"] = 'sk-...'` works.

This is the error I get:

---------------------------------------------------------------------------
AuthenticationError                       Traceback (most recent call last)
Cell In[4], line 1
----> 1 get_ipython().run_cell_magic('ai', 'chatgpt -f code', 'A program that asks me for my name and then greets me by my name, in Norwegian\n')

File /opt/conda/lib/python3.11/site-packages/IPython/core/interactiveshell.py:2478, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
   2476 with self.builtin_trap:
   2477     args = (magic_arg_s, cell)
-> 2478     result = fn(*args, **kwargs)
   2480 # The code below prevents the output from being displayed
   2481 # when using magics with decodator @output_can_be_silenced
   2482 # when the last Python token in the expression is a ';'.
   2483 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):

File /opt/conda/lib/python3.11/site-packages/jupyter_ai_magics/magics.py:590, in AiMagics.ai(self, line, cell)
    587 ip = get_ipython()
    588 prompt = prompt.format_map(FormatDict(ip.user_ns))
--> 590 return self.run_ai_cell(args, prompt)

File /opt/conda/lib/python3.11/site-packages/jupyter_ai_magics/magics.py:532, in AiMagics.run_ai_cell(self, args, prompt)
    529 prompt = prompt.format_map(FormatDict(ip.user_ns))
    531 # generate output from model via provider
--> 532 result = provider.generate([prompt])
    533 output = result.generations[0][0].text
    535 # if openai-chat, append exchange to transcript

File /opt/conda/lib/python3.11/site-packages/langchain/llms/base.py:227, in BaseLLM.generate(self, prompts, stop, callbacks, tags, **kwargs)
    221         raise ValueError(
    222             "Asked to cache, but no cache found at `langchain.cache`."
    223         )
    224     run_managers = callback_manager.on_llm_start(
    225         dumpd(self), prompts, invocation_params=params, options=options
    226     )
--> 227     output = self._generate_helper(
    228         prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    229     )
    230     return output
    231 if len(missing_prompts) > 0:

File /opt/conda/lib/python3.11/site-packages/langchain/llms/base.py:178, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    176     for run_manager in run_managers:
    177         run_manager.on_llm_error(e)
--> 178     raise e
    179 flattened_outputs = output.flatten()
    180 for manager, flattened_output in zip(run_managers, flattened_outputs):

File /opt/conda/lib/python3.11/site-packages/langchain/llms/base.py:165, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    155 def _generate_helper(
    156     self,
    157     prompts: List[str],
   (...)
    161     **kwargs: Any,
    162 ) -> LLMResult:
    163     try:
    164         output = (
--> 165             self._generate(
    166                 prompts,
    167                 stop=stop,
    168                 # TODO: support multiple run managers
    169                 run_manager=run_managers[0] if run_managers else None,
    170                 **kwargs,
    171             )
    172             if new_arg_supported
    173             else self._generate(prompts, stop=stop)
    174         )
    175     except (KeyboardInterrupt, Exception) as e:
    176         for run_manager in run_managers:

File /opt/conda/lib/python3.11/site-packages/langchain/llms/openai.py:822, in OpenAIChat._generate(self, prompts, stop, run_manager, **kwargs)
    818     return LLMResult(
    819         generations=[[Generation(text=response)]],
    820     )
    821 else:
--> 822     full_response = completion_with_retry(self, messages=messages, **params)
    823     llm_output = {
    824         "token_usage": full_response["usage"],
    825         "model_name": self.model_name,
    826     }
    827     return LLMResult(
    828         generations=[
    829             [Generation(text=full_response["choices"][0]["message"]["content"])]
    830         ],
    831         llm_output=llm_output,
    832     )

File /opt/conda/lib/python3.11/site-packages/langchain/llms/openai.py:106, in completion_with_retry(llm, **kwargs)
    102 @retry_decorator
    103 def _completion_with_retry(**kwargs: Any) -> Any:
    104     return llm.client.create(**kwargs)
--> 106 return _completion_with_retry(**kwargs)

File /opt/conda/lib/python3.11/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
    287 @functools.wraps(f)
    288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289     return self(f, *args, **kw)

File /opt/conda/lib/python3.11/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
    377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
    378 while True:
--> 379     do = self.iter(retry_state=retry_state)
    380     if isinstance(do, DoAttempt):
    381         try:

File /opt/conda/lib/python3.11/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
    312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
    313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314     return fut.result()
    316 if self.after is not None:
    317     self.after(retry_state)

File /opt/conda/lib/python3.11/concurrent/futures/_base.py:449, in Future.result(self, timeout)
    447     raise CancelledError()
    448 elif self._state == FINISHED:
--> 449     return self.__get_result()
    451 self._condition.wait(timeout)
    453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File /opt/conda/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
    399 if self._exception:
    400     try:
--> 401         raise self._exception
    402     finally:
    403         # Break a reference cycle with the exception in self._exception
    404         self = None

File /opt/conda/lib/python3.11/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
    380 if isinstance(do, DoAttempt):
    381     try:
--> 382         result = fn(*args, **kwargs)
    383     except BaseException:  # noqa: B902
    384         retry_state.set_exception(sys.exc_info())  # type: ignore[arg-type]

File /opt/conda/lib/python3.11/site-packages/langchain/llms/openai.py:104, in completion_with_retry.<locals>._completion_with_retry(**kwargs)
    102 @retry_decorator
    103 def _completion_with_retry(**kwargs: Any) -> Any:
--> 104     return llm.client.create(**kwargs)

File /opt/conda/lib/python3.11/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
     23 while True:
     24     try:
---> 25         return super().create(*args, **kwargs)
     26     except TryAgain as e:
     27         if timeout is not None and time.time() > start + timeout:

File /opt/conda/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
    127 @classmethod
    128 def create(
    129     cls,
   (...)
    136     **params,
    137 ):
    138     (
    139         deployment_id,
    140         engine,
   (...)
    150         api_key, api_base, api_type, api_version, organization, **params
    151     )
--> 153     response, _, api_key = requestor.request(
    154         "post",
    155         url,
    156         params=params,
    157         headers=headers,
    158         stream=stream,
    159         request_id=request_id,
    160         request_timeout=request_timeout,
    161     )
    163     if stream:
    164         # must be an iterator
    165         assert not isinstance(response, OpenAIResponse)

File /opt/conda/lib/python3.11/site-packages/openai/api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
    277 def request(
    278     self,
    279     method,
   (...)
    286     request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
    287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
    288     result = self.request_raw(
    289         method.lower(),
    290         url,
   (...)
    296         request_timeout=request_timeout,
    297     )
--> 298     resp, got_stream = self._interpret_response(result, stream)
    299     return resp, got_stream, self.api_key

File /opt/conda/lib/python3.11/site-packages/openai/api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
    692     return (
    693         self._interpret_response_line(
    694             line, result.status_code, result.headers, stream=True
    695         )
    696         for line in parse_stream(result.iter_lines())
    697     ), True
    698 else:
    699     return (
--> 700         self._interpret_response_line(
    701             result.content.decode("utf-8"),
    702             result.status_code,
    703             result.headers,
    704             stream=False,
    705         ),
    706         False,
    707     )

File /opt/conda/lib/python3.11/site-packages/openai/api_requestor.py:763, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    761 stream_error = stream and "error" in resp.data
    762 if stream_error or not 200 <= rcode < 300:
--> 763     raise self.handle_error_response(
    764         rbody, rcode, resp.data, rheaders, stream_error=stream_error
    765     )
    766 return resp

AuthenticationError: Incorrect API key provided: 'sk-UDSc*****************************************Wz7'. You can find your API key at https://platform.openai.com/account/api-keys.

@bjornjorgensen (Contributor) commented

oh.. if I use `os.environ` then I need quotes around the key, but if I use `%env` then it's without quotes.
`%env` works now.
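To spell out the difference (a minimal sketch; the key value is a placeholder): with plain Python the quotes are syntax and are stripped, while the `%env` magic takes the right-hand side literally, so any quotes become part of the stored value. That is why the rejected key in the traceback above is shown wrapped in quotes.

```python
import os

# Plain Python: quotes are syntax, not data; the stored value has no quotes
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key

# %env magic: the value is taken literally, so write it without quotes
# %env OPENAI_API_KEY=sk-...        # correct
# %env OPENAI_API_KEY='sk-...'      # wrong: the quotes end up inside the value
```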

@3coins deleted the 3coins-patch-1 branch September 13, 2023 17:50
dbelgrod pushed a commit to dbelgrod/jupyter-ai that referenced this pull request Jun 10, 2024
Updated instructions to set api keys in the notebook cell.