Error: No token found - OPENAI_API_KEY environment variable #52

Open
vertesy opened this issue Nov 29, 2023 · 21 comments

@vertesy

vertesy commented Nov 29, 2023

Hi, thanks for creating this package!

I am trying to use it, but I get an error when I submit my first request.

Sys.setenv("OPENAI_API_KEY" = 'sk-hahaga32this23isnotarealkey')

chattr::chattr_app()
• Provider: Open AI - Chat Completions
• Path/URL: https://api.openai.com/v1/chat/completions
• Model: gpt-3.5-turbo

Listening on http://127.0.0.1:5946
<callr_error/rlib_error_3_0/rlib_error/error>
  Error: 
  ! in callr subprocess.
Caused by error in `openai_token()`:
  ! No token found
- Add your key to the "OPENAI_API_KEY" environment variable
- or - Add  "open-ai-api-key" to a `config` YAML file

---
Subprocess backtrace:
 1. chattr::ch_submit(defaults = do.call(what = chattr::chattr_defaults, …
 2. chattr:::ch_submit.ch_open_ai_chat_completions(defaults = do.call(what = chattr::chattr_defaults, …
 3. chattr:::ch_submit_open_ai(defaults = defaults, prompt = prompt, stream = stream, …
 4. chattr:::openai_completion(defaults = defaults, prompt = prompt, new_prompt = new_prompt, …
 5. chattr:::openai_completion.ch_open_ai_chat_completions(defaults = defaults, …
 6. chattr:::openai_switch(prompt = prompt, req_body = req_body, defaults = defaults, …
 7. chattr:::openai_stream_file(defaults = defaults, req_body = req_body, …
 8. openai_request(defaults, req_body) %>% req_stream(function(x) { …
 9. httr2::req_stream(., function(x) { …
10. httr2::req_perform_stream(req = req, callback = callback, timeout_sec = timeout_sec, …
11. httr2:::check_request(req)
12. httr2:::is_request(req)
13. chattr:::openai_request(defaults, req_body)
14. defaults$path %>% request() %>% req_auth_bearer_token(openai_token()) %>% …
15. httr2::req_body_json(., req_body)
16. httr2:::check_request(req)
17. httr2:::is_request(req)
18. httr2::req_auth_bearer_token(., openai_token())
19. httr2:::check_string(token)
20. httr2:::.rlang_check_is_string(x, allow_empty = allow_empty, allow_na = allow_na, …
21. rlang::is_string(x)
22. chattr:::openai_token()
23. rlang::abort("No token found\n       - Add your key to the \"OPENAI_API_KEY\" environment variable\n       - or - Add  \"open-ai-api-key\" to a `config` YAML fil…
24. | rlang:::signal_abort(cnd, .file)
25. | base::signalCondition(cnd)
26. global (function (e) …
Warning: Error in observe: Streaming returned error
  51: <Anonymous>
  50: signalCondition
  49: signal_abort
  48: abort
  47: observe
  46: <observer>
   3: shiny::runApp
   2: runGadget
   1: chattr_app

print(Sys.getenv("OPENAI_API_KEY")) returns the correct key.
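Since the error above is raised in a callr subprocess, it may also be worth checking that the key is visible to a fresh R session, not just the current one. A minimal sketch, assuming the usethis package is available (a hypothetical workaround, not a confirmed fix for this issue):

# Make the key available to new R sessions via ~/.Renviron
usethis::edit_r_environ()                  # opens ~/.Renviron for editing
# add a line such as:  OPENAI_API_KEY=sk-...
# then restart R and confirm the key is picked up:
nchar(Sys.getenv("OPENAI_API_KEY")) > 0    # should be TRUE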

> sessioninfo::session_info()
─ Session info ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 setting  value
 version  R version 4.3.1 (2023-06-16)
 os       macOS Ventura 13.5.2
 system   x86_64, darwin20
 ui       RStudio
 language (EN)
 collate  en_US.UTF-8
 ctype    en_US.UTF-8
 tz       Europe/Vienna
 date     2023-11-29
 rstudio  2023.09.1+494 Desert Sunflower (desktop)
 pandoc   NA

─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 package     * version    date (UTC) lib source
 bslib         0.6.0      2023-11-21 [1] CRAN (R 4.3.0)
 cachem        1.0.8      2023-05-01 [1] CRAN (R 4.3.0)
 callr         3.7.3      2022-11-02 [1] CRAN (R 4.3.0)
 chattr      * 0.0.0.9005 2023-11-29 [1] Github (mlverse/chattr@210cfb2)
@vertesy
Author

vertesy commented Nov 29, 2023

I guess this may simply be because a ChatGPT subscription is not enough? It seems you need separate credit for OpenAI API use?

I tried creating a new account, which comes with $5 of credit, and changed the key, but I still get the same error.

@roman-gallardo

I have the same issue!

@PJV-Ecu

PJV-Ecu commented Dec 11, 2023

Same as of today.

@edgararuiz
Collaborator

Hi all, I'm looking into this right now

@edgararuiz
Collaborator

@PJV-Ecu / @vertesy / @roman-gallardo -

I think this is more likely due to an invalid token. Can you try running this and let me know?

library(httr2)
Sys.setenv("OPENAI_API_KEY" = "sk-...")
request("https://api.openai.com/v1/models") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>% 
  req_perform()

If your token is valid you should see this:

<httr2_response>
GET https://api.openai.com/v1/models
Status: 200 OK
Content-Type: application/json
Body: In memory (9051 bytes)

If your token is not valid, which is what I suspect, you should see:

Error in `req_perform()`:
! HTTP 401 Unauthorized.
Run `rlang::last_trace()` to see where the error occurred.

If you do get 401 Unauthorized, check your token against the OpenAI web UI; you may need to get a new one. If you get 200 OK, can you please let me know? That would mean the error is somewhere else.
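As a small hedged addition for the 401 case: httr2 keeps the last response around, so the error body OpenAI returned can be inspected directly (last_response(), resp_status(), and resp_body_json() are standard httr2 helpers):

# Optional: inspect OpenAI's error payload after a failed req_perform()
resp <- httr2::last_response()
if (!is.null(resp)) {
  httr2::resp_status(resp)     # e.g. 401
  httr2::resp_body_json(resp)  # OpenAI's error type and message
}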

@roman-gallardo

Thank you. Here is my output

<httr2_response>
GET https://api.openai.com/v1/models
Status: 200 OK
Content-Type: application/json
Body: In memory (8438 bytes)

So I think my error is elsewhere

@edgararuiz
Collaborator

Thank you @roman-gallardo, would you mind trying this?

library(httr2)
Sys.setenv("OPENAI_API_KEY" = "sk-...")
request("https://api.openai.com/v1/models") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>% 
  req_perform()
chattr::chattr("hello")

@roman-gallardo

I tried it and got the following output:

> chattr::chattr("hello")
Error in `openai_check_error()`:
! Error from OpenAI
Type: insufficient_quota
Message: You exceeded your current quota, please check your plan and billing details.
Run `rlang::last_trace()` to see where the error occurred.
Warning message:
`req_stream()` was deprecated in httr2 1.0.0.
ℹ Please use `req_perform_stream()` instead.
ℹ The deprecated feature was likely used in the chattr package.
  Please report the issue at <https://github.com/mlverse/chattr/issues>.
This warning is displayed once every 8 hours.
Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated.

I think I exceeded my quota because I tried inputting text using the chattr_app() function so many times. I guess I have to wait until my quota is reset? I am not sure when it resets.

@edgararuiz
Collaborator

Oh, ok, thank you for testing. So it is validating your token... Would you mind running the app again and seeing whether the error you get is the "no token" error? If it is, then I need to work on improving the error messages.

@roman-gallardo

Sure, here is the output when I run chattr_app():

> chattr_app()
• Provider: Open AI - Chat Completions
• Path/URL: https://api.openai.com/v1/chat/completions
• Model: gpt-3.5-turbo

Listening on http://127.0.0.1:5937
<callr_error/rlib_error_3_0/rlib_error/error>
Error:
! in callr subprocess.
Caused by error in openai_token():
! No token found
- Add your key to the "OPENAI_API_KEY" environment variable
- or - Add "open-ai-api-key" to a config YAML file

Subprocess backtrace:

  1. chattr::ch_submit(defaults = do.call(what = chattr::chattr_defaults, …
  2. chattr:::ch_submit.ch_open_ai_chat_completions(defaults = do.call(what = chattr::chattr_defaults, …
  3. chattr:::ch_submit_open_ai(defaults = defaults, prompt = prompt, stream = stream, …
  4. chattr:::openai_completion(defaults = defaults, prompt = prompt, new_prompt = new_prompt, …
  5. chattr:::openai_completion.ch_open_ai_chat_completions(defaults = defaults, …
  6. chattr:::openai_switch(prompt = prompt, req_body = req_body, defaults = defaults, …
  7. chattr:::openai_stream_file(defaults = defaults, req_body = req_body, …
  8. openai_request(defaults, req_body) %>% req_stream(function(x) { …
  9. httr2::req_stream(., function(x) { …
  10. httr2::req_perform_stream(req = req, callback = callback, timeout_sec = timeout_sec, …
  11. httr2:::check_request(req)
  12. httr2:::is_request(req)
  13. chattr:::openai_request(defaults, req_body)
  14. defaults$path %>% request() %>% req_auth_bearer_token(openai_token()) %>% …
  15. httr2::req_body_json(., req_body)
  16. httr2:::check_request(req)
  17. httr2:::is_request(req)
  18. httr2::req_auth_bearer_token(., openai_token())
  19. httr2:::check_string(token)
  20. httr2:::.rlang_check_is_string(x, allow_empty = allow_empty, allow_na = allow_na, …
  21. rlang::is_string(x)
  22. chattr:::openai_token()
  23. rlang::abort("No token found\n - Add your key to the "OPENAI_API_KEY" environment variable\n…
  24. | rlang:::signal_abort(cnd, .file)
  25. | base::signalCondition(cnd)
  26. global (function (e) …
    Warning: Error in observe: Streaming returned error
    51:
    50: signalCondition
    49: signal_abort
    48: abort
    47: observe
    46:
    3: shiny::runApp
    2: runGadget
    1: chattr_app


@edgararuiz
Collaborator

Thank you! So, yes, I need to improve what that error says. Thank you for testing!

@PJV-Ecu

PJV-Ecu commented Dec 12, 2023

Thanks, @edgararuiz, for following up on this. My token is ok according to your instructions. The error message when trying to run a query in the chattr_app() pop-up prompt is the following:

chattr_app()
• Provider: Open AI - Chat Completions
• Path/URL: https://api.openai.com/v1/chat/completions
• Model: gpt-3.5-turbo

Listening on http://127.0.0.1:6815
<callr_error/rlib_error_3_0/rlib_error/error>
Error:
! in callr subprocess.
Caused by error in openai_check_error(ret):
! Error from OpenAI
Type:insufficient_quota
Message: You exceeded your current quota, please check your plan and billing details.

Subprocess backtrace:

  1. chattr::ch_submit(defaults = do.call(what = chattr::chattr_defaults, …
  2. chattr:::ch_submit.ch_open_ai_chat_completions(defaults = do.call(what = chattr::chattr_defaults, …
  3. chattr:::ch_submit_open_ai(defaults = defaults, prompt = prompt, stream = stream, …
  4. chattr:::openai_completion(defaults = defaults, prompt = prompt, new_prompt = new_prompt, …
  5. chattr:::openai_completion.ch_open_ai_chat_completions(defaults = defaults, …
  6. chattr:::openai_switch(prompt = prompt, req_body = req_body, defaults = defaults, …
  7. chattr:::openai_stream_file(defaults = defaults, req_body = req_body, …
  8. chattr:::openai_check_error(ret)
  9. rlang::abort(error_msg)
  10. | rlang:::signal_abort(cnd, .file)
  11. | base::signalCondition(cnd)
  12. global (function (e) …
    Warning: Error in observe: Streaming returned error
    51:
    50: signalCondition
    49: signal_abort
    48: abort
    47: observe
    46:
    3: shiny::runApp
    2: runGadget
    1: chattr_app


@Ni-Ar

Ni-Ar commented Dec 29, 2023

Thank you! So, yes, I need to improve what that error says. Thank you for testing!

Hi, I have the same error and tried the same debugging. So, if I get it right, the problem is not related to maxing out the quota?

@Jack-0623

Jack-0623 commented Jan 10, 2024

I ran into a similar problem using chattr. Can anyone help? Thanks!

library(pacman)
p_load(chattr,httr2)
chattr_app()
• Provider: Open AI - Chat Completions
• Path/URL: https://api.openai.com/v1/chat/completions
• Model: gpt-4

Listening on http://127.0.0.1:6153
<callr_error/rlib_error_3_0/rlib_error/error>
Error:
! in callr subprocess.
Caused by error in req_perform_stream(., function(x) { …:
! could not find function "req_perform_stream"

Subprocess backtrace:

  1. chattr::ch_submit(defaults = do.call(what = chattr::chattr_defaults, …
  2. chattr:::ch_submit.ch_open_ai_chat_completions(defaults = do.call(what = chattr::chattr_defaults, …
  3. chattr:::ch_submit_open_ai(defaults = defaults, prompt = prompt, stream = stream, …
  4. chattr:::openai_completion(defaults = defaults, prompt = prompt, new_prompt = new_prompt, …
  5. chattr:::openai_completion.ch_open_ai_chat_completions(defaults = defaults, …
  6. chattr:::openai_switch(prompt = prompt, req_body = req_body, defaults = defaults, …
  7. chattr:::openai_stream_file(defaults = defaults, req_body = req_body, …
  8. openai_request(defaults, req_body) %>% req_perform_stream(function(x) { …
  9. base::.handleSimpleError(function (e) …
  10. global h(simpleError(msg, call))
    Warning: Error in observe: Streaming returned error
    51:
    50: signalCondition
    49: signal_abort
    48: abort
    47: observe
    46:
    3: shiny::runApp
    2: runGadget
    1: chattr_app
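The `could not find function "req_perform_stream"` part suggests an outdated httr2: as the deprecation warning quoted earlier in this thread notes, req_perform_stream() replaced req_stream() in httr2 1.0.0. A quick check, as a sketch:

packageVersion("httr2")    # req_perform_stream() needs httr2 >= 1.0.0
# if it is older, update and restart R:
install.packages("httr2")
exists("req_perform_stream", envir = asNamespace("httr2"))   # TRUE once available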

@drarunmitra

I get the same error as well.

@toni-cerise

toni-cerise commented Jan 15, 2024

Same here; chattr::chattr_app() works, though. (Please also check the spelling mistake: "✔ Connection with OpenAI cofirmed".)

Update (1/17/24): it worked for me today somehow.

edgararuiz added a commit that referenced this issue Jan 20, 2024
edgararuiz added a commit that referenced this issue Jan 20, 2024
@Janine-KKK

Janine-KKK commented Jan 27, 2024

Hello, I got the same issue when I ran chattr_app() and sent prompts:
chattr_app()
• Provider: Open AI - Chat Completions
• Path/URL: https://api.openai.com/v1/chat/completions
• Model: gpt-4

Listening on http://127.0.0.1:6693
<callr_error/rlib_error_3_0/rlib_error/error>
Error:
! in callr subprocess.
Caused by error in openai_token():
! No token found
- Add your key to the "OPENAI_API_KEY" environment variable
- or - Add "openai-api-key" to a config YAML file

Subprocess backtrace:

  1. chattr::ch_submit(defaults = do.call(what = chattr::chattr_defaults, …
  2. chattr:::ch_submit.ch_open_ai_chat_completions(defaults = do.call(what = chattr::chattr_defaults, …
  3. chattr:::ch_submit_open_ai(defaults = defaults, prompt = prompt, stream = stream, …
  4. chattr:::openai_completion(defaults = defaults, prompt = prompt, new_prompt = new_prompt, …
  5. chattr:::openai_completion.ch_open_ai_chat_completions(defaults = defaults, …
  6. chattr:::openai_switch(prompt = prompt, req_body = req_body, defaults = defaults, …
  7. chattr:::openai_stream_file(defaults = defaults, req_body = req_body, …
  8. openai_request(defaults, req_body) %>% httr2::req_perform_stream(function(x) { …
  9. httr2::req_perform_stream(., function(x) { …
  10. httr2:::check_request(req)
  11. httr2:::is_request(req)
  12. chattr:::openai_request(defaults, req_body)
  13. defaults$path %>% httr2::request() %>% httr2::req_auth_bearer_token(openai_token()) %>% …
  14. httr2::req_body_json(., req_body)
  15. httr2:::check_request(req)
  16. httr2:::is_request(req)
  17. httr2::req_auth_bearer_token(., openai_token())
  18. httr2:::check_string(token)
  19. httr2:::.rlang_check_is_string(x, allow_empty = allow_empty, allow_na = allow_na, …
  20. rlang::is_string(x)
  21. chattr:::openai_token()
  22. rlang::abort("No token found\n - Add your key to the "OPENAI_API_KEY" environment variable\n - or - Add "openai-api-key" to a `conf…
  23. | rlang:::signal_abort(cnd, .file)
  24. | base::signalCondition(cnd)
  25. global (function (e) …
    Warning: Error in observe: Streaming returned error
    51:
    50: signalCondition
    49: signal_abort
    48: abort
    47: observe
    46:
    3: shiny::runApp
    2: runGadget
    1: chattr_app

But when I use req_perform(), it works, and it responded to a prompt I sent before:
Sys.setenv("OPENAI_API_KEY" = "sk-g")

request("https://api.openai.com/v1/models") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>%
  req_perform()

<httr2_response>
GET https://api.openai.com/v1/models
Status: 200 OK
Content-Type: application/json
Body: In memory (3329 bytes)

chattr::chattr("hello")
Here is a simple way to generate a uniform random variable in R:

Generate a uniform random variable

runif(1)
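Since the app raises the "No token found" error inside a callr subprocess while chattr::chattr() answers in the console, one hedged way to narrow this down is to check whether the key survives into a fresh background R session (callr::r() is a generic way to run a function in a subprocess; this is a diagnostic sketch, not a confirmed explanation of the bug):

Sys.getenv("OPENAI_API_KEY")                        # current session
callr::r(function() Sys.getenv("OPENAI_API_KEY"))   # fresh callr subprocess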

@thtveer

thtveer commented Feb 15, 2024

I have the same issue as Janine-KKK:
Running a request in chattr_app() caused this error:

Listening on http://127.0.0.1:4123
<callr_error/rlib_error_3_0/rlib_error/error>
Error:
! in callr subprocess.
Caused by error in openai_token():
! No token found
- Add your key to the "OPENAI_API_KEY" environment variable
- or - Add "openai-api-key" to a config YAML file

Subprocess backtrace:

  1. chattr::ch_submit(defaults = do.call(what = chattr::chattr_defaults, …
  2. chattr:::ch_submit.ch_open_ai_chat_completions(defaults = do.call(what = chattr::chattr_defaults, …
  3. chattr:::ch_submit_open_ai(defaults = defaults, prompt = prompt, stream = stream, …
  4. chattr:::openai_completion(defaults = defaults, prompt = prompt, new_prompt = new_prompt, …
  5. chattr:::openai_completion.ch_open_ai_chat_completions(defaults = defaults, …
  6. chattr:::openai_switch(prompt = prompt, req_body = req_body, defaults = defaults, …
  7. chattr:::openai_stream_file(defaults = defaults, req_body = req_body, …
  8. openai_request(defaults, req_body) %>% httr2::req_perform_stream(function(x) { …
  9. httr2::req_perform_stream(., function(x) { …
  10. httr2:::check_request(req)
  11. httr2:::is_request(req)
  12. chattr:::openai_request(defaults, req_body)
  13. defaults$path %>% httr2::request() %>% httr2::req_auth_bearer_token(openai_token()) %>% …
  14. httr2::req_body_json(., req_body)
  15. httr2:::check_request(req)
  16. httr2:::is_request(req)
  17. httr2::req_auth_bearer_token(., openai_token())
  18. httr2:::check_string(token)
  19. httr2:::.rlang_check_is_string(x, allow_empty = allow_empty, allow_na = allow_na, …
  20. rlang::is_string(x)
  21. chattr:::openai_token()
  22. rlang::abort("No token found\n - Add your key to the "OPENAI_API_KEY" environment variable\n - or - Add "openai-api-key" to a config YAML file")
  23. | rlang:::signal_abort(cnd, .file)
  24. | base::signalCondition(cnd)
  25. global (function (e) …
Warning: Error in observe: Streaming returned error
    51:
    50: signalCondition
    49: signal_abort
    48: abort
    47: observe
    46:
    3: shiny::runApp
    2: runGadget
    1: chattr_app

So I troubleshot this, and my API key is valid:

request("https://api.openai.com/v1/models") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>%
  req_perform()

<httr2_response>
GET https://api.openai.com/v1/models
Status: 200 OK
Content-Type: application/json
Body: In memory (3457 bytes)

Then I continued with:


chattr::chattr("hello")

And I got an answer to my original chat question in the console. Yet using chattr_app() produces the above-mentioned error over and over.

Can someone help?

sessioninfo::session_info()
─ Session info ────────────────────────
setting value
version R version 4.3.2 (2023-10-31 ucrt)
os Windows 11 x64 (build 22631)
system x86_64, mingw32
ui RStudio
language (EN)
collate German_Germany.utf8
ctype German_Germany.utf8
tz Europe/Berlin
date 2024-02-15
rstudio 2023.12.1+402 Ocean Storm (desktop)
pandoc NA

─ Packages ─────────
bslib 0.6.1 2023-11-28 [1] CRAN (R 4.3.2)
cachem 1.0.8 2023-05-01 [1] CRAN (R 4.3.2)
callr 3.7.3 2022-11-02 [1] CRAN (R 4.3.2)
cellranger 1.1.0 2016-07-27 [1] CRAN (R 4.3.2)
chattr * 0.0.0.9006 2024-02-15 [1] Github (bc8f3b5)
cli 3.6.1 2023-03-23 [1] CRAN (R 4.3.1)
clipr 0.8.0 2022-02-22 [1] CRAN (R 4.3.2)
commonmark 1.9.1 2024-01-30 [1] CRAN (R 4.3.2)
crayon 1.5.2 2022-09-29 [1] CRAN (R 4.3.2)
curl 5.2.0 2023-12-08 [1] CRAN (R 4.3.2)
data.table * 1.15.0 2024-01-30 [1] CRAN (R 4.3.2)
digest 0.6.34 2024-01-11 [1] CRAN (R 4.3.2)
ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.3.2)
fansi 1.0.6 2023-12-08 [1] CRAN (R 4.3.2)
fastmap 1.1.1 2023-02-24 [1] CRAN (R 4.3.2)
fontawesome 0.5.2 2023-08-19 [1] CRAN (R 4.3.2)
fs 1.6.3 2023-07-20 [1] CRAN (R 4.3.2)
glue 1.6.2 2022-02-24 [1] CRAN (R 4.3.1)
htmltools 0.5.7 2023-11-03 [1] CRAN (R 4.3.2)
httpuv 1.6.14 2024-01-26 [1] CRAN (R 4.3.2)
httr2 * 1.0.0 2023-11-14 [1] CRAN (R 4.3.2)
jquerylib 0.1.4 2021-04-26 [1] CRAN (R 4.3.2)
jsonlite 1.8.8 2023-12-04 [1] CRAN (R 4.3.2)
later 1.3.2 2023-12-06 [1] CRAN (R 4.3.2)
lifecycle 1.0.4 2023-11-07 [1] CRAN (R 4.3.2)
magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.3.1)
memoise 2.0.1 2021-11-26 [1] CRAN (R 4.3.2)
mime 0.12 2021-09-28 [1] CRAN (R 4.3.1)
NLP * 0.2-1 2020-10-14 [1] CRAN (R 4.3.1)
pillar 1.9.0 2023-03-22 [1] CRAN (R 4.3.2)
pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.3.2)
processx 3.8.3 2023-12-10 [1] CRAN (R 4.3.2)
promises 1.2.1 2023-08-10 [1] CRAN (R 4.3.2)
ps 1.7.6 2024-01-18 [1] CRAN (R 4.3.2)
purrr 1.0.2 2023-08-10 [1] CRAN (R 4.3.2)
R6 2.5.1 2021-08-19 [1] CRAN (R 4.3.2)
rappdirs 0.3.3 2021-01-31 [1] CRAN (R 4.3.2)
Rcpp 1.0.12 2024-01-09 [1] CRAN (R 4.3.2)
readxl 1.4.3 2023-07-06 [1] CRAN (R 4.3.2)
rlang 1.1.1 2023-04-28 [1] CRAN (R 4.3.1)
rstudioapi 0.15.0 2023-07-07 [1] CRAN (R 4.3.2)
sass 0.4.8 2023-12-06 [1] CRAN (R 4.3.2)
sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.3.2)
shiny * 1.8.0 2023-11-17 [1] CRAN (R 4.3.2)
slam 0.1-50 2022-01-08 [1] CRAN (R 4.3.1)
stopwords * 2.3 2021-10-28 [1] CRAN (R 4.3.2)
tibble 3.2.1 2023-03-20 [1] CRAN (R 4.3.1)
tm * 0.7-11 2023-02-05 [1] CRAN (R 4.3.2)
utf8 1.2.4 2023-10-22 [1] CRAN (R 4.3.1)
vctrs 0.6.3 2023-06-14 [1] CRAN (R 4.3.1)
withr 3.0.0 2024-01-16 [1] CRAN (R 4.3.2)
xml2 1.3.6 2023-12-04 [1] CRAN (R 4.3.2)
xtable 1.8-4 2019-04-21 [1] CRAN (R 4.3.2)
yaml 2.3.8 2023-12-11 [1] CRAN (R 4.3.2)

@cargingarsan

chattr_app()
• Provider: OpenAI - Chat Completions
• Path/URL: https://api.openai.com/v1/chat/completions
• Model: gpt-3.5-turbo
• Label: GPT 3.5 (OpenAI)
Loading required package: shiny

Listening on http://127.0.0.1:6732
<callr_error/rlib_error_3_0/rlib_error/error>
Error:
! in callr subprocess.
Caused by error in abort(req_result):
! message must be a character vector, not a <httr2_response> object.

Subprocess backtrace:

  1. chattr::ch_submit(defaults = defaults, prompt = prompt, stream = stream, …
  2. chattr:::ch_submit.ch_openai(defaults = defaults, prompt = prompt, stream = stream, …
  3. chattr:::ch_openai_complete(prompt = prompt, defaults = defaults)
  4. rlang::abort(req_result)
  5. rlang:::validate_signal_args(message, class, call, .subclass, "abort")
  6. rlang:::check_character(message, call = env)
  7. rlang:::stop_input_type(x, "a character vector", ..., allow_na = FALSE, …
  8. rlang::abort(message, ..., call = call, arg = arg)
  9. | rlang:::signal_abort(cnd, .file)
  10. | base::signalCondition(cnd)
  11. global (function (e) …
    Warning: Error in observe: Streaming returned error
    51:
    50: signalCondition
    49: signal_abort
    48: abort
    47: observe
    46:
    3: shiny::runApp
    2: runGadget
    1: chattr_app

@edgararuiz
Collaborator

Hi @cargingarsan, can you run the following and let me know what you get?

library(httr2)
request("https://api.openai.com/v1/models") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>% 
  req_perform()
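If that models request succeeds for you as well, a hedged follow-up is to call the chat completions endpoint directly and look at the raw status and body. This is only a sketch using plain httr2 calls with a minimal chat request body; req_error() is used so the response can be inspected even on a 4xx:

library(httr2)
resp <- request("https://api.openai.com/v1/chat/completions") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>%
  req_body_json(list(
    model = "gpt-3.5-turbo",
    messages = list(list(role = "user", content = "hello"))
  )) %>%
  req_error(is_error = function(resp) FALSE) %>%   # do not abort on 4xx
  req_perform()
resp_status(resp)      # 200, 401, 429, ...
resp_body_json(resp)   # the completion, or OpenAI's error object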

@jhk0530

jhk0530 commented May 7, 2024

Hi, I was experiencing similar symptoms to those of the users mentioned above and thought I'd leave a note in case it might give you a clue.

Symptoms

1. httr2 response

When I execute the code below, it returns 200.

library(httr2)
Sys.setenv("OPENAI_API_KEY" = "sk-...")
request("https://api.openai.com/v1/models") %>%
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) %>%
  req_perform()
<httr2_response>
GET https://api.openai.com/v1/models
Status: 200 OK
Content-Type: application/json
Body: In memory (2707 bytes)

This is not affected by the type of API secret key (either a user API key or a project API key).

2. chattr options

When I use chattr_app(), I only see GitHub Copilot chat (not GPT 3.5 / GPT 4).

3. chattr viewer

When I use chattr with the viewer (e.g. chattr::chattr("hello")), it doesn't produce any reaction.

4. chattr error: `message` must be a character vector, not a <httr2_response> object

(see #52 (comment))


My status

1. Versions

I used:

  • R version 4.3.2.
  • RStudio version 2023.12.1 (the most recent version is 2024.4, but it crashed my session in another project, so I downgraded).
  • Every R package updated to its most recent version (including httr2 and chattr).

2. API

I'm using the ChatGPT Plus subscription. (Sorry for the Korean.)

At the same time, I saw the error @cargingarsan mentioned:

<callr_error/rlib_error_3_0/rlib_error/error>
Error: 
! in callr subprocess.
Caused by error in `abort(req_result)`:
! `message` must be a character vector, not a <httr2_response> object.

Clue

After trying various things to get chattr working, I realized that I didn't have a paid OpenAI API plan (just the free trial), so I thought that might be causing the problem.

I didn't realize that a ChatGPT subscription and an OpenAI API subscription are treated separately.

Since I don't intend to use the OpenAI API, I won't be trying chattr any further, but I would recommend checking your OpenAI API plan if anyone else has had similar issues.

(However, I understand that Gemini provides a free API; if chattr can use it, I may try it later.)

Thanks.
