Make Auto-GPT aware of its running cost #6

Closed
Torantulino opened this issue Mar 29, 2023 · 15 comments
Labels: API costs (Related to monitoring/reduction of running costs), enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@Torantulino
Member

Auto-GPT is expensive to run due to GPT-4's API cost.

We could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost.

This could also be displayed to the user to help them be more aware of exactly how much they are spending.

Torantulino added the enhancement label Mar 29, 2023
@0xcha05
Contributor

0xcha05 commented Apr 2, 2023

Could use this?
https://github.com/openai/tiktoken
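
For reference, a minimal sketch of counting tokens with tiktoken and converting them to a dollar figure; the per-1K-token price below is an illustrative placeholder, not an official rate:

    # Count tokens with tiktoken and convert to an approximate dollar cost.
    import tiktoken

    PRICE_PER_1K_PROMPT_TOKENS = 0.03  # placeholder rate; check OpenAI's pricing page

    def estimate_prompt_cost(text: str, model: str = "gpt-4") -> float:
        """Approximate dollar cost of sending `text` as a prompt to `model`."""
        encoding = tiktoken.encoding_for_model(model)
        num_tokens = len(encoding.encode(text))
        return num_tokens / 1000 * PRICE_PER_1K_PROMPT_TOKENS

    print(estimate_prompt_cost("Summarise today's major news events."))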

@Torantulino
Member Author

My thoughts exactly.

If someone wants to tackle this, here's a suggested implementation:

  • ALL API calls should be centralised through a single method that counts tokens and adds to a running dollar amount based on the model used (a minimal sketch follows at the end of this comment).
  • Store this in its own file and remove openai imports from everywhere else so that the API can't be called accidentally.

Displaying

  • Total dollar amount spent should be constantly displayed to the user.
  • Total dollar amount spent could also be displayed to the AI. Perhaps it could even be given a budget, with the dollar amount counting down as it spends money; this could create a sense of urgency in a HustleGPT-style task.

If you'd like to work on this (please do!), assign it to yourself so we don't end up working on the same thing.
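
A minimal sketch of what that centralised call site could look like, assuming the pre-1.0 openai Python client; the function name and per-token prices are illustrative placeholders, not Auto-GPT's actual code:

    # Illustrative sketch of a single centralised chat-completion call that
    # tracks a running dollar total. Names and prices are placeholders.
    import openai

    # Placeholder per-1K-token rates; real rates depend on the model and change over time.
    COSTS = {"gpt-4": {"prompt": 0.03, "completion": 0.06}}

    total_cost = 0.0

    def create_chat_completion(messages, model="gpt-4", **kwargs):
        """The only place in the codebase allowed to call the OpenAI API."""
        global total_cost
        response = openai.ChatCompletion.create(model=model, messages=messages, **kwargs)
        usage = response["usage"]
        total_cost += (
            usage["prompt_tokens"] / 1000 * COSTS[model]["prompt"]
            + usage["completion_tokens"] / 1000 * COSTS[model]["completion"]
        )
        print(f"Running cost: ${total_cost:.3f}")
        return response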

Torantulino added the good first issue label Apr 2, 2023
@0xcha05
Contributor

0xcha05 commented Apr 2, 2023

I can pick this up, but first I'll spend a little time on #4.

@0xcha05
Contributor

0xcha05 commented Apr 3, 2023

I'm going to start work on this now, @Torantulino. Please assign this to me so others don't work on the same issue.
I'll start with a fresh fork.

@ryanpeach
Contributor

ryanpeach commented Apr 4, 2023

I agree with all of @Torantulino's ideas here. If anything, that might be a reason to use a real TUI instead of a simple set of print statements for this application; it could have a top bar showing the running price, for example.

I'd also add a flag to the CLI that allows rate limiting the application: sleep 10 seconds between requests, for example. Some tasks the AI is given may not require frequent polling. For instance, I'm using the bot for research and for notifications of major news events, and major-news searching may only need the bot to check the internet once a day. This polling-frequency option will be critical for giving some applications full speed while letting others run infrequently and inexpensively (a rough sketch of such a flag follows).
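
A rough sketch of what such a flag could look like; the flag name and wiring are hypothetical, not part of the actual CLI:

    # Hypothetical --request-delay flag: sleep between API calls to rate limit spend.
    import argparse
    import time

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--request-delay",
        type=float,
        default=0.0,
        help="Seconds to sleep between API requests (0 disables rate limiting).",
    )
    args = parser.parse_args()

    def rate_limited_call(api_call, *call_args, **call_kwargs):
        """Run an API call, then sleep for the configured delay."""
        result = api_call(*call_args, **call_kwargs)
        if args.request_delay > 0:
            time.sleep(args.request_delay)
        return result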

@masterismail

Hey @0xcha05, are you currently working on this issue?

@Vwing
Contributor

Vwing commented Apr 9, 2023

I went ahead and implemented this in my own fork (branch "running_cost"). I'll submit a pull request later tonight.

Here it is in action, but also an example of it not working quite right.

(screenshot: running-cost-failure)

I think I'll prompt-engineer it a bit before submitting the pull request. I find that it works better if I send its remaining budget in the system message context, and yell at it if it goes over budget:

    create_chat_message(
        "system",
        f"Your remaining budget is ${remaining_budget:.3f}"
        + (" BUDGET EXCEEDED! SHUT DOWN." if remaining_budget < 0 else "")),

@Vwing
Contributor

Vwing commented Apr 10, 2023

I'll submit a pull request later tonight.

I'm very close now. Auto-GPT is beginning to behave appropriately: hurrying up when the budget is nearly depleted, and shutting itself down when the budget is exceeded.

Too tired to finish this tonight, though. I'll give another update tomorrow night.

@Vwing
Contributor

Vwing commented Apr 11, 2023

Okay! It's working now. It prints the running cost, the AI is aware of the running cost, and the user can optionally provide a budget (of which the AI is also aware, and it behaves appropriately).

Please refer to my pull request: #762

@Vwing
Contributor

Vwing commented Apr 23, 2023

Woohoo, it's merged in! 🎉

@rob-luke would you be interested in tackling the final task of making the total running cost visible to the user?

See #762 (comment)

@ntindle
Member

ntindle commented Apr 23, 2023

@Vwing shoot me a DM on the discord :)

@rob-luke
Contributor

Thanks for tackling this issue @Vwing.
I would be pleased to add the feature of making the running costs visible to the user; I can get that done this week. But if someone else starts to tackle it first, let me know so we don't duplicate work 🚀

@Pwuts
Member

Pwuts commented Apr 27, 2023

Fixed in #762

Pwuts closed this as completed Apr 27, 2023
bjm88 mentioned this issue Apr 27, 2023
@rob-luke
Contributor

@Pwuts The final aspect of the initial problem description is addressed in #3313, which isn't merged yet.

@Boostrix
Contributor

Boostrix commented Oct 4, 2023

Has anybody thought about exposing this in a shell-prompt style at runtime, per agent (task/job)?
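
One hypothetical way that could look; the function and field names below are purely illustrative, not from the codebase:

    # Illustrative only: a shell-prompt style status line per agent showing running cost.
    def status_prompt(agent: str, task: str, cost_usd: float, budget_usd: float) -> str:
        return f"[{agent}:{task}] ${cost_usd:.3f}/${budget_usd:.2f} > "

    print(status_prompt("researcher", "news-scan", 0.142, 1.00))
    # [researcher:news-scan] $0.142/$1.00 >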

jmikedupont2 pushed a commit to meta-introspector/Auto-GPT that referenced this issue Oct 19, 2023
SquareandCompass pushed a commit to SquareandCompass/Auto-GPT that referenced this issue Oct 21, 2023