Add to SOTA: worse than humans at agency #57

Open
Pato-desu opened this issue Jan 12, 2024 · 7 comments

@Pato-desu

To be a little more honest. We could also note there that even if LLMs aren't agentic right now, the big labs are trying to change that.

@joepio commented Jan 17, 2024

Yeah, the SOTA needs some more things AIs aren't good at. However, agency isn't that clear a concept to me. The reason AutoGPT doesn't really lead to useful behaviour has to do with a bunch of underlying shortcomings:

  • Short context window (little short-term memory)
  • Hallucinations
  • Can't update own weights (learn as you go).

@joepio commented Jan 17, 2024

Although to be honest, short-term memory also isn't the right abstraction. An LLM can recite many pages letter by letter if they're in the context. But as soon as something doesn't fit in the context, it isn't remembered at all. Maybe this is part of the "can't update own weights" problem, then.
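A minimal sketch of that cliff (toy code, not any real model API; all names are hypothetical): everything inside a fixed-size window is perfectly recallable, and everything older is simply gone.

```python
from collections import deque

# Toy context window: a fixed-size buffer where the oldest tokens
# fall off as new ones arrive. Inside the window recall is perfect;
# outside it there is nothing to recall.
CONTEXT_SIZE = 4

context = deque(maxlen=CONTEXT_SIZE)
for token in "the quick brown fox jumps over the lazy dog".split():
    context.append(token)

print(list(context))        # ['over', 'the', 'lazy', 'dog']
print("lazy" in context)    # True: still in the window, "remembered" exactly
print("quick" in context)   # False: pushed out, effectively never seen
```

There is no graceful degradation: retention is all-or-nothing at the window boundary, which is why "short-term memory" is a misleading analogy.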

@Pato-desu

I don't know if I agree with you, but I guess agency can mean two different things: one is acting like an agent, and the other is deciding things somewhat independently. Right now LLMs are "bad" at both.

@joepio commented Jan 18, 2024

What does it mean to "act like an agent", though? I suppose I just don't have a clear understanding of this concept.

We have NNs playing all sorts of video games. I'd consider these agents: they perform actions in a system to achieve some objective.
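To pin down that definition, a minimal sketch (toy environment, all names hypothetical): an agent observes a state, picks an action, and acts on the system in a loop until an objective is met.

```python
import random

# Toy agent: walk a position along a number line toward a goal.
# This is only meant to make the word "agent" concrete:
# observe state -> choose action -> act -> repeat until done.
GOAL = 10

def choose_action(state: int) -> int:
    """Policy: mostly step toward the goal, occasionally explore."""
    if random.random() < 0.1:
        return random.choice([-1, 1])   # exploration
    return 1 if state < GOAL else -1    # greedy step toward GOAL

state = 0
while state != GOAL:
    action = choose_action(state)
    state += action                     # the action changes the system

print("objective reached at state", state)
```

By that bar, the game-playing NNs clearly qualify.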

I agree that if you give AutoGPT a goal, it performs sub-human at pretty much any task. But I think the reason it's bad can be explained mostly by other shortcomings: it can't make a cup of coffee because it can't control a body, and it can't reliably perform tasks because it hallucinates.

But maybe I'm missing some important other dimension here?

@Pato-desu

Yeah, you're right. I'm not sure what I wanted to refer to. I'm confused now.

@Pato-desu

Being general purpose?

@joepio commented Jan 19, 2024

Still not clearly defined to me.
