
After I make manual edits and remove context from a plan and re-tell, old decisions made by LLM seem to persist. #138

Closed
atljoseph opened this issue May 26, 2024 · 5 comments


@atljoseph

I got fed up with it deleting code that already worked great, and decided smaller, more granular files would be best for both me and the LLM. So I removed all files from the context, manually added a new file, and moved code around. Then I added the updated files back to context, which used far fewer tokens. I thought this would let the tool and the LLM focus on the task at hand, but it started regenerating code it had generated in the past, just inside the new files. I know much of this comes down to the LLM simply operating on the context and request it is sent, but I think there is a breakdown in a few areas.

First, prompting. The prompts should remind the LLM that the context might not represent a full solution, and that it shouldn't try to generate everything needed to scaffold up the solution it responds with.

Second, other tools like aider allow the LLM to request snippets of code from a repo map built on tree sitter. Any plans to implement that here? There is a go project that looks promising. I forget the name.

Third, it seems that the plan artifacts or message/response history may have affected this. If so, that influence should be more apparent and easier to manage, and even where the influence exists, it should work in a way that doesn't make the LLM get stuck in a rut. In fact, there should be mechanisms that actively help it escape ruts.

@danenania
Contributor

> First, prompting. The prompts should remind the LLM that the context might not represent a full solution, and that it shouldn't try to generate everything needed to scaffold up the solution it responds with.

Could you say a bit more on this?

> Second, other tools like aider allow the LLM to request snippets of code from a repo map built on tree sitter. Any plans to implement that here? There is a go project that looks promising. I forget the name.

Yes, I do plan to add something like aider's repo map, and it will also be built on tree-sitter.
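For readers unfamiliar with the repo-map idea: the tool sends the LLM a condensed index of symbol signatures instead of full file contents, and the LLM asks for specific snippets by name. Here is a minimal sketch of that concept. It uses Python's built-in `ast` module as a stand-in for tree-sitter (which would generalize the same approach across languages); the function name and structure are hypothetical illustrations, not aider's or Plandex's actual implementation.

```python
import ast
from pathlib import Path

def build_repo_map(root: str) -> str:
    """Build a condensed map of top-level functions and classes
    in every .py file under `root` (signatures only, no bodies)."""
    lines = []
    for path in sorted(Path(root).rglob("*.py")):
        tree = ast.parse(path.read_text())
        entries = []
        for node in tree.body:
            if isinstance(node, ast.FunctionDef):
                args = ", ".join(a.arg for a in node.args.args)
                entries.append(f"  def {node.name}({args})")
            elif isinstance(node, ast.ClassDef):
                entries.append(f"  class {node.name}")
        if entries:
            lines.append(f"{path}:")
            lines.extend(entries)
    return "\n".join(lines)
```

The map costs a few tokens per symbol rather than the full file, so the model can see the whole project's surface area and then request only the definitions it needs.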

> it seems that the plan artifacts or message/response history affected this maybe

Yes I think this is likely. It sounds similar to #136. I'm working on it.

> In fact, there should be mechanisms to actively make it escape ruts.

Agreed!

@atljoseph
Author

atljoseph commented May 27, 2024 via email

@danenania
Contributor

@atljoseph Thanks, I understand now. That one is a bit tricky because it has a tradeoff against outputting a production-ready app, which the model is encouraged to do in the built-in prompts, and which in most cases is desirable.

I do have some ideas that may help, but this is also something that you can influence with your own prompts. If you tell it explicitly not to generate the code that you don't want and that it's ok if the generated code doesn't constitute a full app, does that help to get the behavior you want?

@atljoseph
Author

atljoseph commented May 27, 2024 via email

@danenania
Contributor

danenania commented Jun 11, 2024

@atljoseph This should now be working better in 1.1.0 - https://github.com/plandex-ai/plandex/releases/tag/cli%2Fv1.1.0

Please lmk if you're still seeing this kind of issue.
