After I make manual edits, remove context from a plan, and re-tell, old decisions made by the LLM seem to persist. #138
Comments
Could you say a bit more on this?
Yes, I do plan to add something like aider's repo map, and it will also be built on tree-sitter.
Yes, I think this is likely. It sounds similar to #136. I'm working on it.
Agreed!
On the point you asked more about: if I'm working on a fyne app and the context is just a file with a random UI component, the first thing the LLM will try to do is scaffold up an app and screen to make that component run, but that's already built in another file… it's almost like the LLM is trying to generate a complete working solution instead of merely changing the code it was given in the way it was asked to. It's kind of nuanced, but I think the right prompt could do magic here, as long as no other features are negatively affected.
@atljoseph Thanks, I understand now. That one is a bit tricky because it has a tradeoff against outputting a production-ready app, which the model is encouraged to do in the built-in prompts, and which in most cases is desirable.
I do have some ideas that may help, but this is also something that you can influence with your own prompts. If you tell it explicitly not to generate the code that you don't want and that it's ok if the generated code doesn't constitute a full app, does that help to get the behavior you want?
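As an illustration of that suggestion, an explicit instruction along these lines (hypothetical wording, not a built-in Plandex prompt) is the sort of thing being described:

```text
Only modify the UI component in the file I've given you.
Don't generate app, window, or screen scaffolding; that already exists in files
that aren't in context. It's fine if your output isn't a complete, runnable app.
```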
Yeah, once I have to craft a lengthy prompt, it starts venturing into the territory where it's easier to write the desired code manually. It does seem to be a really good resource for asking it to generate example apps that have the feature I want; then I can just copy-paste from the example. That also has the added benefit of leaving me with a standalone example.
@atljoseph This should now be working better in 1.1.0 - https://github.com/plandex-ai/plandex/releases/tag/cli%2Fv1.1.0 Please lmk if you're still seeing this kind of issue.
I got fed up with it deleting code that already worked great, and decided smaller, more granular files would be best, for both me and the LLM. So I removed all files from the context, manually added a new file and moved code around, then added the updated files back to context, resulting in far fewer tokens. I thought this would let the tool and the LLM focus on the task at hand, but instead it started regenerating code it had produced in the past, just inside the new files. I know a lot of this is because the LLM is simply operating on the context and request it is sent, but I think this is a breakdown in a few things.
First, prompting. The prompts should remind the LLM that the context might not represent a full solution, and that it shouldn't try to generate everything needed to scaffold up the solution it responds with.
Second, other tools like aider allow the LLM to request snippets of code from a repo map built on tree-sitter (a rough sketch of the idea follows after these points). Any plans to implement that here? There is a Go project that looks promising. I forget the name.
Third, it seems that the plan artifacts or message/response history maybe affected this? If so, that influence should be more apparent and easier to manage, and even where it exists, it should be applied in a way that doesn't make the LLM get stuck in a rut. In fact, there should be mechanisms to actively help it escape ruts.
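As an aside on the repo-map idea raised in the second point: the goal is to give the model a compact outline of the codebase (file paths plus their top-level declarations) so it can request the specific snippets it needs instead of regenerating code that already exists elsewhere. Aider builds this on tree-sitter so it works across languages; the sketch below is only a minimal, single-language illustration using Go's standard-library go/parser and go/ast, not Plandex's or aider's actual implementation.

```go
// repomap_sketch.go: build a tiny "repo map" of a Go project by listing each
// file's top-level declarations. Illustration only; real repo maps use
// tree-sitter to cover many languages and rank symbols by relevance.
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "." // directory to map

	filepath.WalkDir(root, func(path string, entry fs.DirEntry, err error) error {
		if err != nil || entry.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}

		fset := token.NewFileSet()
		// Only declaration names are needed, so object resolution can be skipped.
		file, perr := parser.ParseFile(fset, path, nil, parser.SkipObjectResolution)
		if perr != nil {
			return nil // skip unparsable files rather than aborting the walk
		}

		fmt.Println(path + ":")
		for _, decl := range file.Decls {
			switch decl := decl.(type) {
			case *ast.FuncDecl:
				fmt.Printf("  func %s\n", decl.Name.Name)
			case *ast.GenDecl:
				for _, spec := range decl.Specs {
					switch spec := spec.(type) {
					case *ast.TypeSpec:
						fmt.Printf("  type %s\n", spec.Name.Name)
					case *ast.ValueSpec:
						for _, name := range spec.Names {
							fmt.Printf("  %s %s\n", decl.Tok, name.Name)
						}
					}
				}
			}
		}
		return nil
	})
}
```

Even an outline at this level would let the model ask for the files it needs on demand rather than re-scaffolding them from scratch.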