Description
First, I assume this is the correct place to put feedback regarding the Copilot plugin for the IntelliJ suite, since this would be feedback for the client team. If not, my apologies.
Describe your experience
I feel like the integration of the agent into the IDE (the IntelliJ suite, in my case) is rather unnatural for how an assistant should work. The LLM is supposed to feel like working with a person, but the way it's implemented makes that difficult. It's hard to explain, so I'll give some examples:
- I often use the Copilot assistant to work through design problems: things that aren't necessarily tied to specific lines of code, but that pertain to the overall design. However, the agent seems hell-bent on providing code snippets and proposing changes, many of which can be absolutely massive (implementing a design is, by its nature, a large change). I'm not opposed to seeing code snippets when we're talking about concepts, but when I'm evaluating possibilities, I don't want a deep dive into a specific one before we've discussed the high-level details or the other options. The discussion should start at a high level and evolve into specifics once decisions have been made, especially since the massive code changes it proposes tend to make a ton of assumptions that the agent should really ask about first. Often I'll open a chat with an LLM agent in my browser just to get it to stop diving into my code so I can talk about general problems, but then when I finally do want to get into implementation, I have to start the conversation from scratch inside the IDE.
In a nutshell, the agent is too eager to get into implementation details when it's not appropriate, and makes too many assumptions when it should ask questions first.
- Another issue is that "chats" and "edits" are separate conversations. Sometimes I'll start a conversation in a chat but then want to get into details about actual code that's written. I can have the agent generate a code snippet, but because it's a "chat", I have to manually and visually diff the proposed code against what currently exists. This is what "Copilot edits" are for, but I can't carry a conversation between a chat and an edit. So I either put up with the annoyance of a visual diff to spot the changes, or I start the conversation from scratch in an "edit" conversation. And I don't want to just start in an "edit" conversation, because you can only have one of those. With chats I can have a few different topics going, each with saved history, which gives me clean context switching and avoids confusing the agent with irrelevant history.
So what I mean by "unnatural" is this: the intent of Copilot seems to be to feel like working with a person who can look at your code and help write it, but the obstacles I've mentioned get in the way of it actually working like that. I might even say I spend more time fighting through these obstacles than having meaningful interactions with Copilot.
Suggestions for improvement
- Make Copilot less obsessed with providing code. It should be able to analyze the conversation to determine when providing code is appropriate.
- The agent should be more open to having an actual back-and-forth conversation, rather than trying to generate one massive response.
- Maybe add configuration so that users can define a standard set of instructions that are always provided to the agent at the start of a conversation. Users could use these to set preferences for how they want Copilot to behave.
- Get rid of the separation between "chats" and "edits"; a single conversation should be able to do both. You could even drop "chats" and keep only edits, where the agent doesn't have to actually propose an edit, but the ability to have multiple separate conversations would be a requirement.
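
To illustrate the standing-instructions suggestion above, here's a sketch of what such a preferences file might look like. The path, filename, and format here are entirely hypothetical (loosely inspired by the repository-level instruction files that exist elsewhere in the Copilot ecosystem); as far as I know the IntelliJ plugin supports nothing like this today:

```markdown
<!-- .copilot/preferences.md — hypothetical path and format -->
# Standing instructions for Copilot

- When I ask about design or architecture, discuss the options at a high
  level first; do not propose code until I explicitly ask for an
  implementation.
- Before proposing a large change, ask clarifying questions about any
  assumptions you would otherwise have to make.
- Keep responses short and conversational; prefer a back-and-forth
  discussion over one massive answer.
```

Something this simple would address most of the behavioral complaints above without requiring any changes to the chat/edit UI itself.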