✨ Tab Autocomplete #758
Conversation
Nice to see this is being worked on; it will make Continue a true replacement for Copilot. I've been using Copilot for a couple hundred hours, and I found the following important regarding autocompletion:
Performance
This is fascinating! Any idea when this PR might be merged and subsequently released?
@meteor199 looking to release an early preview next week
@cmp-nct thanks for this, definitely a few new ideas here for me. Brief response to your bullets in order:
Your performance notes are very important. For this reason we'll probably strongly encourage specific providers that we form-fit at first
Nice to see this being worked on! One feature that would be great to have is what Codeium calls Fill-in-the-Middle [0]. I don't know if Copilot has that these days, but when I tried it with Codeium, the number of times it kicked in and helped was really great!
@c10l already been done! Turns out many models are trained by default to do this (they are passed a prefix and suffix, and told to write the code that should be inserted at the cursor in between) |
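To make the prefix/suffix idea concrete, here is a minimal sketch of how a fill-in-the-middle prompt can be assembled from the cursor position. The sentinel tokens shown are StarCoder-style and purely illustrative; each model family defines its own, and the actual implementation handles this per model.

```typescript
// Sketch of fill-in-the-middle (FIM) prompt construction.
// Assumption: StarCoder-style sentinel tokens; other models differ.
function buildFimPrompt(text: string, cursor: number): string {
  const prefix = text.slice(0, cursor); // everything before the cursor
  const suffix = text.slice(cursor); // everything after the cursor
  // The model is asked to generate the code that belongs in between.
  return `<fim_prefix>${prefix}<fim_suffix>${suffix}<fim_middle>`;
}

// Example: cursor placed inside an empty function body
const doc = "function add(a, b) {\n\n}";
const cursor = doc.indexOf("{") + 2;
console.log(buildFimPrompt(doc, cursor));
```

The model's completion is then inserted at the cursor, which is why suffix-aware models feel so much better than plain left-to-right completion.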
Great to hear :) Regarding "context should be configurable per section (as in how many tokens before and after the cursor, how much SQL, file structure, etc.)":
So what I meant is that those sections/subsections, if they are included, should have some configuration options for how much of them is sent to the LLM. The idea of plugins is very interesting too. I will stay tuned for the upcoming releases, very promising work
Here goes the merge! This is still a beta version of tab-autocomplete and should be understood as such, meaning please share your feedback! The best places to do this are the #feedback channel on Discord, GitHub Issues, or the Contribution Ideas Board if you have thoughts on how to improve it or would like to contribute code. Over the next few weeks we will be focusing a lot on this feature, so expect significant improvement.
Tested it for some hours now. Pretty useful so far, even with a small model like deepseek-coder:1.3b. Thanks for your work! I will report if I find anything that does not work as expected, but for now it's really nice. Btw, I am working on a MacBook with M1, and I get even better code completion results with stable-code-3b, at the same speed as deepseek-coder:1.3b.
Great news. Looking forward to IntelliJ plugin update too! |
@sestinj that's amazing! You will catch up with all the other plugins, especially all the features from commercial alternatives! Can you give an estimate for bringing all these features to IntelliJ (in my case, PhpStorm)?
We'll probably take another 1-2 weeks to further improve tab autocomplete just within VSCode, and then will transfer it over to JetBrains |
I'd like to add a request for an optional "idle time" parameter to the config. In the preview build, the autocomplete triggers on every change, resulting in continue.dev calling my local LLM multiple times per second and reprocessing the context each time. Usually it's not a problem because of smart context shift, but sometimes it decides to reprocess the entire context, resulting in some slowdown. I'd like to see an option where autocomplete only fires off if I pause typing for some configurable time period. |
You can set "debounceDelay": the delay in milliseconds before triggering autocomplete after a keystroke (number).
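For illustration, such an option would live in Continue's config.json. This is a hypothetical sketch; the exact nesting and key names may differ between versions, so check the docs for your release:

```json
{
  "tabAutocompleteOptions": {
    "debounceDelay": 500
  }
}
```

With a value like 500, the extension would wait half a second after the last keystroke before sending a completion request, which directly addresses the "idle time" request above.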
Please add the ability to use other providers besides ollama. |
It would be nice to have a shortcut key to disable autocomplete in VSCode, or even a setting to disable it on certain file types. |
+1 On using other providers besides ollama. |