Code completion not working #239
Hello, please confirm all settings used for FIM completion providers. Also, please enable the debugging information in the extension settings and tick "enable logging", then go to Help -> Toggle Developer Tools inside Visual Studio Code and watch for any errors. Many thanks.
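(For anyone checking their setup: a quick way to rule out the endpoint itself is to call Ollama's /api/generate directly, outside of twinny. Below is a minimal TypeScript sketch, assuming Ollama on its default port 11434 and codellama:7b-code pulled locally; the <PRE>/<SUF>/<MID> tokens are CodeLlama's documented infill prompt format, not twinny settings.)

```typescript
// Minimal sketch: verify the FIM endpoint responds outside of twinny.
// Assumes Ollama on localhost:11434 with codellama:7b-code pulled locally.
async function testFimEndpoint(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-code",
      // CodeLlama infill format: prefix, suffix, then the model fills the middle.
      prompt: "<PRE> function add(a: number, b: number) { <SUF> } <MID>",
      stream: false,
    }),
  });
  const data = (await res.json()) as { response: string };
  console.log(data.response); // a plausible completion means the endpoint is fine
}

testFimEndpoint().catch(console.error);
```

If this returns a sensible completion but the extension still shows nothing, the problem is likely in the extension configuration or the editor, not the model server.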
Hi! While debugging I could see that a request is sent when I use the chat functionality. However, nothing shows up in the console for code completion, even when I request it with Option + \ (I am a Mac user). I also found a … Can that be the issue? No template -> no FIM?
Hey, that shouldn't be an issue for FIM, as those templates are built in. Please provide all the provider configuration settings, as previously requested.
Type: FIM. As I mentioned in the previous message, I don't get a request in the console as you do (based on your screenshot). EDIT: out of curiosity I did a …
Hope that rings a bell. I would expect to have either more templates in a file, or more template files. EDIT 2: After being stuck on the train, I had the chance to 1) check your repo more carefully, and 2) debug a bit further. Regarding 2: despite the fact that I don't get any logs about the request being sent, as I do when using the chat functionality, I do get the following: …
Attention to … Hope it rings a bell now. I went through your code and, though I am not a TypeScript programmer, I could follow most of it and it looks alright. I am somewhat clueless.
I would recommend trying codellama:7b-code to see if it works.
I have the same problem as you, andremald: chat is working fine but FIM does nothing. Any input on how to fix this would be appreciated; this otherwise great extension is not usable for me like that.
Same problem here. All settings correct; 13b or 7b, doesn't matter. Only chat seems to work. I see the robot icon loading when I start coding, but no autocomplete prompts ever show up.
Failing for me too. I can see the message being received by the provider, but no response and no error. I am using Aphrodite's OpenAI API server. I have tried different providers, yet none give a response.
My issue might be partly related to there not being an actual supported OpenAI provider. I set up a LiteLLM proxy to forward to my model and I am still not getting any completions.
+1
Hi. I have the same issue: chat works, FIM doesn't, no matter what I do in the configuration. I was looking at the developer tools, as suggested, and while typing in VS Code I saw this in the developer console: ERR memory access out of bounds: RuntimeError: memory access out of bounds. After receiving this error I went into WSL, navigated to ~/.vscode, and deleted all twinny-related folders. I started VS Code and installed a fresh copy of twinny. Now it works. I had twinny 3.11.10 and 3.11.31 on the host; now I have 3.11.39 and all is good again. I'll repeat this on my work computer later on, to see whether this fixes the issue there too...
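(A sketch of that cleanup as a Node/TypeScript script, for anyone who prefers not to hunt for the folders by hand. The `<publisher>.<name>-<version>` folder naming under ~/.vscode/extensions is an assumption based on VS Code's usual convention; the script only matches on "twinny", so check the printed names before trusting it on a machine with anything similarly named.)

```typescript
import { readdirSync, rmSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Remove stale twinny extension folders so VS Code reinstalls a fresh copy.
// Folder naming is assumed to follow VS Code's <publisher>.<name>-<version>
// convention; verify the matches before deleting anything important.
const extDir = join(homedir(), ".vscode", "extensions");
for (const entry of readdirSync(extDir)) {
  if (entry.toLowerCase().includes("twinny")) {
    console.log("removing", entry);
    rmSync(join(extDir, entry), { recursive: true, force: true });
  }
}
```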
Same issue here. Ollama is running in Docker on an external server. Chat works, no FIM. I have tried other extensions (Continue.dev at least) and get FIM... not good, but it at least does something... so I know it's not an issue with Ollama.
Same issue; OS is macOS, provider is Ollama with starcoder:3b. Ollama gets the request and does the computation, but whatever is computed does not show up in VS Code... The weird thing is that some time ago the extension worked flawlessly, and I made no manual changes except installing auto-updates.
@yuhanj2 Thanks, that workaround works for me too!
Same issue here. Could it be bugging out due to other extensions?
Hello all, do we have any indication of what is broken? I have been using the extension myself recently and it's working with the latest version of VS Code. Many thanks.
First of all, thank you for all your work @rjmacarthy 🙌🏽 I'm having the same issue. I have a setup with Ollama, and just updated VS Code to 1.90.2 and Ollama to 0.1.48.
Hello, I think StarCoder 2 is not good for FIM completion; I tried it. Please try one of the recommended models in the docs. Many thanks.
Maybe try restarting the IDE.
I have. It's a bit confusing now: I installed the DeepSeek model as mentioned above, and then my FIM started working, although still not using the model I had entered as provider (codellama) but DeepSeek again 🤔 EDIT: Resetting the providers, or deleting and re-adding them, seems to work. Funny how I must have been using DeepSeek all this time until I deleted it, while thinking I was using StarCoder 2 :) It's really not good for FIM, as you pointed out.
This is an error in the console: … Also: …
I am facing the same error with VS Code version 1.90.2.
I just released a new version which adds a try/catch around the parsing of the document, to attempt to fix this problem. Please let me know if it helps; if not, it would be helpful to know what file/language/extension is in use when this error appears. Many thanks.
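(For context, the guard described would look something like the sketch below. The names are illustrative, not twinny's actual source; the point is just that a parser crash, such as the "memory access out of bounds" RuntimeError reported above, degrades to "no completion" instead of killing the provider.)

```typescript
// Hypothetical sketch of the guard described above; names are illustrative.
function safeParse<T>(parseDocument: () => T): T | undefined {
  try {
    return parseDocument();
  } catch (error) {
    // e.g. "RuntimeError: memory access out of bounds" from a WASM-based parser
    console.error("twinny: document parse failed, skipping completion", error);
    return undefined; // fall back to no inline completion instead of crashing
  }
}
```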
Hey! Having some trouble with autocomplete. I activated the developer tools and can see the completion requests being made, but the suggestions aren't being shown in my editor; that is, I'm not seeing the inline completions. Chat works fine. Wondering if you or anyone else has encountered issues with completions showing in the editor? Using the stock codellama-7b settings. It works sporadically, which is also confusing. I've tried all the suggestions posted here already. Great tool when it works for me, though. Thanks!
Hi! I am trying to use the tool, but somehow code completion is not working. The chat functionality works just fine, so I am quite sure I configured the connectors properly. Unfortunately, I couldn't find any logging, so I am not even sure whether the completion request is being sent.
I tried the following models: codegemma, codellama, and starcoder (always the FIM version).
The path is /api/generate.
Though it is also not working in a host VS Code window, I normally work inside a dev container. Hence, I updated the hostname to host.docker.internal.
As I said, the chat functionality works just fine. I am wondering what could be the issue with code completion?
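(Regarding the dev container detail: it may be worth confirming that the Ollama endpoint is reachable from inside the container at all. A minimal TypeScript sketch, assuming Ollama listens on its default port 11434 on the host; host.docker.internal is Docker's alias for the host machine, and /api/tags is Ollama's endpoint for listing locally pulled models.)

```typescript
// Minimal reachability check from inside the dev container.
// Assumes Ollama listens on the host's default port 11434.
const base = "http://host.docker.internal:11434";

async function checkOllama(): Promise<void> {
  // GET / on Ollama returns "Ollama is running" when the server is up.
  const root = await fetch(base);
  console.log(await root.text());

  // List locally available models to confirm the FIM model is actually pulled.
  const tags = await fetch(`${base}/api/tags`);
  const { models } = (await tags.json()) as { models: { name: string }[] };
  console.log(models.map((m) => m.name));
}

checkOllama().catch((err) => console.error("Ollama unreachable:", err));
```

If the root check fails while chat works, the two features may be configured with different hostnames or providers, which would explain chat succeeding while FIM silently does nothing.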