
Can the CodeT5 model do code autocompletion without fine-tuning? #21

Closed
frankxu2004 opened this issue Dec 1, 2021 · 3 comments

@frankxu2004

The README mentions that this is used for code autocompletion in VSCode. I wonder how to use CodeT5 without fine-tuning, as a language model, to complete code given some preceding context?

@yuewang-cuhk
Contributor

Yes, it can support completing a code span. Generally, CodeT5 without fine-tuning is better suited to code span autocompletion with both preceding and following context, which is more closely aligned with the span denoising objective used in pre-training.
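
For reference, a minimal sketch of span infilling with the Hugging Face `transformers` library, assuming the `Salesforce/codet5-base` checkpoint; the `<extra_id_0>` sentinel marks the span to fill, and the snippet and its completion are only illustrative:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Mask the span to complete with a sentinel token; the model sees both the
# preceding and following context, matching the span denoising setup.
code = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(code, return_tensors="pt").input_ids

# Generate the masked span; spans seen in pre-training are short,
# so a small max_length is usually enough.
generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```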

@frankxu2004
Author

Thanks for the quick response. I wonder how long (say, in number of tokens) the completed code span usually is?

@yuewang-cuhk
Contributor

We have included such details in the paper: the average span length is 3 tokens (before subword tokenization).
