
Adapters for finetuning large models on low-end systems #22

Open
xloem opened this issue Aug 12, 2022 · 3 comments
Labels
enhancement New feature or request

Comments


xloem commented Aug 12, 2022

People should be aware of the research and tools at https://github.com/adapter-hub/adapter-transformers . Adapters place small bottleneck modules between model layers, freeze the pretrained weights, and train only those bottlenecks, so that specific skill sets can be composed together. This would be good for personal coding styles or for changes like refactoring, commenting, or bugfixing.
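For concreteness, a minimal sketch of the workflow with adapter-transformers (the model and adapter names here are placeholders, and whether a given FauxPilot model architecture is supported would need checking):

```python
# pip install adapter-transformers  (a fork that installs itself as `transformers`)
from transformers import AutoAdapterModel

# GPT-2 is a placeholder; adapter support varies by architecture.
model = AutoAdapterModel.from_pretrained("gpt2")

# Insert small bottleneck modules into each transformer layer.
model.add_adapter("my_style")

# Freeze all pretrained weights; only the adapter parameters stay trainable.
model.train_adapter("my_style")
model.set_active_adapters("my_style")
```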

@smith-co

I don't understand how that would be helpful in the context of fauxpilot. Could you please elaborate a bit more on how this could be applied to learn refactoring changes?


xloem commented Aug 13, 2022

Adapters essentially provide lightweight finetuning. They're a powerful tool that people on lower-end systems would ideally have access to.

For example, if you wanted a model feature for converting from one code form to another, you would train the adapter on examples of the transformation, just as you would train any other model, either from existing code or by using prompts to augment or curate data in a partially supervised way.
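As a purely illustrative sketch (the pair format and the `<before>`/`<after>` tags are made up, not any standard), the training data could be laid out like this:

```python
# Hypothetical data prep: format (before, after) transformation pairs as
# plain prompt/completion text for a causal LM.
pairs = [
    ("def add(a,b): return a+b",
     "def add(a: int, b: int) -> int:\n    return a + b"),
]

def to_example(before: str, after: str) -> str:
    return f"<before>\n{before}\n<after>\n{after}"

train_texts = [to_example(b, a) for b, a in pairs]
# Tokenize these and run an ordinary finetuning loop; with train_adapter()
# active on the model, only the adapter weights receive gradients.
```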

Simpler things you can do: personalise the model by training an adapter on your own code in general, or on code you respect; strengthen it considerably by training it only on the language you are writing in; add comprehension of missing contextual information by defining a data-presentation norm for it; add direct production of patch files; or train it specifically to generate comments by stripping them from code and having it predict them back; etc. [edit: you could also train an adapter to predict your keystrokes and edits much better]

When an architecture is trained for a specific task this way, it becomes much stronger at that task. Adapters are only a few megabytes in size, so they can be hot-swapped and composed quickly.
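A sketch of what hot-swapping and composition might look like with adapter-transformers (the paths and adapter names are hypothetical):

```python
from transformers import AutoAdapterModel
from transformers.adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("gpt2")  # placeholder base model

# Load two previously trained adapters from disk; load_adapter returns
# each adapter's name.
refactor = model.load_adapter("./adapters/refactor")
style = model.load_adapter("./adapters/python_style")

# Stack composes them: each layer runs one adapter after the other.
model.set_active_adapters(Stack(refactor, style))
```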

Something else I have been trying a little is using architectures that accept longer input sequences (an orthogonal concept), which lets one include neighboring files in the input and is also very powerful.
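For example (purely illustrative), the extra context could be assembled as simply as:

```python
from pathlib import Path

# Hypothetical context assembly: prepend neighboring files to the current
# file so a long-context model can condition on them.
def build_prompt(current: Path, neighbors: list[Path]) -> str:
    parts = [f"# file: {p.name}\n{p.read_text()}" for p in neighbors]
    parts.append(f"# file: {current.name}\n{current.read_text()}")
    return "\n\n".join(parts)
```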

@moyix moyix added the enhancement New feature or request label Aug 13, 2022

moyix commented Aug 13, 2022

Looks like an interesting idea. One hurdle is that we would have to find a way to translate the adapters into FasterTransformer as well, since that's what we currently use for making inference fast.
