People should be aware of the research and tools at https://github.com/adapter-hub/adapter-transformers . Adapters insert small bottleneck layers between the layers of a pretrained model; the pretrained weights are frozen and only the adapters are trained, so specific skillsets can be composed together and large models can be finetuned even on low-end systems. This would be a good fit for personal coding styles, or for targeted changes like refactoring, commenting, or bugfixing.
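To make the idea concrete, here is a minimal PyTorch sketch of a bottleneck adapter: a small down-projection, nonlinearity, and up-projection with a residual connection, trained while the pretrained weights stay frozen. The class name and dimensions are illustrative, not the adapter-transformers library's actual API.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter: down-project, nonlinearity,
    up-project, plus a residual connection around the whole block."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x):
        # Residual connection keeps the pretrained representation intact;
        # the adapter only learns a small correction on top of it.
        return x + self.up(self.act(self.down(x)))

# Stand-in for one frozen pretrained layer (hypothetical, for illustration).
pretrained = nn.Linear(768, 768)
for p in pretrained.parameters():
    p.requires_grad = False  # pretrained weights stay fixed

adapter = BottleneckAdapter(768)  # only these parameters get gradients
out = adapter(pretrained(torch.randn(2, 768)))
```

Because only the small adapter matrices receive gradients, the optimizer state and gradient memory stay tiny compared to full finetuning, which is what makes this feasible on low-end hardware.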