Integrate Petals (https://github.com/bigscience-workshop/petals) to enable support for distributed large language models, allowing users to run large 'local' models without the cost of proprietary platforms.
Support for distributed LLMs via Petals could be huge. Their tagline is 'Run large language models at home, BitTorrent‑style', and they already support Llama 2 among other models. This would open us up to very large 'local' models without the cost of OpenAI or other paid platforms.
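For a sense of what the integration surface looks like: Petals exposes a `transformers`-style interface, so inference against the public swarm is roughly the sketch below. This is a hedged example based on the Petals README, not our code; the model name and prompt are placeholders, and it requires the `petals` package plus a reachable swarm hosting that model.

```python
# Sketch of Petals inference (assumes `pip install petals` and swarm availability).
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Placeholder model; any model served by the swarm works the same way.
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Connects to the distributed swarm instead of loading full weights locally.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```

Since this mirrors the Hugging Face `generate` API we'd presumably slot it in wherever we currently construct a local `transformers` model, which should keep the integration footprint small.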