User requests the addition of WebLLM as a provider to allow privacy-focused users to utilize LLMs directly within the browser. Proposed solution includes restricting models, starting WebLLM in the background, and warning users about performance degradation.
### What problem does this solve?

Allows privacy-focused users to run LLMs entirely within the browser via WebLLM.

### Proposed solution

- Add WebLLM as a provider, restricting the available models to a supported subset.
- If WebLLM is chosen as the provider, start the WebLLM engine in the background.
- Detect degraded performance during inference and warn users to switch to cloud or locally hosted models.

### Additional context

_No response_
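The slowness-detection bullet could be approached by measuring token throughput during streaming. A minimal sketch, assuming an illustrative threshold and hypothetical names (`PerfMonitor`, `TOKENS_PER_SEC_FLOOR` are not from this issue); the `onToken` hook would be wired into WebLLM's streaming callback:

```typescript
// Illustrative threshold: below this sustained throughput, suggest switching
// providers. The actual value would need tuning per device/model.
const TOKENS_PER_SEC_FLOOR = 5;

// Hypothetical monitor: counts streamed tokens and computes throughput.
class PerfMonitor {
  private tokens = 0;
  private startMs: number | null = null;

  // Call once per streamed token, passing a timestamp (e.g. performance.now()).
  onToken(nowMs: number): void {
    if (this.startMs === null) this.startMs = nowMs;
    this.tokens += 1;
  }

  tokensPerSec(nowMs: number): number {
    if (this.startMs === null || nowMs <= this.startMs) return Infinity;
    return (this.tokens * 1000) / (nowMs - this.startMs);
  }

  // True when throughput drops below the floor; the UI could then warn the
  // user to switch to a cloud or locally hosted model.
  isDegraded(nowMs: number): boolean {
    return this.tokensPerSec(nowMs) < TOKENS_PER_SEC_FLOOR;
  }
}
```

Keeping the monitor independent of the WebLLM engine makes the warning logic testable without a GPU or model download.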