Proposal: an `ollama fit` command that recommends models compatible with the user's hardware, so new users avoid out-of-memory crashes and multi-minute load times.
## Problem

A new Ollama user faces a blank prompt with no guidance on which model to run. Choosing wrong leads to:

- Out-of-memory crashes when VRAM is insufficient
- Multi-minute load times from unexpected CPU offloading
- No way to know in advance whether a 70B model will run at all

There is currently no way to ask Ollama "what can *my machine* actually run?"

## Proposed Solution

A new `ollama fit` subcommand — and matching `GET /api/fit` endpoint — that scans the machine and ranks a built-in model catalogue by hardware compatibility.

**CLI example:**

```sh
$ ollama fit

Ollama Fit Check
──────────────────────────────────────────────────────────────
CPU  : linux (amd64)
RAM  : 22.4 GB free / 31.9 GB total
GPU  : CUDA NVIDIA RTX 3080 • 9.2 GB free / 10.0 GB total
Disk : 180.0 GB free → /home/user/.ollama/models
──────────────────────────────────────────────────────────────

✅ IDEAL — Full GPU inference, fast
──────────────────────────────────────────────────────
```
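Under the hood, the ranking could reduce to a simple memory-budget check per catalogue entry. The sketch below, in Go (Ollama's implementation language), is a hypothetical illustration of that heuristic; the names (`FitTier`, `Classify`), the 20% overhead margin, and the tier cutoffs are assumptions for this proposal, not existing Ollama code.

```go
// Hypothetical sketch: classify a model against detected hardware.
// All identifiers here are illustrative, not Ollama APIs.
package fit

// FitTier buckets a model by how comfortably it runs on this machine.
type FitTier int

const (
	Ideal  FitTier = iota // weights + KV cache fit entirely in free VRAM
	Usable                // partial GPU offload; remainder spills to RAM
	TooBig                // does not fit in VRAM + RAM combined
)

// Classify compares a model's estimated memory footprint (weights plus
// a margin for KV cache and runtime buffers) against free VRAM and RAM.
func Classify(modelBytes, freeVRAM, freeRAM uint64) FitTier {
	const overhead = 1.2 // assumed ~20% headroom for KV cache and buffers
	need := uint64(float64(modelBytes) * overhead)
	switch {
	case need <= freeVRAM:
		return Ideal // full GPU inference, fast
	case need <= freeVRAM+freeRAM:
		return Usable // CPU offloading: slower loads and inference
	default:
		return TooBig
	}
}
```

The point of the tiering is that "IDEAL" means the entire footprint fits in free VRAM, while anything that only partially fits falls back to CPU offloading, which is exactly the slow path the command is meant to warn users about.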
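For the `GET /api/fit` endpoint, the response would presumably mirror the CLI output: the detected hardware plus the ranked model list. The Go structs below sketch one possible JSON shape; every field name is a guess for illustration, since the proposal does not fix a schema.

```go
// Hypothetical sketch of a GET /api/fit response body. The schema is
// an assumption made for illustration, not part of the proposal.
package fit

// HardwareInfo mirrors the header of the CLI output: OS/arch, RAM,
// GPU, and free disk at the models directory.
type HardwareInfo struct {
	OS        string `json:"os"`
	Arch      string `json:"arch"`
	FreeRAM   uint64 `json:"free_ram_bytes"`
	TotalRAM  uint64 `json:"total_ram_bytes"`
	GPUName   string `json:"gpu_name,omitempty"`
	FreeVRAM  uint64 `json:"free_vram_bytes,omitempty"`
	TotalVRAM uint64 `json:"total_vram_bytes,omitempty"`
	FreeDisk  uint64 `json:"free_disk_bytes"`
}

// ModelFit is one ranked catalogue entry.
type ModelFit struct {
	Name string `json:"name"` // e.g. a catalogue tag
	Tier string `json:"tier"` // "ideal", "usable", or "too_big"
}

// FitResponse is the full payload: detected hardware plus the ranked list.
type FitResponse struct {
	Hardware HardwareInfo `json:"hardware"`
	Models   []ModelFit   `json:"models"`
}
```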