Now, what model should I take on a test ride? I prefer vLLM, but it's interesting to see the performance difference with llama.cpp. I wish LMStudio had vLLM support ): https://t.co/B0RjIuPyLE