User expresses interest in seeing specialized LLM inference machines with high RAM capacity for running local LLMs.
When do you expect to see specialized LLM inference machines? Something with 512GB or 1TB of unified RAM, built for running local LLMs?