The user requests support for FP4 quantization, presumably for improved efficiency or performance when running local LLMs, as discussed in the video.
Thanks for the vid! Great. FP4, please.
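For context on what the request entails: FP4 here presumably means 4-bit floating point in the common E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit), whose representable magnitudes are {0, 0.5, 1, 1.5, 2, 3, 4, 6}. Below is a minimal, illustrative sketch of round-to-nearest FP4 quantization with a per-block scale; the function names and the block-scaling scheme are assumptions for illustration, not the video's actual implementation.

```python
# Sketch of FP4 (E2M1) round-to-nearest quantization.
# E2M1 positive magnitudes: {0, 0.5, 1, 1.5, 2, 3, 4, 6}.
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_GRID = [s * m for m in FP4_MAGNITUDES for s in (1.0, -1.0)]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest FP4-representable value."""
    return min(FP4_GRID, key=lambda v: abs(v - x))

def quantize_block(values: list[float]) -> tuple[float, list[float]]:
    """Block-wise FP4 (illustrative): scale the block so its largest
    magnitude maps to 6.0 (the FP4 max), then quantize each value.
    Returns (scale, quantized values); dequantize as scale * q."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 6.0
    return scale, [quantize_fp4(v / scale) for v in values]
```

In practice, schemes like this store one scale per small block of weights and 4-bit codes for the values, which is where the memory savings for local LLM inference come from.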