The user requests a training/fine-tuning script for TADA and vLLM integration.
## Request: Training Script and vLLM Support

Thank you for open-sourcing TADA; the architecture is genuinely exciting and the RTF numbers speak for themselves. Two requests:

**Training / Fine-tuning Script**

The repo currently only includes inference and conversion scripts. A minimal training reference, even just the loss formulation and how hidden states are passed to `VibeVoiceDiffusionHead`, would go a long way for anyone looking to fine-tune on new languages.

**vLLM Integration**

For high-concurrency production serving, `VibeVoiceDiffusionHead`'s requirement for per-step LLM hidden states makes vLLM integration non-trivial. Any guidance on a recommended batched serving path would be very helpful.

Happy to contribute if there's interest. Thanks again for the work on this.
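For concreteness, here is the objective I would *guess* is in use, based on how typical conditional diffusion heads are trained; this is a sketch, not taken from the TADA code, and the symbols are my own labels:

$$
\mathcal{L} \;=\; \mathbb{E}_{x_0,\, t,\, \epsilon \sim \mathcal{N}(0, I)}
\left\| \epsilon - \epsilon_\theta\!\left(\sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\;\epsilon,\; t,\; h\right) \right\|^2
$$

where $x_0$ is the target acoustic latent for a frame, $h$ is the corresponding LLM hidden state passed as conditioning, $\bar\alpha_t$ is the cumulative noise schedule, and $\epsilon_\theta$ is the diffusion head. Even a one-line confirmation of whether the released checkpoints use this $\epsilon$-prediction form or a variant (v-prediction, flow matching) would unblock a lot of fine-tuning work.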