The craziest part about AI? You can solve novel problems at home, on your own computer.

We're seeing amazing progress with AI video generation—models like Seedance are mind-blowing. But there's still a big problem: videos need to be short. Open models typically give you about 5 seconds, and even the best commercial offerings max out around 20 seconds.

I've long wanted to create AI videos for TikTok that are a minute or longer. You could manually stitch sequences together, but that's incredibly time-consuming because you need to continuously reprompt the model.

Then I found a solution using entirely open models and open source tools:

- ComfyUI to manage image generation workflows
- Flux for generating the initial image
- Wan 2.2 I2V for video generation
- Kimi 2.5 to generate prompts for each segment
- Kimi Code to write all the automation

All video generation runs on my local machine! I use Kimi through an API, but it's still an open model I could run locally if needed.

The result? Fully automated, continuously flowing, single-shot video with no length limits. Currently I'm making intentionally surreal content, but by tweaking the "AI Director" that manages prompting, you can generate very normal videos like the crow example below.

Full technical breakdown + architecture diagrams: [link in comments soon]

What would you build with automated long-form AI video?
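To make the idea concrete, here is a minimal sketch of the segment-chaining loop at the heart of the automation. Everything model-related is stubbed out: `director_prompt`, `generate_segment`, and the frame strings are illustrative placeholders, not real Kimi, Wan 2.2, or ComfyUI APIs. The point is the control flow: the last frame of each clip seeds the next image-to-video run, which is what keeps the video continuously flowing with no length limit.

```python
def director_prompt(prompts_so_far, segment_index):
    """Placeholder for the 'AI Director' (e.g. Kimi via an API):
    given the prompts used so far, return the next segment's prompt."""
    return f"segment {segment_index}: continue the scene"

def generate_segment(prompt, start_frame):
    """Placeholder for one Wan 2.2 I2V run queued through ComfyUI.
    A real implementation would submit a workflow to the ComfyUI
    server and collect the rendered frames. Here we return a tiny
    fake clip whose first frame is the start frame."""
    return [start_frame] + [f"{prompt}#{i}" for i in range(2)]

def make_long_video(initial_frame, num_segments):
    frames, start, prompts = [], initial_frame, []
    for i in range(num_segments):
        prompt = director_prompt(prompts, i)
        prompts.append(prompt)
        clip = generate_segment(prompt, start)
        # Each clip starts on the previous clip's last frame,
        # so drop that duplicated frame on every clip after the first.
        frames.extend(clip if i == 0 else clip[1:])
        # The last frame seeds the next I2V run: this is the chaining
        # step that replaces manual stitching and reprompting.
        start = clip[-1]
    return frames

video = make_long_video("flux_image.png", num_segments=4)
print(len(video))  # → 9
```

In the real pipeline the initial frame comes from Flux, each `generate_segment` call is a ComfyUI workflow, and the clips are concatenated into one file at the end; the loop structure stays the same.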