🎥 How we transformed our portfolio companies' funding announcements into a professional TV news telecast using AI.

Here's a behind-the-scenes look at our latest AI content creation experiment at @BlumeVentures (and what I learned about the current state of AI video tools).

The challenge: For Blume Day 2025, we needed a compelling way to present our portfolio companies' 2024 funding rounds.

💡 The lightbulb moment: What if we created an AI-powered news anchor to introduce the showcase? Here's how we brought that TV newsroom feel to our presentation's opening...

Step 1: Finding an AI avatar tool that can pass the uncanny valley test.

Tested @HeyGen_Official, @veedstudio, @DeepBrain_ai, and @invideoOfficial. I went with @veedstudio because:
• Superior lip-sync quality with audio
• More natural avatar movements
• Built-in video editing features, so as a package it makes more sense (it can also be used for non-AI projects)
• The video editing controls felt more intuitive (disclaimer: I have no video editing expertise)
• Better pricing (all AI features included)

I didn't want to create a custom/personal avatar, just to use an off-the-shelf one. If you want to create a custom avatar, YMMV.

Step 2: Finding the right avatar.

Veedio has a ton of avatars to choose from. One challenge (and a feature request to the Veedio team): you cannot preview an avatar before selecting it. Ideally, I would have loved to preview a few and then pick one. The workaround is to write a short script, punch it into an avatar that feels like your vibe, let the video render, and see if it works for you.

The next steps were straightforward: I used Claude to craft a proper news-style script. After some trial and error with Veedio's avatars, I found one that nailed both the natural look and the lip-sync.

The final touch came in @canva. It's incredible how much you can do there - news backgrounds, lower thirds, banners, the works.
This is where the video really started feeling like a proper news segment. Here's a screengrab of how it looked when I started working on it. Everything is a click away.

And voila, just like that, we had an AI news-style intro for the video.

Key takeaways from this experiment:

Could I have made the entire video AI-generated? Technically yes, but the current tools would have demanded more time and creative workarounds than I had bandwidth for.

Reality check: AI video tools are still in their early days (call it v0.5).
- Most are laser-focused on shorts/reels, leaving other use cases somewhat underserved.
- If your avatar doesn't feel natural off the shelf, it's not easy to customize it. For instance, adding extra pauses between sentences was a pain in most tools I surveyed.
- I used the US accent because the Hindi accent in Veedio is shot. It sounds like a bad version of an Indian actor in a Comedy Central show.

The MVP? Canva. After 2 years of use, it remains my go-to solution when other tools fall short. Think of it as the Swiss Army knife of content creation (like what Notion is for content curation). The workflow works best when you export everything into Canva and then layer in additional elements to get the aesthetic you want.

Check out the final result below - the first 34 seconds are fully AI-generated. And do check out all the amazing funding announcements of our portfolio that follow :)

Would love to hear your thoughts!