Testing Runway and Google Flow (Veo) for the First Time

I’ve been curious for a while now about how far these new video generation tools have come—especially for bringing our world of Kabu and the Seeds in the Wind to life. Over the weekend, I finally sat down to give Runway AI and Google’s Flow (Veo 2) a spin. Here’s how it went.
First Impressions – Runway’s Reference Feature is a Game Changer
Before anything else: Runway’s ability to create and label visual references is honestly such a powerful feature. I uploaded individual images of Kabu, Kabocha, and Kenta’s farm and assigned them reference tags (@Kabu, @Kabocha, and @Kentas Farm). It immediately made my prompt-writing feel more focused and modular—like I was scripting a scene rather than just throwing words at a wall.
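I did all of this in the web UI, but Runway also exposes the same idea through its developer API. Here’s a rough sketch using the official runwayml Python SDK; the model name and exact parameters are my assumptions from the docs, not something I ran for this post, and as I understand it API tags can’t contain spaces, so @Kentas Farm would become something like @KentasFarm:

```python
# Rough sketch, not from my actual workflow: Runway references via the
# official Python SDK (pip install runwayml). Model name and parameter
# names are assumptions based on my reading of the API docs.
from runwayml import RunwayML

client = RunwayML()  # picks up the RUNWAYML_API_SECRET environment variable

task = client.text_to_image.create(
    model='gen4_image',
    ratio='1920:1080',
    # @tags in the prompt must match the tags attached below. API tags are
    # alphanumeric (no spaces), hence @KentasFarm instead of @Kentas Farm.
    prompt_text='@Kabu and @Kabocha pose back to back in the field of @KentasFarm.',
    reference_images=[
        {'uri': 'https://example.com/kabu.png', 'tag': 'Kabu'},
        {'uri': 'https://example.com/kabocha.png', 'tag': 'Kabocha'},
        {'uri': 'https://example.com/kentas-farm.png', 'tag': 'KentasFarm'},
    ],
)
print(task.id)  # then poll client.tasks.retrieve(task.id) until it succeeds
```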
Test #1 – Group Shot at Kenta’s Farm
Prompt:
“@Kabu and @Kabocha look at the camera posing back to back with arms crossed confidently, smiling and happy. Two child heroes. Standing in the field of @Kentas Farm.”
This was my first full image-generation test using all three references. Honestly? Not bad at all.
- The character consistency was better than I expected—especially considering this was a new scene.
- Runway didn’t take too many creative liberties with the background (thankfully no unexpected cyberpunk skylines). That said, the art style definitely lost a bit of its original "Twilight Folk" charm. Not deal-breaking, just... a little flatter.
- Oh, and hands? Still cursed. I thought we were past this 🙄



Training images used for Runway

Test #2 – From Image to Video
Prompt:
“The two characters run into the scene and cross their arms, looking at the camera and smiling.”
Now we’re in exciting-but-chaotic territory. This test really highlighted both the magic and the mess of where this tech is right now.
- The output barely matched the prompt—but the animation itself was kind of cool. The last half second was what really excited me: Kabu casually uncrosses his arms and starts walking forward. It looked... weirdly real.
- The fact that Runway can do this from a still image and a sentence? Wild.
- Still, this isn’t production-ready. Yet.
Test #3 – Anime Action Pose
Prompt:
“@Kabu jumps into the air, yelling to attack as the camera tracks him upwards, anime speed lines fly through the background to show motion.”
I was going for something loud, heroic, and over-the-top. And... it worked? Sort of.
- The energy was there, and the upward motion with speed lines looked dope.
- But Kabu’s head had some kinda growth coming out of it, and yes: hands, again. What are fingers, even?
- Definitely usable as a rough animatic or concept frame, but it would need some clean-up.

Test #4 – Animating the Jump
This time I took the still image from Test #3 and ran it through Runway’s image-to-video tool without any prompt—just to see what it could infer.
- Results were surprisingly strong. The animation picked up the motion lines, added flowing clothes, even Kabu's leaves blowing in the wind.
- It genuinely felt like a stylized anime sequence. A small one, sure—but I got goosebumps watching it.
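For the API-curious, this no-prompt version is about the simplest call Runway offers, since the text prompt is optional on image-to-video. Again a rough sketch with the official Python SDK, with the model name and polling details as my assumptions:

```python
# Rough sketch: Runway image-to-video with no text prompt (prompt_text is
# optional). Model name and parameters are assumptions, not verified here.
import time

from runwayml import RunwayML

client = RunwayML()

task = client.image_to_video.create(
    model='gen4_turbo',
    prompt_image='https://example.com/kabu-jump-still.png',  # the Test #3 frame
    ratio='1280:720',
    duration=5,  # seconds
)

# The API is asynchronous: poll the task until it finishes.
while (task := client.tasks.retrieve(task.id)).status not in ('SUCCEEDED', 'FAILED'):
    time.sleep(5)
print(task.output)  # a list of video URLs on success
```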
Test #5 – Seeing What Google Veo (Flow) Could Do
I wanted to see how Google’s new model stacked up. I tested a similar jump animation using the same base image.
- Google Flow matched Runway’s output in most respects, and the motion was maybe even slightly smoother, but it was also slower, so I probably still prefer Runway’s output.
Final Test – Stitching a Sequence in Google Flow
Start Frame + End Frame + Prompt + Camera Direction (tilt up)
“Hero leaps into the air to attack. Anime speed lines rushing in the background to show motion.”
This is where things kind of fell apart. I tried giving Veo more structure by feeding it a start and end frame, a camera movement, and a prompt.
- The result? Jank as hell.
- Kabu starts flapping his arms like a seagull as he lazily hovers into the air, then flails all of his limbs like he’s trying to keep his balance. Everything blurs into a mess, and we end on a frame that looks like the one I supplied after a night out and too many tequilas.
- Perhaps it was because the start frame was actually from Midjourney and wasn’t really a 1:1 character match with the end frame from Runway. Whatever it was, clearly I have a lot to learn about prompting Flow. Or maybe it’s just not there yet.


Start frame (Midjourney) and End frame (Runway)
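Flow itself has no public API, but the Veo model behind it is reachable through the google-genai SDK, and as far as I know Vertex AI’s Veo supports exactly this start-frame-plus-end-frame setup. A rough sketch, with the model name, the last_frame option, and the project details all being my assumptions; there’s no separate camera-direction field here, so the tilt-up has to live in the prompt:

```python
# Rough sketch: a start-frame + end-frame Veo call via the google-genai SDK.
# The last_frame option is, as far as I know, Vertex AI only; the model name
# and field names are assumptions based on my reading of the docs.
import time

from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project='my-project', location='us-central1')

operation = client.models.generate_videos(
    model='veo-2.0-generate-001',
    # No camera-direction field in the API, so the tilt-up goes in the prompt.
    prompt=(
        'Hero leaps into the air to attack. Anime speed lines rushing in the '
        'background to show motion. Camera tilts up to track him.'
    ),
    image=types.Image.from_file(location='start_frame_midjourney.png'),
    config=types.GenerateVideosConfig(
        last_frame=types.Image.from_file(location='end_frame_runway.png'),
        aspect_ratio='16:9',
    ),
)

# Video generation is a long-running operation: poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)
print(operation.response.generated_videos[0].video.uri)
```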
Final Thoughts
Despite the jank, this was genuinely thrilling. Tools like Runway and Google Flow aren’t fully reliable yet—but they are insanely close to being usable for early concepting and vibe-setting.
I’ll definitely keep experimenting. If you’ve tried these tools and have tips (especially on how to prompt Google Flow better), let me know.