The first system command in the universe was light. It’s the foundational idea and code rooted in our collective beliefs about the beginning of everything.
That idea has trickled down through time in many ways.
It hit me again recently at a photography festival where I learned not to treat artificial intelligence as the enemy.
AI is a useful and supportive tool for an artist’s craft. But it’s never the source of our vision.
I’ve been leaning into that mindset in testing different tools and managing my expectations.
I recently attempted a Disney-style trailer inside Google's Flow tool on my personal computer. Here's what I observed while unleashing my imagination.
Gemini’s AI Pro Student plan motivated my creativity
Flow takes the word out of your head
I recently picked up mobile videography and photography as another expressive lane. They complement my writing and poetry in creating cinematic journal entries.
I try to create and upload as many posts as I can on social media. The Spring editing app is my trusty companion and favorite CapCut alternative. The most important editing features are free.
It also has a rich stock content library to practice with. I’ve mixed my own footage with some when I needed filler shots.
But stock footage hits a ceiling: anyone else can use the same clips, so your idea blends into a pool everyone has access to.
Google’s Flow filled that gap. I’m able to offload an idea from my mind to the screen without shooting from scratch or searching for generic materials.
It’s an experimental AI video workspace inside Gemini’s Lab section. It’s integrated with Veo 3.1, which is the latest and fastest model.
I came across the tool while exploring my Gemini AI Pro student plan features. Although I accessed the web app through Chrome, it exists in the Gemini app.
You can’t use it without a subscription. It costs $19 and $240 monthly for the AI Pro and Ultra plans, respectively.
Prompt your next viral video in seconds
However, 8 seconds is a long time on Flow’s clock
Real-world videography is nowhere near streamlined. You're juggling locations, audio, lighting, timing, people, and many other things.
Coordinating them all demands professional discipline. Without it, the moving parts pile up in your head and you risk losing your spark.
It's extreme to say you can replace those elements with AI. But I love that the logistics collapse into one question that removes some of the hard parts: "What do you want to see?" The whole pipeline starts with a description, and your words alone become the production department.
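Flow keeps that question behind a point-and-click screen, but the same Veo models are also reachable through the Gemini API if you'd rather script a shot. Here's a minimal sketch using the google-genai Python SDK; the model ID and the prompt are placeholders, and the exact Veo 3.1 name your account exposes may differ.

```python
import time
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Kick off a text-to-video generation; Veo runs as an asynchronous long-running operation.
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed model ID; check what your plan exposes
    prompt="A glowing seed pulsing with light beneath an old baobab tree at dusk, "
           "stylized 3D animation, slow push-in",
)

# Poll until the clip is ready (usually a minute or two).
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the finished clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("clip_01.mp4")
```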
However, there’s an eight-second limit per clip. It dampens your motivation to milk the tool for all it’s worth. I generated my first clip and admired it for seconds, only to watch it end quickly.
Thinking about it now, I understand that it’s more of a quality-control thing than Google trying to frustrate me.
Video generation at this level is unstable the longer you stretch it. Past a certain point, the model will start drifting and producing weird results. Once, my character’s eyes rolled back into their head, and it scared me.
Continuity was a major challenge at first. Google’s video model doesn’t automatically know if you want to keep the momentum. The scenes changed completely every time I generated a new shot.
You can avoid this mistake in the Scenebuilder. The feature lets you stitch videos together without needing a third-party app.
Tap its button in the upper-left corner to access it. Then, tap the + button and select Extend. Enter a prompt describing what happens next.
Also, alongside your prompt, switch the mode from Text to video to Frame to video. You'll take a screenshot of the last frame and feed it into Flow to remind the tool of what you're working with. Then slowly introduce the next scenes.
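If you ever move this trick to the API side, it maps almost one-to-one: pull the final frame out of the previous clip and hand it to the next generation as the starting image. A rough sketch, assuming ffmpeg is installed and the same google-genai setup as above; the file names, prompt, and model ID are placeholders.

```python
import subprocess
import time

from google import genai
from google.genai import types

client = genai.Client()

# Pull the last frame out of the previous clip (seek 0.1 s before the end of the file).
subprocess.run(
    ["ffmpeg", "-y", "-sseof", "-0.1", "-i", "clip_01.mp4", "-frames:v", "1", "last_frame.png"],
    check=True,
)

# Feed that frame back in as the starting image so the next shot keeps the same look.
operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed model ID
    prompt="The camera pulls back as the glow spreads across the clearing",
    image=types.Image(
        image_bytes=open("last_frame.png", "rb").read(),
        mime_type="image/png",
    ),
)

while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("clip_02.mp4")
```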
Flow is not as frictionless as it sounds
You’re going to need a bigger bucket of credits to pour your ideas into
I cooked up a story combining animation styles from my favorite Disney movies: Moana, Tangled, and Onward. Basically, self-discovery, nature, humor, a great storyline, sibling bonding, and quest spirit. I called it Lightseed (tentative) after the Lightseed editing app.
The premise covers an African girl and her brother who discover a glowing seed beneath an old baobab tree. They take it back to the village and plant it in a secluded corner.
It grows to reveal a doorway to a lush green world where nature doesn’t follow the laws of physics. It borrows inspiration from Jack and the Beanstalk and Alice in Wonderland.
They exit the portal to find a devastated wasteland and learn that the lush world is a siphon; they must find a way to reverse the damage. Eventually, they learn that one sibling must stay in the siphon world and become its anchor. The story ties into climate change themes.
I drafted the prompts in Google Docs before pasting them into the Scenebuilder one by one. Once I nailed the first clip, the results were great, and so were the voiceovers and background music.
However, Flow is only powerful when it actually works, and most times it's a gamble. The 1,000 monthly credits my plan offers sound generous, but I burned through them quickly and still didn't have a final video.
Prompts may require multiple iterations, and each clip costs 20 credits. It looks like I’ll be trying again next month. Upgrading to the Ultra plan with 25,000 credits is overkill for a tool I’ll hardly use.
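To put numbers on that: at 20 credits per clip and 1,000 credits a month, even a modest trailer eats most of the budget before revisions. The trailer length and retry count below are only illustrative guesses.

```python
# Back-of-the-envelope Flow credit math on the AI Pro plan.
MONTHLY_CREDITS = 1_000   # AI Pro monthly allowance
CREDITS_PER_CLIP = 20     # cost of one generation
CLIP_SECONDS = 8          # maximum clip length

trailer_seconds = 90      # assumed target length for a short trailer
attempts_per_scene = 4    # assumed retries before a scene looks right

scenes = -(-trailer_seconds // CLIP_SECONDS)      # ceiling division: 12 scenes
generations = scenes * attempts_per_scene         # 48 generations
credits_needed = generations * CREDITS_PER_CLIP   # 960 credits

print(f"{scenes} scenes x {attempts_per_scene} attempts = {generations} generations")
print(f"Credits needed: {credits_needed} / {MONTHLY_CREDITS}")
```

At four attempts per scene, a 90-second trailer alone nearly empties the month, and every scene that needs a fifth or sixth take pushes you over.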
Also, Veo 3.1 can do human-style animation. But it’s not something you should rely on for commercial purposes. Sometimes, there are facial glitches and unnatural movements.
Besides the watermark on your video, these issues are a reminder that you're not fooling anyone who's observant.
Surprisingly, I've seen studio-ready videos from other users on YouTube. But I can confidently say you won't get there on your first month's credits.
I'd stick to abstract, inanimate, or ad concepts, where jarring errors are less likely. Lightseed is the farthest I've pushed into character-driven animation.
Experiment with the future’s camera
Flow has great potential. It could become the best film sandbox ever built if Google tightens its consistency and makes it more accessible.
Currently, it's limited on the lesser plans and too expensive on Ultra for the trial and error it demands.
I would’ve loved to design a character once and save them as a persistent asset. Then I could call them up on every prompt with tags.
Google should also lean into its uncanniness instead of pretending it’s flawless. It should let creators flag weird frames immediately, so that Veo can be retrained.
I’d just stick to making short funny videos until the tool is truly improved.