I’ve spent the last couple of weeks recreating movie scenes with AI. The hard part isn’t the tech; it’s understanding cinematography: camera movement, lighting, framing, and mood. So I built an MCP tool that does that heavy lifting.
For the test, I recreated a 24-second montage from Rebels of the Neon God using AI image and video tools.
Why manual AI video work fails:
• Endless guessing of camera terms
• $50+ wasted on bad generations
• No clear idea which tool fits which shot
• Results never match the original look
What my #MCP tool does:
Powered by #Gemini 2.5 Pro, it breaks a video into shots, extracts composition, lighting, color, and motion, then auto-generates prompts for text-to-image, image-to-image, and video tools. The result: AI outputs that actually match the scene.
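The per-shot analysis step can be sketched roughly like this. This is a minimal illustration only, not the tool's actual schema: the `Shot` fields, function names, and example values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    # Attributes the analysis pass extracts per shot (illustrative names).
    start_s: float
    end_s: float
    composition: str   # e.g. "low-angle medium close-up"
    lighting: str      # e.g. "hard neon key, deep shadows"
    color: str         # e.g. "teal and magenta, crushed blacks"
    motion: str        # e.g. "slow dolly-in"

def to_t2i_prompt(shot: Shot) -> str:
    """Fold the extracted attributes into a text-to-image prompt."""
    return (f"{shot.composition}, {shot.lighting}, "
            f"{shot.color} color grade, cinematic still")

def to_i2v_prompt(shot: Shot) -> str:
    """Image-to-video prompt: camera motion plus shot duration."""
    return f"{shot.motion}, {shot.end_s - shot.start_s:.1f}s shot"

shot = Shot(0.0, 3.2, "low-angle medium close-up",
            "hard neon key, deep shadows",
            "teal and magenta, crushed blacks",
            "slow dolly-in")
print(to_t2i_prompt(shot))
print(to_i2v_prompt(shot))
```

The point of the structure: each generation target (text-to-image, image-to-image, video) gets its own prompt built from the same extracted attributes, so outputs stay consistent with the source shot.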
What worked best:
• Grok Imagine: handled ~60% of the video; best price-to-quality ratio
• Midjourney: nailed color and mood
• Veo 3.1: great for complex camera movement
• Nano Banana / Imagen: strong for close-ups and image editing
24 seconds of usable footage. $3 in credits. 2 hours total. Same workflow scales to: product ads, music videos, pro content, or film practice without gear.
Want the full breakdown (MCP architecture, best tools, exact prompts)? Head over to https://x.com/foundbymoe/status/1984628337273077865 #AI #AIVideo #GenerativeAI #AIFilmmaking #OpenAI #sora2 #Halloween