Veo 3.1 is impressive! I took the Ingredients feature in Fast for a spin, using the isolated elements (the Alien on an alpha channel, the Newscaster on a white background, and the news set cleaned of any characters with Photoshop generative fill) that would have gone into my Runway References workflow back in late May, when I originally made 'The Visitor'. Very impressed with the consistency it maintained!

Audio generation is much cleaner so far, free of the heavy artifacting that came with most earlier generations. And the humour in the anchor's performance, peering around the corner... nice touch!

Prompt: "Make the Alien in image 1 sit at the newsdesk on image 2. The News Anchor in image 3 is peering around a background set corner at the alien in disbelief. The Alien is talking about ways to bring about world peace instantly."

Note that Ingredients is only available for Veo 3.1 Fast directly through Google's Flow platform... which is well worth the Ultimate price tag if you're doing high volumes of generations (you can generate 20 clips at once!), since Fast generations are still unlimited. More to come as we learn more about this new beast of a model.

#alien #cliffhanger #veo3 #google #deepmind #test #consistency #film #ai
-
Is Google Veo 3.1 a Sora 2 killer? Here's what you need to know (in under a minute):

1) Three images instead of one
- Upload three images: your character, object, and scene
- Google blends them all into one video

2) Better beginnings and endings
- Upload your start frame and end frame
- Veo creates smooth transitions between them
- Continuous videos with matching sound effects

3) Improved character quality
- Better consistency throughout videos
- Enhanced lighting and cinematic feel
- Characters look more realistic (still not perfect)

My honest take? Not amazing, but solid progress, and it's giving Sora 2 a run for its money. Sora still wins on character realism, but Veo shines in scene composition and control. It hasn't fully passed the Will Smith test yet, which remains the gold standard for character consistency in AI video.

PS: Found value? Follow Charlie O'Brien for more posts like this! 🔁 Repost to help your connections!
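[Editor's sketch] The three-image workflow above amounts to bundling reference images with a prompt. A minimal sketch of that idea, assuming a hypothetical request shape — the field names and model id below are illustrative, not Google's actual API:

```python
# Hypothetical request builder for the "three images" workflow described
# above. Every field name here is an illustrative assumption, not the
# real Veo/Flow API.

def build_ingredients_request(character_img, object_img, scene_img, prompt):
    """Bundle three reference images (character, object, scene) with a prompt."""
    return {
        "model": "veo-3.1-fast",  # assumed model id, for illustration only
        "prompt": prompt,
        "reference_images": [character_img, object_img, scene_img],
    }

req = build_ingredients_request(
    "character.png", "object.png", "scene.png",
    "Place the character and object into the scene.",
)
print(len(req["reference_images"]))  # -> 3
```

The point is the contract, not the code: one prompt plus exactly three references, which the model blends into a single clip.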
-
While you were learning Sora 2, Google just changed the game. On Oct 15th, Veo 3.1 dropped. Most creators are still figuring out how to prompt AI video models; Google just made much of that irrelevant.

Here's what they shipped:
1️⃣ Native audio across ALL video features — dialogue, ambience, SFX baked in ✅
2️⃣ Insert/remove objects mid-scene — lighting & shadows auto-adjust 🎬
3️⃣ Frame-to-frame control — specify first/last image, AI bridges seamlessly 🎯
4️⃣ 60+ second continuous shots — extend from last frame, no breaks ⚡

This isn't incremental; it's a workflow reset. Five months ago Google shipped Veo 3 at I/O 2025, and creators said "give us more control." Google DeepMind just gave you a full post-production suite inside Flow. The gap: you can now insert a character into a scene and Flow recalculates shadows, lighting, and depth automatically. Your competitor isn't spending six hours in After Effects anymore; they're shipping finished video in 15 minutes.

👉 Tools to act now:
⚡ Manus - It doesn't assist. It executes. You delegate. It delivers. 🔗 https://lnkd.in/d3Ami8eK

💬 What's your first use case for Veo 3.1?
♻️ Repost if you're rethinking your video workflow
➕ Follow Christian Schmidt for AI & business strategy

#AIVideo #GenerativeAI #ContentCreation #VideoProduction #GoogleVeo
📎 Sources: https://lnkd.in/dZG8GJBh https://lnkd.in/dANYrZgJ https://lnkd.in/dNeknfUD
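[Editor's sketch] The frame-to-frame control in point 3️⃣ above can be illustrated with a toy bridge between two values. Real models generate coherent motion, not linear blends; this sketch only shows the first/last-frame contract, with single brightness values standing in for frames:

```python
# Toy illustration of frame-to-frame bridging: given a "first" and "last"
# value, produce evenly spaced in-betweens. A real model synthesizes
# plausible motion; linear interpolation here is purely conceptual.

def bridge(first: float, last: float, steps: int) -> list:
    """Return `steps` values interpolating from `first` to `last`, inclusive."""
    if steps < 2:
        raise ValueError("need at least the first and last frame")
    span = last - first
    return [first + span * i / (steps - 1) for i in range(steps)]

print(bridge(0.0, 1.0, 5))  # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```

You supply the endpoints; the model owns everything in between.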
-
I just broke gravity with a single text prompt. Forget simple stock footage: this is what Google Veo 3.1 is truly capable of. If you thought AI couldn't handle physics or emotion, prepare to be amazed.

I challenged Veo 3.1 to create an 8-second cinematic scene involving physics-defying motion (liquid gold floating upward!), complex geometry, and a custom, synchronized audio score. The model nailed the chiaroscuro lighting and the seamless camera track.

This demonstrates the level of control we now have over:
* Temporal consistency: flawless object and liquid motion.
* Integrated audio: no post-production sound needed; it's generated in sync.
* Cinematic prowess: directing high-end lighting and camera work with simple language.

Organizations using this technology are already moving from concept to polished asset in hours, not weeks. If you want to elevate your brand's video content or need a breakdown of Veo 3.1's advanced prompting techniques, I'm here to help. Let's connect on the future of generative media. Send me a message or follow my journey right here: https://lnkd.in/dVjTgBrb
-
Google DeepMind just dropped Veo 3.1, and it's a serious leap forward in text-to-video generation: higher fidelity, more control, and a notable move toward coherent storytelling, not just eye candy.

What's new:
▶️ Significantly improved motion quality and temporal consistency
🎬 New "director mode" lets you specify camera movement (pans, zooms, etc.)
📈 Better text-to-video alignment, especially for dynamic scenes
🌊 Real-world physics and fluid rendering look shockingly good

From a creative tech lens, this isn't just about sharper frames; it's a shift toward usable generative video for media, advertising, and entertainment. One-prompt, multi-shot storytelling is now within reach.

Also worth noting: Veo now powers Dream Screen for YouTube Shorts, while the separate Dream Track experiment lets fans generate AI soundtracks featuring artists like John Legend and Demi Lovato. That's not just a demo; it's deployment at scale.

Still in limited preview, but it's clear DeepMind is optimizing for both control and composability. We're inching closer to generative tools that behave more like a DP and editor than a magic paintbrush.

Full post: https://lnkd.in/e8w4Rxam

#GenerativeVideo #AIInnovation #Veo3
-
I just tested Veo 3.1 — and it's absolutely next-level.

Google's newest generative video model takes everything from Veo 3 and supercharges it. The realism, motion quality, and creative control feel like a huge leap forward in AI video generation. Here's what really stood out to me 👇

🔥 Audio built in: Veo 3.1 now generates synchronized sound — ambient effects, dialogue, and background audio that actually match the scene.
🎨 Reference-image control: guide the video style or characters using up to 3 reference images — perfect for consistency across clips.
🎬 Scene transitions: start from one image and end with another — Veo 3.1 fills in everything in between, creating cinematic transitions.
✂️ Object-level editing (in Flow): add or remove objects in the generated video, giving creators real post-production control inside the AI workflow.
🧠 Better prompt accuracy: the model understands creative direction much more precisely, especially for camera motion, lighting, and emotion.

Honestly, it feels like we're witnessing the point where text-to-video meets film editing. Veo 3.1 isn't just generating — it's co-creating.

#Veo3 #AIvideo #GoogleDeepMind #GenerativeAI #VideoCreation #Innovation
-
Google's Veo 3.1 is here, and here is everything you need to know in under a minute. Veo 3.1 is a massive leap for generative video, moving beyond short clips into serious narrative territory.

Here's the technical breakdown of why this is a game-changer:
- Extended runtimes: forget 10-second clips. Veo 3.1 can extend scenes to nearly 2.5 minutes, enabling actual storytelling and sequence development.
- Character consistency: a major hurdle is solved. You can now upload a reference image to lock in a consistent character throughout the entire video.
- Production-ready quality: the output is getting more professional, with 1080p resolution, richer cinematic styles, and integrated studio-quality audio.

The proof is in the adoption. With over 275 million videos already generated, creators are using it for everything from social media content to professional storyboards for films. Veo 3.1 is rapidly closing the gap between a simple prompt and a finished product, shifting from a novelty AI toy to an indispensable creative tool.

What application of this technology excites you the most?

#AI #Google #Veo #GenerativeAI #AIVideo #Filmmaking #ContentCreation #Tech #Innovation
-
🔥 Major upgrade for Weavy creators! 🔥

The Weavy team just dropped Video Transformation, and it's a game-changer for video workflows. Now, everything we can do to images, we can do to video too!

» One single compositor for stills and motion.
» Layer multiple videos, images, and text right on the canvas.
» Nodes like Crop, Levels, Blur, and Merge Alpha now work directly on footage.

This makes creating split views, titles, and complex layered scenes so much faster. 👉🏻 You know the drill: the AI generates, you direct. 🤖

Check out this template to get started, and a deep-dive tutorial from Kim Köhler (Weavy):
Template: https://lnkd.in/dXXHrNgc
Tutorial: https://lnkd.in/dBWM85HR
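[Editor's sketch] The node idea above — Crop, Levels, and friends chained on a canvas — is plain function composition under the hood. A toy, list-based sketch where a "frame" is just a list of pixel values; the node names mirror the post, but the implementation is illustrative, not Weavy's actual API:

```python
# Toy node-graph compositor: each node is a function on a frame, and a
# workflow is a chain of nodes applied in order. Illustrative only.

def crop(frame, start, end):
    """Keep only the pixels in [start, end)."""
    return frame[start:end]

def levels(frame, gain):
    """Scale every pixel value by `gain`."""
    return [p * gain for p in frame]

def run_chain(frame, nodes):
    """Apply each node in order, like wiring nodes left to right on a canvas."""
    for node in nodes:
        frame = node(frame)
    return frame

out = run_chain(
    [10, 20, 30, 40],
    [lambda f: crop(f, 1, 3), lambda f: levels(f, 2)],
)
print(out)  # -> [40, 60]
```

The same chain that works on one still works on every frame of a clip, which is essentially what "nodes now work directly on footage" means.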
-
AI videos up to 2.5 minutes? With you in it? Here is how I did it in Veo 3.1, and why it's still not perfect.

The new Veo 3.1 by Google now allows you to extend your videos from 8 seconds up to 148 (!!!) seconds. That's nearly 2.5 minutes. AND, more importantly, it does so based upon the last second of your previous video.

Why is this relevant? Video extensions used to be based upon the last frame (1/25 of a second). Result: movements were janky and inconsistent with the previous scene.

So all great now? Not yet. Why? 🤔
+ the audio changes in the extension (you can use a voice changer on top)
+ lack of character consistency in the extension if the character is facing away
+ lack of scene consistency (here, the room) for details shown more than 1 second ago

Still... a huge step up. I'm sure they are going to extend the "context window" beyond 1 second.

Want to give it a try? 👉 Here is how I did it (detailed guide at the end of the video):
+ Veo 3.1 in Google Flow
+ generate the initial clip with Veo 3.1 Fast (8 seconds)
+ use the Ingredients feature (my photo, my cat, and a Chinese man)
+ then continue to extend (via Scenebuilder)

❓ Would you have noticed the transitions if it were NOT for the sound?
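[Editor's sketch] A quick sanity check on the numbers above, assuming each extension renders a fresh 8-second segment that overlaps the final 1 second of the previous one — an inference from the post, not a documented spec:

```python
# Back-of-the-envelope math for the 8 s -> 148 s extension chain.
# Assumption (inferred from the post): each extension renders 8 s but is
# conditioned on, and overlaps, the last 1 s, so it adds 7 s net.

INITIAL_S = 8
NET_GAIN_S = 8 - 1  # 8 s rendered minus 1 s overlap

def total_length(extensions: int) -> int:
    """Total clip length in seconds after `extensions` extension passes."""
    return INITIAL_S + extensions * NET_GAIN_S

# Under this assumption, 20 extensions land exactly on the 148 s cap.
print(total_length(20))  # -> 148
```

148 seconds is 2 minutes 28 seconds, which is where the "nearly 2.5 minutes" figure comes from.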
-
AI video just got more useful. Sora 2 is now built into Descript and allows for:
- multi-shot instructions
- synced dialogue and sound effects
- realistic effects

Build full productions faster, without leaving your timeline.