Sora 2 is absolutely insane. Credit for the video: Albert Bozesan. This new version takes short-form, audio-matched generative video to another level: improved physics, better prompt adherence, and more control than the first model. For quick reels, animated shorts, and content made for social feeds, it's starting to feel production-ready. It's aware of context: it understands backgrounds, scene logic, and fine details. It feels like a different beast entirely.

Where it's still weak:
- Long-range storytelling and multi-shot flow
- Text rendering, hands, and small objects
- Guardrails and safety controls

Where it's already strong:
- Sharp, believable short clips with audio in sync
- Flexible inputs (prompt or image)
- Horizontal and vertical outputs with dialogue and SFX

More realism. More accuracy. More control. This feels like the closest we've been to usable generative video yet.

#openai #sora2 #aivideo #genai #sora
-
☀️☀️ Sora 2 Face Inconsistency? Here's My Flora-Powered Workaround ☀️☀️

Sora 2 does not allow realistic human faces due to its policy restrictions, which makes it difficult to maintain visual consistency for a character across different scenes. Here's how I solved that, using Flora as the hub: I first design the spot, then generate a consistent model/character. To maintain coherence across angles, I lean on Kling or Veo 3, which help preserve the same "look" from different perspectives. Alternatively, you could extract a frame of the initial model, reimagine her in different positions or settings using Seedream or Nano Banana, and then animate her. It's a pipeline that gives continuity without losing realism, especially useful for spots, short films, or narrative pieces with recurring characters.

A full FLORA tutorial is coming soon 🎥 If you want to try FLORA now, here's my link 👉 🩷 🩷 (25% off for 12 months): https://lnkd.in/dgKcsupm

And if you like watching my AI experiments, hit Follow; more to come.

#alessandrabalzani #tutorial #sora2pro #florofaunaai #kling #veo3
-
🚨 Sora 2 is making headlines, but China's Wan 2.5 is quietly keeping pace.

We ran the exact same prompt across both models:
🧾 "Person slicing a strawberry and a mango into thin, even pieces with a sharp knife"

And the results? Surprisingly close, but with key differences 👇

🎥 Sora 2 (OpenAI)
• Strong physics and reflections
• Multi-shot storytelling works beautifully
• Accurate audio syncing
• Needed multiple attempts for polish
💰 Cost: $0.15/video

🎬 Wan 2.5 (Chinese model)
• Nailed it on the first try
• Smooth, cinematic camera motion
• Realistic lighting and native audio generation
• Efficient and consistent
💰 Cost: $0.10/video

🤔 Takeaway: Sora 2 might own the spotlight, but Wan 2.5 is quietly outperforming on speed, cost, and precision. AI video is now global, fast, and collaborative, and the competition is heating up.

Which model should we benchmark next? 👇 Drop your suggestions in the comments.

Want your AI product to scale fast? We're helping 13M+ creators and companies grow with community + infrastructure.

#Sora #Wan25 #AIvideo #GenerativeAI #OpenAI #AIshowdown #TechComparison #MachineLearning #ArtificialIntelligence #InnovationWatch #AItools #VideoGeneration #AIcommunity
-
Getting deeper into the visuals of #SceneBuilder in #Veo3, here's a closer look at one of my favourite tools: the Jump In factor! 🎬

This simple video was generated through Scene Builder in #Veo. To begin, I created a base video using a simple text prompt in Veo 2, which provides a clean visual foundation without sound. Then, to bring the scene to life and add that emotional "jump," I switched to Veo 3 Quality. This version automatically generates natural sounds and background ambience, making the transition feel #smooth and #cinematic.

💡 Remember: Jump In can only generate videos in Veo 3 Quality.

The Jump In element helps you shift between contrasting emotions or moments, for example moving from a joyful café scene to a quiet, reflective night. It's what gives your #story depth and your #visuals flow. Here's today's short example created using the Jump In feature.

Next, I'll share how the #Extend factor builds continuity and camera flow, taking your storytelling and visual representation to the next level.

#AI #VEO3 #SceneBuilder #VideoEditing #AICreation #Storytelling #ContentTools #VisualEditing #AIVideo
-
Veo 3.1 is impressive! I took the Ingredients feature in Fast for a spin, using the same isolated elements (alien on an alpha channel, newscaster on a white background, and the news set cleaned of any characters with Photoshop generative fill) that went into my Runway References workflow back in late May, when I originally made 'The Visitor'. Very impressed with the consistency this maintained! Audio generation is much cleaner so far, free of the heavy artifacting that came with most earlier generations. Also, the humour of the anchor's performance, peering around... nice touch!

Prompt: Make the Alien in image 1 sit at the newsdesk on image 2. The News Anchor in image 3 is peering around a background set corner at the alien in disbelief. The Alien is talking about ways to bring about world peace instantly.

Note that Ingredients is only available for Veo 3.1 Fast directly through Google's Flow platform, which is well worth the Ultimate price tag if you're doing high volumes of generations (you can generate 20 clips at once!), as Fast generations are still unlimited. More to come as we learn more about this new beast of a model.

#alien #cliffhanger #veo3 #google #deepmind #test #consistency #film #ai
-
📢 OpenAI Creative Tools Announcement

Sora 2 API Preview: Next-Gen Video Creation 🎬
• Generate stunning, cinematic videos from text or images
• Full control over length, style, aspect ratio, and sound
• Synchronizes rich soundscapes with visuals for lifelike results

✅ Bring real-world photos to life with motion and ambient audio
✅ Integrated directly into the OpenAI API for creative workflows
✅ Used by partners like Mattel for rapid prototyping and design

🗓️ Sora 2 API available in preview today
🔗 Learn more: https://lnkd.in/edmaQXEd
🔔 Follow for more OpenAI creative model updates

#OpenAI #Sora2 #AIVideo #DevDay2025 #GenerativeAI
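For developers curious what "integrated directly into the OpenAI API" might look like in practice, here is a minimal sketch of assembling a video-generation request body. The field names (`model`, `prompt`, `seconds`, `size`) and the `sora-2` model id are assumptions inferred from the announcement, not verified against the official API reference; check the real schema before use.

```python
import json

def build_sora_request(prompt, seconds=8, size="1280x720", model="sora-2"):
    """Assemble a JSON payload for a hypothetical Sora 2 generation call.

    All field names here are illustrative assumptions; consult the
    official OpenAI API reference for the actual request schema.
    """
    payload = {
        "model": model,     # assumed model identifier
        "prompt": prompt,   # text description of the desired clip
        "seconds": seconds, # requested clip length
        "size": size,       # output resolution, "width x height"
    }
    return json.dumps(payload)

# Example: a short clip request with ambient audio described in the prompt
request_body = build_sora_request(
    "A paper boat drifting down a rain-soaked street, soft ambient rain audio"
)
```

The point of keeping the payload builder separate from any HTTP call is that length, aspect ratio, and sound direction (the controls the announcement highlights) stay in one testable place.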
-
🎥 When AI says "no," your real work begins.

Sora 2 is not the creative playground people expected; it's a rules-heavy, compliance-first video engine. But here's the twist: the constraints are your edge. 🔒 If you can structure prompts that pass Sora's filters and still look cinematic, you're not just prompting, you're directing under pressure.

👀 Meanwhile, models like Wan 2.5 offer more freedom, smoother audio/video sync, and fewer creative restrictions.

Here's how I'm testing both:
1. Build JSON-based prompt frameworks that mirror the content policy.
2. Run side-by-side generations in Wan vs. Sora.
3. Treat every Sora rejection as a signal, then re-engineer the prompt for compliance plus story power.

💡 This is part of a new technique I call the Redline Prompt Loop, built into my Story Engine. You prompt. It fails. You learn. Then rebuild better.

#AIStorytelling #PromptEngineering #Sora2 #WAN25 #CinematicAI #CreativeConstraints #GenerativeVideo #FinbarVisual
-
The Secret to Viral VFX Videos 🃏 is This 3️⃣-Step AI Hack #aitools #aivfx

↪️ Want to create stunning, cinematic videos from a single image? Stop wasting time on complex editing software. 👀 In this video, I'll show you a simple 3-step process using a powerful AI tool to generate professional, attention-grabbing videos. This free AI tool gives you the power of a Hollywood VFX artist without the price tag.

#AIvideo #VFX #CinematicVideo #AItools #ContentCreation #MarketingTips
-
With Sora 2 you can turn a storyboard into a fully edited sequence, with synced audio, using just one prompt 🤯

Update: lots more on this in Tim Simmons' latest video 👉 https://lnkd.in/e9j9_j8V (starts at 12:55)

Prompt: Create a full colour movie from this storyboard with dramatic, shaky, hand-held shots and intense emotion. Realistic waves. Cinematic colour grade and 35mm film grain.

It took around 5 minutes to generate this entire piece. It may not follow the storyboard accurately, and the resolution (for now) is only 1280 x 704, but for day 1 this is super impressive. A whole new era of almost instant creation has arrived. I'm kind of nostalgic for the old ways already, but at least I can now make AI films as fast as Dave Clark 😁

The technical achievement of creating at this speed is impressive, but to be clear: what Sora 2 is doing here offers very limited creative control. You are handing that over to the AI entirely when working this way. Right now it's a novelty, but I think people will quickly tire of 'the Sora 2 look', especially once our socials become flooded with it. That's when truly original creative work, which still takes time and effort to make, will rise to the top again. And for now at least, that can only be created by humans.

#sora2 #gamechanger
-
Veo 3.1 is here, and it's wild. Google just dropped its biggest update yet for AI video creators. Veo 3.1 now understands stories better, captures ultra-realistic textures, and even adds audio and dialogue that feel natural.

What's new:
🎬 Ingredients to Video: use reference images to lock in style and consistency
🖼️ First & Last Frame: define how your story begins and ends
⏩ Extend: grow your clip beyond 8 seconds, smoothly
💡 Add or insert new elements directly into your video (with shadows and lighting handled automatically!)
🧹 Object removal coming soon

This update feels like AI filmmaking is finally stepping into director mode. Excited to experiment with it soon.

#Veo #GoogleDeepMind #AIvideo #Flow #AIfilmmaking #CreativeTech
-
Thanks for sharing my vid! It’s my favorite Sora creation so far 🤣