Runway’s NEW Act-One Update: Create Multiple AI Actors With Your Face
- Published Feb 5, 2025
- 🤯 Bring Your AI Actors to LIFE! 🤯 Runway just dropped a GAME-CHANGING update to Act-One, and it's about to revolutionize how we create AI-generated videos! In this tutorial, I'll show you how to use your own face and performance to create incredibly realistic and dynamic AI actors with lifelike lip-syncing.
What you'll learn in this video:
How to use Runway's new Act-One update with video inputs.
Creating dynamic scenes with AI Performance Capture and Realistic Generative AI Video.
Tips for recording the perfect performance video for optimal results (lighting, framing, etc.).
Using pre-rendered videos and AI video generators (like Minimax, Kling, and more) for character references.
Understanding and using the Motion Intensity setting for fine-tuned facial expressions.
Changing your AI character's voice using tools like ElevenLabs, Minimax Hailuo, and my personal favorite, PlayAI (see the sketch after this list).
Creating realistic dialogue scenes with multiple AI characters.
Even using animated characters with Act-One!
Troubleshooting common issues and getting the best results.
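Not something shown in the video itself, but if you'd rather script the voice-swap step than use a web UI, here is a minimal sketch against ElevenLabs' speech-to-speech ("voice changer") REST endpoint. The API key, voice ID, model name, and file names are placeholder assumptions; check the current ElevenLabs docs before relying on it.

```python
# Hedged sketch: re-voice a recorded performance with ElevenLabs' speech-to-speech
# endpoint so the Act-One character speaks in a different voice.
# VOICE_ID, file names, and model_id are assumptions for illustration only.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"
VOICE_ID = "your-target-voice-id"  # the voice the AI character should use

url = f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}"

with open("performance_audio.wav", "rb") as audio:
    response = requests.post(
        url,
        headers={"xi-api-key": API_KEY},
        files={"audio": audio},
        data={"model_id": "eleven_multilingual_sts_v2"},
    )

response.raise_for_status()

# Save the converted audio; pair it back with the driving video in your editor.
with open("character_voice.mp3", "wb") as out:
    out.write(response.content)
```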
Key improvements in Act-One:
Dynamic Lip Sync: Say goodbye to static, lifeless AI characters!
AI Performance Capture: Capture your own performance and transfer it to your AI actor.
Realistic Generative AI Video: Create highly dynamic and engaging scenes.
Tools mentioned in this video:
Runway ML (Act-One)
Minimax Hailuo
Kling AI
ElevenLabs
Minimax Audio
PlayAI (Free!)
CapCut
Flux
#runwayactone #runwaygen3
Like this video if you found it helpful and subscribe for more in-depth AI tutorials! Let me know in the comments what you think of the new Act-One update and what you're creating with it!
Didn't try it out yet, but when I see your video I'm gonna hurry to my PC and check it out!🔥
Great video - thanks for sharing all your preferred apps and portals. The lip syncing capabilities are still not very good, IMHO, but we're getting there!
Always good stuff, bro. I'm less enthused by the lip-sync quality of Runway atm, though. Can't wait for this tech to improve so we can really do decent shot-reverse-shot scenes!
Thanks!!
Thanks man!!! LOL Love the video. As for Act-One - if I feed it a driving video, what is the accepted resolution? 1920 x 1080? 500 x 500? Is there a specific size needed?
Best lip sync so far is HeyGen, the king of kings, but it's expensive for experimental purposes, so LivePortrait is still the best option.
Great news! Too bad Runway hasn't updated the video generator to the level of Kling AI yet.
I prefer minimax for characters. And kling for landscapes 😊
@CALIODD Runway and Minimax have one plus - unlimited subscriptions. It's a shame Kling AI doesn't add such an option to its plans. In my opinion, Kling has the best quality of generated videos.
Runway only does well with close-up portraits and 3D animated videos; movement is the worst part of Runway, unfortunately.
That is the way, sir/ma'am @CALIODD
Funny thing is Kling works like crap too, so it's six of one, half a dozen of the other.
Insane!
how does it compare to liveportrait?
Perfect 😍
thanks
what tools to lip sync?
Very interesting. If I may, a few tips: the voice should have a few effects so it doesn't sound like a computer voice. You can download a free DAW ("Digital Audio Workstation") and add a reverb effect or whatever to the voice. You could also do that in a video editor that supports VST, but then you'd have to download the VST/plugin yourself, so in my opinion it's better to grab a ("stripped-down") trial version of a DAW, since it already comes with all the tools.
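For anyone who would rather script this than open a DAW, here is a minimal sketch using Spotify's open-source pedalboard library to add a light reverb to a generated voice. File names and effect settings are assumptions for illustration; tweak them to taste.

```python
# Minimal sketch: add a light reverb to an AI-generated voice so it sounds
# less dry/robotic. File names and settings are placeholder assumptions.
from pedalboard import Pedalboard, Reverb
from pedalboard.io import AudioFile

board = Pedalboard([Reverb(room_size=0.2, wet_level=0.15, dry_level=0.85)])

with AudioFile("ai_voice.wav") as f:
    audio = f.read(f.frames)   # shape: (channels, samples)
    samplerate = f.samplerate

processed = board(audio, samplerate)

with AudioFile("ai_voice_reverb.wav", "w", samplerate, processed.shape[0]) as f:
    f.write(processed)
```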
A good generation.
what about having the character walk around?
Can you please make a video about creating images locally and training our own LoRA? Or using others' LoRAs...
By the middle of 2025, we'll see full premium AI-generated films ❤❤
We will! But none of us would be able to pay a $12,000 subscription fee. Sora charges $200 for a 20-second video now.😂
@dasberlinlex You clearly have a comprehension problem.
@dasberlinlex Yes, maybe in the future there will be so many gen-AI tools that their cost will come down.
You'll get better results just using Kling and audio.
nice :)
CyberJungle, a slightly off-topic question for the video 🙂 - Is Sacha Baron Cohen by any chance your brother? I don’t know why, but you and he seem very similar to me! 😁
Totally! In fact, I was the main inspiration for the Borat character 😄
@@cyberjungle 😁😁😁
omg its Earl!
Still trying to complete my karma list 😄
I recently experimented with lip sync in a music video for the cover of Elizabeth (Ghost), performed by Elmas Mehmet from Salem Vocals. The video is now available on my channel.
Could you share the link to your channel? Thanks
Just tap his photo
0:53 V1 is better
♥️
FIRST: All of this technology is INCREDIBLE. Period.
But...
The Lips still SUCK. As they did in 3D for decades [and still do in many cases]. Lip SYNC fails without the proper Lip SHAPE. AI needs to be trained to accurately generate the bazillion muscles and mouth shapes associated with different phonemes [look into FACS - the Facial Action Coding System - presented by Melinda Ozel]. Until then, everyone's going to look like Charlie McCarthy v2.0.
I'm not being pessimistic, just honestly critiquing where things are at THIS moment. Do I think AI animations will ever become indistinguishable from a human? Oh yes, absolutely... and very soon -
[PS: It does not help that the actor in this case has a mustache and beard that completely obscure nearly all of his mouth and the region surrounding it - ]
R u armenian?
He has a Turkish accent, so....
Naw runway is money grabbin again 😂
Too bad it's so expensive...
:(
expressionless still
Thanks, Master, thank you.