Amazing, Runway is the master! I use it every day! BIG ❤
Leaving a comment for the algorithm god!
But seriously, thank you man, these reviews are very useful.
Thank you so much, it all helps! Glad you enjoyed the video.
The new update to Runway's Act One allows users to apply facial animations to characters in videos, not just images, revolutionizing character animation in filmmaking. Thanks
Thanks for watching.
Waiting for part 2. Subscribed so I don't miss the video.
Thanks for the clear breakdown and examples, super inspiring!
You’re welcome, it’s really fun to use.
That's what I was waiting for. And I am happy that Runway did that, because they have an unlimited subscription plan, so I can experiment without any doubts. Thanks for the review. 👍
Great job on this video. I used the lip sync and expand feature for a client and it worked awesome. You made some great examples. I didn’t know that I could also expand a landscape video even further.
That’s awesome you used it for a client; it is a really interesting tool. You can keep expanding, but the resolution does start to diminish, so I actually paste the original footage back into the expanded video to keep the centre area at a higher quality.
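If you want to do that paste-back step outside a video editor, here is a rough sketch of the idea using Python to drive ffmpeg (ffmpeg isn't covered in the video, and the filenames are just placeholders): it simply overlays the original clip, centred, on top of the AI-expanded clip.

```python
# Sketch only: overlay the original clip, centred, on top of the expanded clip
# so the middle of the frame keeps its original quality.
# Assumes ffmpeg is installed; "expanded.mp4" / "original.mp4" are placeholder names.
import subprocess

def paste_back_centre(expanded: str, original: str, output: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", expanded,    # the AI-expanded video (larger frame)
            "-i", original,    # the original footage (smaller frame)
            "-filter_complex",
            "[0:v][1:v]overlay=(W-w)/2:(H-h)/2[v]",  # centre the original over the expanded frame
            "-map", "[v]",
            "-map", "0:a?",    # keep audio from the expanded clip if it has any
            output,
        ],
        check=True,
    )

paste_back_centre("expanded.mp4", "original.mp4", "combined.mp4")
```

The same result can of course be had by just stacking the two clips in any editor; this is only for anyone who prefers the command line.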
Great video! Hope to see your channel blow up soon! Very underrated channel.
Wow, thanks! I do try to create videos that are enjoyable and that you can hopefully learn things from. It can be hard with how fast things are moving at the moment. I really appreciate your comment, thanks again.
This is amazing!
Thanks so much, glad you enjoyed it.
I think you're my favorite ai guy😊
Cool update! Thanks for the news!
No worries!
Thanks for another brilliant video, you really add value that other YouTubers do not. I can't wait until we get full-body motion capture that works as well as this, instead of just mapping the face onto generated video. Once we have that missing link we may well be able to go off and make our own Marvel movies?? What do you think, are we going to get there soon or are there more missing pieces?
Thanks for the comment! Yes, I believe body motion capture is the next step. Hopefully they'll add higher resolution and better consistency, and then I'm sure you could create your own films. It's all down to the creativity of the user.
Amazing stuff!
wow Wow WOW!! 🎉🎉🎉
Wow! Thanks for watching 🙂
Nice, sooner or later we'll have talking dogs and cats.
Now making a feature-length film is almost possible.
I would like to know if I can create a character without affecting the location, that is, only the character changes, with the set lighting and everything else remaining the same.
It's up to you to test and figure it out... I guess. If you're lucky then someone who knows may travel by and give you a hint. Good luck.
You could create a character on a green screen and then composite them onto a background of your choice. Or, if you already have a character with a background you like in the same image, you could cut out the character, use generative fill to fill the space in the background, animate the cut-out character separately, and then place the new animated character back into the gen-filled background. Hope that helps.
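If you go the green-screen route and want to do the composite outside Runway, here is a rough sketch of the keying step with Python and ffmpeg (ffmpeg, the filenames, and the key colour/tolerance values are my own placeholder assumptions, not anything Runway provides):

```python
# Sketch only: key out the green background from the character clip and
# composite the result over a chosen background plate.
# Assumes ffmpeg is installed; filenames and key colour/tolerances are placeholders.
import subprocess

def composite_greenscreen(character: str, background: str, output: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", background,   # the background plate
            "-i", character,    # the character shot on a green screen
            "-filter_complex",
            # chromakey=colour:similarity:blend, then overlay the keyed character
            "[1:v]chromakey=0x00FF00:0.15:0.05[fg];"
            "[0:v][fg]overlay=0:0[v]",
            "-map", "[v]",
            "-map", "1:a?",     # keep the character's audio if present
            output,
        ],
        check=True,
    )

composite_greenscreen("character_greenscreen.mp4", "background.mp4", "composited.mp4")
```

You would tweak the similarity and blend values until the key looks clean on your footage.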
@@taavetmalkov3295 thanks for the advice, and greetings from Mexico.
@@AtomicGains-ql8hg thank you for the response and for the advice, keep doing videos, greetings from Mexico.
very good video
Thanks, glad you liked it
It’s a cool gimmick, but you know what very important thing is missing in reviews like this? CONSISTENCY. Can you make a consistent, coherent short story, maybe a cartoon? Please consider answering those questions in the video next time.
Thanks for the comment. You can actually achieve this if you create images with a consistent character and background (which can be done now with AI image generators), then take those images into image-to-video and prompt them with your desired movements, and then use Act One to change the facial animation. It’s not perfect yet, but it’s about working to the strengths of the software and getting creative.
This could have saved Superman in Justice League.
Haha yeah it would definitely do a better job than the version they released. I’ll have to try it!
In some examples the face and region around it look a bit blurrier than the rest of the image, but this is impressive anyway.
Yes it does have some quality loss like I mention in the video. I’m sure that will improve over time.
Thanks
You’re welcome 🙂.
How do you find it works when the speaking character is only a very small part of the frame or possibly one of a number of different characters?
I have found a method that works really well. If the character's face is small (or if there are two people on screen), you can crop the video around the person you want to animate so that their face takes up most of the frame. Then take the new cropped video into Runway and animate it, then composite that clip back into the original clip. It may take a bit of time to adjust it to fit correctly, but it does work, as I tried it myself. Thanks for the comment!
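To make that round trip a bit more concrete, here is a rough sketch of the two steps either side of Runway using Python and ffmpeg (ffmpeg, the filenames, and the crop coordinates are placeholders you'd adjust to your own shot):

```python
# Sketch only: 1) crop the region around the face, 2) animate that crop in
# Runway (a manual step), 3) overlay the animated crop back onto the original
# clip at the same position.
# Assumes ffmpeg is installed; crop size/position and filenames are placeholders.
import subprocess

CROP_W, CROP_H, CROP_X, CROP_Y = 640, 640, 800, 200  # region around the face

def crop_face_region(source: str, output: str) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", source,
         "-vf", f"crop={CROP_W}:{CROP_H}:{CROP_X}:{CROP_Y}",
         output],
        check=True,
    )

def paste_back(source: str, animated_crop: str, output: str) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", source, "-i", animated_crop,
         "-filter_complex", f"[0:v][1:v]overlay={CROP_X}:{CROP_Y}[v]",
         "-map", "[v]", "-map", "0:a?",
         output],
        check=True,
    )

crop_face_region("original.mp4", "face_crop.mp4")            # send face_crop.mp4 to Runway
paste_back("original.mp4", "face_crop_animated.mp4", "final.mp4")
```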
I have created a bear character, but it was not detecting his face for lip sync. Do you have any suggestions?
These models are trained on faces that are at least minimally human-like, so it becomes very complicated when dealing with anything outside of that without specific training. For 3D characters, it's best to turn to specialized tools.
It can be done if the animal has human-like features. They have some examples in their ‘act-one’ settings of a dog character, but I think because it’s a Pixar-style dog they made its facial features more human, so it works for that. Is your character realistic or animated style?
@@AtomicGains-ql8hg thank you!
Do you have any idea how to turn a cartoon video clip into a photorealistic, cinematic movie style??
You could use Runway's video-to-video tool and prompt the animated clip with ‘photorealistic cinematic movie style’. I hope that helps.
@@AtomicGains-ql8hg Thank you, but I tried it and it gave a very odd, unrealistic video with a strange style. I would like to get a perfect prompt.
9:19 this is another misleading statement. I dare you to generate 5 different camera angles from the same shot and keep consistency throughout every angle (plate, wardrobe and props). And the diffusion noise is also a bummer for professional use.
Consistency can be a problem with this at the moment. Like I said, this tech is in its early stages and I am confident we will see improvements in consistency over the next year.
Damn, it's getting good.
I just wish it was free or cheaper for indie youtubers and people looking to experiment before committing.
This is incredible. Too expensive for me though.
If you really think this can be used in films, why has nobody even shown a full 5-minute dialogue scene between 2 actors? I tried it and the results are really bad. I took a scene from The Godfather and, needless to say, when real actors perform they are sometimes still, while this app requires exaggerated enunciation and face movements. Nobody overacts except for stage actors. It's not ready for prime time, but also, YouTubers are not ready to evaluate these tools because they have no idea what it entails to post-produce a real feature film.
You gotta search the internet correctly. There are TONS of AI short films, some better than others. The tech is there; the time and energy to put into making something really good is what's needed.
Thanks for the comment. I do mention that these are still early versions of this tech, and we will most likely see advancements over the next year with how fast AI is moving.
I just don't see why people even want this and seemingly refuse to see how it's guaranteed to be used maliciously. For every one innocent creative use of it there are going to be literally thousands of examples of it being used to manipulate people. It's almost as if society desires escapism so much that we're willing to sacrifice reality for easy entertainment. If things keep going in this direction, then in less than a decade it's going to be impossible to distinguish reality from AI-generated content.
@@RSpracticalshooting With Google’s Veo 2, released yesterday, it’s already impossible in many cases. I share your concern, but I think these issues are coming much quicker than you may suspect. Pandora’s box has been opened.
If people made their choices based on fear, nothing new would ever happen. No risk no reward...life.
Even with a German translation, thanks 👍👍
Glad you liked it 😄.
How much is it per 10 sec?
Nah, I call bullshit on this. Gimmicky and will be a disservice to filmmaking. Cool gimmick though I must say.
Wow, you erased and reported that... why?
Thanks. I hate it. Not you - Runway. Storytelling involves drama. Drama involves conflict. Conflict involves unpleasant material. Oops. You're not allowed to generate that. It might offend somebody. How about a video of somebody somewhere picturesque? Or maybe a skateboarding dog? Or meatballs spilling from a manhole cover! Talk about storytelling!
Jesus. I'm trying to tell a story about a couple of people dealing with the death of their child. It wouldn't even generate a funeral scene based on an (original!) still frame of the actors standing around a coffin. I just wanted some movement in the scene that couldn't be captured on the day. Runway refused and wouldn't tell me why. There was no violence, no public figures, no nudity, no blood, NOTHING. Was it the coffin? Was THAT it?
I'm sorry. I'm not f***ing paying for something that keeps me from telling my story.
It will not be useful for anything except some TikTok video or some Insta reel or a so-called viral one… But this can surely kill the craft of filmmaking…
I think in a year's time it will be at a more professional level. I could see it being used for animations at this level though.
Old news
Better late than never I guess, thanks for watching.
WTF!!!
beta
Ur late
I like to test tools a while before making a video, sorry I didn’t release it sooner.