This Will Change Animation Forever. NEW Gen1 AI Animation Tutorial.
- Published 15 Jul 2024
- -Mom, can we have Corridor Crew? -We got Corridor Crew at home son.
One hour. That's all you need for cool AI animations. I'll show you Runway's new Gen-1 and how to create great-looking AI animations with it.
research.runwayml.com/gen1
FREE Prompt styles here:
/ sebs-hilis-79649068
Flowframes:
nmkd.itch.io/flowframes
Support me on Patreon to get access to unique perks! / sebastiankamph
Chat with me in our community discord: / discord
Control Lights in Stable Diffusion
• Control Light in AI Im...
LIVE Pose in Stable Diffusion
• LIVE Pose in Stable Di...
My workflow to Perfect Images
• Revealing my Workflow ...
ControlNet tutorial and install guide
• NEW ControlNet for Sta...
Ultimate Stable diffusion guide
• Stable diffusion tutor...
The Rise of AI Art: A Creative Revolution
• The Rise of AI Art - A...
7 Secrets to writing with ChatGPT (Don't tell your boss!)
• 7 Secrets in ChatGPT (...
Ultimate Animation guide in Stable diffusion
• Stable diffusion anima...
Dreambooth tutorial for Stable diffusion
• Dreambooth tutorial fo...
5 tricks you're not using
• Top 5 Stable diffusion...
Avoid these 7 mistakes
• Don't make these 7 mis...
How to ChatGPT. ChatGPT explained:
• How to ChatGPT? Chat G...
How to fix live render preview:
• Stable diffusion gui m...
wow. game changer! thanks for keeping us in the loop with the latest, Sebastian!
You bet! Glad to have you around. Keep working on those animations 🌟
Processing the foreground and background separately will work even better. (I haven't finished the video yet, so you may have already done that lol)
That surely sounds like the best way. I need to test it further to see if it can be improved.
@@sebastiankamph Batch processing your masks is insanely powerful. You can work with 4K video and retain the resolution without upscaling. It gives the AI many more pixels to work with, providing impressive amounts of detail. Let me know if you need help. I can't find time to make a guide. You can if you want.
I was just watching another YouTuber play with Gen-1 and I kept saying to my wife, "Why isn't anyone styling frames from the input video? That would give much more consistent results" lol. Leave it to Sebastian. Thanks for sharing, I can't wait to get access for myself. I wonder if they're using the Alt img2img method for their processing. It's a HUGE game changer for SD animations.
Hah, great minds think alike. I first tried to style each video input as well, having 5 style frames. But I didn't notice much improvement in the little testing I did.
I have just got access to Runway and can't wait to have a go, so this video has come at the right time for me.
Excellent, have fun with it!
1:53 no need to screenshot. You can export the frame as a JPG or PNG file.
In the Program Monitor, click the Export Frame button on the lower right.
In the Export Frame dialog, choose the desired filename, still-image format, and path, clicking the Browse button to open the Browse for Folder dialog.
NOTE ... In Windows, you can export to the BMP, DPX, GIF, JPEG, PNG, TGA, and TIFF formats. On the Mac, you can export to the DPX, JPG, PNG, TGA, and TIFF formats.
Click OK to export the frame.
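If you'd rather script the timestamp math than click through a dialog, here is a minimal Python sketch; `timestamp_to_frame` is a hypothetical helper that simply maps a timestamp like the 1:53 mentioned above to a frame index at a given frame rate:

```python
def timestamp_to_frame(ts: str, fps: float) -> int:
    """Map an "m:ss" (or "h:mm:ss") timestamp to the nearest frame index."""
    seconds = 0.0
    for part in ts.split(":"):
        seconds = seconds * 60 + float(part)  # fold each unit into seconds
    return round(seconds * fps)

# The frame at 1:53 in a 24 fps timeline:
print(timestamp_to_frame("1:53", 24))  # → 2712
```

Once you know the frame index, any frame-accurate exporter (Premiere's Export Frame, or a video tool of your choice) can grab exactly that frame.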
Great content. Keep up the good work!
How cool! Let's see if I get access soon! I would like to try it!
Amazing work! The ski slope is the best. Thanks!
Glad you liked it! And thank you :)
Clever way around the limit! Another option might be to leave a bit of overlap in the videos sent to Gen-1 and then crossfade them.
Really cool workflow! 😬👍
Thanks! Just waiting for you to make something cool now once you get in 🌟
Nice job bypassing the current limitations with a nice workflow. I can foresee background renders composited with green-screen actors, each separately passed through Gen-1, as a common workflow once consistency is better. Might not even need the green screen if the AI can ever differentiate between subjects in real time.
Now what is really going to blow people’s minds is when Imagen text to video gets fully ironed out. Then the sky is the limit for content creation. Make your own movies in any style, everyone then uploads to a Imagenhub full of user content. I’m just spitballing but time will tell. Keep up the great content Sebastian. 👍
I think you're on to something there. User generated content will be the future more than it is today. Thank you! 😊🌟
Damn getting consistency like this is insane. Doing it on a green screen would be clutch
It MIGHT change animation as soon as AI can fix the frame-to-frame variation and keep very good consistency.
Hahahahahahaha!
Even then it wouldn't look like traditional animation... Traditional animation has those deformed-looking in-between frames that don't occur in reality, but they add that extra punch and oomph to the resulting animation. An AI would have to be trained on all the various types of contexts for super-deformed comedic moments, chibi moments, crazy foreshortening during motion, etc.
Uhhh, this stuff is fire! Thanks Seb!
I'm expecting some new stuff with this on your Tiktok soon! Don't forget where you saw it first 🔥
that's really cool
This is crazyyy
Thank you. 🎉🎉🎉🎉🎉
This is so cool, The youtube series I am working on, can definitely use this 😮😮😮
Go for it! Show me the results when you're finished 🌟
Amazing
Thank you! Cheers!
I've been transforming videos into cartoons for almost a year now, but this AI has changed it all.
Amazing stuff! I'm blown away with how fast this tech is progressing. Crazy to think Stable Diffusion has only been available for 6 months. This next year is going to be incredible!
I can't wait! 🤩
I saw the Corridor Digital attempt at this, "anime rock paper scissors". I'm working on doing something similar.
Very cool! They got a whole team working, but with the right tools you can get pretty far solo as well.
We need this as an extension to Stable Diffusion.
For sure!
This is amazing, but could you make a tutorial explaining in more detail how to do the edits in Premiere, so that laypeople like us can do it too?
Thanks. I'd be very curious if this also works for driving realistic character images with an input video...? Did you maybe have chance to test this or saw something like this somewhere?
I haven't seen any good examples yet. But I'm sure someone will manage it!
Let’s hope an open source paper comes out for this and it can be added to automatic, I feel like this is very rudimentary
What about green screen? so you can generate a background and or the video separately so it won't change as much?
Hi! Do you think reducing the video frame rate and asking Gen-1 to work with that could be a viable solution? I mean, you could switch to 10 fps, for example, generate your result, and use Flowframes to make it smoother at the end. This would allow you to generate three times more footage with the same exact seed/background/whatever. Thank you in advance for your response.
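The decimation idea can be sketched out. Below is a minimal Python sketch, where `decimate_indices` is a hypothetical helper (assuming the source frame rate is an integer multiple of the target): keep every Nth frame before sending the clip to Gen-1, then let Flowframes interpolate the output back up to the original rate.

```python
def decimate_indices(total_frames: int, src_fps: int, target_fps: int) -> list:
    """Indices of the frames to keep when dropping from src_fps to target_fps.

    Assumes src_fps is an integer multiple of target_fps (e.g. 30 -> 10).
    """
    step = src_fps // target_fps  # 30 // 10 = 3: keep every 3rd frame
    return list(range(0, total_frames, step))

# A 1-second clip at 30 fps decimated to 10 fps keeps 10 of its 30 frames.
print(decimate_indices(30, 30, 10))  # → [0, 3, 6, 9, 12, 15, 18, 21, 24, 27]
```

Flowframes' 3x interpolation would then bring the 10 fps Gen-1 output back to 30 fps, so the same frame budget covers three times the clip length.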
Hey, probably yes. I played around a bit with it, trying different settings.
Hi, I get this every time I try to generate an image from text:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Any solutions?
(I have an iMac with 64 GB RAM, btw)
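Not a definitive fix, but this error usually means half-precision (fp16) operations are running on hardware that doesn't support them, typically the CPU or some Apple machines. A commonly suggested workaround, assuming the AUTOMATIC1111 Stable Diffusion web UI, is to force full precision in `webui-user.sh`:

```shell
# webui-user.sh -- assuming the AUTOMATIC1111 web UI; adjust to your setup.
# Forces fp32 everywhere so no fp16 ("Half") kernels are hit on the CPU.
export COMMANDLINE_ARGS="--no-half --precision full"
```

Expect generations to be slower and use more memory in full precision.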
Thank you very much
It's not changing animation unless you can do it from scratch, this is just another version of motion capture.
I wonder, if you ran the gen-1 videos through with 'Only_foreground: True', could you render just the figure, then either composite the AI-animated figure with a still of the SD bricks, or do two renders of the gen-1 videos: the first run only foreground and the 2nd group of renders only background?
That'd be something! I tried to AI mask the character out and put him in front of a scene. It was kinda meh, but with more work it could be really cool.
@@sebastiankamph Btw your tuts are great. And you have a very calm delivery style which is refreshing, especially as AI is somewhat frenetic in that it updates almost everyday 🙂 Thanks so much for your generous tutorials and walk thrus.
I've searched for the aspect ratio extension and don't see anything. Anyone got a link?
Where is the open-source version of Gen-1???
❤
yeah. Just let us know when an unlimited release goes public. I guess that’s about it. Creative people can start producing their own cool cartoon or anime shows. Thx for sharing, cheers 🍻
Thanks! I'm waiting for that too 🌟
Hi, how can I join the Runway Discord? Is it free?
To add the anime effects, is it free??
I wonder if Runway can be plugged into Stable Diffusion? Is this also a model, or...? This is already real animation that anyone can use. Thank you for the video.
I don't see a way yet, I think their api is closed. But maybe in time.
May I know the download source of your checkpoint and style, please?
I find that when I try to combine different clips together into a long one, there are a couple of frames missing from each Gen-1 output clip, which makes the clip transitions misaligned. How do you solve this missing-frames problem? Thanks a lot.
I literally show this in the video 😅
Great tech, but good for some things and not others. It is basically a rotoscoping filter: it changes video, but it still looks like video with a filter. Great for the animation toolbox or a quick shot like you used. Corridor Crew's animation is epic, but it still required an epic amount of work and man-hours to put together... and still looks like rotoscoping. Would be good to see any animation work you have done apart from playing around with AI. I mean, do you understand the topic you are talking about? Not being funny, just interested to know :-)
Amazing!! Beautiful, awesome, god-tier!
And do you have an idea for keeping the same background?
How do I get into the private Discord?
And this is only Gen-1... and now ModelScope is out!
For sure! Got a video on that one too.
I hate it when you can't do something locally on your PC, I don't want to register, access, share data, etc.
That's why Stable Diffusion blows my mind with its features available at any time, without registration and without limitation
Agreed! But I'll take what I can get with new tech 😊
Guys this is rotoscoping with a filter.
Let’s not disrespect animators or rotoscoping.
Each gen using a different seed might be a reason for the bg change?
I used the same seed for this tutorial, but I'm sure there are ways around it by working with the foreground and background separately.
@@sebastiankamph Yeah, that should be possible with ControlNet and a plain bg.
This is amazing, considering Corridor Crew just did it by hand the hard way using Stable Diffusion. One weird thing though... Why'd it make the actor "white"? Is there a way to keep it closer to the source material?
With a team their size, they have the luxury of making it really detailed by hand 😅 The change came from Stable Diffusion's style prompt. Since I wrote "martial arts master" and had lots of anime prompts, it generally steered towards the region of Asia. I could just as easily have used another style (like any of the thumbnail images).
@@sebastiankamph Oh, totally. That was my point about Corridor. This makes it so that anyone can do essentially what they did, which is so amazing. These tools are advancing so far, so fast. Regarding the prompts, I honestly hadn't considered that they would be smart enough to do that without specifics. Maybe it's the "old person" in me, but I'm continuously amazed by what these tools can do. Thanks for clarifying!
How do you get the access????
It's open access for all now. Check their site
@@sebastiankamph thanks
I use macOS. How can I get Flowframes?
I'm afraid I don't know
Let me know when it is something I can do offline on my own machines.
You can do what Corridor Crew did on your own machines, but that takes a loooooot more effort. This will have to wait a bit 😊
Any idea how to upscale the 512×512 footage to 4K?
You could try an AI video upscaler. Check out Topaz for example. It's not free however. Thank you very much for your continued support 🌟🤩
I would like it in Spanish. You do wonderful work. I'm trying to adapt.
I don't know Spanish; I hope YouTube translates the subtitles. Is that okay?
There's a huge difference between animation and rotoscoping. Please do understand this fundamental principle.
Runway just announced something for 20/03. Likely text to video. 😮😮😮
Oooooh! Exciting 😊
@@sebastiankamph totally ua-cam.com/video/YRnhxGPhdDc/v-deo.html
my future is at risk
Is gen1 free and unlimited?
Now it's free. They announced it on Discord 1.5 hours ago.
Their Discord doesn't work anymore.
Runway.ml now
Can 4 GB of RAM handle that work?
Yes. Gen-1 handles all the calculations server-side, so no special hardware is needed on your end.
I have the same problem where the AI is turning my black character white 😂
Not only can we not afford it, we can't even get to play with it.
Soon! I see new testers get invited every day.
This technology will obsolete a lot of the work that Corridor Crew did for their anime video.
For sure! It's still very rough, but it's getting there.
Where's the chick?
🫡
What is this, a tip for ants!? Just kidding my friend, thank you! Biggest supporter as always 😘🌟
@@sebastiankamph Many ants also give a fortune at some point 🐜🐜🐜💰 Then they can carry your sacks full of money 😜👍
XD This type of animation has existed since the 1940s.
All people are doing now is pushing companies to produce bad-quality animation.
The film "The Little Mermaid" will be run through the neural network and then the film will have a chance!
This is essentially just a filter. Not animation.
Meh, more Discord bot sh!t... guess I have to wait for a Stable Diffusion version of this.
dude got whitewashed
As you show us very well, the result generated by an AI is really ugly, very far from what a real artist who has spent years acquiring solid skills could bring. Even if you spend hundreds of hours on it, it won't change anything. And unfortunately we risk seeing more and more "artists" like you appearing, trying to take the easy route with these new tools (which can be very useful for certain tasks, I'm not against that at all, such as coupling ChatGPT with 3D software to do motion design). But to make beautiful animation, you should try to be less lazy and get to work developing your artistic eye. So for now there is still a bright future for REAL artists.
Thank you, I am a designer and animator by trade, working 20 years.
@@sebastiankamph 🤣
Why is that character's face so bloated? I highly doubt anyone wants that.
I love your channel, but sorry to say it, this time this is a poor example. Old info + low creativity + poor workflow.
Having to use discord is a shame.
Agreed!
free?
Free, at least for now. (But beta access only for now)
Hey, kinda unrelated, but what is the extension at the top of your txt2img tab that allows quick switching of VAEs and LoRAs?
It's just a setting in settings. Come ask in Discord for details.