I've been using AE every day for over a decade and this is the first time I've seen someone connect the displacement map H and V values to the camera X and Y. Wow. Great technique!
Cool, I've had a few people say that's one of their big takeaways from the video. I was admittedly pretty pleased when I first tried it and it worked.
In this other video, I use the free Displacer Pro plugin for After Effects, which works a little differently, so you can link up a camera created with After Effects' Camera Tracker (using its orientation). While it might not work in every instance, it's another handy technique.
ua-cam.com/video/c3L7O-0gRiQ/v-deo.html
The simple but handy 'Layer Stalker' effect used in that video also opens up lots of ideas I'd not considered before in After Effects.
ua-cam.com/users/shortsP6EzZmdq_ac?feature=share
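For anyone wondering what linking the displacement map to the camera actually does under the hood, the core idea can be sketched in a few lines of plain Python: each pixel samples the source image with an offset proportional to its depth-map value and the camera's X/Y movement, so near pixels slide further than far ones. This is a toy nearest-neighbour illustration with hypothetical names, not how After Effects implements it.

```python
def parallax_shift(image, depth, cam_x, cam_y, strength=1.0):
    """Return a new image where pixel (r, c) samples the source offset
    by depth * camera movement (nearest-neighbour, edges clamped).
    depth values run 0.0 (far, no shift) to 1.0 (near, full shift)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            d = depth[r][c]
            # Shift amount scales with depth, mimicking the displacement
            # map's H/V values being driven by the camera's X/Y position.
            src_c = min(w - 1, max(0, int(round(c - cam_x * d * strength))))
            src_r = min(h - 1, max(0, int(round(r - cam_y * d * strength))))
            out[r][c] = image[src_r][src_c]
    return out
```

With a tiny one-row image where the right half is "near" (depth 1.0), a one-pixel camera move drags only the near pixels, which is exactly the parallax effect the trick produces.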
Thanks for all the positive comments, and to those who have already subscribed. Honestly, I'm a little bit shocked: I checked last night and the video had 150 views after 3 days... then this morning it was at 3000. Thanks for the positive feedback. I'm working on a second video with a deep dive into using some more AI tools (with a good dose of traditional digital animation techniques). Also, I've added some high-res character/scene images to download for free so you can try out the process in this video yourself. aianimation.com/ai-animation-tutorials Cheers!!
Edit: Sheesh we've crossed the 100k views mark in under 2 weeks. 🤷♀
another thing, INO is better to remove the character and add the green screen before you submit to DI, it is cheaper and faster and you get better results.
@@AIBizarroTheater . I've not come across INO and google isn't helping. Care to share a link?
@@AIAnimationStudio Sorry, I meant "in my opinion". What I do is: create the image in Midjourney, separate the character from the background in Photoshop and add a green background, import the image to D-ID, then export. Yours looks good too, but this is way faster and doesn't require Runway. Loved the tutorial :)
@@AIBizarroTheater 😅... oh.. my bad... 🤦♂. Cool, and yup 100% skip the RunwayML step and separate the character in Photoshop or similar. Think I got carried away trying to use more AI tools. 👍
@@AIAnimationStudio I did it like that too until I discovered D-ID works with a green background as well. Anyway, come have a look at my work if you have time. I am new to After Effects but I've been learning quickly. Would love to hear your thoughts :)
As good as the AI stuff in this is, the most surprising takeaway for me is using a layer as a source for Optical Flares. I had no idea you could do that, so thank you!
the map displacement trick was *explodes* mind blowing
I'm actually just starting a project of mine and you have no idea how much this helps. Thank you sir :)
I think this is the first video on UA-cam that combines a variety of AI tools with traditional design tools to create an amazing result! Keep up the good work. Looking forward to seeing how we could all make a longer and wider scene without limits in the future.
Thanks.
It's interesting to see how we can all use the different AI tools to create something that isn't completely uncontrollable but still has a crisp finish and is reasonably efficient. Lots to learn, and different workflows to try out.
The one I'm working on now should be another interesting mix of AI and digital animation techniques, though I might try and make more of a story with it first.
@@AIAnimationStudio looking forward to it mate
Nice work. One suggestion is to isolate the character and put them on a green BG before D-ID. That way you can easily key it out without Runway artefacts.
Good tip.
I'd definitely recommend that approach to save the unnecessary Runway step. 👍
Also, upscale 2x before you use D-ID. You receive an mp4 :) and it will be easier to key with more data :)
This video deserves more than a like or subscribe. Thank you!
Wow, thank you. Quite a lot of commands going on there for a rusty old retired person, but extremely cool video, and very inspiring, things sure have changed. Subbed.
Thanks @gilldanier4129. I feel like a rusty old person here too. Hopefully the 'version 2' video I'm working on will be a 'little' bit more straightforward. But yep, it's all changing quite rapidly at the moment. For example, DALL-E 3 for image generation went live this week (currently available via Bing Image Creator); it's providing some good results and is a free alternative to Midjourney. (*Not yet as good or as controllable, but it is another option.)
@@AIAnimationStudio Thank you, I will check out DALL-E 3.
Hi-tech Hacker the little intro is so cute!😊
Cheers!! I need to do some more with that guy.
Looks incredible, going to play with this for my next AI powered mockumentary
This skill kicks ass, bravo! There will always be a place for a human mind in the creative process, even when AI learns how to reduce this rather complicated workflow to a single prompt. I still hope there will )))
Great work, high quality indeed, we need more tutorials from you. Thank you.
Love this video!!! Amazing work. I can't wait for the next one.
Thanks so much. A few more tutorials are now live.
14:21 I understand you just moved the layer. Thank you
Bravo! I am amazed that this can be done with such ease!
😅 Yeah, that was super easy... For you, maybe. For a person who's never had anything to do with animation, like me, it's magic. So many steps and different websites (subscription plans) just for a 5s animated picture 🙈 At least I now know how hard it is to make a professional animated movie.
This is brilliant. Thank you so much!
I got it, my own epic trailer of Stronghold and Castle Town , lovely ❤
Wow this is really cool! Happy to have found this video!
Great video! Concise and to the point. Keep it up
Cheers! I'll definitely try and get into the habit of making more over the coming weeks and months.
This video walks through a complete AI animation method for animating images made with MidJourney. It's an excellent tool for learning how to add life to your static photographs.
omg, this is so beautiful 🙂
Amazing work!
Fantastic work mate. Well done bouncing back from the system crash too! Always the way isn’t it! 😂
Creative and cool, wow, what a cool character you just made XD Man, you're talented, amazing, truly.
great tutorial bro thanks bro very good work
Completely mindblowing, thank you!
Great breakdown, thanks for sharing your steps. I think the Runway step is probably unnecessary. I green-screen the character in Photoshop and load it into D-ID. Alternatively, we can use AE's Roto Brush.
Thanks very much. Yeah in hindsight, I should have just separated him in Photoshop in this run-through. Check out the V2 video for this and a much easier way for the background too. ua-cam.com/video/7u0FYVPQ5rc/v-deo.html
Wow! - This was a superb video, quality, instruction, clarity, results. Liked and Subd, will check out your site too. This channel is going places.
Thank you very much, and please do keep up the excellent work.
A very interesting video. My After Effects skills are not up to the tutorial, but if I get time I'll try and learn.
Nice!!!!
Great tutorial! Keep it coming. Ty much.
That was amazing. Thank you so much.
Hi this tutorial great
The chance for the untalented to pretend to be otherwise is amazing, and helping corporations destroy human creativity and civil liberties for the rest of time is not an issue.
The "Animate MidJourney Images - Full AI Animation Workflow" video is an incredible showcase of the immense capabilities of AI-powered animation. From start to finish, this video takes us on a mesmerizing journey through the creative process of bringing still images to life using artificial intelligence.
Thanks so much. Glad you enjoyed the walkthrough.
@@AIAnimationStudio I'm loving your artwork, keep it up 🙂👍
Dude you're lazy. Did you just use AI to write this comment ?
Very incredible video , appreciate for sharing 🙇♂️
I'm a complete novice in AE. I've been staying away from it and just using Premiere Pro to render my finished animations. But my animation program now has a pipeline, so I'm finally throwing myself in. This looks so amazing; it's gonna make my kids' storybook reading videos sooo lit once I finally get it into my brain!
This is very impressive. Thanks for the share
very helpful video, thank you
Thanks for this - Subscribed + 👍
I think Adobe Firefly is still far behind Midjourney; it's geared more toward 3D cartoon images without fine detail, rather than the hyper-detailed and hyper-realistic.
I've been animating AI images for about 40 hours total. It's so fun and satisfying to see them come to life. It's true that the AI images typically need a few touch-ups and a color grade, but the end results are beautiful. I made a cyborg enchanted SS Blue animated wallpaper that looks insanely cool.
I'm definitely going to try some of the tricks in this vid! Thank you :)
Great to hear. The various AI tools are opening up a lot of creative options, and when combined with some more traditional animation/media skills the outputs can be great! Good luck.
Thank you! A very well-explained AI course. I do have a question: do you know any other site that can do the 3D animation model? For some reason MiDaS is not working for me.
Thanks. You can use ZoeDepth to achieve the same thing as MiDaS. See here:
huggingface.co/spaces/shariqfarooq/ZoeDepth
It also has the option to create a GLB 3D file, which is supported in recent updates to After Effects.
I did an update to this video here, where I showed an approach using that. ua-cam.com/video/7u0FYVPQ5rc/v-deo.html
@@AIAnimationStudio Thank you so much for replying and for the suggestions I will check them up soon, keep up the great work with your page!!
Also, you could use the free Pika Labs, but it only generates 3-second clips of your images.
Thank you for your video; it's exactly the kind of animation I would like to make. Perfect for a beginner like me who is looking for answers. Can't wait to try your method.
good job, great vid
With so much ai hype you stand out in the crowd. Keep it coming brother!
I think I will use this once you don't have to keep jumping to different sites... when it's all in one program it will be impressive and usable.
Yes, this is good. I have scoured UA-cam and the web to find something like this, and today this popped up. Naturally, I subscribed immediately. In a few weeks you should have hordes of followers. A quick question, if you don't mind: what GPU do you run Firefly on?
I'm just running an M1 Mac for this tutorial. I keep looking at the dusty PC with an old Nvidia 2070 in it and wondering about trying out a local Stable Diffusion setup... but I've not fired it up yet.
Thank you for sharing .. so important - Love it! 🙂
Crazy how in 23mins you could inspire so many possibilities in how I work potentially forever. Thank you sir
Arrr... cool. Thanks very much.
Great Thanks for the video. Subscribed!!!!
I've been making faceless videos, but now I know why they suck so bad. I was just zooming the camera in and out. I need to get their mouths moving and heads turning. Thank you for opening my eyes.
Love it because you even explained in depth, e.g. --ar 16:9 for a landscape image rather than square. The way you speak is so soothing and easy to listen to. Subscribed as soon as I saw the thumbnail.
I don't know editing, so I got lost in some parts... but wow, it's amazing. Respect to people studying editing/multimedia and all that stuff. It's so cool 🤙🏾
Awesome stuff! Love the result here! By the way, doesn't AE have some sort of "puppet warp" built in? I imagine this could be utilized on the boy when he says the word "beach" by doing a subtle little head tilt.
Yeah absolutely. 👍.
I mention it briefly at 21:55 in the video. I actually used the puppet tool, on the version at the beginning of the video, to add a little extra head tilt and 'very' subtle shoulder movement.
Oh indeed, look at that! Thanks for putting up with me and showing me the way in the feedback :)
The world will remember that I subscribed to your channel when you had 45 subscribers; I am your 45th subscriber 😁😁
Why am I saying that? Because I'm watching only the third video on your channel and I loved it. More power to you.
Keep making videos like this, because I'm planning to make an AI content channel.
@Enjoyablee ... So glad you liked the video and thanks for being the 45th subscriber... 12 hours later, now up to 336. Which is a bit bonkers.
So good 😊
Great work and tutorial...I loved it.
Awesome tutorial video, I am gonna fail to get this done on my Pc
Great! Thank you very much.. I just learned cool new stuff.. 😎✌🏻
That was incredible. The depth pass and displacement map technique absolutely blew me away!
Thanks. Loving the content on your channel too. 👍
This is awesome work but all of the programs and websites you use will cost varying levels of money. Maybe if you could create a video with a lower budget version?
Thanks. There are some ways to bring the price down. I might do one in the future, but the tech is a bit more fussy to achieve that (not impossible though): using free code repositories off GitHub to animate the character to match audio, for example. Might be worth exploring in a future video.
It's Really Amazing Superb !
Ai robots are coming and you can go to the beach...exactly...oh btw i hope you have someone who will pay your bills :D
Definitely a tongue-in-cheek outlook. I don't think we're doomed, nor will we be heading to the beach all the time just yet.
That's a huge talent. My easy hack would be a very detailed prompt in Midjourney (depth of field and all), then DALL-E, and of course ElevenLabs, and I'd sort the camera movement with FCPX and/or CapCut for flare and lighting. Thanks for the video, it's really nice.
cool stuff! Just wondering why you didn't mask out the character as an image before applying the animation? Would save you the trouble of messing around with keylight. Or am I missing something here?
Yep.. not sure why I didn't either... I think I was too keen to show an extra AI tool with the background removal from RunwayML. But yep.. 100% just mask out the character before sending to D-ID. 👍
Thanks for this - Subscribed + 👍
Neat process, but unless I can just drop my thing into a tool, it's way too try-hard... 😅
Regular motion graphics are much easier to do...
Great Tutorial! Love your flow and style. Thank you
That's amazing. Thank you for sharing. A little intimidating, but I'm convinced I could learn how to do this.... with time, and patience.... lots of it...
I'd love to see some tutorials that included NVidia's Audio2Face which does real time audio lip sync from a wav file.
Amazing video. Pity D-ID has added watermarks all over the generations now! But wow, I learnt something new so simply! Thank you.
Wow, this blew me away; subscribed for sure. A lot more techie than my knowledge, but I am willing to learn. Great video.
This is super helpful! I've been looking for a tutorial just like this! Thank you!! And your website is awesome! I will sign up after I create my portfolio 😃
cool
That was great, bro. Looking forward to learning more from you. New sub 🎉
4:08 You can leave it blank and press Generate, it will fill the background
I've subscribed. I got lost after the Runway part :)
Thanks for subscribing... good feedback... sorry if the After Effects bit was a bit too quick or too techy.
That's fuckn wild.
At the right time, we will have a conversation, AI Animation. I promise you will not be disappointed 😊
I was looking for this everywhere but couldn't find it until I found this video. Thx so much 🙂 this is incredible 💯 best I have seen so far
Too much work
I get it. Depending on what you're looking to create, this approach is pretty involved. Once set up, you could create a really long video with your talking character, cutting to the wide and close-up etc. It could work well for introducing a games channel, or as the presenter for any kind of techy YouTube channel. I might try and arrive at a quicker approach in the future.
I used to make some videos and short films using somewhat similar AI tools. I used everything except the depth-map tool, and I had no proper computer for the software, so I used Clipchamp.
They ended up with no views and weren't high-quality content, so I took them all down.
13:02 Linking the displacement map to a camera and separating two compositions into different windows were eye-openers for me.
I have subscribed, but what's your website for?
Thanks. The website at aianimation.com is a new site we've built to help showcase professional creatives/animators/studios who are using AI to generate high-quality work. You can register for free and create a profile, add videos to your portfolio, highlight some core skills, budget ranges... and hopefully get discovered by studios or potential clients for paid work. Or simply just a place for hobbyists to showcase their awesome creations, regardless of which AI tools they're using to assist (or fully create) animated videos.
My guess is that if you make a character on a blank or solid-colour background and then add your own creative background after animating the character, you'd have more freedom to move them around without compromising the masking effect, because generative fill simply downgrades the creative and tries to fit it in forcibly. This way you can maintain the original background work.
Yeah, that could definitely work nicely. The one benefit of having Midjourney create the scene with the character at the same time is that it will match the look/style and lighting. But absolutely worth exploring creating them separately. 👍
Holy shit this is nuts
thx for making this
Great tutorial, workflow and final result... Thank you!
I'm very new to all of this. This tutorial was absolutely jammed with info. Great work!
So if I were to want to JUST create the image with a 3d effect and post it to my IG - would I stop at 11:17 ish? My goal here is to create cool images on the feed that when someone scrolls they see the 3d effect and the depth. I just encountered one on a friend's feed and was blown away. The subject stayed still if you will but the background had depth and would move as I moved my phone if that makes sense. Am I on the right track here with your training?
You can make very good depth maps with Stable Diffusion and ControlNet as well.
Really good walkthrough, and I was happy to see a toolchain I already have subscriptions for!!
Are you making use of Gen 2 image to video on Runway ML ?
Thanks, glad you enjoyed the walkthrough. Plus it's interesting and good to hear you're already using a similar stack of tools.
Yeah, I had a 'play' with the new 'image only' prompt on Gen-2 last night. It's great to be able to bring an image to life and have it match your original input, though it definitely lacks control. But it's certainly one I'm going to explore more this week, as there's obvious potential for telling stories quickly, even if it's a bit hit and miss at times. I think it'll be even more useful when you can use the image prompt (with a text prompt to guide what happens) whilst still keeping the visuals in line with the image prompt for characters and scene composition, a bit more like the control you have with Gen-1 video-to-video.
Whereas a text prompt (with image reference) in Gen-2 at the moment creates interesting, at times great, but mostly uncontrollable outcomes, though the preview frame is helpful. I'll probably do the next video about RunwayML soon.
I thought you were a creative Director at Sterling Cooper.
🥃
What would you suggest for a kids' show like Blue's Clues, with me being a teacher on an animated background with animated characters?
encore..thank God i found this channel..i want to explore and learn more.. keep it up sir..
Thanks very much. A new 1-hour-long video using Wonder Studio / Blender / Midjourney / Runway ML / Topaz AI / ChatGPT... and a lot of After Effects should be live later this evening.
4:17 I simply type 'delete' or 'remove' in the prompt, which does the same thing.
thanks! great work
Fear less, achieve more. That is what I do everyday to improve. If you know you know