Generate 3D Sets for your Short Films!
- Published 29 Jun 2024
- Today we'll be using AI tools to create virtual environments for your film projects!
We cover the entire workflow, from generating the 3D environment, to filming in a low-budget virtual production environment, to compositing your shot with some AI tips and tricks, like re-lighting your shot based on the environment you've created!
Chapters:
00:00 Intro
00:34 What is Virtual Production?
01:42 Generating a 360° image
03:17 Preparing the 3D Model [Depth Maps]
06:22 Filming the scene
08:05 Key out the actors
09:10 Generate Normal Maps
11:15 Putting it all together
14:11 Final Shots
Tools used in this video:
- Skybox Labs by Blockade Labs: skybox.blockadelabs.com/
- NVIDIA Canvas: www.nvidia.com/de-de/studio/c...
- ControlNet for Stable Diffusion: github.com/lllyasviel/ControlNet
- midjourney: www.midjourney.com/
- High Resolution Depth Maps for Stable Diffusion WebUI: github.com/thygate/stable-dif...
- Blender: www.blender.org/
- CamTrackAR: apps.apple.com/us/app/camtrac...
- Reality OBJ to USDZ Converter: apps.apple.com/de/app/reality...
- After Effects: www.adobe.com/products/aftere...
- Runway.ml: runwayml.com/
- High Resolution Normal Maps for Stable Diffusion: github.com/graemeniedermayer/...
- DaVinci Resolve: www.blackmagicdesign.com/prod...
- Reference to image plane: github.com/Pullusb/reference_...
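A note on the re-lighting trick the video mentions: shading footage with an AI-generated normal map boils down to per-pixel Lambertian shading. A minimal pure-Python sketch of that math (the 8-bit normal encoding and the helper names are illustrative assumptions, not the video's actual pipeline):

```python
import math

def decode_normal(r, g, b):
    """Map an 8-bit normal-map pixel (0..255 per channel) to a unit vector."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return [c / length for c in n]

def relight(pixel_rgb, normal_rgb, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Scale a footage color (0..1) by a Lambert term from the normal map.

    light_dir is a unit vector pointing from the surface toward the light.
    """
    n = decode_normal(*normal_rgb)
    lambert = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
    return [min(1.0, p * lc * lambert) for p, lc in zip(pixel_rgb, light_color)]
```

A camera-facing pixel (normal-map value 128, 128, 255) lit head-on keeps its color; lit from behind, it goes black — which is the whole relighting effect, applied per pixel.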
Video Links:
- Avatar Featurette: Performance Capture: • Avatar Featurette: Per...
- Avatar & Avatar 2 Behind the scenes - How James Cameron Evolved Motion Capture in the Avatar Films: • Avatar & Avatar 2 Behi...
- The Virtual Production of The Mandalorian Season Two: • The Virtual Production...
- Blender Motion Tracking - Room Transformation! by Ian Hubert: • Blender Motion Trackin...
- Grae n YouTube Channel: / @grae_n
- Inpainting in Augmented Reality: • Inpainting in Augmente...
- How to Make a Blur Node in Blender!: • How to Make a Blur Nod...
❤️ Buy me a coffee:
ko-fi.com/mickmumpitz
Follow me on Twitter: / mickmumpitz
It's insane that a person with creativity, persistence, and a willingness to learn can make a high-quality short film with almost no resources. A phone with some apps, a computer with some video processing power, and some free and/or inexpensive AI tools will give you realistic composited visuals. The same equipment will give you a soundtrack and overall sound design.
The tools are at our fingertips. The skill is knowing which tool to use for which task and having the imagination to combine their power to create something great.
It reminds me of something deadmau5 said several years ago when referring to creating music electronically: "This can all be done with a minimal amount of software which is why a kid can make a dance hit on a laptop."
As a VFX artist with 22+ years' experience, I loved the line "if you shot on a green screen you can just simply key them"!! Very funny. It seems VFX is finally having (or about to have) its punk moment. Thanks for helping bring it forward!
Many people still don't get that they'll never get rid of artists with the help of AI, because of its own limitations. Artists who produce art are never going to die.
What's so funny about it
@@theonm.5736 those limitations will lessen over time.
Eventually you'll just need an idea and AI will do it for you.
It's come a long way... after that is the glam metal phase, watch out!
@@theonm.5736 I can’t emphasize this clearly enough, we used to think that AI couldn’t create “art”. Now that is highly suspect. The same thing will happen with the idea that we need artists to create high quality art. We are not special. We are bioorganic computers that are good at inference. That is all.
Thank you for the kind words! The de-flickering normal maps sequence is amazing! I wouldn't have expected that to work so well.
Loved the way you simplified all of this creation. Well done!!
Dude... these videos are incredible. You're finding ways of doing things I thought wouldn't be truly possible to accomplish for another 5 to 10 years. Very cool! I have an ambitious short film, mostly shot in 2019, that I've been wanting to finish, but I knew it would take a significant amount of money to create the fantastical world in the film's conclusion. This has given me hope that completing it might be feasible on a much smaller scale if I sit down and really focus on banging out in AI what would be very hard to do practically or with more traditional VFX techniques.
Definitely spreading the word and keeping a close eye on your channel going forward. It's VERY EXCITING!! Thanks man, appreciate what you're doing here!
Outstanding content as always. Pioneering stuff.
So glad you have this channel, great stuff.
Thanks mate! Always awesome tutorials!
Mind blowing as usual! Blown away on the research and implementation you do. Exceptional!
Your skill and dedication are insane. Thanks for sharing this
Holy cow, you are AWESOME! I can't wait to give this a shot!
I knew we would come to this stage. This is the teaching age, where the early adopters of the technology start educating the people who are just trying to work things out. AMAZING!!! Thanks for the workflow and for tying things together
You are a wizard! Thanks so much for sharing all of these explorations. Definitely taking notes!
you're the man. thank you for sharing your workflow. I was going down this rabbit hole blind for a year and you sir are the rosetta stone. Muchas gracias.
Awesome job man! You always have great ideas
It has been an amazing learning curve, and it certainly made me watch a few times to really grasp the technique and workflow. Amazing work and a super cool explanation with a mixture of cross-working tool sets, which again makes the learning process fun. Thanks
Your videos are great! Keep it up :)
Good stuff, very creative and informative. Thanks for the effort you put into creating this video and teaching the options available.
I'm so grateful to you. I've been looking for a quick way to create virtual worlds for so long. And in this video you have revealed everything in such detail. I'm shocked, thank you!
Are you open to creating a video clip for a musical remix without it costing me too much? I'm just a musician and I don't have a producer behind. thank you
@@relaxmax6808 It's possible) Please send me your remixes. I want to hear them) And write how you imagine the clip. My email is in the description of my channel. Thanks.
@@SoulTuner ok, thank you, I will reply to your email.
Excellent, always pushing the limits of imagination!
This was dense, a lot to digest, and it's awesome. Cool workflow. Thank you.
Thank you for this video. I was looking for this exact suite of tools and workflow 🙏🙏🙏
What an excellent idea, and it does look OK. I'll have to try this out! Thanks for the inspiration!
Mind. Blown. Thank you very much for this awesome presentation.
Very nice and great information. Thanks! Please do share more of this.
Dude this is mind blowing
It was great to see the full process without cuts, too, even as a paid mini course. It's hard to get a lot of things without a deeper explanation
Heck yeah! He's back 🎉🎉🎉
Yeehaaaaa, wonderful, thanks for explaining this tool! 🎉
4:50 MID-LEVEL Bro... in the Displace modifier you have this blue slider. Change it to 1; that will get rid of the "problem". 😉
Great video! 👀🍿❤️
Haha yeah, I should have mentioned that! The problem is that at a mid-level of 1, the strength of the displacement effect could no longer be changed (it just scaled the sphere). The difference between maximum and minimum displacement was too weak for me; the effect was hardly noticeable from the inside of the sphere. Do you have an idea how to fix this?
@@mickmumpitz What do you mean by "the displacement effect could no longer be changed"?
@@mickmumpitz you could also use a color ramp node on the displacement map in the material to adjust the displacement map's range. Just make the white values grey.
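For readers following this exchange: Blender's Displace modifier offsets each vertex along its normal by (texture_value − mid_level) × strength, which is why the two sliders interact. A tiny sketch of that formula (the function is illustrative, not Blender's API):

```python
def displace(texture_value, mid_level=0.5, strength=1.0):
    """Offset a vertex receives along its normal in Blender's Displace modifier."""
    return (texture_value - mid_level) * strength

# At mid_level = 1.0, pure white (1.0) stays put and every darker value
# moves inward, so changing strength just rescales the whole range:
offsets = [displace(v, mid_level=1.0, strength=2.0) for v in (0.0, 0.5, 1.0)]
print(offsets)  # [-2.0, -1.0, 0.0]
```

The color-ramp suggestion above works because remapping white values toward grey changes texture_value before this formula ever runs, leaving mid_level and strength free for fine-tuning.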
Well done ! nicely explained.
This is absolutely brilliant.
Amazing! Thank you very much!
Here’s a thought. What if one took an HDRI of the real filming location, whether in front of the green screen or outdoors, et cetera? Then, depending on how feasible it is, make a depth map of the original video, smooth it out, and displace the keyed video plane in front of the camera, so that we effectively have a 3D version of the subject instead of just a normal-mapped one. Then take the HDRI of the original filming location, give its brightness a power of minus one or something negative, and make that light interact only with the displaced video plane. Depending on how accurate the depth map is, it could subtract the lighting of the original filming location (for cases where it wasn't feasible to film outdoors on an overcast day, or with more distant lighting indoors, or however else it was practical to light it). Then add other lights and the environment to re-light the video plane as if the subject were another object in the scene.
Probably way too overthought and impractical itself anyway but it’s a thought anyway😆
(I just wondered if there’s a way to record LiDAR on one’s phone alongside the video, to use as the footage's displacement map instead of generating it with AI)
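The "negative brightness" idea above is close to a standard compositing trick: divide the footage by an estimate of the original lighting, then multiply by the new lighting. A minimal per-pixel sketch (purely illustrative; it assumes you already have per-pixel lighting estimates, which is the hard part):

```python
def delight_relight(footage, old_light, new_light, eps=1e-4):
    """Per channel: divide out the estimated original lighting (de-light),
    then multiply in the new lighting (re-light). All values are 0..1."""
    albedo = [f / max(o, eps) for f, o in zip(footage, old_light)]
    return [min(1.0, a * n) for a, n in zip(albedo, new_light)]

# A pixel shot at half brightness, re-lit at full brightness,
# recovers its estimated albedo:
print(delight_relight([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]))
```

The epsilon guard keeps shadow pixels from blowing up, which is exactly where this divide-based approach gets noisy in practice.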
Thanks for the training Mr Muppetz 😊 much love 🙏 ❤
bro this is so amazing
Man, you're crazy, I love it!🤩
This is so awesome, great video, i subscribed :D
I Appreciate your work , thanks
awesome tutorial, very interesting and well explained
Thank you for inspiration!
I'm a 3D animator, and the hardest part of my craft is creating a good environment... and this has become my new favorite channel!!
Awesome video! I'm gonna try these on my giant L.E.D. production studio tomorrow.
Love your videos!
WOW! What a great video!
I've been experimenting with stable diffusion for months on my VR generated worlds. This video takes me a big step further....
BIG FAT FANX!
So far I have used SD to generate endless textures for creating VR brushes, or to integrate Deforum clips into 360-degree clips with DaVinci Resolve.
But:
My main interest in connecting VR and AI is the possibility of transforming a 360-degree clip (based on a custom VR art world) into something else using SD Deforum... unfortunately I haven't managed to get a decent workflow going yet, mainly because of the large amounts of data... (?)
Happy colored greetinx!
super and will try these techniques myself
I really love the concept of "virtual production". Bringing a mix of 3D-assets/scenes and (AI modified) real footage into a game/physics engine which serves as a studio environment. Game Engine - not Blender/Maya... ! This idea is huge.
As you said: not everyone can afford a studio environment with LED screens, but VR/XR headsets can do a big chunk of the work. Flipside VR for Oculus would be a candidate to leverage VR for virtual production.
Awesome 🤯
Wow man that's amazing
Wonderful work. As a German hobby artist interested in tracking and matchmoving, I was pretty amazed by your creativity and deep explanations. I left a sub here. Good luck, my friend.
Perfect!
Thank you!
Great video. Can you talk a bit more about preparing the 3D environment, particularly those last steps once you have the depth map from controlnet? I’m not familiar with the flow for how to reduce the black and white values in PS or change the file’s bit depth.
Midjourney also has a --tile option. Nice video btw 😃
Nice work
Thanks!
amazing
I've watched 4 of your videos now. You make everything seem possible. I know it's complicated and you are a genius. Do you do commissions?
Cool idea. You should use the mid-level offset slider in the displace modifier to shift the displacement from center to outwards, this way you can keep the original depth map values and do not have to compress them. Also, instead of an uv-sphere using an ico-sphere helps with getting more evenly distributed subdivisions and this more predictable displacements on the sphere mesh.
Do you slide the mid-level slider to 1? I don't know how to compress the depth map values and save at a high bit depth, so your method might be easier 😅
I’m also curious about this since I don’t know how to reduce the black and white values in PS or change this file’s bit depth.
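Since several comments ask about this Photoshop step: compressing the depth map's white values is just a linear remap of the pixel range (in Photoshop: Image → Adjustments → Levels, lowering the white output slider). The same remap in plain Python, assuming 16-bit values (the output ceiling of half-range is an illustrative choice, not the video's exact setting):

```python
def compress_depth(value, out_max=32768, in_max=65535):
    """Linearly remap a 16-bit depth value so pure white lands at out_max.

    Compressing the range like this leaves headroom so the Displace
    modifier's strength slider still has an effect.
    """
    return round(value / in_max * out_max)

# Pure white is pulled down to mid-grey; black stays black:
print(compress_depth(65535), compress_depth(0))  # 32768 0
```

Saving the result at 16-bit (or 32-bit) depth matters here: squeezing the range in an 8-bit file throws away gradations and produces the blocky, stepped displacement mentioned elsewhere in the comments.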
You’re soo underrated man 😭🙏
I never thought the day would come. Well, yeah, I did. I guess a couple of my stories from long ago cover it. But it seemed far away when I was trying to break into the Hollywood writer scene. It looks like even a single actor can play many roles.
Awesome
Wow
Wow!!!
Wow! What a brilliant idea to use ai generated normal maps to relight live action footage.
ufff really nice 👌🏽
bro you deserve more 📈
Some really cool tools highlighted here. I am really interested to see where this workflow goes as the AI tools improve. I am a professional compositor, and I have to say the "re-lighting" method made me cringe a bit. The normals are way too inaccurate for that ever to work. One idea for a fix, though, would be an image-to-3D-model AI, so you can project the video back onto a generated human 3D model. You can then use that geo to catch the lighting of the scene. This would also solve the flickering problem from the normals
Oh my god, this is 1000 times better than Corridor Crew
Wow 360^3
I bet the hi-res but seamed depth map can be layered over the better but low-res variant to remove the seam.
Subscribed
If you don’t mind my asking, how long did it take you to make a single virtual set? I'm wondering because I'm thinking of making some content with this method, but I wonder if the workload is actually manageable with what I have in mind.
Thank you for the video, and I did hit the Like button. Yes, I am also experimenting with virtual sets to use in a feature-length screenplay that I have finished. There's not enough money to do the movie live on location, so virtual sets are the only other option. So far in the set building with Blender, my conclusion is... well... it might work...
Are you open to creating a video clip for a musical remix without it costing me too much? I'm just a musician and I don't have a producer behind. Thank you
Hi Mick! Amazing tutorial! I'm trying to reproduce the workflow, but I'm stuck on the depth map white value alteration. I don't know how to do this (I have Photoshop, but no skills). Also, I can't overcome the blocky appearance of the depth map. I save the images at 32-bit depth, but the same blockiness is still there.
The volume has another trick: a stage that can rotate, since the screens don't go full 360 degrees.
I'd be happy to shoot with a really good, large, short-throw laser projector with UE5... you could do that for less than 6k (USD). Ryan Connolly did a few REALLY impressive tests with short-throw projectors and Unreal.
Was watching another video and a company sells a small LED wall setup for Virtual Production for as low as 10k (USD) ... not EXACTLY available to us indies but still a move in the right direction.
cool
love yourrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr videos you are amaaaaaaaaaaaaaaaazing thank you very much you are unique
Hello Brother,
please consider covering the software options available for this process and the steps involved in converting 3D animation videos to 2D animation videos.
There is so much in this video I have no idea about... still fascinating to watch though
Love the content. I just watched something the other day about scanning someone into the Unreal Engine. I wonder if that process is simpler?
It’s a Matt Wolfe video on using AI to create 3D games. I want to get my hands on Unreal’s metahuman. But your method allows for real human actors.
My thoughts greatly align with yours... I have been fantasising about using AI to produce a short film for a while now. Just yesterday I found Blockade... and then boom 💥 today I found your page... please can we connect and interact one on one?
That smile of "I enjoy this landscape" (while in fact you see the most boring green in existence)! 😂 😜
Hi Mick, thank you for all the wonderful videos. I am trying to find the easiest way, as I am not technically savvy, to create a music video with green screen/3D/iphone. It looks as if this video can help. If you have any suggestions, I would greatly appreciate.
Suggestion: You can use the Omniverse blender extension to convert obj or any other blender-supported formats to USD
Awesome, looks great!
btw MJ5 can tile: --v 5 --tile
Amazing! V 5 unfortunately came out while I filmed the video, so I couldn't try it out much. I'll have to catch up on that right away!
Hi, can you please provide some explanation on how to shift the depth image values in photoshop?
You should revisit this tutorial using the new relight functionality of DaVinci Resolve Studio BETA.
Great stuff! Do you know how I could find someone to teach me how to use these tools to make cool short films myself?
Incredible, dude. I'm kinda disappointed because the relight-via-actor-normal-map trick didn't go as I expected. If I have to do it, I use Photoshop and EbSynth, but that's a painful workflow.
Thank you for the video tutorial, as always very informative.
There is a question. If anyone knows, explain briefly. In ControlNet there is a Preprocessor and there are Models. Among the models, for example, there is "control_canny-fp16" and there is "control_sd15_canny.pth". What are the differences between these models? The sd15 one weighs about 1.5 GB, but the fp16 one is 720 MB. Is "control_sd15_canny.pth" an older model? Or what is the difference?
The new depth-based relighting feature in DaVinci Resolve 18.5 may be useful in this workflow. Maybe you don't need to build normal maps???
Amazing content! For me this is still too advanced. I don't even know how to use Stable Diffusion or Blender
It's good
Can we use a 3D model background on unreal engine with all those tools also?
Thanks for this great video... Is there any easy way to make a cartoon short movie, taking scenes from an actual movie, using AI?
Not so far from a classic workflow we've been doing for 20 years, only cheaper :D. The only thing that is really different is the possibility of AI image generation (at the start).
👍👍
_The future of filmaking_
Kerry Conran more than 20 years ago: _Here, hold my DeLorean..._
0:04 how did you put yourself in an animated version?
Sometimes I would like to see the name of the software used on screen... Was it Blender, or was it DaVinci where you did the color correction?
It would be helpful to see the names of the plugins used, especially when you're a newbie to this whole process... 😇