dude your ADHD type of tutorial is the best thing that I've found on the internet in a long time
Agree😂
agree, works for my brain
The ADHD seems quite apparent, and is very much appreciated 😄!
I know ADHD, like any disorder, is on a spectrum. But unlike the autism spectrum (for example), which can influence personality in a million different ways, I feel that people with ADHD are more alike than with any other influence on personality. Thus far every ADHDer I've met has been very easy to instantly vibe and communicate with. Same goes for YouTubers with ADHD; they're very easy to instantly follow along with or be fully captivated by.
@@DeltaNovum as someone with ADHD and autism, I approve this message
YEEES!
Holy cow. So glad the algorithm gods led me to your channel
Easiest subscribe I've ever done. I run a YouTube channel based on teaching people to write fiction, and I do a bunch of intros and skits using digital production, so this tutorial was super helpful. I never knew that Nuke had a non-commercial license, but I'm going to give it a shot! I appreciate all your work on this workflow!
Another banger from whats quickly becoming one of my favorite VFX channels. Your channel is so underrated bro
Thank you so much :)
a rare sighting of a nehemiah
Dude, I absolutely love your short; I think I've watched it at least 20 times in a row. If the entrance to the afterlife is like this, I'm actually looking forward to it!
One of the most loaded videos on mixing AI with VFX. And a record-breaking burp 🥳
Thank you so much for this! This'll help so much with my school project, you deserve way more subscribers.
Watching this video is the most important thing I've done today
@@peterwojtechkojr why vote when you can cgi?
Dude is high on vfx... By the end of the video he's already seeing polygons.
Dang dude you're doing great work! Very informative and engaging. Keep it up and you're sure to grow.
Bro you are a monster thank you for this video absolutely 100 percent value viewing
really like the way you explained everything, keep it up !!! ADHD gang
Coool! Depth Anything combined with CopyCat is very good for temporally stable depth.
This is game changing
Have been using Omniscient for about 2 months now. It is greaattt. I wish Apple's ARKit allowed shooting not only HDR but also log format; then it would be nearly perfect for everything.
BTW, in Blender 4.3 you can press Shift+A -> Image -> Mesh Plane. No longer an addon.
4.2, as well.
THIS is an amazing video. thank you for showing all your workflow. 💯💯🙏🏾🙏🏾
You ARE HILARIOUS!!! haha. Amazing tuts mate!!!!
crazy ❤
It never ceases to amaze me how people continue to underestimate DaVinci Resolve (Studio version). You can literally do all of this in 5 minutes flat with DaVinci. You can roto super quickly with Magic Mask plus a Saver node to export the roto as an EXR image sequence. And DaVinci's Relight feature lets you generate a normal map sequence (you can also export depth texture maps), and you can throw them all into Blender. It's super fast and easy. DaVinci is the best.
I like DaVinci but there's a reason I chose these programs. CopyCat is the greatest rotoscoper out right now. The fact that I can roto an hour-long clip in 10 minutes is insane, I literally use it every day lol. And I like that DaVinci can make normal and depth maps, but I can't find anything about it also making roughness, metallic, specular, or occlusion maps. I still like DaVinci! Don't get me wrong! But it's Nuke every day for me :)
36:29 For Blender 4.2 "import images as planes" was moved from the Import menu to the Add menu of the 3D Viewport.
You can find it here: 3D Viewport ‣ Add (Shift A) ‣ Image ‣ Mesh Plane
ahahahhah! I loved the burp at the end!
wizardry, but I'm willing to learn from you. if a silly goose like you can do it and can teach it to a silly goose like myself... I am learning
i love you curt skelton your videos are great
amazing tutorial man
maybe you could remap the depth map to make a "relative height" map. For it to push the back foot back and the front foot forward, your torso would have to be the midpoint (grey point) of the height map, but the generated height map is based on your distance from the camera, and you're walking towards it. The height map is probably putting you on a plane since you stay in roughly the same place relative to the treeline 300 metres behind you. You'd need to track the value at the midpoint (torso, hips) and remap the depth map across your video so your torso and hips remain the grey point while your feet shift darker or lighter.
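Something like this, as a rough Python/numpy sketch of the remap idea (the function and the tracked-point input are illustrative assumptions, not from the video):

```python
import numpy as np

def remap_relative_height(depth, torso_xy):
    """Shift a depth frame so the tracked torso/hip sample sits at 0.5 grey."""
    x, y = torso_xy                      # tracked midpoint for this frame
    torso_val = depth[y, x]              # sampled grey value at the torso
    # Offset the whole frame so the torso becomes the grey point;
    # the feet then land darker or lighter depending on the stride.
    return np.clip(depth - torso_val + 0.5, 0.0, 1.0)
```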
dude. this is clutch
This is soo cool ! thank you for sharing:)
Yeah bro you earned a sub ☄️
Great tutorial!
broooooooooo i couldnt figure out how to put the footage in front of the camera thank youuuuu
keep posting man, every tutorial I get closer to downloading Nuke and deleting After Effects. AFTER EFFECTS ROTO SUCKS!!! Great tutorial, explained perfectly ✅, still entertaining ✅, keep posting
For displacement, use adaptive subdivision: set the Cycles feature set to Experimental, set the subdivision levels to 2 in the viewport and 1 at render, then adjust the Map Range node and turn down the displacement.
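If it helps, here's a minimal bpy sketch of those settings, assuming Cycles and a plane object named "FootagePlane" (the name is a placeholder, not from the video):

```python
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'   # adaptive subdivision requires Experimental

obj = bpy.data.objects["FootagePlane"]
obj.cycles.use_adaptive_subdivision = True  # dice the plane per-pixel at render time

mod = obj.modifiers.new("Subdivision", type='SUBSURF')
mod.levels = 2          # viewport subdivisions
mod.render_levels = 1   # render subdivisions (the dicing rate takes over here)
```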
With the depth, you can probably take your alpha in Nuke and dilate out the depth so you won't get those depth errors on the edge.
Regarding displacement, it needs to displace towards the camera's origin and not along the normal of the surface.
Hope that helps.
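Not the actual Nuke node setup, but here's the edge-fix idea as a quick OpenCV/numpy sketch (function and threshold values are assumptions):

```python
import cv2
import numpy as np

def grow_depth_past_edge(depth, alpha, grow_px=4):
    """Push solid-foreground depth outward so soft matte edges
    don't inherit background depth values."""
    kernel = np.ones((grow_px * 2 + 1,) * 2, np.uint8)
    core = (alpha > 0.9).astype(np.float32)    # hard foreground core
    grown = cv2.dilate(depth * core, kernel)   # max-filter spreads fg depth outward
    return np.where(core > 0, depth, grown)    # original inside, grown at the edge
```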
Awesome ❤
48:53 this made my day
Ed Sheeran now teaches 3D!!!!!!!!!! 😁😁😁😁😁😁🫨🫨🫨🤯🤯🤯🤯🤯🤯🤯🤯😲😲😲😲😲😲😱😱😱😱😱😱😱😱
Btw I was wondering, do you reckon that if we use the free version of Nuke to do a roto in HD, then bring the matte into DaVinci and apply it to a 4K clip, it would work better than Magic Mask?
Can I rig the phone to my real camera and use the data in a VFX program on the camera footage?
I legit think you can, I was just talking about this same idea with a coworker lol. If you do, let me know how it goes
Could the masking be done with Resolve Studio's Magic Mask? (I know nothing about 3D rendering :D)
Watching it on .75x is perfect for me
To avoid these displacement glitches, I just set up the scene, add an orthographic camera, render the relit footage, and import it back into the scene as a plane.
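For anyone who wants to try that, a minimal bpy sketch of the orthographic-camera step (the location and framing values are guesses):

```python
import bpy

bpy.ops.object.camera_add(location=(0.0, -5.0, 1.0))
cam = bpy.context.object
cam.data.type = 'ORTHO'           # parallel rays, so no perspective stretching
cam.data.ortho_scale = 2.0        # scale to frame the relit footage plane
bpy.context.scene.camera = cam    # render from here, then re-import as a plane
```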
@30:24 there's a note from you about 16-bit being not right, but back when you were choosing 8-bit in Nuke, a second later it was back to 16-bit, so I think you did export at 16?
@@HelpingHand48 lol what the heck that’s funny. That’s just an editing error because I constantly re-do takes I don’t like. My bad.
Since you have the depth, and you have the camera track, I bet you could use geometry nodes to push your mesh away from the camera (along the camera's view rays) by the depth map amount. So the pixels of that back leg would move diagonally away from the camera. 🤷‍♂️
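The math itself is simple; here's a hedged numpy sketch of the idea (not actual geometry nodes, and all names are made up):

```python
import numpy as np

def push_along_view_rays(verts, cam_origin, depth_vals, scale=1.0):
    """verts: (N, 3) world positions; depth_vals: (N,) depth sampled per vertex."""
    rays = verts - cam_origin                            # camera -> vertex directions
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)  # normalise to unit rays
    # Displace along the ray, so an off-axis point (like the back leg)
    # moves diagonally away from the camera rather than along its surface normal.
    return verts + rays * depth_vals[:, None] * scale
```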
@@vincelund7203 I think you’re right. I can’t get it to displace the way I want it to with modifiers so I really gotta get better at geometry nodes lol. Thank you!
Holy **** you could rotoscope while your f**kin' I gotta tell my girlfriend!!
Is there a similar program to Omniscient for Android?
Maybe CameraTrackAR. That might be on the Android app store but idk
Cool tricks.
Great video bro! I'm not an expert like you, but would it help to create a depth map with DaVinci Resolve?
I don't know much about DaVinci's depth generator but I'm fairly confident that Depth Anything V2 is currently the top dog rn. And it works really well! But I think V3 will fix my gripes lol. Thank you so much for telling me about DaVinci though!! I didn't know that and that motivates me just a little more to ditch Premiere :)
Resolve actually has an auto BG remover. In my experience it is extremely good and it's relatively fast. While you're at it, you can generate your normals for your roto and any depth maps you need without jumping programs. Once you finish rendering in Blender, just drop it back into Resolve for grading and final editing. Extremely powerful tool, and it cuts down on a lot of time hopping between programs.
What if you used your rotoscoped alpha from Nuke as a mask on the output from Depth Anything and exported it as grayscale? I'm a noob so this might be waaay off, and if it is, I'd love it if any of you guys could correct me. Wanna see where I went off the rails. Awesome vid.
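That would look roughly like this as a numpy/OpenCV sketch (the file names are placeholders):

```python
import cv2
import numpy as np

depth = cv2.imread("depth_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
alpha = cv2.imread("roto_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

masked = depth * alpha                               # roto zeroes out background depth
cv2.imwrite("masked_depth_0001.png", (masked * 255).astype(np.uint8))
```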
You are 1,000 percent on the money lol. That will work, you did great :) My gripe though is that every depth generator I've used, struggles with feet and legs. It bends the legs backwards and the feet fade into the ground too suddenly. So I'm kinda waiting until somebody else fixes this problem or depth anything v3 releases.
@@curtskelton thanks for the reply, means a lot. Let's hope V3 solves this. But since you seem to like experimenting like myself (I'm shit compared to you), what if you photo-scanned yourself into a 3D model with the same outfit, took the original video and extracted the walk animation with the new video-to-animation AI models, cleaned the animation with Cascadeur and matched it almost perfectly to the original video, and superimposed it onto the original video in Blender, maybe even parented them? Or just use the rigged 3D model's depth output from Blender from the waist down and the original from the waist up. This might fix the shadows and maybe the lighting, not sure about the light reflection though. This might be the equivalent of using an axe to cut a piece of grass, but if it worked, the applications might be endless.
This could have been condensed way down for the people who already know Nuke.
And if my mother had wheels, she’d have been a bike
I can get substantially more detailed results with Sapiens models + Bilateral Normal Integration; I could share the workflow if interested.
go for it. I'll sub your channel
You need to do camera tracking so you stay in the right place while the camera does the moving.
Ctrl+Shift+T works too 37:30
Really wish I had an NVIDIA GPU, can't run SwitchLight 😭
Cool video though!
Resolve/Fusion can import image sequences, it's just annoying, do it in the Media tab
Blender 4.2 can import images as planes, it's just integrated: Add>Images>Images As Planes
There's another, easier rotoscope method: DaVinci Resolve 19, also free.
Lost me at the 13 min mark
so sad that surt ckelton got herpes
Useless. Not available on Android.
😢
Interesting. But you talk too much