Dude is high on vfx... By the end of the video he's already seeing polygons.
dude your ADHD type of tutorial is the best thing I've found on the internet in a long time
Agree😂
agree, works for my brain
The ADHD seems quite apparent, and is very much appreciated 😄!
I know ADHD, like any disorder, is on a spectrum. But unlike the autism spectrum (for example), which can influence personality in a million different ways, I feel that people with ADHD are more alike than people shaped by any other influence on personality. Thus far every ADHDer I've met has been very easy to instantly vibe and communicate with. Same goes for YouTubers with ADHD: they're very easy to instantly follow along with or be fully captivated by.
It never ceases to amaze me how people continue to underestimate DaVinci Resolve (Studio version). You can literally do all of this in 5 minutes flat with DaVinci. You can roto out super quickly with Magic Mask plus a Saver node to export the roto as an EXR image sequence. And DaVinci's Relight feature lets you generate a normal map sequence (you can also export depth maps) and throw them all into Blender. It's super fast and easy. DaVinci is the best.
I like DaVinci but there's a reason I chose these programs. CopyCat is the greatest rotoscoper out right now. The fact that I can roto an hour-long clip in 10 minutes is insane; I literally use it every day lol. And I like that DaVinci can make normal and depth maps, but I can't find anything about it also making roughness, metallic, specular, or occlusion maps. I still like DaVinci! Don't get me wrong! But it's Nuke every day for me :)
Another banger from whats quickly becoming one of my favorite VFX channels. Your channel is so underrated bro
Thank you so much :)
a rare sighting of a nehemiah
Watching this video is the most important thing I've done today
@@peterwojtechkojr why vote when you can cgi?
Thank you so much for this! This'll help so much for my school project, you deserve way more subscriber's.
One of the most loaded videos in mixing AI with VFX . And a record breaking burp 🥳
Have been using Omniscient for about 2 months now. It is greaattt. I wish Apple's ARKit allowed shooting not only in HDR but also in log format; then it would be nearly perfect for everything.
Dang dude you're doing great work! Very informative and engaging. Keep it up and you're sure to grow.
Coool! Depth anything combined with copycat is very good for temporally stable depth.
BTW, in Blender 4.3 you can press Shift+A -> Image -> Mesh Plane. No longer an addon.
4.2, as well.
THIS is an amazing video. thank you for showing all your workflow. 💯💯🙏🏾🙏🏾
This is game changing
Bro you are a monster thank you for this video absolutely 100 percent value viewing
wizardry but I'm willing to learn from you. if a silly goose like you can do it and can teach it to a silly goose like myself... I am learning
36:29 For Blender 4.2 "import images as planes" was moved from the Import menu to the Add menu of the 3D Viewport.
You can find it here: 3D Viewport ‣ Add (Shift A) ‣ Image ‣ Mesh Plane
Maybe you could remap the depth map to make a "relative height" map. For it to push the back foot back and the front foot forward, your torso would have to be the midpoint (grey point) of the height map, but the generated height map is based on your distance from the camera, and you're walking towards it. The height map is probably putting you on a plane, since you stay in roughly the same place compared to the treeline 300 metres behind you. You'd need to track the value at the midpoint (torso, hips) and remap the depth map across your video so your torso and hips remain the grey point while your feet go darker or lighter frame by frame.
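The remapping idea above is easy to sketch in NumPy. This is my own rough illustration, not anything from the video: the function name, the [0, 1] depth convention, and the tracked torso coordinate are all assumptions, and the tracked point would in practice come from a 2D track in Nuke or Blender.

```python
import numpy as np

def remap_depth_to_relative(depth, torso_xy, grey=0.5):
    """Shift a per-frame depth map so the value sampled at a tracked
    torso point becomes the grey midpoint. `depth` is an HxW float
    array in [0, 1]; `torso_xy` is the tracked (x, y) pixel.
    (Hypothetical helper, sketching the comment's idea.)"""
    x, y = torso_xy
    torso_val = depth[y, x]
    # Shift the whole frame so the torso sits at `grey`, then clip
    # back into the valid range. Run per frame with the tracked point.
    out = depth + (grey - torso_val)
    return np.clip(out, 0.0, 1.0)
```

Applied per frame, the torso always reads mid-grey, so the front foot ends up brighter and the back foot darker regardless of how far you've walked toward the camera.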
ahahahhah! I loved the burp at the end!
i love you curt skelton your videos are great
crazy ❤
... A Dell is not a hardware description.
Keep posting man, every tutorial I get closer to downloading Nuke and deleting After Effects. AFTER EFFECTS ROTO SUCKS!!! Great tutorial, explained perfectly ✅, still entertaining ✅, keep posting
This is soo cool ! thank you for sharing:)
broooooooooo I couldn't figure out how to put the footage in front of the camera thank youuuuu
dude. this is clutch
Yeah bro you earned a sub ☄️
Holy **** you could rotoscope while your f**kin' I gotta tell my girlfriend!!
I can get substantially more detailed results with Sapiens models + Bilateral Normal Integration, I could share the workflow if interested.
go for it. I'll sub your channel
With the depth you can probably in Nuke take your alpha, and dilate out the depth so you won't get those depth errors on the edge.
Regarding displacement, it needs to displace towards the camera's origin and not along the normal of the surface.
Hope that helps.
To avoid these displacement glitches I just set up the scene, add an orthographic camera, render the relit footage, and import it back into the scene as a plane.
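The dilate-the-depth edge fix suggested above can be sketched outside of Nuke too. This is a minimal pure-NumPy illustration under my own assumptions (float images in [0, 1], a hard matte above alpha 0.5, both function names hypothetical); in Nuke itself you'd reach for a Dilate/FilterErode node instead.

```python
import numpy as np

def max_filter3(img):
    """3x3 maximum filter built from padded shifts (pure NumPy)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifts = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.max(shifts, axis=0)

def fix_edge_depth(depth, alpha, iters=2):
    """Grow the subject's depth values outward, then use them wherever
    the matte is soft, so fringe pixels inherit subject depth instead
    of background depth (the comment's dilation idea, sketched)."""
    subject = depth * (alpha >= 0.5)
    for _ in range(iters):
        subject = max_filter3(subject)
    soft = (alpha > 0.0) & (alpha < 1.0)
    return np.where(soft, subject, depth)
```

Each iteration pushes the subject's depth one more pixel past the matte edge, which is exactly what kills the dark background halo on motion-blurred edges.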
Watching it on .75x is perfect for me
Great tutorial!
Ed Sheeran now teaches 3D!!!!!!!!!! 😁😁😁😁😁😁🫨🫨🫨🤯🤯🤯🤯🤯🤯🤯🤯😲😲😲😲😲😲😱😱😱😱😱😱😱😱
Awesome ❤
Cool tricks.
Since you have the depth, and you have the camera track, I bet you could use geometry nodes to push your mesh away from the camera (along camera normals) by the depth map amount. So, the pixels of that back leg would move diagonally away from the camera. 🤷♂️
@@vincelund7203 I think you’re right. I can’t get it to displace the way I want it to with modifiers so I really gotta get better at geometry nodes lol. Thank you!
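The geometry-nodes suggestion in this thread boils down to one vector operation: push each vertex along the ray from the camera origin through that vertex, by the sampled depth. Here is a plain NumPy sketch of just that math (my own illustration, not a Blender node setup; the function name and the depth scale are assumptions), which is what you'd reproduce with a Position/Vector Math chain in geometry nodes.

```python
import numpy as np

def displace_along_camera_rays(verts, cam_origin, depth_values, scale=1.0):
    """Displace vertices along camera rays instead of surface normals.
    `verts` is (N, 3) world-space positions, `cam_origin` is (3,),
    `depth_values` is (N,) sampled from the depth map."""
    rays = verts - cam_origin                                 # camera -> vertex
    rays = rays / np.linalg.norm(rays, axis=1, keepdims=True)  # unit rays
    # Each vertex moves away from the camera by its own depth amount,
    # so the back leg shifts diagonally rather than straight back.
    return verts + rays * (depth_values[:, None] * scale)
```

Because the direction is per-vertex (toward the camera, not the mesh normal), the back leg really does move diagonally away, matching what the depth map encodes.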
This could have been condensed way down for the people who already know Nuke.
And if my mother had wheels, she’d have been a bike
You need to do camera tracking so you stay in the right place while the camera does the moving.
Can I rig the phone to my real camera and use the data in a VFX program on the camera footage?
I legit think you can, I was just talking about this same idea with a coworker lol. If you do, let me know how it goes
Is there a program similar to Omniscient for Android?
Maybe CameraTrackAR. That might be on the android app store but idk
Great video bro! I'm not an expert like you, but would it help if you created a depth map with DaVinci Resolve?
I don't know much about DaVinci's depth generator, but I'm fairly confident that Depth Anything V2 is currently the top dog rn. And it works really well! But I think V3 will fix my gripes lol. Thank you so much for telling me about DaVinci though!! I didn't know that, and it motivates me just a little more to ditch Premiere :)
Resolve actually has an auto BG remover. In my experience it is extremely good and it's relatively fast. While you're at it you can generate your normals for your roto and any depth maps you need without jumping programs. Once you finish rendering in Blender, just drop it back into Resolve for grading and final editing. Extremely powerful tool, and it cuts down on a lot of time hopping between programs.
What if you used your rotoscoped alpha from Nuke as a mask on the output from Depth Anything and exported it as grayscale? I'm a noob so this might be waaay off, and if it is I'd love it if any of you guys could correct me. Wanna see where I went off the rails. Awesome vid
You are 1,000 percent on the money lol. That will work, you did great :) My gripe, though, is that every depth generator I've used struggles with feet and legs. It bends the legs backwards and the feet fade into the ground too suddenly. So I'm kinda waiting until somebody else fixes this problem or Depth Anything V3 releases.
@@curtskelton Thanks for the reply, means a lot. Let's hope V3 solves this. But since you seem to like to experiment like myself (I'm nowhere near your level): what if you photo-scanned yourself into a 3D model with the same outfit, took the original video and extracted the walk animation with the new video-to-animation AI models, cleaned the animation with Cascadeur and matched it almost perfectly to the original video, and superimposed it on the original video in Blender, maybe even parented them? Or just use the rigged 3D model's depth output from Blender from the waist down and the original from the waist up. This might fix the shadows and maybe the lighting; not sure about the light reflection though. This might be the equivalent of using an axe to cut a piece of grass, but if it worked the applications might be endless.
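The alpha-masking step confirmed at the top of this thread is a one-liner; here is a minimal NumPy sketch of it under my own assumptions (the function name is hypothetical, and both inputs are float arrays in [0, 1] of the same resolution).

```python
import numpy as np

def masked_depth(depth, alpha):
    """Keep the depth generator's values only where the roto alpha is
    on, zeroing the background, so the result can be written out as a
    grayscale sequence for Blender."""
    return depth * np.clip(alpha, 0.0, 1.0)
```

A soft alpha edge fades the depth to black rather than cutting hard, which is usually what you want before feeding it to a displacement setup.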
@30:24 there's a note from you about 16-bit not being right, but back when you were choosing 8-bit in Nuke, a second later it was back to 16-bit, so I think you did export at 16?
@@HelpingHand48 lol what the heck that’s funny. That’s just an editing error because I constantly re-do takes I don’t like. My bad.
Really wish I had an NVIDIA GPU, can't run SwitchLight 😭
Cool video though!
Ctrl+Shift+T works too 37:30
There's another easier rotoscope method, Davinci Resolve 19, also free.
Resolve/Fusion can import image sequences, it's just annoying; do it in the Media tab.
Blender 4.2 can import images as planes, it's just integrated: Add>Images>Images As Planes
Lost me at the 13 min mark
so sad that surt ckelton got herpes
😢
Interesting. But you speak too much