Silverwing Quick-Tip: Z-Depth (The Intriguing World of Z-Depth)

  • Published 23 Nov 2024

COMMENTS • 47

  • @brouyer • 4 months ago +4

    70% of the questions I ask YouTube about Octane are answered by your videos, thank you so much!

    • @SilverwingVFX • 4 months ago

      Hey hey, and thank you so much for your comment. That's super awesome to hear!
      I hope I can continue this trend in the future.

  • @REDPanti • 1 year ago +2

    Thanks for the wonderful information!

  • @vladan.Poison • 1 year ago +1

    Raphael, you did it!!! Thank you for making me and a million others happy on such short notice. I am going to try it out and slowly drop the Position pass hack.

    • @SilverwingVFX • 1 year ago

      Yes, thanks for the input. It's always great to have an active community suggesting topics and asking questions!
      Fingers crossed it works out for you.
      Cheers and a good start to the week!

  • @bulba1561 • 1 year ago +2

    You're a genius, man

    • @SilverwingVFX • 1 year ago

      Thank you very much.
      Not sure about the genius part though, ha ha 😇

  • @imakegreateggs • 1 year ago +2

    Love you!

  • @bokasonic • 1 year ago +3

    Fantastic knowledge bombs again, Raphael! Danke schön (thank you)!

    • @SilverwingVFX • 1 year ago

      Thank you for your nice comment. Und gern geschehen (and you're welcome) :-)

  • @zotake • 1 year ago +1

    Z-Depth was always a problem for me. Thank you!! ❤

    • @SilverwingVFX • 1 year ago +1

      Great to hear I could clear some things up 😊

  • @cemgulpunk • 1 year ago +1

    Very informative, thank you.

  • @EliMagaziner • 1 year ago +1

    Your tutorials are essential. Thank you!!!!

    • @SilverwingVFX • 1 year ago

      Thank you very much. Great to hear that 🙏 🙌

  • @SaucisseMasquee7 • 1 year ago +1

    Thanks a lot for all of your tutorials! I'm struggling with one element of this particular one: whatever I do, modifying the values of the "Environment Depth" input in the Z-Depth AOV settings doesn't seem to have any effect on the Z-Depth output. Do you have any idea why this happens? I've got this issue across all of my projects, whatever the distances and scenes.

    • @SilverwingVFX • 1 year ago

      Hey there and thank you for your comment.
      Unfortunately I don't know what's happening in your scenes that you can't get the right output. If the contents of the tut did not help you get a good output, I'm afraid I am at a loss here.
      Do you see the modifications in the Live Viewer, and it's just the output that is off? Or can't you set the Z-Depth no matter what?

  • @georgeluna6217 • 1 year ago +1

    Always great content, Raphael! I would love to see how a real render benefits from a Z-Depth pass and how it can be adjusted in post afterwards to get the best results.

    • @SilverwingVFX • 1 year ago +2

      Thank you very much for your compliments and suggestion.
      That sounds like something I can do. Let me write that down on my tut list!
      🖊️ 🙌

  • @TGuyHD • 1 year ago +1

    Well, what if you want a shallow depth of field with close objects and far objects out of focus, like a real camera?

    • @SilverwingVFX • 1 year ago +1

      The "Depth Channel" is just a data layer, not representing what a DOF filter is doing.
      Usually the depth should be directional continuous. This helps the DOF filter see which objects are in front and which are behind others as this step is important to calculate the correct DOF. (Not all DOF filters have that though)
      So having a point in space (your focal plane) from which there is a gradient to the front and a gradient to the back is actually counter productive. As with most DOF filters you can pick your focus depth from the continuous depth channel (such as shown in this tutorial) and then produce the right front and back blur.
      Hope that helped 🙌
      Cheers and a great weekend to you!
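
      To make the "pick a focus depth from the continuous channel" idea concrete, here is a minimal numpy sketch of how a DOF filter might turn depth into a per-pixel blur radius; the function name and the simple falloff formula are illustrative assumptions, not the internals of any particular plugin.

      ```python
      import numpy as np

      def coc_radius(depth, focus_depth, strength=8.0, max_radius=16.0):
          """Per-pixel blur radius derived from a continuous depth channel.

          depth:       2D float array of linear scene depth
          focus_depth: the depth value picked from the channel (focal plane)
          strength:    illustrative stand-in for aperture / focal length
          """
          # Signed distance from the focal plane: negative = in front,
          # positive = behind. The continuous channel keeps this ordering
          # intact, which is what lets the filter blur front and back apart.
          signed = (depth - focus_depth) / np.maximum(depth, 1e-6)
          return np.clip(np.abs(signed) * strength, 0.0, max_radius)

      # Tiny example: wall at depth 2, subject at 5, horizon at 50.
      depth = np.array([[2.0, 5.0, 5.0, 50.0]])
      print(coc_radius(depth, focus_depth=5.0))  # subject pixels stay sharp (0)
      ```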

  • @Sleepycyborg1982 • 1 year ago +1

    Fantastic explanation! I can always learn new things in your videos. If you don't mind, I've got a question about using a Z-Depth map for depth blur in Fusion (if you remember my question about Z-Depth on another video of yours, you provided a YouTube link showing how to boolean the Z channel). What I am experiencing is: after I use the Channel Booleans to copy the Z channel from the Z-Depth map to my render (EXR 32-bit, DWAB), when I switch the viewing mode in the viewport from Color to Z, it just shows all white. The Depth Blur just doesn't seem to read any information from the Z of the Channel Booleans. I suppose when I click over to the Z channel in the viewport, it should show what the Z-Depth map looks like... Could this problem be the result of using DWAB compression? Or rather, because of the nature of 32-bit, is the Z-Depth not normalised, so the Depth Blur function can't read beyond 0 to 1? I have experienced a similar issue in AE before, where the Z-Depth was just all white. Sorry for the long text. Hope you can provide some insights :) Thanks for your video and help again!

    • @SilverwingVFX • 1 year ago

      Hey there and thank you very much for your comment.
      All right. If you have problems with above-white values, you could just stick a Color Corrector node in front of the Channel Booleans and turn the values down to a 0 to 1 range, the same way I do it in the video.
      Other than that, I think you can choose in the Depth Blur node where it should read the depth information from, so you can feed it into the node's second input (if I remember right). I do not have a Fusion dongle at home; I can test it in the office tomorrow and then let you know.
      So what I am saying is: take your Z-Depth input, transform it to a visible range (e.g. using a Color Corrector's gain), then pipe that value either into the stream via a Channel Booleans or wire it directly to the node, and set the option in the Depth Blur node accordingly.
      Hope this helps.
      If you are still having trouble, let me know. I can have a look tomorrow.
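
      Since the gain fix above is essentially a linear remap, here is a minimal numpy sketch of what that step does to a raw Z pass before a depth-blur tool reads it; the function and the sampled near/far values are illustrative assumptions, not Fusion's API.

      ```python
      import numpy as np

      def normalize_depth(z, near, far, invert=False):
          """Remap a raw Z pass (arbitrary scene units) into the 0-1 range
          that a compositor's depth-blur tool can read.

          near / far: depths sampled from the closest and farthest points
          invert:     flip so near = white, far = black if the tool expects it
          """
          z01 = np.clip((z - near) / max(far - near, 1e-6), 0.0, 1.0)
          return 1.0 - z01 if invert else z01

      # A raw depth row in scene units, e.g. 0.5 m to 200 m:
      z = np.array([[0.5, 10.0, 200.0]])
      print(normalize_depth(z, near=0.5, far=200.0))  # now safely within 0-1
      ```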

  • @AsirisC • 1 year ago +1

    This is a really good tutorial on Z-Depth. I needed it for my project. Unfortunately, I still struggle with this. In my case I try to use the Z-Depth out of Octane in Nuke to create bokeh in a sequence where the image is moving, but I can't get a satisfying result. I see artifacts at the edges of the image. I guess that's expected, since Nuke only has the 2D image and can't know what the data is beyond the edge. But there are also issues in some other areas of the image. The final result doesn't look very clean compared to the ground truth from the renderer itself. I tried both ZDefocus and Bokeh in Nuke and struggle with both options to output a clean result. I'd appreciate it if you could share your experience of how to produce a clean result when applying depth blur in post.

    • @SilverwingVFX • 1 year ago

      Hey Alexey. Thank you very much for your comment.
      You are absolutely right, Z-Depth has some inherent problems, such as causing artifacts at edges with large depth differences.
      My "workaround" is to render the DOF and not create it in comp. But that of course costs render time.
      To get slightly cleaner results, at the end of the video I show a way to get a non-anti-aliased Z-Depth. Depending on the blur plugin, that can lead to a cleaner, less artifact-prone look, though it does not avoid artifacts completely.
      The reason this works better is that one cause of edge artifacts is an anti-aliased Z-Depth channel: the edge gets a gradient from one depth layer to the other instead of the abrupt change in depth that is actually there. By not anti-aliasing the Z-Depth there is no false information in the channel, which, if the DOF effect in your comp supports it, can increase quality.
      If you are in Nuke and would like to do DOF there, the best way would be deep compositing with deep EXRs. But those are very big in size.
      Octane can actually save them.
      Since I have never worked with those, I can't tell you how to handle them, though. There are YouTube videos on the subject, but they are rather rare, as usually only bigger studios have the resources to use deep compositing.
      Fingers crossed you will find a solution!
      Raphael
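
      A toy Python illustration of that "false information": one edge pixel half-covered by a foreground object, showing why averaging two depths invents a surface that does not exist. The numbers are made up for the example.

      ```python
      # One edge pixel: a foreground object (depth 2) covers half the pixel,
      # the background (depth 50) covers the other half.
      fg_depth, bg_depth, coverage = 2.0, 50.0, 0.5

      # Anti-aliased Z: the renderer averages the two depths into one value...
      aa_z = coverage * fg_depth + (1.0 - coverage) * bg_depth
      print(aa_z)  # 26.0 -- a phantom surface that belongs to neither object

      # Aliased Z: the pixel snaps to exactly one surface. The data is jagged
      # but never false, which is why some DOF plugins handle it better.
      aliased_z = fg_depth if coverage >= 0.5 else bg_depth
      print(aliased_z)  # 2.0
      ```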

  • @adamzen3905 • 1 year ago

    There are still so many problems with doing motion blur & DOF in post for Octane that I pretty much just gave up.
    I don't even know where to begin. Both Z-Depth & Motion Vector completely ignoring the environment is a huge setback. I can get the motion vector to work in AE, but if I use an HDRI environment it is not calculated, so it's useless. I could try creating a big sphere and setting its material to be a shadow catcher, but then my render times get exponentially longer.
    I could use the "Include environment" checkbox with the global texture approach for Z-Depth, but no matter what, I have to choose whether I want my Z-Depth anti-aliased or not, and I can't have both, which means I can't get two passes to use as a DOF map and a distance-fog map. So I would have to render twice if I want them both.
    I'll just accept the consequences & do DOF & motion blur in camera, and export an extra anti-aliased Z-Depth pass for distance fog instead. It sucks that we can't get simple, important stuff to work as expected, but oh well...

    • @SilverwingVFX • 1 year ago

      Hey Adam,
      I get your complaints, and I feel you. I was where you are years ago when I started learning all the comping techniques.
      In my experience, post Z-Depth and DOF never worked flawlessly in any renderer. There are always some things you need to render twice, and even then you get artifacts.
      Post DOF and post motion blur are both very finicky concepts that break easily.
      Using them on top of each other is like playing Jenga. Rendered DOF / M-Blur is always far superior. I get that post effects can save you some serious render time, though.
      I remember that 15 years ago I also had to render a sphere in C4D as an extra pass to get the vector MB info.
      Other than that, you could export your camera and motion blur the HDRI in AE. I am not sure right now whether AE can map an image 360° without a plugin, though.
      Having the passes anti-aliased or not is something that would be nice to set per output pass. That's very true.
      So in the end, I used post DOF / M-Blur for as long as I had to, because I worked with a Pentium 3 single-core processor and did not have the resources to render the effects.
      But as soon as I could afford it, I switched to rendered DOF / M-Blur and never looked back 😇
      Hope you might do the same and not waste time trying to get a good result out of those post effects.
      In my experience, 95% of what you describe is just the pain of the general workflow and does not have that much to do with Octane.
      Cheers and a great time to you. And fingers crossed you do not have to work with frustrating workflows like this in the future.

  • @RUFFENSTEINT • 1 year ago +1

    Man, I've been having the most difficult time (and it's about the only issue since swapping to Fusion from AE) trying to get a proper fog-depth look. I'm using Channel Booleans, but the results are so, I don't know... it does not look good. Do you have any advice or a setup for how to do it? Currently I'm plugging Z and Beauty into the Booleans, setting it all to "Do Nothing" except alpha, which goes either to Lightness or Alpha FG; I have tinkered a lot. I also tried doing settings in the buffer with the depth set to the same as the environment, and the environment set to 0.
    Most of the results I get are some kind of inverted mess where the alpha fades inversely, or the colors are inverted and they fade correctly; I can't find a happy medium.
    Greatly appreciate your work :)

    • @SilverwingVFX • 1 year ago +1

      Hey Kris, thank you very much for your comment!
      I feel you. Fusion does things differently and sometimes it does not feel intuitive at all. But I assure you, there is always a solution!
      Maybe I should have shown this in the video, as Fusion is a bit particular about how it likes its values.
      Fog Node Workflow:
      First of all, what's important is that Fusion likes its depth passes as
      Far = Black
      Near = White
      ...so the other way around from how Octane saves them. Fortunately I show a solution in the video with the global texture. Or you can just invert the whole channel in Fusion itself.
      Channel Booleans:
      Once you have done those tweaks to your Z-Depth input, you can use a "Channel Booleans": main image as BG, Z-Depth as FG, and leave "Do Nothing" for all the channels (also for the alpha). Then go to the Aux tab, "Enable extra channels", and in "To Z Buffer" set "Lightness FG". This writes the Z-Depth values into the provided channel and carries them along the stream.
      There seems to be a bug in Fusion right now, so you can't see the Z-Depth if you try to view it. But the values are there if you sample them.
      Fog:
      Now add a "Fog" node. If you sample the near plane (the values the Z-Depth has closest to the camera) and the far plane (what the Z-Depth has at the horizon), the fog should show up properly.
      --------
      Alternatively:
      If you do not get it to work (there are frankly a lot of moving parts in this!), you can make your own fog by merging your render over a colored background via a mask; the math behind it is in the sketch below.
      Merge:
      Make a "Background" node in Fusion (make sure it has the same pixel dimensions as your render), give it the color of your fog, and use a "Merge" to merge your render (FG) over that "Background".
      Mask:
      On your Z-Depth channel you can use a "Bitmap" node, which turns the Z-Depth values into a mask. Set the Bitmap node to use any color channel of your Z-Depth and arrange the Low and High values so you get a proper Z representation (you might also want to invert the result to have Far = Black, Near = White).
      Then use the output as the mask input for the "Merge" of your render and the background.
      I know it's pretty hard to follow written tutorials, so hopefully you get something out of that.
      I tried both methods just now in Fusion Studio 18, and they work as intended.
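
      For reference, a minimal numpy sketch of the same merge-over-fog-color math; the depth handling mirrors the Bitmap step (normalize, then invert so near = white), and all names here are illustrative, not Fusion's API.

      ```python
      import numpy as np

      def manual_fog(render_rgb, z_depth, fog_color, near, far):
          """Blend a render toward a fog color using its Z-Depth as the mask.

          render_rgb: HxWx3 float image (the FG of the Merge)
          z_depth:    HxW raw depth in scene units (Octane style: near = small)
          fog_color:  RGB triple, the color of the 'Background' node
          """
          # Bitmap step: normalize depth, then invert so Near = 1, Far = 0.
          mask = 1.0 - np.clip((z_depth - near) / max(far - near, 1e-6), 0.0, 1.0)
          # Merge step: keep the render where the mask is white, fog where black.
          fog = np.asarray(fog_color, dtype=np.float32)
          return render_rgb * mask[..., None] + fog * (1.0 - mask[..., None])

      # Example: a distant pixel picks up mostly fog color.
      img = np.ones((4, 4, 3), dtype=np.float32)
      z = np.full((4, 4), 100.0, dtype=np.float32)
      print(manual_fog(img, z, fog_color=(0.7, 0.8, 0.9), near=1.0, far=120.0)[0, 0])
      ```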

    • @RUFFENSTEINT • 1 year ago +1

      @SilverwingVFX Ahhh, love you, man, thank you. Got the first method to work without a hitch; I feel a bit dumb for not being aware of a straight-up "Fog" node. Appreciate it big time, I can get back to the spooky, vague render aesthetic now :)

    • @SilverwingVFX • 1 year ago

      @RUFFENSTEINT Ohhhh, super nice! Glad that it worked. And don't feel dumb, we are all learning here! I wish you the best of success with your spooky projects!

  • @Part1of2 • 1 year ago +1

    First :) Thanks for the video!

  • @westex13 • 1 month ago

    How can I render the post-process pass and also control it with Z-Depth? Please, can anyone help?

    • @SilverwingVFX • 1 month ago

      If you think about it logically, everything the post-process pass does happens in the lens, after the image already has depth of field. So the most accurate way to recreate what happens in reality is not to use the post-process pass, but to recreate it in comp after you have applied the Z-Depth-based alterations to the image.
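
      To make that ordering concrete, here is a minimal numpy/scipy sketch of the comp order; the gaussian stand-ins for the depth blur and the glow are deliberately crude illustrations, not Octane's actual post-process.

      ```python
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def comp_order(beauty, z01, focus, bloom_threshold=0.8):
          """Lens-correct ordering: depth of field first, glow/bloom after."""
          # 1) Depth-driven blur (crude stand-in: one blur scaled by the
          #    average distance from the focal plane).
          sigma = float(np.abs(z01 - focus).mean() * 6.0)
          dof = gaussian_filter(beauty, sigma=sigma)
          # 2) Post-process (bloom) on the already-defocused image, just as
          #    a real lens glows around highlights it has already focused.
          bright = np.where(dof > bloom_threshold, dof, 0.0)
          return dof + gaussian_filter(bright, sigma=4.0)

      beauty = np.random.rand(64, 64).astype(np.float32)
      z01 = np.tile(np.linspace(0.0, 1.0, 64, dtype=np.float32), (64, 1))
      print(comp_order(beauty, z01, focus=0.5).shape)  # (64, 64)
      ```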

  • @simontrickfilmer • 1 year ago +2

    Oh, another long tip! Nope, it's just long because of the Patreon names :-)

    • @SilverwingVFX • 1 year ago

      😅 Ha ha, sorry 😇
      I need to find another option for the Patreon names going forward.
      Cheers and thanks for your input 🙌

    • @simontrickfilmer • 1 year ago +1

      @SilverwingVFX Like film end credits? That would be faster. I was surprised to see my name at all. Do you have to copy-paste them all, or is there an export function?

    • @SilverwingVFX • 1 year ago +1

      @simontrickfilmer
      Yes, that is the most obvious way!
      Maybe I will do it that way.
      On Patreon you can download a CSV. However, I had to format the names a bit, with the hyphens in between. That was manual work.

  • @simontrickfilmer • 1 year ago +2

    In AE there seems to be a bug (or feature?) with EXtractoR: you have to manually set all the RGB passes to Z-depth.y

    • @SilverwingVFX • 1 year ago +1

      Hey there and thank you for the heads up.
      I think what you are experiencing has to do with the way Octane stores data inside DWAB EXRs.
      It tries to be clever about it (and I think it is): while it compresses the main passes, there are passes it leaves uncompressed because they are better that way, e.g. Cryptomatte but also data passes... like Z-Depth. To be most efficient about it, it uses single channels for those, so they do not take up more space than needed.
      If you do not like that way of dealing with it, you can go with PIZ or PXR24 compression. The files will be quite a bit bigger than with DWAB, though.
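
      If you want to verify which channels an EXR actually contains (and confirm the depth really is a lone single channel), here is a minimal sketch using the OpenEXR Python bindings; the file name is hypothetical, and the depth channel may be called "Z", "Z-depth.y" or similar depending on the export settings, so listing the header first is the safe move.

      ```python
      import array
      import Imath
      import OpenEXR

      # Hypothetical file for illustration.
      exr = OpenEXR.InputFile("render_with_zdepth.exr")
      header = exr.header()
      print(sorted(header["channels"].keys()))  # see what is really in the file

      # Read the depth as full 32-bit floats, whatever its stored compression.
      dw = header["dataWindow"]
      width = dw.max.x - dw.min.x + 1
      height = dw.max.y - dw.min.y + 1
      pt = Imath.PixelType(Imath.PixelType.FLOAT)
      z = array.array("f", exr.channel("Z", pt))  # assumes the channel is "Z"
      print(len(z) == width * height)
      ```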

    • @simontrickfilmer • 1 year ago

      @SilverwingVFX Thanks for the explanation. I'm fine with it, you just have to know about it :-)

  • @SlayeRFCSM • 1 year ago +1

    Once again, just a treasure trove of a tut here! Thanks!
    I know I have already told you that you can read my thoughts, but I want to give you a tip for one of the next lessons, if you don't mind ;) I struggle very much with an automotive scene with full lights on, especially the headlamps and the taillamps. The problem is that when you do this setup in Octane physically correctly, it ruins the render times and makes it so slow that a 20-second animation takes about 10-12 hours to render (at moderate quality, not perfect at all). That is not normal in my opinion, and I really want to know how to make these lights easier and simpler for the render engine. Unfortunately, I couldn't find a good tutorial on that specifically for Octane and Cinema 4D; there are some for Redshift and Blender, but in Octane this pains me very much and I really need good help and advice. It would be just great if you took this idea and made a breakdown of some projects of this type, especially which settings and materials are better to use, and, by the way, what the normal render time for such an animation is. Here's what I have got now, and this is very far from what I would like to get...
    ua-cam.com/users/shorts0iXZV83FwFE?feature=share
    The lights are not real and the render times are enormous. Please lend a hand ;)

    • @SilverwingVFX • 1 year ago +2

      Hey and thanks a lot for your message and your suggestion.
      I think rendering the headlights of a car the way they work in reality (light source, reflectors, lensing, etc.) would result in massive render times no matter which renderer you tried.
      So I would go for a split approach: make the headlight reflections / refractions look good in camera, and then use a different set of lights, e.g. an IES light or a gobo, to mimic the headlight silhouette.
      I do very little automotive work, so I do not have a lot of knowledge on this topic, though I have shaded headlights before.
      I am always searching for topics that can be applied very broadly, and getting headlights to look correct is a very specific thing.
      In your case I would not render the whole scene using photon tracing; I would rely on path tracing alone. As you can see in your video, there are some flaws in the current implementation that can cause the animation to flicker.
      Maybe do a second pass rendering just the lights that rely on caustic effects.
      Hope that at least helps a little bit!