How To Combine Normal Maps Correctly (Adding Texture Detail, Blender Tutorial)

  • Published 30 Jun 2024
  • Last video: • How to Increase Streng...
    The technique I show in this video is probably the most accurate way to combine normal maps (geometrically it makes the most sense). There isn't really a "perfect" way to do it - usually you should do stuff like this while working with heightmaps and only create the normal map from the height data at the end.
    In real-time applications, other methods are preferred since trig functions are really expensive.
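The height-map workflow described above is easy to sketch: sum the height fields, then derive the normals from the combined heights once, at the end. A minimal Python illustration (the tiny grids and the `height_to_normal` helper are made up for this example; a real pipeline would sample textures instead of lists):

```python
import math

def height_to_normal(height, strength=1.0):
    """Derive tangent-space normals from a 2D height grid using central
    differences. Returns unit (x, y, z) vectors in the [-1, 1] range."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slopes via central differences, clamped at the borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx * inv, ny * inv, nz * inv))
        normals.append(row)
    return normals

# Combining detail in height space is just addition; the normal map is
# computed once, from the summed heights.
base = [[0.0, 0.1], [0.2, 0.3]]
detail = [[0.05, 0.0], [0.0, 0.05]]
combined = [[b + d for b, d in zip(rb, rd)] for rb, rd in zip(base, detail)]
nmap = height_to_normal(combined)
```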
    Final .blend file: www.mediafire.com/file_premiu...
    Chapters:
    0:00 - Intro & Concept
    0:48 - Explanation
    2:18 - Blender Implementation
    3:51 - Rotation Order Example
    4:27 - Optimizations
    5:36 - Adjusting strength of added detail

COMMENTS • 90

  • @Wilphi
    @Wilphi a year ago +3

    Truly awesome videos! Finally something new to learn 😃 good job. Keep them coming

  • @randyhester3340
    @randyhester3340 a year ago +14

    Excellent video. The explanation does a good job of describing why photo type blend modes don't provide accurate combinations of normal maps.

  • @rameshgovindaraju5499
    @rameshgovindaraju5499 6 months ago

    Splendid explanation. Thank you.

  • @Kavukamari
    @Kavukamari a year ago

    i was just going to try to figure this out, ty, will follow this to learn more about normal maps.

  • @Lakus231
    @Lakus231 a month ago

    tysm, exactly what i was looking for

  • @UnderdogDen
    @UnderdogDen a year ago

    Thanks for the indepth explanation!

  • @3DWithLairdWT
    @3DWithLairdWT a year ago +3

    This is very cool.
    I wrote a normal map XYZ to quaternion rotation map converter for Unreal Engine to combine normal maps in a more mathematically efficient way than matrix calculations.
    You might have fun figuring that one out, since the formulas aren't really that spicy, the ideas are a bit abstract though

  • @wallterschwarz8713
    @wallterschwarz8713 3 days ago

    freaking amazing

  • @digitalvectorONE
    @digitalvectorONE a year ago +1

    What a wealth of knowledge, and how beautifully you have shared it. If you could also show us how to do the same by converting color maps to normals, and normals to displacement maps, it would be just as invaluable.
    Thank you so much

  • @artefox0
    @artefox0 a year ago +3

    good video

  • @DerB23
    @DerB23 a year ago +2

    I would have loved to see the result in the end. Especially a comparison between how to do it and how not to do it

    • @georg240p
      @georg240p a year ago

      Yea, the comparison at the start of the video is a bit short.
      blog.selfshadow.com/publications/blending-in-detail/
      Here is a great article comparing different methods. Their own method is exactly what I used. (They are just using quaternion rotation to make it computationally far more efficient)
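For readers who want that article's method directly: its Reoriented Normal Mapping listing translates almost line-for-line to code. A Python sketch (the packing biases for `t` and `u` follow the article's shader snippet; inputs are raw [0, 1] texture colors):

```python
import math

def rnm_blend(base_col, detail_col):
    """Reoriented Normal Mapping (Barre-Brisebois & Hill). Takes two packed
    [0, 1] normal-map colors, returns the combined unit vector in [-1, 1]."""
    # t = base normal + (0, 0, 1); u = detail normal with x/y negated,
    # so the final expression needs no explicit sign flips.
    t = (base_col[0] * 2.0 - 1.0,
         base_col[1] * 2.0 - 1.0,
         base_col[2] * 2.0)
    u = (detail_col[0] * -2.0 + 1.0,
         detail_col[1] * -2.0 + 1.0,
         detail_col[2] * 2.0 - 1.0)
    d = t[0] * u[0] + t[1] * u[1] + t[2] * u[2]
    r = (t[0] * d - u[0] * t[2],
         t[1] * d - u[1] * t[2],
         t[2] * d - u[2] * t[2])
    n = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
    return (r[0] / n, r[1] / n, r[2] / n)

# Sanity check: a flat base color leaves the detail normal unchanged
# (up to renormalization).
result = rnm_blend((0.5, 0.5, 1.0), (0.7, 0.4, 0.9))
```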

    • @DerB23
      @DerB23 a year ago

      @@georg240p Heidewitzka, you're fast! Thank you for the link :D

    • @_MaZTeR_
      @_MaZTeR_ 7 months ago

      @@georg240p I'm guessing you're talking about the Reoriented Normal Mapping method discussed in that blog, and this video does the exact same thing? I use a Photoshop version of that and it looks better, imo, than even what Substance has.
      Was wondering if you could make a video on the more computationally efficient way of doing it, to save system resources.

  • @smortonmedia
    @smortonmedia a year ago +1

    This is absolutely amazing! Been loving all of your normal map videos

    • @georg240p
      @georg240p a year ago +2

      I thought about making a video about that, here are some ideas:
      In case you performed a simple euler rotation: Let's say you rotated the landscape 30deg around X axis, 50 around Y and then 80 around Z. To get the tangent space normals you have to apply the inverse rotation to the normal vectors: First rotate them by -80 around Z, -50 around Y and -30 around X.
      In case you tilted the landscape towards a specific vector, you could actually use the node setup from this video. Just replace the base normal map with a constant vector (the vector by which you tilted the landscape). But this would just apply the same tilt again, so you have to do the inverse tilt (just flip the sign of the X and Y coordinates of the tilt vector).
      In case you have no idea what rotation you performed, you can always convert between two coordinate systems if you know where the X, Y and Z axes of the coordinate system ended up after the rotation. To put the world space normals into this rotated coord system (tangent space), just project the normals onto these 3 axes. The tangent space X coordinate is just dot(worldNormal, XaxisVector), Y = dot(worldNormal, YaxisVector), Z = dot(worldNormal, ZaxisVector)
      This is basically just applying an inverse matrix by hand.
      Feel free to message me on discord if you want to send screenshots or something: umsoea#8675
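The last case above (projecting world-space normals onto the rotated frame's axes) is just three dot products. A tiny sketch; the 90-degree frame in the usage example is hypothetical:

```python
def world_to_tangent(n, x_axis, y_axis, z_axis):
    """Express a world-space normal in a rotated orthonormal frame.
    For an orthonormal basis the inverse rotation matrix is the transpose,
    so projecting onto the three axes applies the inverse rotation 'by hand'."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    return (dot(n, x_axis), dot(n, y_axis), dot(n, z_axis))

# Example frame: rotated 90 degrees around Z, so its X axis ended up at
# (0, 1, 0) and its Y axis at (-1, 0, 0).
local = world_to_tangent((1.0, 0.0, 0.0),
                         (0.0, 1.0, 0.0),
                         (-1.0, 0.0, 0.0),
                         (0.0, 0.0, 1.0))
# local == (0.0, -1.0, 0.0)
```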

  • @gart2922
    @gart2922 a year ago +2

    Hello there, just wondering: what do you think is the best way to bake out the normals of a complex hair texture map? For example, baking a hair particle system's curves into a normal map to be used on hair cards later on.

  • @lugui
    @lugui a year ago +12

    for real, WHY THE FUCK doesn't Blender have a "Combine Normal Map" node..
    they took YEARS to add a proper blur node... we might also get that in a decade....

    • @anonymousd5582
      @anonymousd5582 a month ago +1

      Because Blender Devs.
      Sadly, any proper fix requires some smart and persistent user to implement it on their own and apply to get it added to Blender core; whether the Blender devs accept it or not is the second gamble with Blender and contributions.

  • @FeRReTNS
    @FeRReTNS a year ago +4

    You should sell this node setup on blender market for us lazy people

  • @cg3Dim
    @cg3Dim 11 months ago

    Nice

  • @lospuntosstudios5149
    @lospuntosstudios5149 a year ago +22

    Pretty good video. But why is Blender forcing people to do so many steps? I only live once :(

    • @redi4ka951
      @redi4ka951 a year ago +3

      Although you need to do it only once and then reuse it, it's a shame blender doesn't have something similar to substance painter's normals add blend mode

    • @_MaZTeR_
      @_MaZTeR_ 9 months ago +1

      @@redi4ka951 Yeah, it's a crime the material menu doesn't have a built-in function for combining as many normal maps as you want with just a single node group, or really any sort of texture.

  • @elyzius7725
    @elyzius7725 a year ago

    Works perfectly! This method is very accurate, but it slows things down for the first few seconds after you switch the viewport shading to Material Preview or Rendered. Is it possible in Blender to output the combined normal maps into a single image texture node for saving?

  • @sabothbrainiac5844
    @sabothbrainiac5844 a year ago

    Great video. I do however have a question. How can I use this node setup and still control the strength of both normals? Right now the nodes branch away from the multiply node that controlled the strength of the original normal map in your previous tutorial.

    • @georg240p
      @georg240p a year ago

      At 5:40 I showed how to change the strength of the detail normal vector (by manipulating the angle theta). You can do the exact same thing to the base normal vector (bottom one). This way you can control the strength of both normal maps independently.
      And you can also do the same thing at the end (after the combination) to change the strength of the final result. You can combine as many normal maps as you want, and change the strength at any point in between.
      Not sure what you mean by "the nodes branch away from the multiply node". (The multiply node is only used to manipulate the angle theta).
      In case you want to send screenshots of your node setup, feel free to message me on discord: umsoea#8675

  • @lospuntosstudios5149
    @lospuntosstudios5149 a year ago +1

    I can recommend the free Normalizer by Friendly Shade. Supposedly it does exactly this.

    • @_MaZTeR_
      @_MaZTeR_ 9 months ago

      I'm not sure if the end result is the exact same, but there's quite a difference if you compare it to for example what Substance Painter does. Normalizer washes out quite a lot of detail in comparison.

  • @ValeGoG
    @ValeGoG a year ago

    10/10

  • @sarahf1506
    @sarahf1506 3 months ago

    Am I understanding correctly that this setup can be used with any 2 normal maps or do the equations need to change somehow for some instances? I tried setting it up and wasn't successful, but I'm not sure if I messed up somewhere or if the method needs altering in some way. In any case, this is eye-opening, thanks.

    • @georg240p
      @georg240p 3 months ago

      It should work for combining any two (or more) tangent space normal maps. I added a download link to my final file in the video description.

  • @Ksu8O8
    @Ksu8O8 2 months ago

    This is a great tutorial, but how can i bake it into an image texture?

    • @georg240p
      @georg240p 2 months ago +1

      6:15 By rendering an image and saving it (if you followed my scene setup)
      A link to my final .blend file is in the description.

  • @hummersaadi3612
    @hummersaadi3612 9 months ago

    I still want to know the best way to animate a texture and merge it with "project as texture", the same as what you did in the video, but with an animated texture. I made an animated snake texture but I need to add "project as texture". Please help.

  • @bossnoob8624
    @bossnoob8624 a year ago

    Nice explanation! But I still don't quite understand why the rotation order is 3 steps instead of 2, and why the rotation angle is negative in the first rotation. I'm confused because the video just seems to rotate according to these angles, but there is no explanation of why this is done. Can you explain it in more detail, or point me to the corresponding video? Appreciate it.

    • @georg240p
      @georg240p a year ago +1

      You're right, it's a bit confusing, especially because I said that we are rotating by theta and phi ("2 rotations").
      Here is what we actually want to do:
      We want to perform a single rotation (by theta) BUT around a rotated axis.
      And this rotated axis is the X axis that has been rotated by phi around Z.
      The problem with that: With simple Euler rotations, we can only rotate around the main axes: (X,Y,Z)
      But we can use a trick: If we first align the rotation axis with one of the main axes (X,Y or Z), we can just rotate around this main axis (by theta). Because they are identical.
      We just have to make sure to reverse this alignment at the end.
      So here are the 3 steps:
      1. Align the rotation axis with one of the main axes: As mentioned above, our rotation axis is just the X axis rotated by phi around Z. If we reverse this, the rotation axis is identical to the X axis. So we rotate by negative phi around Z.
      2. Perform the actual rotation. Since the X axis and our rotation axis are now identical, we can just perform a rotation around X by theta.
      3. Reverse the alignment. Rotate by positive phi around Z. Now the rotation axis is back to where it was at the beginning.
      This is what I showed in the video with the example of rotating the cube.
      Hope this helps.
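The align / rotate / un-align sequence can be written out and sanity-checked in a few lines. A sketch with plain Euler rotation helpers (nothing Blender-specific; names are made up for the example):

```python
import math

def rot_x(v, a):
    """Rotate v by angle a (radians) around the X axis."""
    c, s = math.cos(a), math.sin(a)
    return (v[0], c * v[1] - s * v[2], s * v[1] + c * v[2])

def rot_z(v, a):
    """Rotate v by angle a (radians) around the Z axis."""
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])

def rotate_about_tilted_x(v, theta, phi):
    """Rotate v by theta around 'the X axis rotated by phi around Z'."""
    v = rot_z(v, -phi)    # 1. align the tilted axis with X
    v = rot_x(v, theta)   # 2. the actual rotation, now simply around X
    return rot_z(v, phi)  # 3. reverse the alignment
```

Composing the three steps is exactly a rotation by theta about the tilted axis (cos phi, sin phi, 0), which can be verified against the Rodrigues rotation formula.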

    • @angeldude101
      @angeldude101 a year ago

      Shouldn't you just be able to do a single quaternion rotation? The quaternion for the rotation in question is just sqrt(final / initial).

  • @KellyCocc
    @KellyCocc 10 months ago

    I'd like to try this but there is no link to the previous video. I could probably copy the node tree, but what the heck is vecToSpherical?

    • @georg240p
      @georg240p 10 months ago

      The link to the last video is in the video description. You might have to reload the page. vecToSpherical converts a 3d vector to spherical coordinates. I showed how to create it in the last video.

  • @JaXuun
    @JaXuun a year ago

    It's 1 node in redshift

  • @Diabellze
    @Diabellze a year ago

    is it possible to add more normal maps on top of that node? for example, mixing 3 normal maps or more?

    • @georg240p
      @georg240p a year ago +1

      Should work fine to just use the resulting normal vector from the first combination as the new base normal vector. Just make sure to keep the z component positive and renormalize.

    • @Diabellze
      @Diabellze a year ago

      @@georg240p got it, thank you very much!

  • @manvirrayat7030
    @manvirrayat7030 a month ago

    Please, can you do it with substance painter

  • @robertYoutub
    @robertYoutub a year ago

    If the base map is a plain RGB input, the method doesn't work. For example, the input is 128,128,255 for an empty normal map (hex: 8080FF)

    • @georg240p
      @georg240p a year ago +1

      Works perfectly fine for me. In the video you can see that the base normal map also has a flat background.
      In which way does it break for you?
      The arctan2 function might cause some trouble. Check if it returns something close to zero for phi. If not, set phi to zero IF the z component is close to 1.
      Here is my file in case you want to double check the node setup: www.mediafire.com/file/y60i9en2qltabyo/tut12_combine_normal_maps.blend/file
      (i used blender 3.3)
      If you need a faster and numerically more stable version of this approach (using quaternions), Stephen Hill has a great blog post: blog.selfshadow.com/publications/blending-in-detail/
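The arctan2 guard suggested above might look like this in code (a loose sketch of a vecToSpherical helper, not the exact node group from the video):

```python
import math

def vec_to_spherical(v, eps=1e-6):
    """Unit vector -> (theta, phi) spherical angles. When the vector points
    (almost) straight up, phi is ill-defined, so force it to zero."""
    theta = math.acos(max(-1.0, min(1.0, v[2])))  # clamp against float noise
    phi = 0.0 if v[2] > 1.0 - eps else math.atan2(v[1], v[0])
    return theta, phi
```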

  • @Lakus231
    @Lakus231 a month ago

    hmm, is there a way to use this node setup's output directly? like plugging it into the "Normal Map" node input?
    so that i don't have to render the new normal map every time i'm changing something🤔

    • @georg240p
      @georg240p a month ago

      Of course. Whenever you pass a normal vector between nodes, you can manipulate it in all kinds of ways e.g. adjust the strength, rotation etc.
      Combining normals as I showed in the video just means that you adjust a vector's orientation (detail) based on another vector (base).
      Just make sure you don't work with the color data directly because it's in range [0...1] instead of [-1...1]
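That range caveat is a linear remap both ways; a trivial sketch:

```python
def unpack_normal(color):
    """Color data [0...1] -> normal vector components [-1...1]."""
    return tuple(c * 2.0 - 1.0 for c in color)

def pack_normal(vec):
    """Normal vector [-1...1] -> color data [0...1], e.g. before feeding
    the result back into a color socket."""
    return tuple((c + 1.0) * 0.5 for c in vec)

flat = unpack_normal((0.5, 0.5, 1.0))  # the "flat" color -> (0.0, 0.0, 1.0)
```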

    • @Lakus231
      @Lakus231 a month ago

      @georg240p When I set both "Texture Coordinate" nodes to "UV", I get them combined onto my object, but the detail normal map doesn't seem to be calculated correctly. It looks identical to an overlay mix node, probably because, by using the UV map at the beginning, both normal maps get warped and the XYZ axes aren't accurate anymore. If they are world axes, then it breaks completely?
      Maybe add a plane and set it in the "Texture Coordinate" node as "Object", providing both normal maps with a flat, undistorted base for combination? Then the combined output could be applied to the object, but I'm unsure how to instruct those vectors to use the UV map.

    • @georg240p
      @georg240p a month ago

      ​@@Lakus231 UV Coords are in Tangent-Space (2D). It helps to display the UV Vector in the Output node. Red is X, Green is Y.
      As I said, all the Normal Vector combinations/adjustments have to be done in range [-1...1] and Tangent-Space.
      Once you want to use the final Vector as the Surface Normal of your Model, you have to be careful:
      The "BSDF" Nodes' "Normal" input socket expects a World-Space Vector in range [-1...1]. (The direction in which your surface is pointing)
      The "Normal Map" Node outputs such a World-Space Vector in range [-1...1].
      But its input socket is Color Data (yellow dot). In Blender, Color Data is in range [0...1].
      So, before feeding the Vector into the "Normal Map" Node you have to do the conversion back to [0...1] (color data) as I showed in my video.

    • @Lakus231
      @Lakus231 a month ago

      @@georg240p yes, it was already converted to color data (-1...1 to 0...1), then into the "Normal Map" node (set to tangent space), connected to the "BSDF" Normal input.
      i have basically exactly your node setup, i just set the "Texture Coordinate" nodes to "UV".
      The main normal map gets projected correctly onto the object, but the detail normal map just gets overlaid, not actually mixed, like it's overwriting the main normal map.
      Did I understand you correctly that I can't set those "Texture Coordinate" nodes to "UV" and have to bring the UV map in there after the conversion from color data into tangent space (0...1 to -1...1)? if yes, i have no idea how 😬

    • @georg240p
      @georg240p a month ago

      ​@@Lakus231 Of course you can use the UV socket from the Texture Coordinate node. As I said, it gives you a 2D tangent space vector (= it is completely independent from the surface orientation or how your object is rotated, etc.)
      Here is my .blend file but with a UV unwrapped monkey head:
      www.mediafire.com/file/7dh1coqw1jv731c/combine_normalMap_SuzanneUV_BlenderV4.02.blend/file
      In my video I used the window coordinate system only because it's good for rendering and exporting the final texture as an image.
      But you can use any 2D coordinate system you like. It will not change the way the combination of the textures is done.
      The coordinate system is just the "canvas" that the rest of the shader is displayed on. And a UV map is simply the coordinate system of the object's surface.

  • @_MaZTeR_
    @_MaZTeR_ 10 months ago

    How can you combine 3 normal maps (base + 2 detail maps)?

    • @georg240p
      @georg240p 10 months ago

      just use the resulting vector as the new base normal vector for the 2nd combination

  • @xbzq
    @xbzq a year ago

    You can turn one vector into a matrix and then multiply with the other vector.

    • @georg240p
      @georg240p a year ago

      why not just use quaternions? numerically stable and the most efficient.

    • @xbzq
      @xbzq a year ago

      @@georg240p Vectors and matrices are the simplest and cheapest (computationally)

    • @georg240p
      @georg240p a year ago

      @@xbzq
      The quaternion version I tend to use requires 1 division, 6 multiplications, and 6 additions, and can be used to rotate points as well.
      Stephen Hill has a great blog post about it:
      blog.selfshadow.com/publications/blending-in-detail/
      Would love to know if there is anything faster.
      Keep in mind we are doing a shortest arc rotation.

    • @angeldude101
      @angeldude101 a year ago

      ​@@xbzq Quaternions are cheaper to compose and store than matrices, but more expensive to apply. As for which is simpler, quaternions are only "complicated" because they're obfuscated and communicated terribly. They're really just a blending of a 180° rotation (represented as a blending of 180° rotations around each axis) and a 0° rotation.

  • @natsunwtk
    @natsunwtk 8 months ago

    And how do you bump them up while masking each other out, with the strength controlled separately?

    • @georg240p
      @georg240p 8 months ago +1

      In newer Blender versions (e.g. 3.6) there is the Mix node (set to Float).
      Set the mask as the Factor input. Then you can control A and B separately.
      Use its output as the normal strength.
      In older Blender versions, just use the MixRGB node. It does the same thing, it just looks odd because it's using color values.

    • @natsunwtk
      @natsunwtk 8 months ago

      @@georg240p What if we use a black and white texture, because it has to be in an exact location for a label/box etc.? Will it work, or am I just connecting it wrongly? I also face a problem where the texture is black and white, but when I change slots the white turns grey instead of completely black, while before, the black part was completely black. It's like it's inverted, but there is no completely black in the inverted version, only grey, even though I changed the other slot's color to completely black. Do you have any suggestions?

    • @georg240p
      @georg240p 8 months ago

      @@natsunwtk Yes, to get a clear separation, the mask should be only black and white (0 or 1).
      I'm not sure what you mean by inverted or gray. You mean the mask looks wrong when you set it as the output? Try setting the color space to "Non-Color Data" in the image texture node that loads the mask.

    • @natsunwtk
      @natsunwtk 8 months ago

      @@georg240p yes, something like that. It's already non-color. Before inverting, it shows completely black and completely white, but when I change slot A to slot B in a Mix Color node, white turns gray and black turns white. Sorry, my English is not good enough to say it properly. (I connected a black and white image to a Mix Color node for bumping. The mask is fine because I also used it for roughness. I just connected them to check if it's masking properly, but it turned out it appeared gray instead of black for the bump node.)

    • @georg240p
      @georg240p 8 months ago

      ​@@natsunwtk
      The result of the Mix node is the set of strength values that will be used to control the normal map strength. It is not a mask! If it's inverted, that's because you defined the colors that way in the Mix node, and maybe that's exactly what you want.
      Connect the output of the Mix node to the value that you previously used to control the strength of the entire normal map (e.g. in my video I used a Multiply node, so in this case connect the Mix node output to the bottom slot of the Multiply node)

  • @lonesomealeks4206
    @lonesomealeks4206 a year ago

    Why can't you just split XYZ and combine XYZ with an Add node in between for each? much simpler, same control

    • @georg240p
      @georg240p a year ago

      Stephen Hill from Lucasfilm has a great blog post comparing different techniques:
      blog.selfshadow.com/publications/blending-in-detail/
      You seem to describe the first technique they mention: "Linear Interpolation" (Vector addition, which is the same as averaging the two vectors).
      The technique I used in my tutorial gives the exact same result as their proposed method. (They use quaternion shortest arc rotation to make it computationally extremely efficient, but hard to understand)
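The shortcoming of plain vector addition is easy to show numerically: with a perfectly flat base, a correct combine should return the detail normal unchanged, but add-and-renormalize flattens it. A small sketch:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

base = (0.0, 0.0, 1.0)                # perfectly flat base normal
detail = normalize((0.4, -0.2, 0.8))  # some tilted detail normal
blended = normalize(tuple(b + d for b, d in zip(base, detail)))
# blended.z ends up larger than detail.z: the detail has been weakened.
```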

  • @SamG-nq4pi
    @SamG-nq4pi a year ago

    The tiny nodes on the screen and the video editing make it hard to keep up, especially as a non-English speaker. But thanks for your video!

  • @DarkShadow-ek6og
    @DarkShadow-ek6og a year ago

    Wouldn't it be easier to just subtract 0.5 from the second normal, scale it, and then add it to the first?

  • @robertYoutub
    @robertYoutub a year ago

    I always just use Overlay and mix them. It does work too, but maybe not as accurate.

    • @DileepNow
      @DileepNow a year ago

      I tried it. It works, but the result is kind of dull.

  • @grindinghalt6155
    @grindinghalt6155 a year ago

    Hi umsoere

  • @kowy_m
    @kowy_m a year ago

    overlay filter will combine 2 normals without this math )))
    great explanation, but there are other ways ))

    • @georg240p
      @georg240p a year ago +1

      then just use Overlay, who cares. This technique is for those not satisfied with regular image blending techniques. I mentioned that in the first few seconds of the video.

    • @_MaZTeR_
      @_MaZTeR_ 7 months ago +1

      It works, but it's not the mathematically correct way of doing it. It washes out a lot of detail; if one doesn't care about that, it doesn't matter. But I personally would prefer to have both of my textures looking as good as they can and how they are intended.

  • @JustFor-dq5wc
    @JustFor-dq5wc a month ago

    Aren't the colors in normal maps a representation of angles? Can't you just combine the colors to combine the angles?
    Edit: To combine normal maps in Photoshop you can double click the layer, turn off the blue channel, and set that layer to Overlay. And you're done. At least that's what I've heard.

    • @georg240p
      @georg240p a month ago +1

      No, normal maps store 3D vector data, and treating them as color will 1. produce unnormalized vectors (a normal vector should always be normalized), and 2. not follow any geometric concept, so the results can be quite unpredictable. Here is an article explaining this in more detail by comparing color blending techniques with the geometric approach: blog.selfshadow.com/publications/blending-in-detail/

    • @JustFor-dq5wc
      @JustFor-dq5wc 27 days ago

      @@georg240p Thanks for info. I'm new to all of this.

  • @aaiki14
    @aaiki14 6 months ago

    the easiest way to combine normal maps is to overlay 2 normal maps using mix color

  • @PiterTraceurFA
    @PiterTraceurFA 10 months ago

    I hate how archaic Blender's shading editor is. Like holy shit, this can be done in two seconds in Substance Painter/Designer. Actually pathetic. But at least we have Grease Pencil! 🙄🙄

  • @kaitoren365
    @kaitoren365 2 days ago

    This only works when working with just one material, so if you've come here because you have 2 normals from different texture sets, this is not helpful.

  • @BlenderUnreal
    @BlenderUnreal a month ago

    is there any addon or premade node for this? i like it lazy.

    • @georg240p
      @georg240p a month ago +1

      There is a download for my .blend file in the description.