Excellent video. The explanation does a good job of describing why photo type blend modes don't provide accurate combinations of normal maps.
I would have loved to see the result at the end. Especially a comparison between how to do it and how not to do it.
Yea, the comparison at the start of the video is a bit short.
blog.selfshadow.com/publications/blending-in-detail/
Here is a great article comparing different methods. Their own method is exactly what I used. (They are just using quaternion rotation to make it computationally far more efficient)
@@georg240p Wow, you're fast! Thank you for the link :D
I'm guessing you're talking about the Reoriented Normal Mapping method discussed in that blog? This video does the exact same thing. I use a Photoshop version of that and it looks better imo than what even Substance has. @@georg240p
Was wondering if you could make a video on the more computationally efficient way of doing it, to save system resources.
for real, WHY THE FUCK doesn't Blender have a "Combine Normal Map" node..
they took YEARS to add a proper blur node... we might also get this one in a decade....
Because Blender Devs.
Sadly, any proper fix requires some smart and persistent user to implement it on their own and apply to get it added to Blender core. Whether the Blender devs accept it or not is the second gamble with Blender and contributions.
This tutorial is awesome! The only thing that could make it even better is adding a section on using a mask to combine specific parts of the normal maps.
You should sell this node setup on blender market for us lazy people
This is very cool.
I wrote a normal map XYZ to quaternion rotation map converter for Unreal Engine to combine normal maps in a more mathematically efficient way than matrix calculations.
You might have fun figuring that one out; the formulas aren't really that spicy, though the ideas are a bit abstract.
thanks a lot for your blender file, very appreciated :)
Splendid explanation. Thank you.
thanks a lot i was looking for that since a time
i was just going to try to figure this out, ty, will follow this to learn more about normal maps.
O my lord!! this is what I've been looking for...thanks a lot
Truly awesome videos! Finally something new to learn 😃 good job. Keep them coming
tysm, exactly what i was looking for
Thanks for the indepth explanation!
freaking amazing
How is this different from plugging the normal output of one bump node into the normal input of another bump node?
good video
Pretty good video. But why is Blender forcing people to do so many steps? I only live once :(
Although you need to do it only once and then reuse it, it's a shame blender doesn't have something similar to substance painter's normals add blend mode
@@redi4ka951 Yeah, it's a crime the material menu doesn't have a built-in function for combining as many normal maps as you want with just a single node group, or really just that for any sort of texture.
I can recommend the free Normalizer by Friendly Shade. Supposedly it does exactly this.
I'm not sure if the end result is the exact same, but there's quite a difference if you compare it to for example what Substance Painter does. Normalizer washes out quite a lot of detail in comparison.
Thank you, I am messing around with this method, and I was wondering if this will work with normal maps that use different UV maps on the same model? I ask because I know to get the best results you have to specify the UV map you want to use in the normal map node if you set it to tangent space.
This is absolutely amazing! Been loving all of your normal map videos
I thought about making a video about that, here are some ideas:
In case you performed a simple euler rotation: Let's say you rotated the landscape 30deg around X axis, 50 around Y and then 80 around Z. To get the tangent space normals you have to apply the inverse rotation to the normal vectors: First rotate them by -80 around Z, -50 around Y and -30 around X.
In case you tilted the landscape towards a specific vector, you could actually use the node setup from this video. Just replace the base normal map by a constant vector (the vector by which you tilted the landscape) But this would just apply the same tilt again so you have to do the inverse tilt (just flip the sign of the X and Y coordinates of the tilt vector)
In case you have no idea what rotation you performed, you can always convert between two coordinate systems if you know where the X, Y and Z axes of the coordinate system ended up after the rotation. To put the world space normals into this rotated coord system (tangent space), just project the normals onto these 3 axes. The tangent space X coordinate is just dot(worldNormal, XaxisVector), Y = dot(worldNormal, YaxisVector), Z = dot(worldNormal, ZaxisVector)
This is basically just applying an inverse matrix by hand.
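Here's a quick sketch of that projection in code (my own illustration, not from the video; assuming numpy and that the three axis vectors are orthonormal):

import numpy as np

def world_to_tangent(world_normal, x_axis, y_axis, z_axis):
    # Project the world-space normal onto the rotated coordinate axes.
    # Each dot product gives one tangent-space coordinate.
    return np.array([np.dot(world_normal, x_axis),
                     np.dot(world_normal, y_axis),
                     np.dot(world_normal, z_axis)])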
Feel free to message me on discord if you want to send screenshots or something: umsoea#8675
texnormal.xy *= strength;
texnormal.z = mix(1.0, texnormal.z, saturate(strength));
This is how Blender does it. If you want to blend two normal maps, you can just add the XY components and multiply the Z components, then lerp the result with the default normal to control the mix factor.
That's much faster and easier.
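In numpy terms, the blend described above looks roughly like this (a sketch, assuming both maps are already decoded from [0, 1] colors to [-1, 1] vectors; this matches the "Whiteout" blend in the selfshadow article):

import numpy as np

def blend_whiteout(base, detail):
    # base, detail: (..., 3) arrays of tangent-space normals in [-1, 1]
    n = np.concatenate([base[..., :2] + detail[..., :2],   # add the XY components
                        base[..., 2:] * detail[..., 2:]],  # multiply the Z components
                       axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)   # renormalize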
Can you elaborate?
Or you could just separate both texture maps into their RGB channels (Separate Color node), use 3 color mix nodes set to 'overlay' to mix each channel from each texture, and bring that back into one texture with a Combine Color node. Way easier, with the same result.
"with the same result."
As mentioned in the first 20 seconds of the video, blending normal data by using typical image blending techniques might not give you the results that you want, since they have no geometric basis.
Here's a great article comparing overlay blending with the one I showed in my video:
blog.selfshadow.com/publications/blending-in-detail/
Hello there, just wondering: what do you think is the best way to bake out the normals of a complex hair texture map? An example would be baking a hair particle system's curves into a normal map to be used on hair cards later on.
Is there any way to adjust the scale of the detail normal map? I understand the strength and everything but if I wanted to make the dots more spread out and way smaller how would I go about doing that?
@@applethefruit You can scale, rotate, translate, skew, warp the input textures by manipulating their UV coordinate systems. In my video, the Texture Coordinate Node provides the UV coordinates and this is then plugged into the Image Texture Node which samples the color of the texture at the specified coordinates. So to modify the coordinates you have to place nodes before the Image Texture Node. A simple "Mapping" Node will do the job i think.
@@georg240p Yeah, I used the Mapping node and it worked a charm. Thanks mate.
I still want to know the best way to make an animated texture and merge it with a projected texture, the same as what you did in the video, but with the texture animated. I made an animated snake texture, but I need to add the projected texture as well. Please help.
Please, can you do it with Substance Painter?
For god's sake, can someone explain this rotation to me? 3:30
You rotate the detail normal vector by negative phi (the angle the base normal vector makes with Z),
then you rotate the detail normal by theta (the angle the base normal makes with Y),
then you rotate the detail normal vector by positive phi (the angle the base normal vector makes with Z).
What's happening here 😭
Nice
Does this only work if you are rendering the normal map back out to an image? I've tried this method for a shader mapped onto an object and my normals are coming out crazy on the other end
@@RachelDetermann It will work, but you have to make sure that the actual combination is done with both vectors being in tangent space and in the range [-1, 1]. Not sure how Blender handles this in different versions, but if you get normal data from within Blender and don't export it as an image, you have to skip the range conversions, since those only apply to images (typical image file formats can only store data in the range [0, 1]).
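For reference, the range conversion he's describing is just the usual decode/encode step (a minimal sketch):

def decode(color):
    # image range [0, 1] -> vector range [-1, 1]
    return color * 2.0 - 1.0

def encode(normal):
    # vector range [-1, 1] -> image range [0, 1]
    return normal * 0.5 + 0.5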
If the base map is a plain RGB input, the method doesn't work. For example, the input is 128,128,256 for an empty normal map (hex: BCBCFF).
Works perfectly fine for me. In the video you can see that the base normal map also has a flat background.
In which way does it break for you?
The arctan2 function might cause some trouble. Check if it returns something close to zero for phi. If not, set phi to zero IF the z component is close to 1.
Here is my file in case you want to double check the node setup: www.mediafire.com/file/y60i9en2qltabyo/tut12_combine_normal_maps.blend/file
(I used Blender 3.3)
If you need a faster and numerically more stable version of this approach (using quaternions), Stephen Hill has a great blog post: blog.selfshadow.com/publications/blending-in-detail/
Am I understanding correctly that this setup can be used with any 2 normal maps or do the equations need to change somehow for some instances? I tried setting it up and wasn't successful, but I'm not sure if I messed up somewhere or if the method needs altering in some way. In any case, this is eye-opening, thanks.
It should work for combining any two (or more) tangent space normal maps. I added a download link to my final file in the video description.
I'd like to try this but there is no link to the previous video. I could probably copy the node tree, but what the heck is vecToSpherical?
The link to the last video is in the video description. You might have to reload the page. vecToSpherical converts a 3d vector to spherical coordinates. I showed how to create it in the last video.
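For anyone rebuilding that node group, the conversion can be sketched like this (my own stand-in, assuming theta is measured from the +Z axis and phi around Z starting at +X, as in the video; numpy for brevity):

import numpy as np

def vec_to_spherical(n):
    # n: unit vector (x, y, z) -> (theta, phi)
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))  # angle between n and the Z axis
    phi = np.arctan2(n[1], n[0])                 # angle around Z, measured from X
    return theta, phi

def spherical_to_vec(theta, phi):
    st = np.sin(theta)
    return np.array([st * np.cos(phi), st * np.sin(phi), np.cos(theta)])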
Works perfectly! This method is very accurate, but it slows down for the first few seconds after you switch the viewport shading to Material Preview or Rendered. Is it possible in Blender to output the combined normal maps into a single image texture for saving?
Nice explanation! But I still don't quite understand why the rotation order is 3 steps instead of 2, and why the rotation angle is negative in the first rotation. I'm very confused because the video just seems to rotate according to these angles, but there is no explanation of why this is done. Can you explain it in more detail, or where can I find the corresponding video? Appreciate it.
You're right, it's a bit confusing, especially because I said that we are rotating by theta and phi ("2 rotations").
Here is what we actually want to do:
We want to perform a single rotation (by theta) BUT around a rotated axis.
And this rotated axis is the X axis that has been rotated by phi around Z.
The problem with that: With simple Euler rotations, we can only rotate around the main axes: (X,Y,Z)
But we can use a trick: If we first align the rotation axis with one of the main axes (X,Y or Z), we can just rotate around this main axis (by theta). Because they are identical.
We just have to make sure to reverse this alignment at the end.
So here are the 3 steps:
1. Align the rotation axis with one of the main axes: As mentioned above, our rotation axis is just the X axis rotated by phi around Z. If we reverse this, the rotation axis is identical to the X axis. So we rotate by negative phi around Z.
2. Perform the actual rotation. Since the X axis and our rotation axis are now identical, we can just perform a rotation around X by theta.
3. Reverse the alignment. Rotate by positive phi around Z. Now the rotation axis is back to where it was at the beginning.
This is what I showed in the video with the example of rotating the cube.
Hope this helps. (There's a code sketch of these three steps below.)
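To make the three steps concrete, here's a small sketch (my own illustration rather than the exact node group; the exact signs depend on how theta and phi are defined in your setup):

import numpy as np

def rot_x(v, a):
    # rotate vector v by angle a around the X axis
    c, s = np.cos(a), np.sin(a)
    return np.array([v[0], c * v[1] - s * v[2], s * v[1] + c * v[2]])

def rot_z(v, a):
    # rotate vector v by angle a around the Z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]])

def reorient(detail, theta, phi):
    v = rot_z(detail, -phi)  # 1. align the rotation axis with the X axis
    v = rot_x(v, theta)      # 2. the actual rotation, by theta around X
    return rot_z(v, phi)     # 3. reverse the alignment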
Shouldn't you just be able to do a single quaternion rotation? The quaternion for the rotation in question is just sqrt(final / initial).
This is a great tutorial, but how can I bake it into an image texture?
6:15 By rendering an image and saving it (if you followed my scene setup)
A link to my final .blend file is in the description.
And how do you combine the bumps while masking them against each other, with the strength of each controlled separately?
In newer Blender versions (eg 3.6) there is the Mix node (set to float).
Set the mask as the Factor input. And then you can control A and B separately.
Use its output as the normal strength.
In older Blender versions, just use the MixRGB node. It does the same thing, but it looks odd because it's using color values.
@@georg240p What about the case where we use a black and white texture, because it has to be in an exact location for a label/box etc.? Will it work, or am I just connecting it wrongly? I also have a problem where the texture is black and white, but when I swap the slots, the white turns grey instead of completely black, whereas before the black part was completely black. It's like it gets inverted, but the inverted version has no completely black, just grey, even though I set the other slot's color to completely black. Do you have any suggestions?
@@natsunwtk Yes, to get a clear separation, the mask should be only black and white (0 or 1).
I'm not sure what you mean by inverted or gray. You mean the mask looks wrong when you set it as the output? Try setting the color space to "Non-Color Data" in the image texture node that loads the mask.
@@georg240p Yes, something like that. It's already non-color. Before inverting, it shows completely black and completely white, but when I swap slot A and slot B in the Mix Color node, white becomes grey and black becomes white. Sorry, my English is not good enough to explain it properly. (I connected a black and white image to a Mix Color node for bumping. The mask itself is fine, because I also use it for roughness. I connected them just to check if it's masking properly, but it turned out grey instead of black for the bump node.)
@@natsunwtk
The result of the mix node is the strength values that will be used to control the normal map strength. It is not a mask! If it's inverted, that's because you defined the colors this way in the mix node. And maybe that's exactly what you want.
Connect the output of the mix node to the value that you previously used to control the strength of the entire normal map (e.g. in my video I used a multiply node, so in this case connect the mix node output to the bottom slot of the multiply node).
Great video. I do however have a question. How can I use this node setup and still control the strength of both normals? Right now the nodes branch away from the multiply node that controlled the strength of the original normal map in your previous tutorial.
At 5:40 I showed how to change the strength of the detail normal vector (by manipulating the angle theta). You can do the exact same thing to the base normal vector (bottom one). This way you can control the strength of both normal maps independently.
And you can also do the same thing at the end (after the combination) to change the strength of the final result. You can combine as many normal maps as you want, and change the strength at any point in between.
Not sure what you mean by "the nodes branch away from the multiply node". (The multiply node is only used to manipulate the angle theta).
In case you want to send screenshots of your node setup, feel free to message me on discord: umsoea#8675
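In code terms, that strength control is just scaling the tilt angle theta (a sketch of the idea, assuming theta is the angle between the normal and +Z, as in the video):

import numpy as np

def apply_strength(n, strength):
    # scale the angle between the normal and the Z axis by `strength`
    theta = np.arccos(np.clip(n[2], -1.0, 1.0)) * strength
    phi = np.arctan2(n[1], n[0])
    st = np.sin(theta)
    return np.array([st * np.cos(phi), st * np.sin(phi), np.cos(theta)])

strength = 0 flattens the map, 1 leaves it unchanged, and values above 1 exaggerate it.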
2hr to learn this
How can you combine 3 normal maps (base + 2 detail maps)?
just use the resulting vector as the new base normal vector for the 2nd combination
10/10
It's 1 node in redshift
You can turn one vector into a matrix and then multiply with the other vector.
why not just use quaternions? numerically stable and the most efficient.
@@georg240p Vectors and matrices are the simplest and cheapest (computationally)
@@xbzq
The quaternion version I tend to use requires 1 division, 6 multiplications, and 6 additions, and can be used to rotate points as well.
Stephen Hill has a great blog post about it:
blog.selfshadow.com/publications/blending-in-detail/
Would love to know if there is anything faster.
Keep in mind we are doing a shortest arc rotation.
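For reference, the reoriented normal mapping blend from that post boils down to a few operations; transcribed into numpy it looks like this (both inputs as [0, 1] colors, straight from the textures):

import numpy as np

def rnm_blend(base, detail):
    # base, detail: (..., 3) arrays in [0, 1], as sampled from the images
    t = base * np.array([2.0, 2.0, 2.0]) + np.array([-1.0, -1.0, 0.0])
    u = detail * np.array([-2.0, -2.0, 2.0]) + np.array([1.0, 1.0, -1.0])
    r = t * np.sum(t * u, axis=-1, keepdims=True) / t[..., 2:3] - u
    return r * 0.5 + 0.5  # re-encode to [0, 1] for storage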
@@xbzq Quaternions are cheaper to compose and store than matrices, but more expensive to apply. As for which is simpler, quaternions are only "complicated" because they're obfuscated and communicated terribly. They're really just a blending of a 180° rotation (represented as a blending of 180° rotations around each axis) and a 0° rotation.
Why can't you just split XYZ and combine XYZ with an Add node in between for each? Much simpler, same control.
Stephen Hill from Lucasfilm has a great blog post comparing different techniques:
blog.selfshadow.com/publications/blending-in-detail/
You seem to describe the first technique they mention: "Linear Interpolation" (Vector addition, which is the same as averaging the two vectors).
The technique I used in my tutorial gives the exact same result as their proposed method. (They use quaternion shortest arc rotation to make it computationally extremely efficient, but hard to understand)
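For comparison, that linear interpolation blend is just vector addition plus a renormalize (a quick numpy sketch; inputs decoded to [-1, 1]):

import numpy as np

def linear_blend(base, detail):
    n = base + detail  # same direction as the average of the two vectors
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

As the article shows, it tends to flatten out detail compared to the reoriented blend.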
The tiny nodes on the screen and the video editing make it hard to keep up, especially as a non-English speaker. But thanks for your video.
Is it possible to add more normal maps on top of that node? For example, mixing 3 normal maps or more?
Should work fine to just use the resulting normal vector from the first combination as the new base normal vector. Just make sure to keep the Z component positive and renormalize.
@@georg240p got it, thank you very much!
Wouldn't it be easier to just subtract 0.5 from the second normal, scale it, and then just add it to the first?
if you like it, why not?
@@georg240p well, that's an option too
Hi umsoere
bruh
@@georg240p how u been 😂
pssst
Aren't colors in normal maps representation of angles? Can't you just combine colors to combine angles?
Edit: To combine normal maps in Photoshop you can double click the layer, turn off the blue channel, and set that layer to overlay. And you're done. At least that's what I've heard.
No, normal maps store 3d vector data, and treating them as color will 1. produce unnormalized vectors (a normal vector should always be normalized), and 2. not follow any geometric concept, so results can be quite unpredictable. Here is an article explaining this in more detail by comparing color blending techniques with the geometric approach: blog.selfshadow.com/publications/blending-in-detail/
@@georg240p Thanks for info. I'm new to all of this.
I appreciate all the explanation, but..... seriously? This should be such an easy step, without having to connect a million different nodes. It's a bit ridiculous (by now).
overlay filter will combine 2 normals without this math )))
great explanation, but there are other ways ))
Then just use overlay. Who cares. This technique is for those not satisfied with regular image blending techniques. I mentioned that in the first few seconds of the video.
It works, but it's not the "mathematically" correct way of doing it. It washes out a lot of detail, but if one doesn't care about it, it doesn't matter. But I personally would prefer to have both of my textures looking as best as they can and how they are intended.
I always just use overlay and mix them. Does work too, but maybe not as accurate.
I tried it. It works but result is kind of dull.
the easiest way to combine normal maps is to overlay 2 normal maps using mix color
I hate how archaic Blender's shading editor is. Like holy shit, this can be done in two seconds in Substance Painter/Designer. Actually pathetic. But at least we have Grease Pencil! 🙄🙄
This only works when working with just one material, so if you've come here because you have 2 normals from different texture sets, this is not helpful.