I like that the comment section has 3 different solutions, all of them easier than the DECODED version. I love how efficient the community is.
2:54 I would have put a Normalize node on each normal-map component before feeding into the combiner network. Using a Mix instead of an Add for the combiner would give you a way of regulating their relative strengths. Then a Bright/Contrast node can be useful on the output as a way of adjusting the strength of the combination.
Edit: actually, no need for the Bright/Contrast node, since the Normal Map node already has its own Strength control.
I'm not able to get this to work in 2.91. Can you explain a bit further?
You gave me great ideas on modifying the material, but I have to say that your synthesis method will have issues in the end.
To be honest, I used to combine 2 principled shaders with a mix shader. Each principled shader would have its own normal map. Not professional, but it worked. : )
Oh you naughty boy. It worked like a charm. Thank you!
@@defdac I am happy that the unconventional methods are always convenient 😀😂😂😂
@@mwauraerick This is by far the best method in my opinion. Gives a little overlap here and there, but that splitting/combining is just too tedious for me ahaha.
Yeah, that's one good way. I noticed that I need a UV map for the tangent normal maps. Using this method from DECODED, the combination results in artifacts. Using shaders, each with their own normal map, solves the problem.
Thank you! This method gave me better results than using just Vector Math Add or MixRGB Add. I tried all three methods, and this one gave clearer normals than the other two.
oh my GOD I could've used this a thousand times and only now decided to look it up. Thank you so much!!
also this: 2 normal maps into the 2 color slots of a 'MixRGB' node with factor set to 0.5 and mode to 'Linear Light'.
if you want to mix them in photoshop: set mode of top layer to 'Linear Light' with an opacity of 50%. ;-)
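For anyone curious why 50% Linear Light works, here's a minimal plain-Python sketch. It assumes the usual Linear Light formula (base + 2·blend − 1, before clamping) and Blender's MixRGB factor interpolation; the function names are just for illustration. At Fac 0.5 the whole thing collapses to base + blend − 0.5, so blending in a flat normal channel (value 0.5) changes nothing.

```python
def linear_light(base, blend):
    # standard Linear Light blend (linear dodge/burn combined), pre-clamp
    return base + 2.0 * blend - 1.0

def mix_rgb(base, blend, fac=0.5):
    # Blender's MixRGB: interpolate between the base and the blended result
    return (1.0 - fac) * base + fac * linear_light(base, blend)

# with fac = 0.5 this simplifies to: base + blend - 0.5,
# so a flat channel value of 0.5 leaves the other map untouched
```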
This worked really well for me, thanks! I was looking for a way to do this without separating everything.
@Orcaluv26 wdym? That's what the comment explains how to do. Am I misunderstanding you?
@Orcaluv26 Baking it, or doing it as he said in Photoshop.
Perfect, thank you.
The video is a good explanation. Btw, Shift+D is a shortcut to copy a node.
I usually overlay them with a MixRGB node directly and it does the job.
Thanks so much man, this helped me so much for making assets for my stuff
My god, switching to World Space from Tangent Space did the trick :D Mixed the two normal maps perfectly with your trick. Thank you again, cheers.
Thank you so much. I was searching everywhere to find this.
No problem. I was stuck looking for an answer to this for ages too!
@@DECODEDVFX ahahaha
This is pretty useful, thanks mate
Nice method. Splitting the RGB values helped me make more subtle tweaks. A more accurate way of mixing them, though, would be using Multiply instead of Add, but both work.
I'm puzzled as to why we split the channels. Shouldn't the Add-MixRGB treat the channels separately anyway?
And is it intentional that we only add with 50% strength (i.e. 100% × Color 1 + 50% Color 2)?
Holy RNG Jesus! This is simple in concept, though I would have never figured this out haha!!! I am bookmarking, saving this video to a bunch of my playlists, giving a thumbs up, and subscribing!!!! Thanks a lot man, you're a hero!
Glad it helped!
When you said you would be combining 2 normal maps I came up with a very similar method, separating the channels of the two maps, though I envisioned sending the pairs into math nodes with a greater than function, or something to that effect, in order to give greater values of one map overriding features of another. For example: in the final version your tiles flow over the raised parts rather than them being separate features on a flat tile floor. When I saw you put them through Add nodes I realised that would work, but also predicted it would get that result. I guess it really depends on what your desired outcome would be, but it would be interesting to experiment to see what functions returned what results.
Thank You
Great, I did it with one normal map texture and a procedural normal map node. It worked well enough. Thank you for the trick :D
Maybe use a Vector Math node, much easier!
Legend. This saved me ages!!
THIS. Worked flawlessly for me. Should be the top comment.
They should add a "Normal Mix" mode in the MixRGB Node
Thank you very much :)
Why not use Vector Math set to Add?
Brilliant mate, thanks a lot.
I searched in hope of not having to separate :( I guess that's the only way. Also, you don't have to mix blue; blue is always 1 in a normal map, by the way.
You don't have to separate and recombine the channels. The add node already does it per channel and they don't affect each other. I just tested both and it looks exactly the same.
Also, you shouldn't normalize the vectors (or RGB values (which mathematically are vectors since it's a set of three values)). Normalizing means giving them all a magnitude of 1. But not all pixels of a normal map have a magnitude of 1 when you see them as a vector.
I made a test where I combined a normal map of a bumpy surface (A) with one that also has a lot of flat areas (B). So ideally, when combined, in the areas where normal map B is flat, the resulting normal map looks exactly the same as normal map A. With the normalizing node, it looked completely different, but it looked exactly the same when I just left out the normalizing node and used a strength of 2 for the combined normal map, compared to a strength of 1 for just normal map A. So it looks like you just have to double the strength.
But I only got the right result when I combined the two normal maps with a mix node with factor 0.5 instead of an add node.
edit:
No, wait. I didn't think enough about what the numbers of the different channels in the normal map actually represent. They are just x, y, and z coordinates of normal vectors at every pixel mapped through the UV map onto the mesh. And of course, normal vectors are normalized. So technically, normalizing the vectors again is actually correct. However, as I mentioned, using the method described in the video, I got the wrong result, but it looked right using the method described above.
I think the main problem is that the values of each channel of a normal map are in the range [0, 1] in Blender (or [0, 255] if you look at the file). So the values can't be negative. But the coordinates of a normal vector also can be negative.
So I tried out the following:
- A Vector Math node set to Subtract right after both image textures, subtracting (.5, .5, .5). Now we've brought our vectors from the range [0, 1] into the range [-.5, .5].
- Combine the result of those two vector math nodes with MixRGB in add mode with Fac on 1. When you add two vectors in the range of [-.5, .5] the result will be in the range [-1, 1].
- Put that through another vector math node and add (1, 1, 1). Now you shifted the range to [0, 2].
- Put that through another vector math node and divide by (2, 2, 2). Now you shifted the range to [0, 1]. That's the range we would get from a normal map.
However, this turned out to give me the same result as the method I described above. And in the Normal Map node, I still had to double the strength. So I guess I'll just stick with that method.
At the end of this video (ua-cam.com/video/34BYCkQhHhg/v-deo.html), he's talking about some node groups that combine normal maps and about an article that goes deeper into the mathematical background. I did not yet read it, but as soon as people actually start working with the mathematical background it should be more promising to lead to the correct result. Perhaps I'll come back to that when this simpler method gives me a result that just doesn't look right.
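The range-shifting steps described above can be sketched in a few lines of plain Python (a hypothetical `combine_normals` helper, not a Blender API; pixels are RGB tuples with values assumed in [0, 1]):

```python
def combine_normals(a, b):
    """Combine two normal-map pixels (RGB tuples with values in [0, 1])."""
    # shift each channel into [-0.5, 0.5], add them (range [-1, 1]),
    # then shift and halve the result back into [0, 1]
    return tuple(((ca - 0.5) + (cb - 0.5) + 1.0) / 2.0 for ca, cb in zip(a, b))
```

Note that combining a map with a flat pixel (0.5, 0.5, 1.0) halves its deviation from flat, which matches the observation above that the Normal Map node's Strength has to be doubled afterwards.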
Thank you!
I have been trying for a long time to sculpt my own fine detail into DAZ figures. They only allow the base mesh to be modified via sculpting in apps like blender and ZBrush. Using an additional blendedNormal Map (as you have done here) may be just the workflow I need. Thank you!
🤞
Thank you! I will suscribe.
Thanks
Why not just use Vector Math? Vector Math set to Add, with the 2 normal maps plugged in and the result going into the Normal input, works as well.
SUPER JUICY STUFF, keep it up!
Now, is there a way you can use this to bake a brand new normal map that combines them? I want to know so I can minimize the amount of memory the textures take up by using one image instead of two.
I just put a MixRGB in between them, set to Color. I tried it with baking and it's all good.
I like that, but I'm wondering how I can bake them so I have one normal map instead of several, for game purposes?
Wouldn’t it be better to multiply the colors, seeing as they’re all normalized vectors? Or what about taking the cross products?
Hi! In case I need more strength on the second normal map, how would I do that?
Why is the factor of the Add MixRGB node set at 0.5? And what's your thought on the Vector Math Average node?
I'm wanting to create realistic skin texture and shading. I only know how to texture paint the diffuse, but it looks so bad. Would you please tell me what PBR maps are required to create a realistic human skin shader? And also, could we use a normal map to fake the detail on the skin of a low-poly human mesh? Thanks...
Good realistic skin needs at least maps for diffuse, bump, roughness, and subsurface scattering. There are some good videos on UA-cam for painting skin maps. And the answer to your question is yes, you can fake the skin details with a normal map on a low-poly mesh.
But why do you separate and combine again? Why not just throw both into one MixRGB node? Doesn't that have the exact same effect? Makes no sense to me.
Factor for add nodes should be 1.0
TY very much sir! I can't help but theorize using this in some sort of way could essentially hide seams that have to be in the open. Perhaps switch the seams on the same model and obviously bake the different maps and blend the maps in a way that hides the seams. If I am right and figure it out I'll post how I did it. If anyone else is interested in such a thing let me know what you think or if you have done it let me know please!
i love you
If you try to connect the normal textures to a Vector Math node directly, it will give an approximate result, but I can't tell whether it comes out right or with problems.
You can do that, but it sometimes creates issues.
Wow! This is good, but how do I bake this combined normal map for Unity use?
You could simply connect the normal output from the Normal Map/Bump node into the diffuse/color slot of the (Principled) shader and bake the color ;)
Hello Decoded, what about mixing 2 Image Textures with a MixRGB, then Normalize? It seems to give the exact same result...
Different results
In normal maps, only the red and green channels contain information, so adding the 2 blue channels of both is useless; just pick one of them and use it to recombine ... 😎
I noticed that using the blue channel from only one map sometimes creates weird artifacts on the output. I have no idea why. That's why I recommended that people combine both blue channels into one output, even though it does smooth out the effect of the normal map a little bit when you combine them.
Actually, all 3 are important for normalization.
@@lawrencedoliveiro9104 Interesting: I learned that there are 2 different normal map formats (OGL and DX), but that only the red and green channels contain useful information.
If you have some link to explain the role of the blue channel I would be interested ...
@@olory3869 According to docs.substance3d.com/spdoc/project-creation-28737541.html , the difference between the two is the sign of the Y. In all cases, the preferred format is in tangent space en.wikipedia.org/wiki/Normal_mapping which means the Z points outwards, and it could indeed be ignored as you suggest, provided you can be sure the vectors are normalized.
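To illustrate both points in plain Python (hypothetical helper names, not any library's API): the DX/OGL difference is just the sign of Y, i.e. a flipped green channel, and for a unit-length tangent-space normal the blue (Z) channel can be rebuilt from red and green, which is why it carries little extra information.

```python
import math

def dx_to_ogl(pixel):
    # the OGL/DX difference is the sign of Y, so flip the green channel
    r, g, b = pixel
    return (r, 1.0 - g, b)

def reconstruct_blue(r, g):
    # decode [0, 1] -> [-1, 1], solve z = sqrt(1 - x^2 - y^2)
    # (valid only if the stored normal is unit length), re-encode to [0, 1]
    x = 2.0 * r - 1.0
    y = 2.0 * g - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (z + 1.0) / 2.0
```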
What if I have to combine 3 maps? In this vid you combined 2.
You apply this method for the first 2 maps, then you take the combined map and mix it again with the last one.
What about the Internal renderer? I don't do this node crap.
Internal died 4 years ago. 3 years when you posted that comment. Use Eevee if you like, it's even faster than Internal was. You still have to get with the times and node up.
Yeah, good night. 400 NODES? ENOUGH ALREADY!!!!1
Jeeez, way too overcomplicated, dude.
How would you do it?
@@Deadequestrian Just feed both normal maps directly into a MixRGB node set to Mix with Factor 0.5. Another comment here says setting the MixRGB to Overlay might be somewhat more accurate. But in any case there's no point in splitting then recombining since there's no cross-talk of any kind within the Add node. It's just complicating things for no benefit.
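A quick sanity check of the no-cross-talk claim, as a plain-Python sketch (hypothetical function names): mixing whole RGB triples gives exactly the same numbers as separating the channels, mixing each one on its own, and recombining.

```python
def mix_whole(c1, c2, fac=0.5):
    # MixRGB in Mix mode operates on each channel independently
    return tuple((1.0 - fac) * a + fac * b for a, b in zip(c1, c2))

def mix_split_recombine(c1, c2, fac=0.5):
    # Separate RGB -> mix each channel with a Math node -> Combine RGB
    r = (1.0 - fac) * c1[0] + fac * c2[0]
    g = (1.0 - fac) * c1[1] + fac * c2[1]
    b = (1.0 - fac) * c1[2] + fac * c2[2]
    return (r, g, b)
```

Since both paths perform the identical per-channel arithmetic, the split/recombine network adds nodes without changing the result.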
1080p??? uhhh
Use substance painter
It doesn’t have smart node-based materials, though.