Hey! I know this is a bit off topic to your current video, but do you know of any smart way of masking off the world to only render objects that are inside a sphere? I've tried using the stencil buffer, but the problem is that if you angle the camera you will continue to see the object despite it not physically being inside the sphere... I need to achieve a type of snow globe effect, like a little world inside a sphere☺️ Thanks for the awesome videos! Keep them coming👌🏼
Not entirely sure what you're trying to achieve, but a sphere can be defined by just a point and a radius. So if you're writing/editing the shader of the objects that are within the globe, you could pass world-space positions through from the vertex shader and use clip/discard against the globe's point and radius.
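[Editor's note: a minimal sketch of the clip/discard idea described above, assuming the globe's center and radius are passed in as material properties — `_GlobeCenter` and `_GlobeRadius` are made-up names, and shading is omitted:]

```hlsl
// Hypothetical properties describing the snow globe
float4 _GlobeCenter;   // world-space center of the globe
float _GlobeRadius;

struct v2f
{
    float4 pos : SV_POSITION;
    float3 worldPos : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // Pass the world-space position through to the fragment shader
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // clip() discards the fragment when its argument is negative,
    // so anything outside the sphere is culled regardless of camera angle
    clip(_GlobeRadius - distance(i.worldPos, _GlobeCenter.xyz));
    return fixed4(1, 1, 1, 1); // real shading would go here
}
```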
I've just recently discovered your channel. I've put shaders off till the end. But I think every math professor would be so jealous of your ability to explain these abstract concepts. Thank you
I'm not going to complain about this video because I didn't understand it. The lack of knowledge is mine to own. I felt if I watched it with the pause button looking up what I didn't get and replaying the bits I'd almost got then I'd really understand it. I've watched some terrible tutorials that really are just someone showing off. This is not one of them. Thanks. By the way, I do understand way more than I did before watching it. Big thanks.
I'm a bit late, but your definition of the dot product is only true for unit vectors. In general, a · b = |a||b|cos θ; for instance, the dot product of a vector with itself is the square of its length. That being said, normals will of course be unit vectors, but I mention this for the sake of accuracy.
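[Editor's note: in shader terms, the cos-of-angle reading only holds once both vectors are normalized — which is why a typical Lambert term normalizes first. A sketch using Unity's built-in light variables:]

```hlsl
// N and L must both be unit length for dot(N, L) to equal cos(theta)
float3 N = normalize(i.worldNormal);
float3 L = normalize(_WorldSpaceLightPos0.xyz); // direction, for a directional light
float ndotl = saturate(dot(N, L));              // clamped cosine falloff
fixed4 col = _LightColor0 * ndotl;
```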
you explained dot product and I was like "cool, I wondered what exactly the dot product was doing", then you said vector to a light source from a vertex, and it was like the techno wizardry of how 3D models are lit became clear.
hey, nice video, but is it possible to explain a bit more about why the blueish purple color is that color? E.g. when we divide 255/2 = 127.5 it results in 128 for the red and green channels as the mid value, but for blue we divide 255/1 = 255, so we get a (128, 128, 255) value. So my question is: does 127.5 round to 128, or should dividing the R and G channels be 256/2 = 128? thanks
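[Editor's note: for what it's worth, the 128 comes from the usual (n * 0.5 + 0.5) * 255 packing rather than a straight division: a flat normal (0, 0, 1) maps to (0.5, 0.5, 1), which rounds to roughly (128, 128, 255). A sketch of the shader decoding that reverses it, with `_NormalMap` as an illustrative texture name:]

```hlsl
sampler2D _NormalMap;

// Tangent-space normal stored in a texture: each channel is in [0, 1]
float3 packed = tex2D(_NormalMap, i.uv).rgb;
// Undo the n * 0.5 + 0.5 packing to recover a [-1, 1] vector
float3 n = packed * 2.0 - 1.0;   // a flat pixel (0.5, 0.5, 1) -> (0, 0, 1)
```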
You should really take more time to go a little bit into details, because you obviously know what you are talking about, but please explain what you show more thoroughly.
Honestly, I stopped seriously listening after 4 minutes. There is a lot being explained, but you're 15 levels above most everyone else, and giving the people who are already at level 15 the grand tour. But you aren't offering a ladder for most average people to ascend to your level. TL;DR: if you already have in-depth knowledge of the terms used in the video, you had a ball and learned. If you're just a random person who wanted to know what those blue textures labeled "NMp" in your game folder were, little to nothing was gained from this video.
If you're at the level of not knowing what a normal map is, then yes, this video is not for you. This is for tech artists and graphics engineers who have already dabbled in shader programming. I have more beginner content on my channel, but there's no way around it: shaders are software engineering on GPUs. I try to teach small linear algebra things where it makes sense to, and make things as approachable as possible. At the end of the day, graphics programming is as challenging to learn as it is to teach.
Maaaan, thank you for *finally* describing this in a way I can understand. I could never understand why a "flat" normal map was 0,0,1, but makes total sense now when you say we're multiplying the normal map values with the tangent/bitangent/normal.
Ah man dude, I really tried to follow along, but you are not making it easy. Please explain more how you got these two different shaders at 2:35. The code before that only produces the right one :( Later on, with the example at 10:12, my light does not change the shadow on the sphere. I am really trying to code along but it's not working out ;_;
There is no shader that produces the left side vs the one on the right (at least, not a shader I describe in this video). The concept being explained here is that when normals are shared at each vertex, rather than split, the interpolation that occurs inside the shader governs whether the surface appears smooth or faceted. If you wish to achieve the faceted look, the easiest way is to set your mesh to import with Calculated Normals at 0, or remove all smoothing groups in the modeling suite. 10:12 is the end of the video, so I'm not sure which other shader you're referring to. Ultimately, this video is meant to be a description of what normals are, where they come from (mesh normals vs tangent-space normals), and how they're used for basic lighting. It isn't intended as a watch-and-code sort of tutorial. That being said, all the shaders that produce the various intermediate steps are included in the github, linked in the description. I think it would be much more valuable to check out that code than to try to infer the final shaders from the small snippets in the video. I have received other feedback regarding resources vs. video content; in the future I think I'll call out the github link at the beginning and recommend people follow along with the code. Sorry for the confusion, please don't hesitate to hit me up with other questions!
This was extremely informative, and I'm not even using Unity. Though I filtered and saved a lot of what you said for later, this helps me finally GET wtf normals are, why they're the colors they are (I was like, is there some standardized gamut everyone is using?) and HOW they work. Even though directionality can change by app, e.g. in Keyshot vs Blender, there's plenty of transforms and toggles to fix them. My main confusion was why they were even necessary when you have a bump map, geometry, and in some cases a displacement map. FINALLY I feel a little less dumb today. :) thank you!
Makin' Stuff Look Good. I have a "ray" object with a vertex origin and a direction vertex that points off to "infinity". I know how to render this by hand by calculating a point on the "horizon", but have no idea how to model this in the form of shaders. Can you point me in any helpful direction? Thanks, I appreciate any help.
Awesome video. Learned a lot about normal mapping because you just showed it how it is. Best way I learn things :D What is that thing with the top hat? Looks like a state of the US?
How do you calculate the normal at a vertex? If a normal is usually found with a cross product, are vertex normals just averages of the normals of the faces around them?
I know these might be really easy, but maybe they could be a quick video that gets good traffic (because of the game). Maybe you could do a quick case study on the enchanted item icons and the end portal in Minecraft, and whatever other cool shader stuff is in that game... Can't think of any off the top of my head. Love the depth of knowledge you show in your videos... I only understand some of it, but it reveals there is a lot I didn't know I didn't know. Maybe these (what I presume to be) simple shaders or tricks would be something a broad, lower-level (relatively speaking) audience might want to implement, and they could be knocked out relatively quickly.

Also, maybe you could consider doing a video showing us the best way to interact with shaders via script, such as turning them on/off like at the end of the Pokemon dive video, or modifying them with GUI controls or through user selection, like you did in your web demo of the Spelunky shader. A Unity talk on mobile optimization has me paranoid about it... if you watch it you will understand why, haha, but he talks specifically about shaders at 34:35: ua-cam.com/video/j4YAY36xjwE/v-deo.html. Maybe you could talk some about shaders and running them on mobile.

Just some thoughts. Probably the most technical, in-depth Unity information with professional content that I've found on youtube. Great work, hope to see more soon.
Could you make a video on terrain shaders? One with triplanar mapping for dealing with texture stretching, and UV resizing by distance to reduce patterns? (That would be amazing, lol.) Anyway, there's hardly any info on terrain shaders for Unity on the internet. Most free terrain shaders out there are 4 to 5 years old and don't work anymore, so it doesn't matter whether you want to use them or learn from them. I was thinking it would be of great use to many people.
I have an idea for what you could analyse next: Prey's Looking Glass technology... it's basically the same as the portals in Portal, but still very interesting, and a shader breakdown would be great.
@5:09 could you explain this bit more? How did you get that texture? I'd really like to find a good way to take 3D animations and bake them in 2D with normal maps for games.
I have watched all of your videos, but I still don't quite get the distortion UV maps. You sometimes use the UV color maps, other examples have the normal color map... and finally some people distort the UVs using a black-and-white one-channel texture. What is the difference?
Hey man, any idea how Hearthstone makes those green energy auras around selected cards? I'm not sure if it's just animated textures/sprites or an actual fancy effect, or maybe a combination of animation + some bloom.
The tangent part was a little unclear though. Am I right if I simply say that the three RGB values are storing the vector normal to the surface? It may be relative to something, but I can't tell. I need to give it a bit more time.
the best part of this is... most users of 3D creation software don't have to understand one bit of what you said... but we are eternally grateful that people like you do. thank you.
If using forward rendering, why would you want to transform per-fragment normals from tangent space into world space? Isn't it better to transform the light direction/position from world space to tangent space instead? It would save operations in the fragment shader.
9:06 - yup, I mentioned that. It's generally easier to think about and visualize normals in world space, so I explained it this way. As well, there are effects that require world normals anyway, so it's good to understand how you would get them out of a normal map.
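[Editor's note: a sketch of the tangent-space variant the question describes — rotating the light direction by the TBN basis once in the vertex shader so the fragment shader can use the sampled normal directly. Variable names are illustrative:]

```hlsl
// Build the tangent-space basis per vertex (world space)
float3 n = UnityObjectToWorldNormal(v.normal);
float3 t = UnityObjectToWorldDir(v.tangent.xyz);
float3 b = cross(n, t) * v.tangent.w;   // bitangent; handedness lives in tangent.w
float3x3 worldToTangent = float3x3(t, b, n);

// One matrix multiply here saves doing it for every fragment
o.tangentLightDir = mul(worldToTangent, _WorldSpaceLightPos0.xyz);
```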
This was excellent.
I'm doing an advanced diploma that includes OpenGL, and this is something we've recently covered. Except yours was a much better resource than theirs. ^.^
Didn't understand 70% of what you just said... feels bad man.
feels bad too
thank god I'm not the only one
They say, "I can explain it to you, but I can't understand it for you." The video content is perfectly explained.
@@stevejones9044 For someone who already knows something about it, sure, but there are many words that people would not know. Still a good video.
For me the most important take-away from this is the basic explanation of why normal maps are so important and what they do. The light bulb just clicked on, "that explains a lot of things".
great video !! Please keep uploading new videos
How would one create normal data if they don't have the original objects but they do have a texture? Note: This is a technical question, not a "use application " question.
I love your work dude. I'm putting a bug in .. for fur/hair shader!
that's so helpful
you're making a great difference for the beginner shader-programming community... keep it up
So not only do your videos show off making stuff look good in video games, but it also shows off making the illustrations in the videos themselves look good. The animations illustrating the concepts you're describing starting at around the 1:18 mark are particularly exquisite.
I'm here to tell you that the time, polish, and effort you put into these videos have not gone unnoticed! Well done.
Aaron Misner, well said
Totally agreed. In fact I'm curious how you produced these. Are these Unity scenes as well?
These are high quality teaching videos.
Your videos are a treat to be honest, and your case study videos are eye opening to all the possibilities that can be done through shaders. How does one acquire a deep knowledge and understanding of shaders as yourself? How did you learn this stuff? There's no other channel or resource that is half as good.. and the blogs tend to be introductory, and no one tackles 2D. I hope you could shed some light.
awesome video dude!
the way you make your videos is perfect, don't change anything!
funny and highly educational.
Gamma correction has almost nothing to do with human vision... That's a misconception/myth, and it makes me angry whenever I see it propagated further. Our eyes, just like a camera's sensor, record values in linear space. We see linear values. We need linear values. We never see gamma values.
The only reason for gamma correction is backwards compatibility with old CRT technology. Back then, the monitor tech was so bad that it displayed darker values than those expected and supplied by the GPU (coincidentally, in a non-linear fashion). That meant a value of 120 was displayed as less than 100 (not accurate, just an example). People figured out that by supplying higher values than originally intended, they could get the monitor to display the required brightness. To do that, they started to gamma encode values to higher values (which, again, works in a non-linear fashion). By supplying these gamma-encoded values, the monitor was (through the nature of its cathode ray tube) capable of decoding those values down to the required ones (keep in mind that this decoding was an abstract, conceptual decoding; there were no actual mathematics performed by the monitor).
After the demise of CRT technology and the birth of LCDs, monitors were finally able to correlate the GPU-supplied data with on-screen brightness (which meant that 120 was 120 on the screen). The biggest issue, however, was that the ENTIRE INTERNET was filled with gamma-encoded images, and backwards compatibility was required (there was no way to purge all the servers and change the standard to no gamma correction without making all images up to that point obsolete). "Why is that?" - you might (hopefully) ask. Well, if you try to display a gamma-corrected (encoded) image on an LCD with gamma correction features disabled (gamma = 1.0), you'd get a very bright and washed-out image (remember, gamma-encoded/corrected values are higher than those intended).
LCD manufacturers figured that the only way to keep going forward was to cancel gamma correction from images at the software + hardware level. The LCD has a LUT (LookUp Table) chip which is responsible for doing gamma expansion/decoding, effectively canceling gamma correction and displaying values as originally intended (basically doing, through mathematical means, the same thing that a cathode ray tube was doing naturally). To enable this decoding process, you need to set your monitor to a gamma of 2.2 (the number was chosen as an average of 2.0 - 2.4 during the old CRT days, because it gave the best results for a wide range of monitors), because almost every artist encodes using that value. Of course, panel quality is very important here, so a value of 2.2 might not provide the exact experience intended by the artist (the LCD panel might be bad at reproducing colors / have bad backlighting).
Basically, the entire process is like this: an image is created (values are in linear space) -> then gamma encoded (non-linear space) -> stored somewhere. Somebody comes along and downloads said image -> opens it -> (OPTIONAL STEP) if they want to alter the image, they have to do all their calculations by first decoding the image (getting it in linear space), then calculating, and after that re-encoding (back to non-linear space) -> gets sent to the GPU -> GPU sends it to the monitor -> the monitor uses its LUT chip to decode the image (linear space) -> image is displayed.
Gamma correction is at this point a useless vestigial artifact of a bygone era, on account of bad early tech.
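[Editor's note: the encode/decode round trip described in the pipeline above is just a pair of power curves. A sketch, assuming the common 2.2 exponent the comment mentions:]

```hlsl
// Gamma-encode a linear value for storage/transport (exponent 2.2)
float3 gammaEncode(float3 linearColor)  { return pow(linearColor, 1.0 / 2.2); }

// Gamma-decode back to linear space before doing any lighting math on it
float3 gammaDecode(float3 encodedColor) { return pow(encodedColor, 2.2); }
```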
Can you please make tutorials on water effects like in Ori and the Blind Forest or Rayman? It would be very helpful :)
You should do a shader case study of Super Mario 3D World. This game is FULL of little details that make the game beautiful and some are... Amazing
Am I just crazy or are you using Donkey Kong Country Soundtrack as background music (5:41 sounds like stickerbrush symphony)?
Edit: Lol just saw the Video notes and I'm right :-)
Awesome video! Would you be able to do one that shows off how Heroes of the Storm does their death & waiting-to-respawn screen effects?
Very educational! 👏
Anyone who understands what you're saying doesn't need this tutorial.
"I'll get into art as I recover burnout after studying computer science in university. I'm sure everything will be just very calm and no math involved. :)" (jokes aside really good video but lol)
LOL HE IS ACTUALLY ALIVE
I would love if you went back to the exercises you had in shaders 101, it helps reinforce what you are saying so I am not just listening along. Keep up the awesome videos though!
at 4:50 you show the matrix multiplication. Shouldn't you do mul((float3x3)unity_ObjectToWorld, v.normal) instead? You use the inverse matrix, which also happens to be the transpose, on the right side. Matrix multiplications are not commutative, so while in this particular case it works and does transform from object to world space, it might not always be the case. If I'm not mistaken, Unity object scale does some weird stuff if you get it wrong; besides, it's the correct way to read it.
Yeah I probably shouldn't have moved past that so fast. What you've written here will work but only for uniform scaled transforms. Right side by the inverse will work properly with scaling. It's been a while since I've looked at the proof for that, I'll try and find a good source and make an annotation. Thanks!
really? I thought it was the other way around. Please do share that information when you find it, I might have been doing it the wrong way all along without realizing.
This shit literally blew my mind... this legit made me realise that the games I play use flattish surfaces and they just have fucking wild normal maps so it lights differently... BLEW MY MIND
Question. Question. Question. At 4:50, shouldn't it be the ObjectToWorld matrix??? And also, shouldn't the matrix be the first parameter? Is the opposite order of the parameters what causes the matrix to be the inverse?? So many questions.....
Very good questions! I don't know why I skipped over this. We are transforming a normal, which is slightly different than just transforming a direction (that's what the math you're suggesting performs). You need to multiply a normal by the Inverse-Transpose of your model matrix to properly account for non-uniform scaling. If you look up why you have to do that you'll come across a stack overflow page with several proofs and different explanations, but the goal is to not squash/stretch your normals if you're scaling non-uniformly.
Then a little trick that Unity actually uses in their cg include files, is to perform the multiplication on the right side. This is the equivalent to multiplying by the transpose from the left, except that mathematically you'd be multiplying a 3x1 matrix instead of a 1x3. But shader compilers don't care, they let you use a float3 as a column or a row contextually.
So the inverse of ObjectToWorld is WorldToObject, and multiplying from the other side accounts for the transpose, which all in all gives us the effect of multiplying by the inverse-transpose.
If you can assume uniform scaling, you can just use ObjectToWorld from the left-hand side as you have suggested and your normals will look just fine :)
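[Editor's note: as a concrete reading of the two options discussed above, using Unity's convention where a vector on the left of the matrix acts as a row vector:]

```hlsl
// Plain direction transform: fine only if the object is uniformly scaled
float3 worldN_uniform = mul((float3x3)unity_ObjectToWorld, v.normal);

// General case: multiply by the inverse-transpose. WorldToObject is the
// inverse, and putting the vector on the LEFT is what supplies the transpose.
float3 worldN = normalize(mul(v.normal, (float3x3)unity_WorldToObject));
```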
Thanks for your response. This actually motivated me to spend 3 hours reading up on this whole matrices thing. Thanks for being informative as always. Also, I just made a video showing off some of the shaders I wrote. You can check it out on my channel if you want. No pressure tho.
Wow there's some really cool stuff in there! The mystique transition is awesome. How does that look on a more complex model?
Makin' Stuff Look Good I'm glad you like it. It looks pretty cool. It supports tessellation so you can effectively swap out the model by baking 2 different models on the same base mesh. I'll probably make a video highlighting each of them. Again thank you for your feedback :)
Speaking of normals and lighting. I think you should enlighten developers about usage of MatCap shaders and their advantage in mobile development. I've found them extremely useful.
I came here to finally understand what a normal map is... and not only did I not understand a word... but now I am even more afraid to dig in. Yes, I had problems at school and my maths level is that of a child, but there are a lot of people like me... we also have the right to know!! Hahahaha...
Amazing video! Do you plan on talking about parallax shaders in the future?
EDIT: Also, how about stencil buffer?
Both of these shaders interest me :)
No plans for parallax mapping in the future, but if you have GDC vault access (or you can get access through a friend/work/whatever), part of the talk "Shaders 102" covered a similar technique in Unreal. The talk as a whole is worth checking out.
And stencil buffer ahhh.... I used it in my See through effects video but it's not the best use case for stencil stuff. Compelling reasons to use stencil buffer are pretty few and far between as most ways to use it can be achieved with the depth buffer already. I'll try and think of a cool way to use it for a future video hopefully!
I appreciate the reply :)
You've lost me in the middle of the video when it goes from beginner to advanced too fast.
Was it when we got fancy? Sorry! I'll try to make the fancy transition more gradual in the future.
Yeah, after that point it got too fancy too fast. At least for me. Anyway, your earlier videos really helped me to start learning shaders. They're one of the best out there. Thanks for making them!
I feel the same. At the beginning of most of your videos I think it's easy as I understand and already know what you're talking about, unfortunately by the time the video ends I feel completely dumb.
So yeah, I guess the gap is huge. I'm sure part of it can be explained by the fact even though I know things about 3D, lighting and such, I lack the basics in shaders. So I don't know if that's valid feedback for you..
Well you can't really explain or learn everything about normals in 10 minutes. This is a good introduction but you need to seek more training to truly understand
leftyfourguns yeah, trying to explain this to anyone not in graphic design or applied physics, their eyes glaze over the first time you use a "big word" (hate that) and then they want you to go back to being your "fun black self" instead of the "white you", like they know which one is which or like I'm either one. Like my world space lighting is always flipped on the x-axis and tangent values to their vectors and are in rim light detection to what they think is the real actual own map lighting worldspace
finally somebody talks about how to orient the normals relative to texture coordinate
[Previous knowledge of texturing terms necessary to understand anything]
Hey! I know this is a bit off topic to your current video, but do you know of any smart way of masking off the world to only render objects that are inside a sphere? I've tried using the stencil buffer but the problem is that if you angle the camera you will continue to see the object despite of it not being physically inside the sphere... I need to achieve a type of snow globe effect, like a little world inside a sphere☺️ Thanks for the awesome videos! Keep them coming👌🏼
Not entirely sure what you're trying to achieve, but a sphere can be defined by just a point and a radius. So if you're writing/editing the shader of the objects that are within the globe, you could pass through world space positions from vertex shader and use clip/discard with the globe's point and radius.
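Here's a minimal sketch of the test described above, in plain Python rather than HLSL (the function name `inside_globe` is made up for illustration). In a shader, the same condition would feed `clip`/`discard`.

```python
def inside_globe(world_pos, center, radius):
    """The per-fragment test described above: keep the fragment only if its
    world-space position falls inside the sphere; otherwise discard it.
    Comparing squared distances avoids a square root per fragment."""
    dx = world_pos[0] - center[0]
    dy = world_pos[1] - center[1]
    dz = world_pos[2] - center[2]
    return dx * dx + dy * dy + dz * dz <= radius * radius

# In HLSL this would be roughly: clip(radius * radius - dot(offset, offset));
center, radius = (0.0, 0.0, 0.0), 2.0
print(inside_globe((1.0, 1.0, 0.0), center, radius))  # True: fragment kept
print(inside_globe((3.0, 0.0, 0.0), center, radius))  # False: discarded
```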
I am doing research for my PhD and this is 10 times better than any scientific paper i have read about Mikktspace normals. Thanks a lot
As it happens, normal maps are on my to do list for my dissertation! good timing as they're going in in a few weeks :)
I've just recently discovered your channel. I've put shaders off till the end. But I think every math professor would be so jealous of your ability to explain these abstract concepts. Thank you
Also, if you ever do one on linear algebra/matrix multiplication, etc, I'll be one of your biggest fans.
I'm not going to complain about this video because I didn't understand it. The lack of knowledge is mine to own. I felt if I watched it with the pause button looking up what I didn't get and replaying the bits I'd almost got then I'd really understand it. I've watched some terrible tutorials that really are just someone showing off. This is not one of them. Thanks. By the way, I do understand way more than I did before watching it. Big thanks.
I'm a bit late, but your definition of the dot product is only true for unit vectors. The dot product of two like vectors is the square of their shared length. That being said, normals will of course be unit vectors, but I mention this for the sake of accuracy.
you explained dot product and I was like "cool, I wondered what exactly dot product was doing", then you said vector to a light source from a vertex, and the techno wizardry of how 3D models are lit became clear.
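A tiny numeric sketch of that idea: the dot product of the unit surface normal with the unit direction to the light is the cosine of the angle between them, which is the classic Lambert brightness term.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return [x / l for x in v]

# Lambert term: how directly the surface faces the light.
normal = normalize([0.0, 1.0, 0.0])    # surface points straight up
to_light = normalize([0.0, 1.0, 1.0])  # light is 45 degrees overhead

# Clamp at zero: surfaces facing away from the light get no direct light.
lambert = max(0.0, dot(normal, to_light))
print(round(lambert, 4))  # cos(45 deg) ~ 0.7071, i.e. ~70% brightness
```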
Awesome. This is a fantastic video, thanks a lot! Subscribed
worldNormal = red * tangent + green * bitangent + blue * normal (blue shows how much of the mesh normal to use)
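That formula can be checked numerically; here's a minimal Python sketch of it (the tangent/bitangent/normal basis below is a hypothetical example for a surface facing +z):

```python
import math

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

def tbn_to_world(sample, tangent, bitangent, normal):
    """worldNormal = r*tangent + g*bitangent + b*normal, exactly as the
    comment above states. `sample` is the decoded normal-map texel in -1..1."""
    r, g, b = sample
    return normalize([
        r * tangent[i] + g * bitangent[i] + b * normal[i] for i in range(3)
    ])

# Hypothetical basis for a surface facing +z with standard UV orientation.
T, B, N = [1, 0, 0], [0, 1, 0], [0, 0, 1]

# The "flat" normal-map texel (128, 128, 255) decodes to (0, 0, 1):
flat = tbn_to_world([0.0, 0.0, 1.0], T, B, N)
print(flat)  # [0.0, 0.0, 1.0]: just the mesh normal, undisturbed
```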
Excellent yes I understand everything now thank you
Love to see new videos! Glad you still have a little free time with your new job!
Why can’t we just use the normal value instead of having to calculate the tangent value?
hey, nice video, but could you explain a bit more about why the blueish purple color is that color? E.g. when we divide 255/2 = 127.5, we end up with 128 for the red and green channels at the mid value, but for blue we use the full 255, giving a (128, 128, 255) value. So my question is: does 127.5 become 128, or should dividing the R and G channels be 256/2 = 128?
thanks
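For what it's worth, here's a sketch of the common convention the question above is circling (assuming the usual byte = (x * 0.5 + 0.5) * 255 mapping): the midpoint 127.5 can't be stored in a byte, so a "flat" component rounds to 128 (some tools emit 127), and the tiny decode error washes out when the shader renormalizes.

```python
def encode_channel(x):
    """Map a normal component in -1..1 to a stored byte, using the common
    convention byte = round((x * 0.5 + 0.5) * 255). The value 0.0 lands on
    127.5, which a byte cannot hold exactly, so flat normals round to 128
    (some tools round down to 127 instead)."""
    return round((x * 0.5 + 0.5) * 255)

def decode_channel(c):
    """Inverse mapping typically done in the shader: x = c/255 * 2 - 1."""
    return c / 255.0 * 2.0 - 1.0

print(encode_channel(0.0))   # 128: the red/green of the "flat" purple
print(encode_channel(1.0))   # 255: the blue of the "flat" purple
print(decode_channel(128))   # ~0.0039, not exactly 0 -- a tiny error that
                             # disappears once the shader renormalizes
```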
You should really take more time to go a little bit into the details, because you obviously know what you are talking about, but please explain more than what you show.
My brain usage reached 15% while watching this.
"Tangential explanations" - I see what you did there
Normal map
This is your daily dose of Recommendation
What in god's name are you going on about
Thank you, this was very helpful.
while i also use unity, it would probably be helpful to also highlight where all these precalculated values like the matrices come from
bitmap texture = matrix
Amazing video, best tutorial I could find on here that explains normal maps
is that DK country 2 music playing??
what a brilliant explanation....well done
hello, good videos. i want to translate this one to Russian and upload it on my channel, any suggestions?
What a well explained video…….What ?
Honestly, I stopped seriously listening after 4 minutes.
There is a lot being explained, but you're 15 levels above most everyone else, and giving the people who are already at level 15 the grand tour. But, you aren't offering a ladder for most average people to ascend to your level.
TL;DR, if you already have in-depth knowledge of the terms used in the video, you had a ball and learned.
If you're just a random person who wanted to know what those blue textures labeled ”NMp” in your game folder were, little to nothing was gained from this video.
If you're at the level of not knowing what a normal map is, then yes, this video is not for you. This is for tech artists and graphics engineers who have already dabbled in shader programming. I have more beginner content on my channel, but there's no way around it: shaders are software engineering on GPUs. I try to teach small linear algebra things where it makes sense to and make things as approachable as possible.
At the end of the day, graphics programming is as challenging to learn as it is to teach.
I can't concentrate, the DK song is too distracting.
Shaders Case Study: LANDMARK NEXT please! Big tutorial on how to make this system!!!
Great stuff. New subscriber!!
Please make videos using Unity's ShaderGraph and VFX to replicate Good looking Games.
and again your Content is Amazing!
Maaaan, thank you for *finally* describing this in a way I can understand. I could never understand why a "flat" normal map was 0,0,1, but makes total sense now when you say we're multiplying the normal map values with the tangent/bitangent/normal.
Ah man dude, I really tried to follow along, but you are not making it easy. Please explain more how you got those two different shaders at 2:35; the code before that only produces the right one :( Later on, with the example at 10:12, my light does not change the shadow on the sphere. I am really trying to code along but it's not working out ;_;
There is no such shader that produces the left side vs. the one on the right (at least, not a shader I describe in this video). The concept being explained there is that when normals are shared at each vertex, rather than split, the interpolation that occurs inside the shader governs whether the surface appears smooth or faceted. If you want the faceted look, the easiest way is to set your mesh to import with Calculated Normals at 0, or remove all smoothing groups in your modeling suite.
10:12 is the end of the video, so I'm not sure which other shader you're referring to.
Ultimately, this video is meant to be a description of what normals are, where they come from (mesh normals vs tangent space normals), and how they're used for basic lighting. This isn't intended as a watch-and-code sort of tutorial. That being said, all the shaders that produce the various intermediate steps are included in the github, linked in the description. I think it would be much more valuable to check out that code rather than try to infer the final shaders from the small snippets in the video.
I have received other feedback regarding resources vs. video content. In the future I think I'll call out the github link at the beginning and recommend people follow along with the code.
Sorry for the confusion, please don't hesitate to hit me up with other questions!
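The smooth-vs-faceted interpolation described above can be sketched numerically (plain Python; a linear blend between two vertex normals stands in for what the rasterizer does across a triangle):

```python
import math

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

def lerp_normal(n0, n1, t):
    """What happens between two vertices: the rasterizer blends the vertex
    normals, and the fragment shader renormalizes the result."""
    return normalize([n0[i] * (1 - t) + n1[i] * t for i in range(3)])

# Shared (smoothed) normals: two edge vertices with averaged, differing normals.
a = normalize([-1.0, 1.0, 0.0])
b = normalize([ 1.0, 1.0, 0.0])

# Halfway across, the normal points straight up -- the surface reads as curved.
print(lerp_normal(a, b, 0.5))

# Split normals: both vertices carry the same face normal, so interpolation
# changes nothing and the face shades flat (the faceted look).
print(lerp_normal(b, b, 0.5))
```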
This was extremely informative, and I'm not even using Unity. Even though I filtered and saved a lot of what you said for later, this helps me finally GET wtf normals are, why they're the colors they are (I was like, is there some standardized gamut everyone is using?) and HOW they work. Even though directionality can change by app, e.g. in Keyshot vs Blender, there are plenty of transforms and toggles to fix 'em. My main confusion was why they were even necessary when you have a bump map, geometry, and in some cases a displacement map. FINALLY I feel a little less dumb today. :) Thank you!
Omg, this video helped so much. The tutorial wasn't 90% "do this because MAATHHHHH". Thanks! :D
Just realised that bramble blast from donkey Kong country 2 is playing in the background
You should clarify that with the dot product you're only talking about unit vectors, for example (2,2) . (2,2) = 8 not 1
please do more! amazing work!
Makin' Stuff Look Good. I have a "ray" object with a vertex origin and a direction vector that points off to "infinity". I know how to render this by hand by calculating a point on the "horizon", but have no idea how to model this in the form of shaders. Can you point me in any helpful direction? Thanks, I appreciate any help.
What about a tutorial about making shaders for vfx, like the Diablo Way with multiple multiplied scrolling textures.
Wonderfully explained math. Thank you for this useful clip.
Awesome video. learned a lot about normal mapping because you just showed it how it is. Best way I learn things :D What is that thing with the top hat? Looks like a state of the US?
How do you calculate the normal at a vertex? If a normal is usually found with a cross product, are vertex normals just averages of the normals of the faces around them?
vertex normals come from the input (from the triangle mesh)
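To sketch the averaging idea from the question above (this is one common scheme, not the only one; tools often weight by face area or corner angle, and smoothing groups can split vertices):

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

def vertex_normals(verts, tris):
    """Sum each face's cross-product normal into its three vertices, then
    normalize. The unnormalized cross product is proportional to face area,
    so larger faces naturally pull harder on the average."""
    acc = [[0.0, 0.0, 0.0] for _ in verts]
    for i0, i1, i2 in tris:
        n = cross(sub(verts[i1], verts[i0]), sub(verts[i2], verts[i0]))
        for i in (i0, i1, i2):
            for c in range(3):
                acc[i][c] += n[c]
    return [normalize(a) for a in acc]

# Two triangles forming a flat quad in the xz plane: every vertex normal is +y.
verts = [[0, 0, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1]]
tris = [(0, 2, 1), (0, 3, 2)]
print(vertex_normals(verts, tris))
```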
waiting more awesome videos for shaders and effects....It is really cool...thanks a lot (Y)
I know these might be really easy, but maybe they could make for a quick video that gets good traffic (because of the game): a quick case study on the enchanted item icons and the end portal in Minecraft, and whatever other cool shader stuff is in that game... can't think of any off the top of my head. Love the depth of knowledge you show in your videos... I only understand some of it, but it reveals there's a lot I didn't know I didn't know. These (what I presume to be) simple shaders or tricks would be something a broad, lower-level (relatively speaking to yourself) audience might want to implement, and could be knocked out relatively quickly. Also, maybe you could consider doing a video showing us the best way to interact with shaders via script, such as turning them on/off like at the end of the Pokemon dive, or modifying them with GUI controls or through user selection, like you did in your web demo of the Spelunky shader. A Unity talk on mobile optimization has me paranoid about mobile optimization... if you watch it you will understand why haha, but he talks specifically about shaders at 34:35: ua-cam.com/video/j4YAY36xjwE/v-deo.html. Maybe you could talk some about shaders and running them on mobile. Just some thoughts. Probably the most technical, in-depth Unity information with professional content I've found on YouTube. Great work, hope to see more soon.
Could you make a video on Terrain shaders? One with triplanar for dealing with texture stretching and UV resizing by distance to reduce patterns?( that would be amazing, lol). Anyway, there's hardly any info on terrain shaders for unity in the internet. Most free terrain shaders out there are 4 to 5 years old and don't work anymore. So it doesn't matter if you just want to use them or learn from them, because they just don't work anymore. So I was thinking it would be of great use for many people.
Amazing video!
What does the code look like for the shader that shows the normals of the triangles (at 3:05)?
I had to rewatch several parts of the video because I was constantly distracted by the shitfluting in the background
Great stuff, but very little information about normal mapping itself.
I Have an Idea what you could analyse next. Prey's Looking Glass technology... it's basically the same as Portals in Portal but still very interesting and a shader break down would be great
@5:09 could you explain this bit more? How did you get that texture? I'd really like to find a good way to take 3D animations and bake them in 2D with normal maps for games.
Just watched your entire Shaders videos, thanks! I'm still super lost, but at least you encourage me to learn more about the subject :)
and im just bobbing my head trying to look smart :)
Would you consider making a video about writing shaders for text or text mesh in Unity? I've been looking for tutorials and they are non-existent.
lost me at the first 30 seconds
You are an awesome human being. Thanks for sharing knowledge! Respect.
I have watched all of your videos but I still don't quite get the distortion UV maps. You sometimes use the UV color maps, other examples have the normal color map... and finally some people distort the UVs using a black-and-white one-channel texture... what is the difference?
Hey man, any idea on how Hearthstone make those green energy auras around selected cards? Im not sure if it isnt just animated texture/sprites or if theres an actual fancy effect, or maybe a combination of animation + some gloom.
Good thing I'm an engineer. Beat my 60% understood!!
what is the difference between the DirectX and OpenGL normal formats?
Cool, dense material but cool. I'll need to watch more of your videos.
Omg someone who explains shit. Not just saying we put this there and that there. This is how u explain things.
The tangent was a little unclear though. Am I right if I simply say that the 3 values in RGB are storing the vector normal to the surface? It may be relative to something, but I can't tell. I need to give it a little bit more time.
the best part of this is . . . most users of 3D creation software don't have to understand one bit of what you said . . . but we are eternally grateful that people like you do.
thank you.
Finally understand the dot product.
You should do more content, i suspect you are sitting on a goldmine! Love your format btw...
Great video! Thank you. Adding the github resources is such a nice touch!
Can you make realistic rain drops on car windshield?
1:41 i think it should be right hand rule, not left hand rule. or is this different in shaders?
I feel like such a nerd understanding and feeling gratitude for this help
Name of the channel absolutely delivers.
If using forward rendering, why would you want to transform per-fragment normals from tangent space into world space? Isn't it better to transform the light direction/position from world space to tangent space instead? It would save operations in the fragment shader.
9:06 - yup I mentioned that. It's generally easier to think about and visualize the normals in world space so I explained it this way. As well, there are effects that require world normals anyway so it's good to understand how you would get them out of a normal map.
Sorry i missed that, yeah world space normals are useful too in some situations.