LIGHTING AND SHADING // Ray Tracing series
- Published 18 Jun 2024
- Visit brilliant.org/TheCherno to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription. AMAZING place to learn all the math you'll need for this series!
Support on Patreon ► / thecherno
Discord (#raytracing-series) ► / discord
Source code ► github.com/TheCherno/RayTracing
🧭 FOLLOW ME
Instagram ► / thecherno
Twitter ► / thecherno
Twitch ► / thecherno
Learn C++ with my series ► • Welcome to C++
📚 RESOURCES (in order of complexity)
🟢 Ray Tracing in One Weekend series ► raytracing.github.io
🟡 Scratch a Pixel ► scratchapixel.com
🔴 Physically Based Rendering: From Theory to Implementation ► amzn.to/3y2bGK7
💾 SOFTWARE you'll need installed to follow this series
Visual Studio 2022 ► visualstudio.microsoft.com
Git ► git-scm.com/downloads
Vulkan SDK ► vulkan.lunarg.com
Welcome to the exciting new Ray Tracing Series! Ray tracing is a very common technique for generating photo-realistic digital imagery, which is exactly what we'll be doing in this series. Aside from learning all about ray tracing, the math that goes into it, and how to implement it, we'll also be focusing on performance and optimization in C++ to make our renderer as efficient as possible. We'll eventually switch from the CPU to the GPU (using Vulkan) to run our ray tracing algorithms, as this will be much faster. This will also be a great introduction to leveraging the power of the GPU in the software you write. All of the code will be released episode by episode, and if you need help check out the #raytracing-series channel on my Discord server. I'm really looking forward to this series and I hope you are too! ❤️
CHAPTERS
0:00 - Lighting and shading in rendering
6:21 - Using floats for colors
12:07 - Why use floats instead of ints for colors?
12:52 - Finding our sphere hit coordinates
17:08 - Closest intersection point
19:40 - Using color to visualize numbers
21:02 - How lighting and shading works
22:45 - Calculating lighting using normal vectors
25:34 - Visualizing normals better
27:00 - Using math to calculate lighting and shade our sphere
This video is sponsored by Brilliant.
#RayTracing
I know a lot of you have been asking for a more frequent/consistent schedule for this series, and I'm happy to say that with the help of Brilliant sponsoring these episodes we've decided to go all in on this series! 🎉 We'll be aiming for 2-3 videos per month. Go and show them some love and learn all the math you'll need for this series here (first 200 people will get 20% off their annual premium subscription): brilliant.org/thecherno
;))
Would you be making videos on rendering meshes in the future?
You should try adding multi-threading to this project after you are done with all the videos, that would be a nice addition.
When that final result popped up, that was insane! Absolutely loved it🔥 Kudos to you @The Cherno
This production is AMAZING !!!
Really clear explanations Cherno! Funnily enough I've been doing very similar computations with topographic data
Excellent progression of developing and demonstrating concepts through a series of changes. Not only is this good form with respect to educating, but is a helpful practice for software development in general. It is helpful to maintain a sense of confidence and understanding along the way as a feature is developed. Well done!
This is a great series. The effort you have put into this is very evident, thanks for that.
I am using glsl instead because I just want to learn math instead of copying your code. 😄
The fact that such trivial linear algebra operations (dot products and normalization) basically doubled the frametime (from 3ms to 6ms) is insane
Dot products should be pretty cheap since it's just addition and multiplication. However, the relatively "expensive" part is the normalization, which requires a square root.
Things do add up quickly since it is on a per pixel basis (well in this case, per hit basis). Quite spooky
@@MCSteve_ yeah, I know! It's trivial for us, humans
@@matheusmarchetti628 I don't think calculating a square root is trivial for humans. Imagine taking the square root of something like 12.389.
@@tk-ruffian it's actually pretty easy to do. Just a bunch of divisions and multiplications until you finally do the square root. It's tedious, not hard
@@matheusmarchetti628 for a computer tedious and hard are synonymous. There are a lot of instructions needed to take a square root, compared to just a multiplication and an addition for each dimension to take the dot product.
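The cost gap being discussed can be seen in a minimal sketch, using a hand-rolled Vec3 rather than the glm types from the series:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Dot product: three multiplies and two adds, cheap on any hardware.
float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Normalization needs a square root and a division on top of the
// dot product, which is where most of the extra cost comes from.
Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}
```

Done once it's trivial, but as noted above this runs per hit, so the square roots add up fast.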
I bought the book when the vid came out, and I can say it's great for everything game dev; it covers everything from matrices to scattering. The purchase was also great timing, as I started doing shaders 2 months ago and this book helps me go up a level.
holy shit this was amazing. that stuff always blew my mind and i found so hard to understand but this entire video just made it SO SIMPLE! you're the absolute best teacher for that content.
The Cherno clearly has developed a keen understanding of how to teach. It's quite admirable.
Amazing presentation! Thank you!
Glad you're pumping more content on this series. By the way, are those custom colors you set up in VS settings alongside Visual Assist, or something else?
I don't understand 1 single thing but I love every second of it.
Another great video, thanks!
love the colors in the background!
Great work, Cherno!
This is education at its best, thanks.
geometry and textures actually don't even matter in a lot of cases, especially if the lighting is good enough.
throw any low poly solid color model into a beautiful lighting engine and it looks beautiful
That's just too true
I mean just look at half-life 1 rtx or quake 2 rtx. it's beautiful (although it does ruin the atmosphere of the games a little bit)
My solution for the "homework" included adding two vec3 properties on the Renderer--one for the light direction and one for the sphere color. In the WalnutApp's OnUIRender method I added six sliders immediately under the frame-rate text--one each for the components of the sphere color (r, g, b) and the light direction (x, y, z). I set the min/max for the color channels to [0,1] and for the light components to [-1,+1]. With the real-time rendering of the single sphere I was then able to see the color change in real time and the shading change in real time as I dragged the slider values back and forth. WAY more fun than the clunky everything-from-scratch ray tracer I wrote in university in the early 2000s.
I made a sdf sphere like this in unity with ray marching before, it was quite fun
fantastic tutorial
all hail lord cherno
Love the content and how easy it is to follow along. I'm using Python with OpenGL and can follow along perfectly! Keep it up!
How's the performance? I'm getting interested in OpenGL/graphics programming but I don't want to deal with C++. Thanks for the reply!
@@fraelitecagnin7628 In small projects it would be fine. But as soon as you start scaling your project you will begin to see frame drops
Would it be faster if you vectorize every pixel with numpy?
@@luigidabro Yes, but to an extent. Python will always slow down in larger projects. And ray tracing is extremely computationally demanding. Even C++ struggles with it.
best series you are a Computer graphics guru
Can't wait for the next episode...
Woot! I thought this series was dead
amazing video!
XD The ease with which he removes the camera lens at 0:49. I think he uses it fairly often.
Best Teacher 💜💟
0:35 "If there was no light anywhere we wouldn't actually see anything" - The Cherno, 2022 AD
That is interesting!
I am just waiting for you to get to the part where you move this all to a shader. Then i can just go crazy and make stuff similar to stuff on shadertoy. I just don't know how to transfer this to a shader using vulkan or your library, walnut, which uses vulkan lol
Wow, thank you
FYI: There are no "light rays" in nature; there are photons, though, which are quantum objects: quanta of energy in the electromagnetic field.
As these objects are quantum in nature, they can be entangled to other shit and are also in a superposition of multiple states all at once. If you think rendering is slow now, well... imagine simulating all that :)
i would like to know how often we use vector functions that are templated
Please make a video on how to store a linked list in a binary file (or text file) in c++.
Hey cherno what are your thoughts on the zig programming language
I really want to get into video game programming
Shouldn't the function at 11:23 be called ConvertToABGR, since that is the format you are converting it to?
Which theme you're using for Visual Studio IDE ?
Bro you gotta get on those PR’s.
Isn't it slightly modified (or more accurately done a bit differently) PBR?
Hey Cherno, as someone who doesn't have a single clue of where to start learning... what would you suggest be the first step in getting started in programming? I don't understand a lot of what you go through, but I love watching it anyway lol
Ask yourself what you want to make. For example, I want to make games, so learning python might not be the best option, whereas C# and C++ are better for that
yeah what he said
Is this how lighting is normally done in raytracers, and then indirect lighting and reflections (bounced rays) are added on top? Or is it normally just a bouncing ray with scattering?
OMG that book is big .... It really must lead you to something....
We miss the bunny!
If the sphere is behind the camera, both t values will be negative. If the camera is inside the sphere, then one t value will be negative. So the t to choose should be the smallest NON-NEGATIVE t value.
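That selection rule could be sketched like this (ClosestHit is a hypothetical helper, not code from the episode):

```cpp
#include <optional>
#include <utility>

// Given the two quadratic roots from the ray-sphere test, pick the
// closest hit in front of the ray origin. Both roots negative: the
// sphere is behind the camera, no hit. Only the near root negative:
// the camera is inside the sphere, so the far root is the visible hit.
std::optional<float> ClosestHit(float t0, float t1) {
    if (t0 > t1) std::swap(t0, t1);
    if (t1 < 0.0f) return std::nullopt; // both behind the camera
    if (t0 < 0.0f) return t1;           // origin inside the sphere
    return t0;                          // normal case: nearest hit
}
```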
Hey, Cherno, your accent immediately became familiar to my ear, and after a little investigation I've confirmed to myself that part of your family is Russian. Do you speak Russian at home (which would cause the accent I seem to hear)?
only 1.2k likes ??? we must be a niche crowd on youtube looking up c++ stuff
Hey, I really like your video. Can you share your camera and lens?
I'm really enjoying following along with this... In JavaScript... (I am so so sorry)
Shouldn't the ConvertToRGBA function be called ConvertToARGB?
We always put colors in RGB+A order
The color is still RGBA, but the value in memory is ABGR because of endianness
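A sketch of the packing being discussed (the exact body in the episode may differ): each 0-1 float channel becomes a byte, packed as 0xAABBGGRR. On a little-endian machine those bytes sit in memory in R,G,B,A order, which is why the word reads as ABGR while the framebuffer is still RGBA.

```cpp
#include <cstdint>

struct Vec4 { float r, g, b, a; };

// Pack a 0-1 float color into a 32-bit word with alpha in the top
// byte (0xAABBGGRR). Little-endian memory layout: R, G, B, A.
uint32_t ConvertToRGBA(const Vec4& c) {
    uint8_t r = (uint8_t)(c.r * 255.0f);
    uint8_t g = (uint8_t)(c.g * 255.0f);
    uint8_t b = (uint8_t)(c.b * 255.0f);
    uint8_t a = (uint8_t)(c.a * 255.0f);
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) | ((uint32_t)g << 8) | r;
}
```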
Wouldn't it be better to map "d" to 0-1 instead of clamping it, to give the effect of ambient lighting?
You could try that as an artistic shortcut, but it's going to be very incorrect. Once the surface is facing away from a light source, there's no contribution unless the surface is not opaque.
It's better to think of the ambient light as a separate light source that gets _added_ to the shaded color. Also, once you're in photorealistic territory ambient lighting essentially becomes a "side effect".
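Picking up that suggestion, a minimal sketch: the diffuse term is clamped so back-facing surfaces get no direct light, and a separate ambient term is added on top (Shade and the 0.1 intensity are illustrative, not from the series):

```cpp
#include <algorithm>

// cosAngle = dot(normal, -lightDirection), in [-1, 1].
float Shade(float cosAngle) {
    float diffuse = std::max(cosAngle, 0.0f); // no direct light on back faces
    float ambient = 0.1f;                     // placeholder ambient intensity
    return std::min(diffuse + ambient, 1.0f); // add, then clamp at the end
}
```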
Doesn't glm already have a vec4 to uint32 conversion function? Something like glm::packUnorm4x8
Hi Cherno, I am not a programmer, but I have been watching all your C++ programming series and am interested in learning C++ graphics programming. I did some circle shading using OpenGL in the attached source code. Please help me: the circle is not smooth at the closing point. What could be the problem?
As a guy who watches programming-related videos without ever actually doing anything (because whenever I try anything myself, I make a lot of mistakes and a lot of compilation errors happen, and just following and copy-pasting code feels like I won't learn anything, but at least it works), I am wondering about one, or a few, things:
I can imagine somebody would say "there are many ways to do that" about many things you've done in this video. And if I were to follow your tutorial, wouldn't I just be copying your code? I can't just copy-paste everything and pretend that I know everything after hearing about it only once. Besides that, some of the stuff I've just watched feels like my brain is RAM and my short-term memory buffer randomly resets random bits before being sent to permanent storage; I might forget some crucial details.
I don't mean that your video is bad or something; your videos are actually very great and useful to people at a certain level. I'm just confused about a few things regarding programming tutorials. Would it be better to learn the graphics basics (and the way graphics libraries such as OpenGL work) without code examples, and then spend a lot of time attempting it myself (which would probably make me remember how it works and give me time to think about the details), or would it actually be better to listen to a video and follow instructions? Should I go the path of trial, error, and wasted time, or the path of basically copying everything while trying to remember everything?
I just feel like some programming stuff is so repeatable that you can just copy-paste already-written code, or turn it into a library and a Python import. Like, for example, the code for creating a Windows OS window: I can just copy-paste it and change whatever I want to change.
The trick tends to be to do neither of those things. Don't bash your head reinventing the wheel, especially for math, but also don't copy-paste: write it manually line by line, make sure you understand what it is you're writing, and maybe even switch up names to whatever you would have called that variable if you weren't following a tutorial. Get it working by mostly following along; once it works you're halfway to actually completing the tutorial. Instead of just stopping, that's when you play with it more and get a better "gut feeling" for what each part does. Toy with the values given: "what if I change this, what happens?" Once you get a feel for the controls, it sticks a lot better.
👍
be sure to take a break sometimes!
nice thumb
Can anyone tell me why the normal makes it brighter?
is this running on the gpu?
No, but it can easily be transferred to run on the GPU, since The Cherno is using a "per-pixel" function to determine the color, just like a fragment shader
I wrote a pretty extensive ray tracer with DOF, importance sampling, and different materials and primitives on the CPU, and the whole thing took maybe a few hours to port to CUDA. Most of the algorithms used are embarrassingly parallel, so hardly any care needs to be taken to get everything working in a compute shader or CUDA etc.
@@60framesplus27 I am pretty new to graphics and stuff, don't even have a gpu, lol.
Nice, "light" philosophy
12:30 That's fine and good for us humans, but C++ isn't for instructing humans. When you use float, you are telling the compiler that values between 0.0 and 0.1 will get a higher resolution than values between 0.8 and 0.9. You are not using that at all, so it's entirely wasted and actually not what you want. A good example of this is the middle area of the screen: effectively, OpenGL puts more pixel addresses in that area than are available at the edges. Why?
But that is how you want real numbers to behave when you have finite resources (bits in this case). Here's the problem: if you want to represent the product of two arbitrary numbers you need to double the number of bits in those numbers. What floats do is say "hey, I'm going to give you the order of magnitude accurately, and I'm going to give you precise enough information in that order of magnitude". That _is_ what you need most of the time, _especially_ in graphics.
>A good example of this is the middle area of the screen, effectively OpenGL puts more pixel addresses in that area than are available at the edges. Why?
You have mesh vertices defined in some local space that generally get transformed through a bunch of different coordinate spaces that cover an enormous range of possible values.
One of the reasons why you might want to define the clip volume the way OpenGL does is because at that point in the pipeline (right before rasterization) you have to work with floats, but you also need the results to have enough precision to be able to rasterize in a subpixel accurate manner. The [-1, +1] range gives you that.
Once it comes to rasterization however, the hardware will work with fixed-point arithmetic to give even, subpixel accurate results. In this sense, there aren't any more "pixel addresses" precision in the center of the screen compared to the edges, because the clip volume has no notion of pixels at all, and the rasterizer works with fixed-point arithmetic.
>You are not using that at all, so it's entirely wasted and actually not what you want.
For one - to reiterate - you don't have an alternative. Just the fact that you're modulating the light values by cos(angle) means you'll have to use floats.
But second, the shape of the gamma curve shown in the video at that point makes floats the _natural_ fit for color/light. When you go from 0-1 (float) linear to the normalized sRGB values, you're pushing the precision in the "darks" (the lower, more precise float values) into the higher range of the evenly spaced 0-255 values.
@@Botondar I don't think you understand my argument. I'm saying the number of representable values between 0.08 and 0.09 and between 0.8 and 0.9 is roughly the same. However, for drawing vertices in the center of the screen, the extra accuracy is rounded away... hence my asking "why!" As for doing general math, you are correct that floats add value... but we are talking about a 0-to-1 color space vs. 0 to 255, and in that case, with high dynamic range, an argument could be made for higher precision in darker colors.
@@cheako91155 I understand your argument, but you're ignoring the fact that the constraint that lighting values range from 0 to 1 or that the coordinate values range from -1 to 1 only arise at the _end_ of the calculations.
_All_ of this is generic math, and there is no alternative to floating point to easily handle it.
1) You're going to have vertices 5 centimeters and 5 kilometers away in your scene. That gets transformed to the NDC range in the end, but you need to handle both.
2) Even if you're doing low dynamic range lighting, the _intermediate_ results aren't going to always be 0-1. If you have multiple light sources, one pixel might exceed 1, and you're only going to clip the value to 1 at the end. In this case floating point values are both more precise and faster than 8-bit integer arithmetic.
@@Botondar Depth only applies to the Z axis... When looking south, how does something 5 kilometers west look? It doesn't; values >1 are off-screen. For the Z axis, floating point makes even less sense, as Z-buffer compression makes those values take something like 4 bits. The arguments for floats are numerous, but none of them preclude integers. If you are talking about numbers so large they are greater than 1, then we are not talking about the same thing. As for convention: yes, I understand all of graphics is floating point... but again you missed explaining the only question I've ever had: "why". Convention for convention's sake is bad.
@@cheako91155
> When looking south, how does something 5 kilometers west look? It doesn't; values >1 are off-screen.
Hypothetical: you're looking south; there's a mountain 5 kilometers south-west. It's still going to be on-screen. This is extremely common.
> For the Z axis float points make even less sense as Z buffer compression make those values take something like 4 bits
I don't know what you mean here. Render target and Z-buffer compression are generally based on tiles and deltas; the idea is to encode a few texels and define the other texels in the same tile with a delta that can be stored in a small number of bits. It's much more complicated than that, but that's the gist. Whether or not you're using floats is irrelevant. Also, most engines these days _do_ use float depth buffers.
> The arguments for float are numerous, but none of them preclude integers.
The argument for floats _for graphics_ is that they allow you to do the arithmetic you need, while integers don't. In other domains that's not the case.
> If you are talking about numbers so large they are greater than 1, then we are not talking about the same thing.
So what do you mean when you say this? You were originally claiming that floats were not what you'd want for color and NDC position, but what I'm saying is that in both of those cases at the very least you need to be able to determine whether the values are or aren't greater than 1.
> (...) but again you miss explaining the only question I've ever had: "why" Convention for convention's sake is bad.
The "why" is because we _do_ actually need properties of floats, which is what I've been trying to provide examples of. I never claimed it was "convention".
What I'm curious about is what do you think the format should be? What format should we do the calculations in, and how would that change the pipeline?
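One way to sanity-check the "precision in the darks" point from this thread is to measure the float step size at different magnitudes. GapAbove is an illustrative helper, not anything from the video:

```cpp
#include <cmath>

// Distance from x to the next representable float above it (one ULP).
float GapAbove(float x) {
    return std::nextafterf(x, 1e30f) - x;
}

// Floats are denser near zero: GapAbove(0.01f) is far smaller than
// GapAbove(0.9f), so dark values get much finer steps, while an
// 8-bit integer color spaces all 256 of its steps evenly.
```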
teach me some unity
pay him some money
Video is about lighting and shading but there's no ambient occlusion in the thumbnail between your hand and the ball. Cmon cherno you gotta ray trace your thumbnails too
pre-calculated light maps
talk about heavy ligthing bible
Plane-ray tracing: not per pixel, but per vertical line. With plane-primitive intersection you get the vertical textured line directly; you don't have to render all the pixels of the triangles.
Combined ray tracing with triangle rasterization.
Instead of skewed drawing, you test which of the pixel-directed plane rays hit the triangle, then draw the exact pixels (stepping the vertical texture UV coordinates); no LOD required.
Ray tracing raster intersections has the advantage of being exactly accurate.
Second
Is your name reallllly 'The Cherno'? seems fishy to me 0:00
YouTubers commonly refer to themselves using their channel name while doing intros. And you don't actually need his real name. He is, at least for us, The Cherno
That's his "artist" name. His name is Yan Chernikov
@@emiktra7929 ik just trolling m8
;)
one small step for photorealism ....😞
Why not a union instead of bitwise logic?
like i said in you're coding videos... wouldnt be so bad if it wasn't for the horrible accent.. it gets annoying af after a while of watching/hearing the coding videos
Spelling improperly is more annoying, in my opinion...
Do your own videos if these videos isn't up to your standards?
@@rasmadrak you must be mental.. their isn't any spelling wrong... and wtf would it matter anyway... gtfo boomer!