Awesome, thanks a lot for all your work and great tutorials ! You deserve much more views.
Thanks a lot!
Doing the lord's work
Wow! Thank you!
Hello, I'm French and a big fan of your tutorials. I know they are exclusively OpenGL, but I wonder if you could, as an exception, make a tutorial on 3D collisions please? Thanks again for your videos, they are very clear! Keep on making them!
Thanks! It's true that right now I'm focusing on OpenGL but in the future I plan to branch out to more game development topics, including collision detection. However, this is a big topic and will require some research. I'll try to do something in between my regular videos, possibly after a few more light and shadow tutorials.
@@OGLDEV nice!
Really awesome explanation 👍
Thanks a lot 😊
I'm wondering why the perspective division has to be done in the fragment shader rather than the vertex shader?
For gl_Position I've tried doing the division by W in the vertex shader and just setting the W value to 1.0, and appeared to get the same result.
I'm supposing there's more of a difference between how gl_Position and other variables are handled in the vertex and fragment shaders than just the divide by W, but I don't know what it is exactly or how to predict it, since I can't print the values.
Are you referring to the position from the light point of view? If that's the case then the system is designed for rendering from only a single camera position. This goes into gl_Position and if you need another camera position you have no choice but to do this manually. In terms of vertex shader vs fragment shader - remember that by default the rasterizer performs perspective correct interpolation (taking into account the fact that the triangle is not perpendicular to the viewer in 3D, even though it looks like that in 2D) so things like that are usually more accurate in the fragment shader.
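Roughly what I mean, as a minimal fragment shader sketch (the names gShadowMap, gLightWVP and the bias value are just placeholders, not necessarily the tutorial's exact code) - the vertex shader simply writes LightSpacePos = gLightWVP * vec4(Position, 1.0) with no divide:

#version 330 core

in vec4 LightSpacePos;          // interpolated by the rasterizer as a homogeneous vec4
uniform sampler2D gShadowMap;
out vec4 FragColor;

float CalcShadowFactor()
{
    // The perspective divide happens here, per fragment, after interpolation
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
    vec2 UVCoords = 0.5 * ProjCoords.xy + 0.5;    // NDC [-1,1] -> texture space [0,1]
    float z = 0.5 * ProjCoords.z + 0.5;
    float Depth = texture(gShadowMap, UVCoords).x;
    return (Depth < z - 0.00001) ? 0.5 : 1.0;     // small bias against shadow acne
}

void main()
{
    FragColor = vec4(vec3(CalcShadowFactor()), 1.0);   // visualize the shadow factor
}

One case where dividing in the vertex shader happens to give an identical result is an orthographic light projection, where w is 1 everywhere. With a perspective (spot light) projection the divide is non-linear across the triangle, so dividing per vertex and interpolating the result is only an approximation.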
@@OGLDEV Yeah I'm referring to the LightSpacePos. I'm finding it really difficult to understand what the math is doing since all this automatic stuff is being done in the shaders and I can't view the numerical output.
That's the problem with the graphics pipeline - you can't directly debug it like you're used to. Try RenderDoc. It adds a lot of visual stuff, like showing you what the model looks like after the vertex shader, which is often useful.
Awesome video!
Glad you enjoyed it
Great, thanks for the video!
My pleasure!
Earned a subscribe
Welcome!
Hello, how is it going?
Can we develop a program like Maya or 3ds Max with OpenGL?
What about a 2D animation engine or any 2D graphics program?
Thank you
Yes. If you want to take advantage of the power of the GPU to do the rendering then you have to choose one of the available APIs. There really isn't direct access to the GPU the way there is with the CPU. You can also do 2D stuff using OpenGL. After all, the 3rd dimension is simply a combination of a projection matrix and the perspective divide mechanism in the GPU. I'm not sure how convenient the API itself is for 2D and how it stacks up against 2D-specific APIs such as Direct2D.
@@OGLDEV thanks a lot
How do game engines like Unity handle moving the sun so that it's in the frustum of the camera? I need to figure out the orientation and position of the directional light so that the camera's shadows are always of the highest possible quality. Any resources on this?
In the case of a directional light you only have a direction, there's no position. You will find a discussion about a tight intersection between the view frustum and the light volume here: learn.microsoft.com/en-us/windows/win32/dxtecharts/common-techniques-to-improve-shadow-depth-maps
@@OGLDEV But for the light space matrix you need a position - is that somehow excluded?
For the light space matrix you use an orthogonal projection, so there's no need for a position: ua-cam.com/video/JiudfB4z1DM/v-deo.html
@@OGLDEV Now I'm confused. Surely you need a position when rendering to the framebuffer? Otherwise the projection doesn't know what to include.
OK so it's not exactly a position per se. You define an orthogonal frustum with the usual six clipping planes which looks like a box. You need to make sure all the objects in the scene are within it and clip it against the view volume to make it as small as possible. No need to render stuff whose shadow cannot be seen (tricky because the object may be outside but the shadow inside).
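If it helps, here is a rough sketch of the idea in code (using GLM for brevity - the tutorial itself uses its own math classes - and assuming you already have a world-space AABB of the shadow casters, here called SceneMin/SceneMax):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <limits>

glm::mat4 DirLightViewProj(const glm::vec3& LightDir,
                           const glm::vec3& SceneMin,
                           const glm::vec3& SceneMax)
{
    // Only the direction matters - the "eye" point is arbitrary.
    glm::vec3 Center = 0.5f * (SceneMin + SceneMax);
    glm::mat4 LightView = glm::lookAt(Center - glm::normalize(LightDir),
                                      Center,
                                      glm::vec3(0.0f, 1.0f, 0.0f));  // pick another up vector if LightDir is (nearly) vertical

    // Transform the 8 AABB corners into light space and take their bounds.
    glm::vec3 MinLS( std::numeric_limits<float>::max());
    glm::vec3 MaxLS(-std::numeric_limits<float>::max());
    for (int i = 0; i < 8; i++) {
        glm::vec3 Corner((i & 1) ? SceneMax.x : SceneMin.x,
                         (i & 2) ? SceneMax.y : SceneMin.y,
                         (i & 4) ? SceneMax.z : SceneMin.z);
        glm::vec3 LS = glm::vec3(LightView * glm::vec4(Corner, 1.0f));
        MinLS = glm::min(MinLS, LS);
        MaxLS = glm::max(MaxLS, LS);
    }

    // These six planes of the box are the directional light's "position".
    glm::mat4 LightProj = glm::ortho(MinLS.x, MaxLS.x, MinLS.y, MaxLS.y,
                                     -MaxLS.z, -MinLS.z);   // light view space looks down -Z
    return LightProj * LightView;
}

Clipping this box against the camera's view frustum is the next refinement the Microsoft article describes to make it tighter.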
I love the tutorial and your voice. But how can I render shadows for multiple point lights in the scene? Do I need to create multiple depth attachments, or is there another, more efficient technique? I would be very glad if you created a video on it.
Thanks! There are many approaches to handling shadows from multiple light sources. The basic method is indeed to create several textures and render into them. You can use the geometry shader to render into multiple textures instead of doing a multi-pass on the application level. You can use a texture array for holding multiple textures. I haven't done much optimization on this topic so I guess you'll need to do more research. The following contains interesting info on multiple shadows in Doom: www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/#shadow-map-atlas. I hope to cover this topic at some point...
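For the texture array option, the setup looks roughly like this (a sketch only - names like ShadowFBO and the GLEW header are placeholders for whatever you use, and error checking is omitted):

#include <GL/glew.h>

GLuint CreateShadowMapArray(int Width, int Height, int NumLights)
{
    GLuint ShadowMapArray = 0;
    glGenTextures(1, &ShadowMapArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, ShadowMapArray);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT32F,
                 Width, Height, NumLights,                 // one layer per light
                 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return ShadowMapArray;
}

// Before the shadow pass of light i, attach its layer as the depth target:
void BindShadowLayerForWriting(GLuint ShadowFBO, GLuint ShadowMapArray, int LightIndex)
{
    glBindFramebuffer(GL_FRAMEBUFFER, ShadowFBO);
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              ShadowMapArray, 0, LightIndex);
    glClear(GL_DEPTH_BUFFER_BIT);
}

With a geometry shader you would instead attach the whole array once and route each primitive to a layer via gl_Layer, saving the per-light passes on the application level.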
@@OGLDEV thank you so much
I feel we are gonna see a cascade of these videos
Probably... ;-)
Hi, when I follow this tutorial in my project, my shadow map texture values all seem to be 0.0 - any idea why that would be? (In the calcShadowFactor() function in the shader the depth variable is always 0.0.)
Did you change the default depth value by calling glClearDepth? It should be 1. Try to output some color from the fragment shader in the shadow pass (instead of the empty shader) so that you can see what the light "sees". This requires not binding the shadow map so that everything will go to the screen instead.
@@OGLDEV Currently I've got it outputting a black and white light view, which has the expected depth. However, the texture from the sampler2D in the lighting shader still only samples the value 0.0, which is confusing me. Even when I swap the argument to the texture function (float depth = texture(my-texture, uvCoords).x), where my-texture is the texture I use for the scene, it still gives 0.
Make sure the shadow map is bound correctly for reading during the light pass (glActiveTexture, glBindTexture, set the index and not GL_TEXTURE* in the shader). You can use ApiTrace to verify the shadow map is created and bound correctly. If you believe it is bound you can replace it with an all white texture just to make sure you get 1. You can also hack the shader to display uvCoords in the window as color to check that it makes sense.
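For reference, the read-side binding looks roughly like this (a sketch only - gShadowMap and unit 1 are just examples, not necessarily your names). The classic mistake is passing GL_TEXTURE1 to glUniform1i instead of the plain index 1:

#include <GL/glew.h>

void BindShadowMapForReading(GLuint LightingProgram, GLuint ShadowMapTexture)
{
    const GLint ShadowTexUnitIndex = 1;                     // the unit index, not the enum

    glActiveTexture(GL_TEXTURE0 + ShadowTexUnitIndex);      // select texture unit 1
    glBindTexture(GL_TEXTURE_2D, ShadowMapTexture);         // bind the depth texture to it

    glUseProgram(LightingProgram);
    GLint Loc = glGetUniformLocation(LightingProgram, "gShadowMap");
    glUniform1i(Loc, ShadowTexUnitIndex);                   // pass 1, NOT GL_TEXTURE1
}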
@@OGLDEV Finally found the issue - it turns out that when I was clearing my depth buffer with glClear(GL_DEPTH_BUFFER_BIT), the FBO was still unbound. Now my shadows are working.
Excellent!
I've been stuck on a problem for a while - shadows rotate together with objects. I understand that it's a math error, but I can't find it. Does anyone have any idea where to look?
Btw, great video, thanks for your detailed explanations! The course is really useful for learning graphics programming in general, not just OpenGL.
You're welcome! Do you mean that when the camera is moving and the light isn't, the shadows move instead of staying stationary?
@OGLDEV Thanks for the reply!) No, when the light is stationary and the camera rotates, everything works fine, but when the object itself is rotated, the lit area on it and the shadow rotate too, as if the light source has moved (which it didn't).
But if the object is rotated, shouldn't it have an effect on the shadow? Unless the object is a sphere or something like that. If the light doesn't move, the shadow WVP matrix and the shadow map should look the same. You can easily print the matrix and verify. The shadow map can be viewed in RenderDoc.
@OGLDEV I mean, it rotates like it's stuck to the object, as if it was just a texture. Logging data is a problem because I'm using WebGPU and it's not that straightforward, which is part of the problem)
Probably a bug in the construction of the shadow matrix. I'm not familiar with WebGPU but this page provides some details on debugging with RenderDoc: eliemichel.github.io/LearnWebGPU/appendices/debugging.html
I've tried implementing shadow mapping in my OpenGL program, but for whatever reason the objects captured in the shadow map disappear and reappear depending on the light's position (it's a directional light that moves left and right). Any help would be greatly appreciated.
Are you using an orthogonal projection matrix for the shadow pass? It may not be large enough, so the object stays out of the frustum at some locations of the light source.
@@OGLDEV Nope, I'm using a perspective matrix for spot lights. I've tried swapping the perspective matrix that the spot lights use for an orthogonal projection matrix, and the shadows work just fine (although they don't look as good, given that now I'm using directional light shadows for a spot light).
Maybe the angle is simply not wide enough to capture the objects as the light moves left and right? If one matrix works and the other doesn't then it pretty much tells us the source of the problem.
That's a possible explanation; however, if I push the light's position far back enough that both objects are clearly within the shadow map's bounds at all times, they disappear regardless and no shadows are cast. I am absolutely baffled by whatever witchcraft is going on in my program.
Try to track the shadow map using apitrace or any other debugging tool. It will help you understand the changes to the shadow map at the frame level. You should be able to pinpoint the exact frame where the object disappeared and debug from there.
What if you want to do shadow mapping for multiple light sources?
You need a separate shadow map for each spot light. You do a series of shadow passes - one pass per light/shadow map. You bind all the shadow maps to your lighting shader and do a single lighting pass. For each light you calculate its own shadow factor. A fragment may be in shadow for one light and not for another. You sum up the contribution from all light sources as usual. I will cover this in one of the upcoming videos.
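As a rough sketch of what the lighting pass can look like with a shadow map array (the names, the fixed MAX_SPOT_LIGHTS, and the placeholder lighting term are all assumptions, not the tutorial's actual code):

#version 330 core

const int MAX_SPOT_LIGHTS = 4;

in vec4 LightSpacePos[MAX_SPOT_LIGHTS];   // one light-space position per light, written by the vertex shader
uniform sampler2DArray gShadowMaps;       // one layer per light
uniform int gNumSpotLights;               // must be <= MAX_SPOT_LIGHTS
out vec4 FragColor;

float CalcShadowFactor(int Index)
{
    vec3 ProjCoords = LightSpacePos[Index].xyz / LightSpacePos[Index].w;
    vec2 UVCoords = 0.5 * ProjCoords.xy + 0.5;
    float z = 0.5 * ProjCoords.z + 0.5;
    float Depth = texture(gShadowMaps, vec3(UVCoords, Index)).x;  // sample this light's layer
    return (Depth < z - 0.00001) ? 0.0 : 1.0;                     // fully shadowed vs fully lit
}

void main()
{
    vec3 TotalLight = vec3(0.0);
    for (int i = 0; i < gNumSpotLights; i++) {
        vec3 LightContribution = vec3(1.0);   // placeholder for the usual diffuse/specular term of light i
        TotalLight += CalcShadowFactor(i) * LightContribution;
    }
    FragColor = vec4(TotalLight, 1.0);
}

The shadow factor just scales each light's contribution before it is added, so the rest of the lighting math stays the same as without shadows.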
Can you do a video about PBR next?
I plan to do a PBR tutorial in the not very distant future. You may also want to check out the video by my buddy Victor Gordan: ua-cam.com/video/RRE-F57fbXw/v-deo.html.
And here's my video! - ua-cam.com/video/XK_p2MxGBQs/v-deo.html
Shadow volumes look better
Each method can be improved in different ways and one method may look better than the other in different environments and settings.
Is OpenGL still worth learning in 2025?
Assuming that you are interested in learning a graphics API (because strictly for game development a game engine may be a better choice), OpenGL is a good balance between low level and "developer friendly". Vulkan and D3D12 are very low level and move a lot of the driver's work over to you. Theoretically this should provide a performance improvement, but you need to know what you're doing and how to optimize. OpenGL allows you to build almost any kind of renderer while the driver still takes care of a lot of stuff for you.
It must be incredibly hard and frustrating to learn Vulkan (or D3D12, or Metal) without previous knowledge of any graphics API. I think it's more human-friendly to learn OpenGL ES 1.1 for the basics, then OpenGL ES 3.0, and finally (if your app/game has a CPU bottleneck) Vulkan. As far as I know OpenGL ES 3.0 has some advantages over OpenGL: it runs on Android, and with ANGLE as a renderer it translates OGL calls to Vulkan, D3D and Metal.
I recommend studying OpenGL before Vulkan. I'm not experienced with ES, but I guess it serves the same purpose of providing a foundation in 3D graphics and 3D math, which will be useful for Vulkan as well.
If you want a more user-friendly API or just need some reference for WebGL, it's very useful.