This is significantly more important than online classes.
lol
Do online classes teach you practical things?
It’s going to be sad when you get the attention you deserve and can no longer comment back. Love the videos and would love to see more.
Trying to respond, but man, there are so many
Fantastic presentation, this channel is just getting better and better; awesome work Simon!
Was using a logarithmic depth buffer and needed to properly get world position in a shader. Your code was a lifesaver. Thank you!
Your videos continue to inspire me! Thanks for all the work you put into making them. I have had horrible problems with z-fighting before when two planes were overlapping and almost at the same angle.
Ah, did you get it fixed?
@@simondev758 Yeah, I did, after a lot of searching around for a solution, pretty much the same as the way you did. I wish this video had come out back then, it would’ve been so much help...
Half of what you say is way beyond what I understand (for now) yet it’s been a really fascinating series.
I’m a backend web dev primarily, but always had an interest in further pursuing game dev. I just never considered JS could be a genuine medium for going this far.
This is a lot like the Sebastian Lague coding adventures, but I like the fact that these videos are a lot more in-depth. Thanks for your pog content
The triangle wave trick at the end is next level! Thanks a lot for sharing your knowledge
I just smacked my head into the sin wave? No, triangle wave? I guess, but that wasn't it. And I derived a nice piece of code from a page about repeating noise textures:
pointInRepeatingSpace = (originalPosition % repeatingPeriod + repeatingPeriod) % repeatingPeriod;
which solved the original jittery textures without causing the jaggy/stretchy bits at the edge of the repeating period that the sine and triangle waves cause.
source: ronja-tutorials 029-tiling-noise
Also, side note: you're an inspiration!
Hmm, I woke up this morning and found some streaky lines appearing where coordinates approached the repeating period, so I changed my repeating period to a power-of-two value (not sure if that helped) and simplified to:
pointInRepeatingSpace = originalPosition % repeatingPeriod;
and the streaks are gone ... nope, they just moved to the location of the new power-of-two value. I am not a smart man.
Wait, so did this work or not? I love the simplicity here, really want to change my method if this is better.
Alright, I think I've actually got it now, and didn't just hide the problem from myself for the moment. (position % divisor) gives distortion around axis 0, whereas ((position % divisor + divisor) % divisor) gives distortion approaching axis divisor. So I just compute both, test the absolute value of each, and use whichever result has the lower absolute value.
And this seems to have solved it.
Okay, no, that just moved it again... hmm. This may be a bust. But I feel like there's something here, so I'll keep going; will update if I find a good solution.
It would appear that the issue with doing something like mod(position, 1024) is that when you get to the border, the noise algorithm tries to lerp from 1022 to 1023 to 0 to 1, and when it tries to bridge 1023 to 0 it (really rather gracefully) blows up, smearing textures along one axis or another. So a modification of the simplex noise algorithm is needed to make it understand that it, too, exists within a repeating period. Now, I am not a math guy; I didn't do linear algebra in school. So I'll hunt around and see what other tiling noise solutions are out there. The ronja-tutorials articles did have a relatively human-readable implementation of simplex noise, so that seems like where I'll end up if I can't find an easier answer and have to write my own.
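That diagnosis is right: the wrap has to happen on the noise lattice itself, not on the sampled position. A minimal sketch of the idea, using value noise for brevity instead of simplex, with assumed helper names, written as a three.js shader-chunk string:

const tilingNoiseChunk = /* glsl */`
  float hash(vec2 p) {
    // classic cheap lattice hash; any deterministic hash works here
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
  }
  float tilingValueNoise(vec2 p, float period) {
    vec2 cell = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);  // smooth fade curve
    // Wrap the lattice cell, not the position, so the corner at 'period'
    // hashes identically to the corner at 0. GLSL mod() is floor-based,
    // so negative cells wrap correctly too.
    float a = hash(mod(cell,              vec2(period)));
    float b = hash(mod(cell + vec2(1, 0), vec2(period)));
    float c = hash(mod(cell + vec2(0, 1), vec2(period)));
    float d = hash(mod(cell + vec2(1, 1), vec2(period)));
    return mix(mix(a, b, u.x), mix(c, d, u.x), u.y);
  }
`;

With the lattice wrapped, the 1023-to-0 lerp described above never happens, because both sides of the seam read the same lattice values.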
So glad I found your channel, your videos will be very helpful when I start my projects again after exams 😁
Good luck!!
Really good stuff there.
ty
The minute at 1:40 blew my mind on understanding not just the depth buffer but also logs & precision! Thank you so much! PS: yes, I know I'm stupid, but I take pride in trying to understand the universe nonetheless 😅 even if it takes decades... Thank you so much
You make it look easy
Ideally, after the explanation, you also think it's easy.
This is exactly the kind of content I need.
My little brother is a huge fan of yours; he's 13 years old... I don't know much about programming myself, but I appreciate your work... Keep it up 👍
Awesome, hope he learns a lot from these!
Video by video, you are teaching us things we never learned elsewhere (academic or self-taught; I'm kind of both). I wish you had been my teacher when I started learning programming. Quality over quantity.
Thank you from the bottom of my heart.
I really appreciate these videos. This is a thorough level of detail. Keep it up.
Amazing content. Binged all ur content in one day. So many little nuggets of information I would never ever find otherwise. Thank u so much! As a request, I was hoping u could cover how to do raycasting, and how u would be able to 'walk' on a vertex-shader-displaced mesh.
Or like how does raycasting even work on a shader level? And more generally: how to get information out of a shader?
What do you mean by "walk" on a vertex-shader displaced mesh? Do you mean apply a vertex shader to say, some terrain to generate a heightfield, and then put a character on it?
Nice stuff man, I ran into all these problems when testing large distances. Initially I used log depth and wrote to gl_FragDepth; I later replaced this with a reversed-Z projection matrix, then I switched my camera code over to double precision and used double precision throughout my shader pipeline. It worked, but man, it hit performance hard.
Nice! Was that in OpenGL or WebGL? I thought the reversed-Z trick didn't work in OpenGL because of the NDC space bounds, and you needed an extension to change that.
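For context, the log-depth write being discussed boils down to a few lines of fragment shader; a sketch with assumed names (uFar, vFragW), not the exact code from the video:

const logDepthFrag = /* glsl */`
  uniform float uFar;    // camera far plane
  varying float vFragW;  // 1.0 + clip-space w, interpolated from the vertex shader
  void main() {
    float Fcoef = 2.0 / log2(uFar + 1.0);
    // Distances map logarithmically into [0, 1] instead of hyperbolically.
    gl_FragDepth = 0.5 * log2(vFragW) * Fcoef;
  }
`;

And the OpenGL caveat is real: reversed-Z only pays off with a 0-to-1 clip range, which desktop GL gets via the ARB_clip_control extension (glClipControl).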
I've been dealing with these problems for a while now. It's cool to see new solutions. I'm doing everything through Unity which is a bit of a bummer because though I could technically fix all my problems with what you presented, I'd have to throw out pretty much all my current materials, it appears. Still, I've been considering rewriting my old engine one day and might take these ideas and implement them in there. Great video!
There's a Unity conference talk about Kerbal Space Program and how they solved z-clipping; it works pretty well, given that Kerbin is 600 km in radius. They opted for 4 to 5 render targets: sky and far space, far planet, near planet, IVA (optional), and UI; no logarithmic depth buffer mentioned.
Basically, the second render has a near plane far from the player, while the third renders what's nearby (less than 1 km, I guess); the sky one, well, renders the Kerbol system and the atmosphere.
Anyway, great awesome stuff!
That sort of multi-stepped approach is what Outerra (the ones who wrote up the log depth buffer thing) used before they figured out the log depth trick.
I like the log depth because it's simple, doesn't introduce any complexity in the way the scene is handled, and kinda just scales. Whether or not Kerbal had a good reason to use their approach, no idea, but if I find out I'll definitely make a video on it :)
@@simondev758 Well, the only reason I could think of is the broken texture effect; anyway, it'd be perfectly fine having two cameras instead (near linear, far logarithmic). Maybe they thought it was simpler this way.
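For reference, a minimal three.js sketch of that split-camera layering; all scene/camera names and clip planes are illustrative, not from the talk:

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
const farScene = new THREE.Scene();   // skybox, distant planets
const nearScene = new THREE.Scene();  // nearby terrain, ship
const farCamera = new THREE.PerspectiveCamera(60, 16 / 9, 1e3, 1e8);
const nearCamera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 1e3);

function renderFrame() {
  renderer.autoClear = false;
  renderer.clear();                        // clear color + depth once per frame
  renderer.render(farScene, farCamera);    // far pass first
  renderer.clearDepth();                   // keep the far color, drop its depth
  renderer.render(nearScene, nearCamera);  // near pass draws on top
}

Each pass gets a sane near/far ratio, which is the whole point of the KSP approach.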
@@simondev758 Sorry if I'm spamming you, but there are two interesting articles about map generation on this blog: www.redblobgames.com/
Namely, Polygonal Map Generation and Tiling a Sphere with Voronoi; they are very interesting and worth reading.
Really enjoyable explanation and the planet is looking great!
You're a madman, I love ur video
Ty for the great content
great!!!, very thanks Simon
Thanks for such a great video, Simon! :)
np
Thanks you for your work. It helped me well !
Hi, do you have a video about the Entity-Component-System pattern? I only see videos about JavaScript. Do you have one where you go in depth on this?
This is EXACTLY the video I was looking for! Thank you!
Thank u so much. You're so amazing and so generous ❤
Man, you should work in ThreeJS!
It is in three.js
Here's what I did for Z-fighting, which has worked for me. I basically try to touch the Z numbers as little as possible after converting them to view space. First, I took the projection calculation out of the matrix, since it appeared to cause some instability at very long ranges; I just do this in a post step. For the final divide I just pick a very large power of two, since that shouldn't touch the mantissa bits at all. So far this has worked at millions of kilometers. You do need a good LOD system, however.
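The power-of-two point is worth a quick check: dividing a float by 2^k only changes the exponent bits, so the mantissa survives untouched. A tiny sketch, not the commenter's actual code:

const z = 123456.789;
const k = 2 ** 20;  // 1048576, a large power of two
console.log((z / k) * k === z);    // true: the round trip is exact
console.log((z / 10) * 10 === z);  // not guaranteed: 10 isn't a power of two

Barring overflow or underflow, the power-of-two round trip is always exact.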
I watch this man for his beautiful voice
and the tutorials too
Your content is so eye-opening. You address so many topics I've been curious about as a novice game and web developer. Unfortunately, a lot of the core concepts and algorithms you mention still fly over my head. Do you have any recommendations on how/where to get started with these fundamentals? And how did you start your career? The way you handle yourself seems way more advanced and experienced than anything an ol' Google search will turn up.
Any particular things I said? That way I can get a better idea of which way to steer you.
As for me, how I started? I'll probably answer that more in a future video, but I got a BSc in computer science, self-taught all the gamedev stuff to start, worked at major studios for about a decade, mostly on graphics/optimization, learned a lot, then did another long stint at Google, again on performance. A lot of this info is out there, but hard to find, especially if you don't know what to look for.
Awesome, thank you! (TLDR version below) As far as what you said in this video, I couldn't keep up with the math around z-buffers (not because I can't do math, lol). I actually do want to know the details behind it all. I've worked in Maya, Blender, Unity, and Unreal for 5+ years, but they pretty much have everything done for you when it comes to models/graphics/optimization/renders. BUT, over the past 3 years, I've become a certified full-stack web developer and general programmer, and I wanted to marry my past skillsets to the web. I became very familiar with JavaScript, landed in Three.js, then hit a wall when I no longer had the performance perks of Unreal. A few high-res models, 4K textures, and performance tanked. On top of that, I've never tampered with custom shaders or known what is actually happening under the hood all the way to the roots.
TLDR: Long story short - I want to make web apps with a perfect balance of performance and high-quality detailed artwork with all the bells and whistles (high res textures, shadows, reflections, etc.) and truly understand the deep underlying basics of graphics. But I have no idea what the limits are, or how to determine them, or how to bend them according to needs.
Love your channel man! Thank you for the amazing content!
Can you recommend entry-level design patterns for game programming?
How entry level? Using good naming conventions and a clear file structure helps a lot
State and architecture for example.
+1 to that recommendation, seems to cover a lot of territory.
Channels like this with highly educational content always seem less popular.
I'm pretty happy with the audience I've built thus far; amazing that this many people care what I have to say heh
@@simondev758 I love you. 🥰 The knowledge you are giving us for free is priceless.
holy shit, dropping knowledge again!! love it haha
Big Brain. That's all I have to say.
YouTube suggested this video to me without me having watched the first 8; may I suggest you put a link to the full playlist in the description for others like me?
Good idea, will make sure to do that from now on!
These are fantastic. Thank you.
This guy sounds a whole lot like Jim Keller :-)
(If you read this Simon, you can take it as a huge compliment)
This is really interesting, but why not just use a saw wave? As long as your texture tiles, it should be fine, right?
That's interesting, I saw it but didn't try it. My problem is that the terrain isn't textured with UVs; I'm using triplanar mapping based on the world-space coordinates. I kinda think the sawtooth pattern won't work, since you have that dropoff instead of linearly coming back down, and on top of the mapping I've got some tiling-pattern breakup code that uses gradient noise + world-space position to break up the obvious patterns.
I may need to revisit how the terrain is textured, maybe go fully procedural, but I'll try the saw pattern when I'm at home.
@@simondev758 Yeah, as long as you run each of the world axes through it before any other processing, it should probably work. Unless maybe you're working with a large-scale noise for variation, but IMO that's something I'd recommend writing to the vertex colours if you can work it out, because they're going to be a lot better suited to large-scale features without suffering floating-point issues.
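For reference, the two wrap functions being compared, as a GLSL chunk with assumed names; fract() is effectively the saw wave:

const repeatModes = /* glsl */`
  // Saw: hard wrap at the seam; fine when the texture itself tiles.
  vec2 sawRepeat(vec2 p, float period) {
    return fract(p / period);
  }
  // Triangle: mirrors every period; hides seams even on non-tiling
  // textures, at the cost of visible mirroring.
  vec2 triangleRepeat(vec2 p, float period) {
    return abs(mod(p, 2.0 * period) - period) / period;
  }
`;

Both keep the numbers fed into texture sampling small, which is the original point of the trick.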
I'm gonna spend a whole day learning this and converting it to GDScript
I wish I could translate maths into code that easy!
awesome stuff man
I don't know if it's available in the browser, but if you can set the texture wrap parameters to GL_MIRRORED_REPEAT, that implements the triangle-wave sampling in the sampler itself.
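It is available: WebGL exposes MIRRORED_REPEAT, and three.js wraps it. A sketch (the texture path is hypothetical); note that in WebGL 1 this requires power-of-two texture dimensions:

import * as THREE from 'three';

const tex = new THREE.TextureLoader().load('terrain_rock.png');  // hypothetical asset
tex.wrapS = THREE.MirroredRepeatWrapping;  // hardware triangle wave along U
tex.wrapT = THREE.MirroredRepeatWrapping;  // ...and along V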
Hey, had a question....
I've been trying to download some FBX files from Mixamo and apply them in the third-person camera, but the models appear very blurry in the browser... what should I do to get them to load fully?
Great video
Maybe sawtooth ("chainsaw") mapping instead of triangle, so instead of repeating the mirrored texture you just repeat it unmirrored?
Do I have to implement this if my engine allows a reversed-Z depth buffer?
Hard to say, it gives a similar effect, but without testing, you won't know for sure.
You did it :D Awesome!!
yay!
I'm a front-end dev. I've started thinking about a game I wanna make. I decided to make it a text-based web multiplayer RPG (because I know nothing about graphics).
seeing your videos made me think about maybe using three.js to make my game feel like an actual... game.
So I thank you for showing me how I can take my JS skills even further.
I do have a question I'd love to get your take on.
My expertise using Vue.js is way higher than my expertise using vanilla JS.
Do you think using a framework hurts performance to the point where it's not worth the hassle?
(especially for HTML and state management for data)
I've been doing js stuff mostly because I feel like this is slightly unexplored territory. You may need to do some investigation here and tell me.
Can you please make a tutorial on three.js and WebGL, please
I seem to recall that one of the most common ways to clean up any waveform's output is to give it another waveform or two in parallel. I could be remembering that wrong, but how much would it hurt to try?
I was thinking something along those lines, output a second wave from the vertex to use as the tiling breakup. I'll try it if I come back around to the texturing.
I find it odd that I'm learning more about OpenGL in JavaScript than I ever did using C++ xD
Crazy stuff!!! Where do I start if I want to build open 3D worlds and become something like this one day?
Would the writing to depth cost be mostly mitigated if using deferred shading?
Writing to depth just overrides some common hardware optimizations, since you basically must run the pixel shader rather than depth testing and discarding a fragment after vertex transform.
@@simondev758 Since we need to modify depth we need to run fragment shader anyway, so that cost is a given, but with deferred shading any costly shaders like lighting will only be run once per fragment anyway, so that kinda works like a z pre-pass? And how fast is hw z-discard, compared to core count, anyway?
@@overloader7900 The depth testing happens on the GPU.
So I guess it would depend entirely on your setup. I'm thinking of a setup where you do a depth prepass, followed by a full opaque pass to fill the g-buffer. Maybe you combine those if performance says it's fine. Where do we modify depth from fragment shaders, as you mention?
@@simondev758 Does it matter where it gets written, if it's written to at all? The cost of dispatching the threads is there, but is it large enough to saturate the cores? Forward rendering is an obvious no because of all the overdraw (although with occlusion culling?).
The difference between a depth prepass and deferred rendering is that in deferred rendering you also save some basic information (with a very cheap shader) into buffers: normals, color, positions, etc. In a prepass you just save the depth and then re-render, only shading fragments whose depths are equal.
So deferred requires more memory and probably more bandwidth on the first pass, but the prepass has the cost of twice the geometry...
@@overloader7900 So if I'm understanding right, you're asking if it matters while writing depth, i.e. writing to gl_FragDepth WHILE populating the depth buffer.
That's an interesting question, and I'm pretty sure yes, it still matters and probably hurts early discard optimizations. A cursory search online gave me this: www.khronos.org/registry/OpenGL/extensions/ARB/ARB_conservative_depth.txt
An OpenGL extension specifically written to allow you to write to depth from the fragment shader while still allowing early fragment discard to work.
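The usage in that extension is tiny; a sketch of the relevant fragment-shader declaration (desktop GLSL only; WebGL doesn't expose it), shown as a shader string:

const conservativeDepthFrag = /* glsl */`
  #extension GL_ARB_conservative_depth : enable
  // Promise the driver the shader will only ever push depth further away,
  // so early-z culling can stay enabled despite the gl_FragDepth write.
  layout (depth_greater) out float gl_FragDepth;
`;

Which qualifier applies depends on how your depth write compares to the interpolated hardware value, so that part needs verifying per setup.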
Writing to gl_FragDepth *IS* a big deal if you have a lot of overdraw in your scene, because it won't skip running a shader if its resulting pixel will be behind what's already in the framebuffer. You effectively break the early-out functionality that graphics hardware has relied on for 25 years to keep things as fast as possible. Yes, in this project's case, where there isn't a lot of overdraw, it's not going to hurt anything, but in anything that has expensive material shaders and many objects/geometry in the scene, you want to take advantage of the Z-buffer omitting shader executions as much as possible, and render front-to-back to further leverage that functionality.
I point at early-out breakage literally in the same sentence, like... just a few words before. At the end of the day, numbers and profiling are king. If your use case demands this and you're willing to eat the cost, then go for it. It's all about knowing the option exists, and the trade-offs therein.
Will the source be on GitHub?
Your post reminded me, it's up now!
@@simondev758 Thx, this looks awesome! Still looking into how we could use compressed textures with KTX2, you know, github.com/KhronosGroup/KTX-Software and its toktx command-line tool
So I'm currently pursuing a CS degree and am very interested in computer graphics and video game development, especially these kinds of space sims. If you don't mind sharing, how did you learn to do this?
CS degree + industry experience
I found the last part interesting. I don't get why you would use a wave to reduce the size of your numbers; if you were getting floating-point errors in the fragment shader before, putting them through a sine wave shouldn't help, I would think? And it mirrors your texture at the borders. Why can't you just say, for instance, just looking at the x coordinate: vCoords.x = coords.x % max; with % being a floating-point mod that treats negative numbers properly, like -1 % 3 = 2.
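Worth noting: GLSL's built-in mod() is already that floor-based flavor, defined as x - y * floor(x / y), so mod(-1.0, 3.0) == 2.0. A sketch of the suggested wrap (uMax is an assumed uniform name):

const wrapChunk = /* glsl */`
  uniform float uMax;  // assumed repeat period
  vec2 wrapCoords(vec2 coords) {
    // floor-based mod, so negative coordinates wrap correctly
    return mod(coords, uMax);
  }
`;

The catch the earlier threads ran into is the seam: at the wrap point the coordinate jumps from uMax back to 0, which is only invisible if whatever samples those coordinates tiles with period uMax.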
Valuable stuff... BTW, what are your PC specs? I'm curious... please mention your specs, thanks!
I have an old desktop from 2014: GTX 750 Ti, i7-4790K
For the far-away planet stuff, couldn't you also just dynamically turn the terrain into normal maps, since from far enough away it is basically flat anyway?
Good idea, I'm thinking of doing something like that, possibly switching to raymarched terrain at a distance.
In three.js there's an example of the log depth buffer. Check this:
threejs.org/examples/#webgl_camera_logarithmicdepthbuffer
Yep, mentioned in the video. But you need to understand the math in order to backproject into view-space depth when reading from the depth buffer.
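That backprojection is just the log-depth encoding run in reverse; a sketch, assuming the common log2(1 + w) * Fcoef * 0.5 encoding (names are illustrative):

const logDepthInverse = /* glsl */`
  // Recover w (roughly the view-space distance) from a stored log depth d in [0, 1].
  float viewDistFromLogDepth(float d, float far) {
    float Fcoef = 2.0 / log2(far + 1.0);
    return exp2(d / (0.5 * Fcoef)) - 1.0;
  }
`;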
Me not understanding a lick of things: mhmm interesting
I thought the depth buffer in OpenGL is always logarithmic?
No, hyperbolic by default
aka 1/distance
I see
@@arhaisme ..., said the blind man to the deaf girl as he picked up his hammer and saw
Anyway, it's due to the inherent precision of floating-point numbers being higher the smaller the number gets, and has nothing to do with OpenGL specifically
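A quick numeric check of the "hyperbolic, aka 1/distance" point, using a [0, 1] depth convention for simplicity (the GL [-1, 1] NDC version has the same shape):

// With a standard projection matrix, stored depth is affine in 1/z.
const near = 0.1, far = 1000;
const depth = (z) => (far / (far - near)) * (1 - near / z);
console.log(depth(near));  // 0
console.log(depth(1));     // ~0.9001: 90% of the depth range spent on the first meter
console.log(depth(far));   // 1

That front-loading is exactly why far-away surfaces z-fight, and why tricks like log depth and reversed-Z exist.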
Just use seamless textures and wrap them using a mod function; then you won't get that mirroring effect, and you can layer more on top to add variance.
is this inspired by space engine?
Nah, although space engine is impressive
WOOO
Ooooooh!
:)
Your voice has that baritone rumble that AI-generated voices typically have
A triangle wave should repeat textures correctly if the texture is repeatable both ways; perhaps this texture isn't? ua-cam.com/video/kfM-yu0iQBk/v-deo.html
What kind of education do you need to understand this shit... I can't imagine your run-of-the-mill game dev knows this? If so, holy crap, they are underpaid. Us web developers over here making bank because we watched a 3-minute CSS tutorial once, and this guy is spitting straight maths.