Some additional links from the video. Also, working discord link: Discord: graphics-programming.org/ RC Experimental Testbed: www.shadertoy.com/view/4ctXD8
This feels like it's related to wavelet transforms. Like, DCT:Wavelet Transform::Spherical Harmonic Lighting:Radiance Cascade. I don't see why it wouldn't work in 3d using cubemaps.
4 місяці тому+1619
I never thought I'd see Radiance Cascades, let alone create one!
If you would be so good as to climb up and start the compilers. We can bring the Global Illumination Spectrometer to eighty fps and hold it there until the release date arrives.
@@fonesrphunny7242 if implemented properly it might be able to be used as a spatial acceleration structure for spatial audio, example: ua-cam.com/video/M3W7m0QSX-8/v-deo.html though I'm not sure if it would be better quality or more perfomant than existing techniques.
the casual "GI in O(1)" made me do a double take like that sketch "this programming language knows if the program halts - nice. - wait, it knows if the program halts ?!?"
The Penumbra Collection includes Penumbra Overture, Black Plague, and the expansion Requiem. A thrilling blend of puzzles with multiple solutions and horror that will have you screaming for more! Full freedom of movement along with the ability to manipulate everything using natural gestures creates an immersive world.
We're in this weird place where I don't want to work enough that I will sit through a college level dissertation on lighting simulation. LoL Great Video!!!
It all makes so much sense when you explain and show it to us. Without your video, i would get lost in "paper" articles with just a few images. Scrolling through equations and getting familiar with new terms. Thanks for another great video SimonDev. 👍
Ever since Exilecon i've been waiting for someone to do a nice video breakdown of Radiance Cascades. I can see it becoming a mainstream technique in the upcoming years, so much potential
yeah the main limitation is that it's screen space so doesn't care about lights outside of the screen (mostly behind the camera is an issue, you can fairly trivially have cascades computed at ~1.5x1.5 resolution, i.e 25% extra space all around. and cropped down.). so it doesn't work well as is for first or behind the shoulder third person. (you can use world space probes but that's a bit more complex and not a neat constant time like SSRC) but there's also a lot of games that are 2d or pseudo-2d where this would work really well (e.g. league of legends/dota, or side scrollers like hollow knight, city builders would also benefit greatly as you could have individual home lights for free ).
@@satibel The effect is not tied to screenspace, you could do it screenspace, but it's usable with any grid of data. If you have a 3D grid of light probes in your world, you can use this. Have probes with 8 directions checked over a small area and place them every meter for example. Then every 2x2 meter in worldspace make probes that scan 64 directions further out. and so forth. Update these probes periodically, and importantly you really only need to update probes close to the player at any regular rates, and you don't need to have probes at infinite distance, you could center a 32x32x32 grid of probes around the player for example and update the probe positions as the player moves.
I always know you're going to make me understand something new in the way that I need it to understand it. I think we speak the same exact language; like a mixture of nothing-is-new-just-another-rehashed-version-of-the-same-stuff-we-already-did, and developer-that-wants-his-code-to-run-as-fast-as-possible. Thank you. Every time. Thank you for speaking my language.
In all likelihood the issue here is that the verbal and symbolic explanations are "high frequency" while the animated visual explanations are "low frequency". There were many times in the video where I was waiting for more detailed animations, which never came. The predominant example was the rendering equation, which could have been more fully elucidated by continued animations of each term (and possibly subterm) in the equation, but my critique extends to the rest of the video, where the animations were solid but stopped short of fully explaining what was being said and shown symbolically.
So what it's sounding like is multiple resolutions of like real time light probes? You create a fixed grid of probes, and then occasionally precompute the incoming light from different directions for each point, and then when determining the light of any point, you interpolate the light between the points, for each "cascade" of light and then combine them together? At least that's what I'm gathering. This way for each point you're only computing the light from the nearest few cascade points not the whole scene
the most important point is that probes don't store radiance (rays that start at the probe), they store radiance intervals (rays that start at a certain distance away from the probe and connect into a continuous ray).
Okay…I’m on my fourth watch of this and I can feel myself *slowwwwwly* getting to grips with it, but even with a background in physics and maths (my degrees are in physics and electronic engineering) and a long career as a systems architect, I’ll be honest: I’m struggling. It’s a testament both to the PoE developers for the original idea and to Simon (who I follow) that this is penetrating my thick skull at all. Definitely not for the faint of heart but it’s worth watching over and over until it clicks because the end result is fucking gorgeous. Thanks Simon (aka Bob from Bobs Burgers) ❤️
I became mathematician at age of 6. Then I became programmer at age of 8. And at age of 10, I did learn that I was already programmer & mathematician at age of 4, as I fully grasped mathematical concept of "Propositional Logic". Every mathematician is programmer. Many just do not know any computer programming languages. And every programmer is expert mathematician in field of logic.
I started creating my own game engine to learn how it works behind the scenes, all because of your videos. But since I only know JavaScript, I felt intimidated by WebGL and did everything in context2D. Your video on spatial hash grids helped me a lot to create my own version with dynamic ranges instead of fixed arrays. Watching this video, I realized my improvised lighting system in 2D is pretty humble lol.
Such an intuitive explanation of a super cool rendering method. Awesome work! The only thing I would have loved to see more detail is the actual implementation, especially: How does a point on the screen actually get it's value? A raycast I assume? how does the raycast avoid having to loop over every light source in the image to find a collision? Also, is your explanation only valid in 2d, would it map into 3d by projecting all the points onto the nearest surface, or would it need a 3d matrix of points everywhere? Some of this could have perhaps been clarified by a brief section detailing where this method can be used and where it can not be used as presented. Other than these nitpicks / curious questions though, excellent intuitive explanation!
RC is compatible with any technique of casting rays: SDF raymarching, voxel tracing, etc. Even RTX, I guess. PoE2 uses just a constant step per-pixel screenspace raymarching. As for 3d, I suggest you read the paper, because there's a lot of nuances: you can make full-on 3d grid of radiance probes, 2.5d screenspace probes with screenspace intervals, 2.5d screenspace probes with world intervals, etc.
Keep in mind that (as @Alexander_Sannikov mentioned in his presentations) the screenspace techniques work well for PoE(2) due to the PoV limitations of the game... something that is undoubtedly familiar to players of the genre and PoE specifically but which may be lost on other folks. IMO the expansion of this technique beyond PoE's rendering purview is the next major area of research for Radiance Cascades.
for those curious, cem yuksel has a series of graphics videos that are very easy to understand, including a really intuitive explanation of the rendering equation. he does things very visually
this is such a well put-together explanation. you convey a difficult concept from ground 0 to implementation really smoothly and i understand more than i'd expect. hats off.
I read the paper months ago and got the basic gist but made a mental note to revisit it for better understanding. This DEFINITELY jogged my memory. Bravo to @SimonDev for exposing this wonderful research to a broader audience.
This is such a great source of information, it explains Radiance Cascades so much better than other videos and papers, I finally managed to understand it! Thank you so much!
I’m not a game dev or know anything about any of this. Watched the whole thing without skipping through. You’re a good presenter, even if I still don’t fully get it 😅
I've really liked the demo! If you add the possibility to upload an image from wich generate the lights/shadows, and the posibility to change the backgound, you can sell it/launch it as a tool for graphic designers!
Awesome video, thought it would be realistic lightning bolts which would also be interesting since I've looked into it a bit but can't find much usable information on it.
The quality of presentation and the in depth knowledge u are able to explain in simple terms is awesome. Please keep it up I love your content. I would also love to have something focused on physics like gjk/epa for collision and response stuff.
nice to see alexander sannikov's radiance cascades be used. I actually theorized a way to use a similar things for real time physics calculations with fluid or fluid-like objects (e.g. plague tale's rats/huge armies) the idea is that only boundaries get true physics and the other are moved by a vector field based on the population (i.e. they move from high population to low population). and the physics need good angular resolution in the middle of the pack, but only good position in the outside.
Hmm, I think you could also use lower resolution cascades the further away you are from the camera, to save up on computation! :D I'm definitely going to try working with this!!!
You know, when I saw the interpolation and probes, it reminded me of a version of pong I made that would coordinate check the ball and then calculate the angles of incidence and reflection. Lol, I was inadvertently doing a similar kind of math to the checks being made for radiance. Honestly I made an argument about using this kind of behavior for a game that does radar simple simulation. The idea simply being if an object appears in a field of view. The non programmers all said "that's too computationally expensive!!". And of course, anyone who's done a simple coordinate check knows how easy it is to have something test that it can "see" the distant object. Add in some fourth power roots and presto you have a photon energy calculation.
I would love to see a full comparison of this technique and full path tracing rendering the same scene, while also showing how long it takes both to compute, PT would be done on software ofc to make it a fair fight
Could you do a video about different shadow techniques? From basic shadow mapping using hard coded projection params [like in directional shadows ortho(left: -10, right: 10, bottom: -10, top: 10, near: -10, far: 10)], through tight projection math, normal bias, texel size world space, etc. to CSM and VSM?
Cool, reminds me of voxel cone tracing with 3D clipmaps. It also has the same issues: light leakage, not good at perfect reflections but hopefully the new technique scales better and uses less vram. I'll have to look at the paper once it's released in its final form. Edit: Btw. for 2D you can make cone tracing work quite well and fast for GI. I only implemented the 3D version 8 years ago. A little bit surprised that it was hardly adapted since it can work quite well in certain types of games.
I understood everything up until the radiance cascades nodes. The rays are casting out something to something, because directions... uh, you do it again because idk, then you have another pair of nodes doing something farther away... then you combine it for some reason, somehow... and you get this magical thing I can't explain. 🥴 This is just before the GPU part and using pixels to solve for ray directions.
I am curious to see what the bias is like for large scenes though. It reminds me a bit of "surfels" which were developed by EA if I remember correctly. It was an innovative technique but contributed a lot of bias to get real-time noise free images. The way this method is layed out, it seems like that's also going to be the case here, limiting it's effective use in real-time games with certain FPS goals
Im not qualified into that field at all but that's always interesting to learn about new things. I aslo seen Gaussian Splatting (GSplat) techniques which could also provide quite interesting things for the game industry. Like preprocessing all the environnement + light inside a Gsplat which consume way less compute power, which can have lifelike graphics and also take way less space on the harddrive. Don't know how Radiance Cascades compete next to Gsplat though, would be an interesting subject to discuss actually (from a professional)
For some reason, even though you explained all it's doing, the end result looks better than what I would imagine if you didn't show it. Like I'd expect worse artifacts from this.
Been waiting for someone else to validate this technique. It's really cool to hear promises, but always even better when others get to compare the results. Would have loved a comparison with some other technique, though since you've only implemented on 2D I guess you can't really compare with your 3D ray-trace model.
Awesome video, however I have one suggestion. In this video, even at 1080p, UA-cam's video compression and low bitrate are extremely noticeable and there are a lot of artifacts all over the place the entire time. As a suggestion, could you upload videos like this at 1440p in the future? Even for people with a 1080p display, this can make a massive change in how clean the video looks because of the better bitrate.
@@simondev758 I don't see it either. I see: "How do Major Video Games Render Grass?" "How Big Budget AAA Games Render Clouds" "I Tried Making an FPS Game in JavaScript" "I made an EVEN BETTER Minecraft"
Maybe I'm missing something, but I think this only works in screen space, right? Therefore, it'll exhibit the usual disocclusion artifacts that such techniques have, such as SSAO, SSR.
Honestly, PoE devs are brilliant. PoE1 has a lot of technical debt from what I recall, and there's a metric fuckton of things that are happening in the game; and the game still performs extremely well up to a certain point where you reach upper limits of 32 bit integers. And they do that with god knows how many thousands of entities active at any given time.
Finally! Somebody has figured out how penumbras work in computer graphics! Every damn videogame ever has such crisp, delineated shadows, no matter how far the object is. Birds flying overhear wouldn't casts shadows at all if they were high in the sky. But there they are, breaking your immersion.
Just solidifies in my mind what I've always said; Graphics programmers are way smarter then me :) A good graphics programmer is like a Unicorn and whenever I can snatch one up for a project I do!
14:56 it is still possible to get away with only the 4 samples, the method would just be: spin the samples and take the data over time its basically just a temporal method of doing that with roughly the same cost as 4 so could get away with doing something like 4 with a assumed compute cost of 6-7 depending on the method used (this is a good method for 2D but 3D would require more then 4 samples so around 16 should be good enough)
Some additional links from the video. Also, working discord link:
Discord: graphics-programming.org/
RC Experimental Testbed: www.shadertoy.com/view/4ctXD8
why do i only see this comment on mobile but not pc
nvm i see it now on pc
oh also does ROBLOX use radiance cascading (probably not)
Some critical voices say that radiance cascades work in 2D, but were a non starter in 3D. Is this true?
This feels like it's related to wavelet transforms. Like, DCT:Wavelet Transform::Spherical Harmonic Lighting:Radiance Cascade.
I don't see why it wouldn't work in 3d using cubemaps.
I never thought I'd see Radiance Cascades, let alone create one!
Now now, Simon doesn't need to hear all this. He's a highly trained professional. We've assured the PoE2 team NOTHING will go wrong.
Alright. Let's let him in.
We've just been informed that the lightbulb is ready, Simon. It should be coming up to you at any moment
_panics in scientist_
If you would be so good as to climb up and start the compilers. We can bring the Global Illumination Spectrometer to eighty fps and hold it there until the release date arrives.
So this approach, but for audio, would be called a "resonance cascade"?
Isn't that what happened in Half-Life?
Gordon doesn't need to hear all this, he's a highly trained professional
Prepare for unforseen consequences.
As someone doing audio stuff, I can't imagine why you'd ever want a resonance cascade anywhere.
@@fonesrphunny7242 if implemented properly it might be able to be used as a spatial acceleration structure for spatial audio, example: ua-cam.com/video/M3W7m0QSX-8/v-deo.html though I'm not sure if it would be better quality or more perfomant than existing techniques.
Gordon doesn't need to hear all this, he's a highly trained professional!
Make Half-Life great again.
half life 3‼️‼️‼️
what
i dont understand
@@Monkeymario. resonance cascade, half life reference
That one statement @ 2:08 is precisely why I love this channel. Although i cant deny how much i need the maths in my life
frrrr tho, math so unreadable
Programming is a form of math
The original presentation by alexander for those that are interested: ua-cam.com/video/TrHHTQqmAaM/v-deo.html
the casual "GI in O(1)" made me do a double take like that sketch
"this programming language knows if the program halts - nice. - wait, it knows if the program halts ?!?"
What’s next, Radiance Cascading Style Sheets?!
Quick, contact the Chrome devs!
LMAO XD
A wild CSS framework has appeared!
10x web developers: hey folks, here's my implementation of Radiance Cascades, written entirely in HTML+CSS!
NO! No God please no. No!
Nooooooooo!
I knew those PoE2 devs were up to something!
Yeah, they are a talented bunch!
Great work.
It's him! He's the PoE2 dev!
The man, the MYTH, THE LEGEND
They're smart cookies, definitely :)
The Penumbra Condition sounds like a nice title for a game
if deltarune was made by sony
There's the Penumbra Collection
The Penumbra Collection includes Penumbra Overture, Black Plague, and the expansion Requiem.
A thrilling blend of puzzles with multiple solutions and horror that will have you screaming for more!
Full freedom of movement along with the ability to manipulate everything using natural gestures creates an immersive world.
penumbra mentioned 👹👹👹👹👹👹👹👹👹👹👹
We're in this weird place where I don't want to work enough that I will sit through a college level dissertation on lighting simulation. LoL
Great Video!!!
It all makes so much sense when you explain and show it to us.
Without your video, i would get lost in "paper" articles with just a few images. Scrolling through equations and getting familiar with new terms.
Thanks for another great video SimonDev. 👍
Papers are always hard to read (for me).
@@simondev758 reminds me of the meme "I hate how research papers are written, so much yapping, just get to the point bro."
Looking through the comments, and I'm glad that I'm not the only one who thought the title said "Resonance Cascade"
Ever since Exilecon i've been waiting for someone to do a nice video breakdown of Radiance Cascades. I can see it becoming a mainstream technique in the upcoming years, so much potential
yeah the main limitation is that it's screen space so doesn't care about lights outside of the screen (mostly behind the camera is an issue, you can fairly trivially have cascades computed at ~1.5x1.5 resolution, i.e 25% extra space all around. and cropped down.).
so it doesn't work well as is for first or behind the shoulder third person. (you can use world space probes but that's a bit more complex and not a neat constant time like SSRC)
but there's also a lot of games that are 2d or pseudo-2d where this would work really well (e.g. league of legends/dota, or side scrollers like hollow knight, city builders would also benefit greatly as you could have individual home lights for free ).
@@satibel The effect is not tied to screenspace, you could do it screenspace, but it's usable with any grid of data. If you have a 3D grid of light probes in your world, you can use this. Have probes with 8 directions checked over a small area and place them every meter for example. Then every 2x2 meter in worldspace make probes that scan 64 directions further out. and so forth. Update these probes periodically, and importantly you really only need to update probes close to the player at any regular rates, and you don't need to have probes at infinite distance, you could center a 32x32x32 grid of probes around the player for example and update the probe positions as the player moves.
@@DreadKyller how would the performance compare to screenspace ?
I always know you're going to make me understand something new in the way that I need it to understand it. I think we speak the same exact language; like a mixture of nothing-is-new-just-another-rehashed-version-of-the-same-stuff-we-already-did, and developer-that-wants-his-code-to-run-as-fast-as-possible. Thank you. Every time. Thank you for speaking my language.
You're welcome!
This video went from super simple to utterly incomprehensible in a span of seconds! I'm having whiplash! 😂
Hah
In all likelihood the issue here is that the verbal and symbolic explanations are "high frequency" while the animated visual explanations are "low frequency". There were many times in the video where I was waiting for more detailed animations, which never came.
The predominant example was the rendering equation, which could have been more fully elucidated by continued animations of each term (and possibly subterm) in the equation, but my critique extends to the rest of the video, where the animations were solid but stopped short of fully explaining what was being said and shown symbolically.
@@NotAnInterestingPerson this comment is big brain lol
1:50 Saving a timestamp for the next time I have to explain the difference between math and programming.
But Simrola,
what about the ring and ray artifacts?
So what it's sounding like is multiple resolutions of like real time light probes? You create a fixed grid of probes, and then occasionally precompute the incoming light from different directions for each point, and then when determining the light of any point, you interpolate the light between the points, for each "cascade" of light and then combine them together? At least that's what I'm gathering. This way for each point you're only computing the light from the nearest few cascade points not the whole scene
the most important point is that probes don't store radiance (rays that start at the probe), they store radiance intervals (rays that start at a certain distance away from the probe and connect into a continuous ray).
No idea what I just watched but still fascinated how clever people are.
Nice animations, and intuitive explanations, great video!
And thanks for consulting & mentioning the community at the end :D
lets hope to see this implemented in some open source engines. and especially blender. this could be really good tech to at least preview renders.
Do you think it could be used in full production games? (Not a programmer. Just curious about the technology.
@@TristanCleveland it already has. it was specifically invented for a game you see in the intro
This is one of those videos that I'm going to have to watch like 3 times over before this gets hammered into my thick skull
Okay…I’m on my fourth watch of this and I can feel myself *slowwwwwly* getting to grips with it, but even with a background in physics and maths (my degrees are in physics and electronic engineering) and a long career as a systems architect, I’ll be honest: I’m struggling.
It’s a testament both to the PoE developers for the original idea and to Simon (who I follow) that this is penetrating my thick skull at all. Definitely not for the faint of heart but it’s worth watching over and over until it clicks because the end result is fucking gorgeous. Thanks Simon (aka Bob from Bobs Burgers) ❤️
Excellent video. The paper was a bit too complex for me to understand, but this video explained it very well. I’ll probably go make my own now…
This channel is a gold mine. Thank you.
"Most of us are programmers, not math people." -> that's a great quote.
Exactly. I program so the computer can do the maths I don't understand 😅
I became mathematician at age of 6. Then I became programmer at age of 8.
And at age of 10, I did learn that I was already programmer & mathematician at age of 4, as I fully grasped mathematical concept of "Propositional Logic".
Every mathematician is programmer. Many just do not know any computer programming languages. And every programmer is expert mathematician in field of logic.
As a dev who took 3 tries to pass Calculus I, I agree with this statement.
I started creating my own game engine to learn how it works behind the scenes, all because of your videos. But since I only know JavaScript, I felt intimidated by WebGL and did everything in context2D. Your video on spatial hash grids helped me a lot to create my own version with dynamic ranges instead of fixed arrays. Watching this video, I realized my improvised lighting system in 2D is pretty humble lol.
Such an intuitive explanation of a super cool rendering method. Awesome work! The only thing I would have loved to see more detail is the actual implementation, especially: How does a point on the screen actually get it's value? A raycast I assume? how does the raycast avoid having to loop over every light source in the image to find a collision? Also, is your explanation only valid in 2d, would it map into 3d by projecting all the points onto the nearest surface, or would it need a 3d matrix of points everywhere? Some of this could have perhaps been clarified by a brief section detailing where this method can be used and where it can not be used as presented. Other than these nitpicks / curious questions though, excellent intuitive explanation!
RC is compatible with any technique of casting rays: SDF raymarching, voxel tracing, etc. Even RTX, I guess. PoE2 uses just a constant step per-pixel screenspace raymarching. As for 3d, I suggest you read the paper, because there's a lot of nuances: you can make full-on 3d grid of radiance probes, 2.5d screenspace probes with screenspace intervals, 2.5d screenspace probes with world intervals, etc.
Keep in mind that (as @Alexander_Sannikov mentioned in his presentations) the screenspace techniques work well for PoE(2) due to the PoV limitations of the game... something that is undoubtedly familiar to players of the genre and PoE specifically but which may be lost on other folks. IMO the expansion of this technique beyond PoE's rendering purview is the next major area of research for Radiance Cascades.
Lowkey wanna suggest that the term that comes after "umbra, penumbra" should be called "bruh."
for those curious, cem yuksel has a series of graphics videos that are very easy to understand, including a really intuitive explanation of the rendering equation. he does things very visually
this is such a well put-together explanation. you convey a difficult concept from ground 0 to implementation really smoothly and i understand more than i'd expect. hats off.
I never thought I needed a young H John Benjamin explaining lighting algorithms, yet here we are.
Seeing you refrence Aleksander Sannikovs paper is not something I was expecting :O
I read the paper months ago and got the basic gist but made a mental note to revisit it for better understanding. This DEFINITELY jogged my memory. Bravo to @SimonDev for exposing this wonderful research to a broader audience.
Thanks you for linking the paper. For such complex topics I like to carefully read an article rather than just watch the video
Thanks for helping give this awesome paper wider visibility! It's a fantastic insight.
13:18 Is it really live on your website? I don't see it, only Grass, Cloud, FPS Game and Minecraft projects.
Yeah some people seem to be getting older versions, let me know if it's still not showing up.
@@simondev758 didnt show up a minute ago but now it works, feels laggy but impressive either way
@@oscarelenius4801 Yeah, it's a stock implementation, with no optimizations whatsoever heh.
@@simondev758Might be a caching issue? Reloading with ctrl + f5 might work
Great to see the crazy graphics devs at GGG getting some love!
This is such a great source of information, it explains Radiance Cascades so much better than other videos and papers, I finally managed to understand it! Thank you so much!
I’m not a game dev or know anything about any of this. Watched the whole thing without skipping through. You’re a good presenter, even if I still don’t fully get it 😅
oh man i'm so hype to have bob belcher explain new and exciting graphics techniques to me
I've really liked the demo! If you add the possibility to upload an image from wich generate the lights/shadows, and the posibility to change the backgound, you can sell it/launch it as a tool for graphic designers!
Oh cool, didn’t know the PoE devs published this method! Thanks for the breakdown!
Crazy good idea and so simple in a way.
I really doubted if I should write the paper because of how obvious it seemed.
You are such a great teacher. Starting by building the intuition then it all makes sense. Thanks for posting this
This is the answer I was looking for. Thank you for this fracking awesome video. You sir are appreciated.
Awesome video, thought it would be realistic lightning bolts which would also be interesting since I've looked into it a bit but can't find much usable information on it.
The quality of presentation and the in depth knowledge u are able to explain in simple terms is awesome. Please keep it up I love your content. I would also love to have something focused on physics like gjk/epa for collision and response stuff.
This is really cool. Thanks for explaining it in an easy to understand manner!
Love the video! It'd be really great if you could make a video covering the 3d version and some of the fixes of the artifacts this technique has.
nice to see alexander sannikov's radiance cascades be used.
I actually theorized a way to use a similar things for real time physics calculations with fluid or fluid-like objects (e.g. plague tale's rats/huge armies)
the idea is that only boundaries get true physics and the other are moved by a vector field based on the population (i.e. they move from high population to low population).
and the physics need good angular resolution in the middle of the pack, but only good position in the outside.
Hmm, I think you could also use lower resolution cascades the further away you are from the camera, to save up on computation! :D
I'm definitely going to try working with this!!!
I love this lighting - definitely an inspiration towards trying new things - you never know what might work!
Great stuff. Although the project isn't in the projects list?
yeah i can't find it either
Should be there, if not, just go to my github.
@@simondev758 It's not there. The project is indeed on your Github but I can't get it working.
very interesting approach, seems to sit somewhere between light probe grids and surfels.
Cascade: The JPEG of Light Render.
I like it.
It is, isn't it?
I love learning about programming stuff from Archer.
I love the fact that the explanations in this video are really easy to understand,great video!
thanks for the laughter and learning in every video!
GorDon doesn' need to hear all this, he'sa highly trained propfessional. We've assurdly administrated that nothing-will-go-wrong.
7:03
I was distracted the whole time by the artifact on the left side.
Is this a computational error?
You know, when I saw the interpolation and probes, it reminded me of a version of Pong I made that would coordinate-check the ball and then calculate the angles of incidence and reflection. Lol, I was inadvertently doing a similar kind of math to the checks being made for radiance.
Honestly, I made an argument for using this kind of behavior in a game that does simple radar simulation. The idea is simply checking whether an object appears in a field of view. The non-programmers all said "that's too computationally expensive!!" And of course, anyone who's done a simple coordinate check knows how easy it is to test whether something can "see" a distant object. Add in some fourth-power falloff and presto, you have a photon energy calculation.
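Something like this toy sketch is all it takes (the obstacle representation and constants are invented for illustration; the 1/R^4 falloff is the round-trip term from the radar range equation):

```python
import math

def radar_return(tx_pos, target_pos, obstacles, power=1.0, cross_section=1.0):
    """Toy 2D radar check: a coordinate line-of-sight test plus the
    classic 1/R^4 out-and-back falloff. `obstacles` is a list of
    ((cx, cy), radius) circles."""
    dx, dy = target_pos[0] - tx_pos[0], target_pos[1] - tx_pos[1]
    r = math.hypot(dx, dy)
    if r == 0:
        return power
    for (cx, cy), rad in obstacles:
        # Closest point on the tx->target segment to the circle center.
        t = max(0.0, min(1.0, ((cx - tx_pos[0]) * dx + (cy - tx_pos[1]) * dy) / (r * r)))
        px, py = tx_pos[0] + t * dx, tx_pos[1] + t * dy
        if math.hypot(cx - px, cy - py) < rad:
            return 0.0  # occluded, no return signal
    return power * cross_section / r ** 4
```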
I would love to see a full comparison of this technique and full path tracing rendering the same scene, while also showing how long each takes to compute. PT would be done in software, of course, to make it a fair fight.
Could you do a video about different shadow techniques? From basic shadow mapping using hard-coded projection params [like directional shadows with ortho(left: -10, right: 10, bottom: -10, top: 10, near: -10, far: 10)], through tight projection math, normal bias, world-space texel size, etc., to CSM and VSM?
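For the "tight projection math" part, a minimal sketch of fitting the ortho box to the camera frustum in light space might look like this (matrix conventions and near/far signs depend on your graphics API, so treat it as an outline rather than a drop-in implementation):

```python
import numpy as np

def tight_ortho_bounds(frustum_corners_ws, light_view):
    """Fit an orthographic box around the 8 world-space camera frustum
    corners after transforming them into light space, instead of
    hard-coding ortho(-10, 10, ...)."""
    corners = np.asarray(frustum_corners_ws, dtype=float)
    homo = np.hstack([corners, np.ones((len(corners), 1))])
    ls = (light_view @ homo.T).T[:, :3]  # world -> light space
    lo, hi = ls.min(axis=0), ls.max(axis=0)
    # left, right, bottom, top, near, far (near/far sign conventions
    # depend on your projection library's handedness).
    return lo[0], hi[0], lo[1], hi[1], lo[2], hi[2]
```

In practice you'd also snap these bounds to the shadow-map texel size in world space, which stops the shadows shimmering as the camera moves.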
Cool, reminds me of voxel cone tracing with 3D clipmaps. It also has the same issues: light leakage, and it's not good at perfect reflections. But hopefully the new technique scales better and uses less VRAM. I'll have to look at the paper once it's released in its final form.
Edit: Btw, for 2D you can make cone tracing work quite well and fast for GI. I only implemented the 3D version, 8 years ago. I'm a little surprised it was hardly adopted, since it can work quite well in certain types of games.
I think voxel cone tracing was used in CryEngine but nowhere else.
Thank you so much for sharing this knowledge! Super interesting video, as always
Alexander, The Great!
The live demo seems not to be available on your homepage yet.
I think there's some caching issues, I'll try invalidating and hopefully you can access it.
Amazing explanation and it looks awesome on the website!
I understood everything up until the radiance cascades nodes. The rays are casting out something to something, because directions... uh, you do it again because idk, then you have another pair of nodes doing something farther away... then you combine it for some reason, somehow... and you get this magical thing I can't explain. 🥴
This is just before the GPU part and using pixels to solve for ray directions.
Can't wait for Unreal Engine to pick up on this.
Very nice!
I am curious to see what the bias is like for large scenes, though. It reminds me a bit of "surfels", which were developed by EA if I remember correctly. That was an innovative technique, but it contributed a lot of bias to get real-time noise-free images. The way this method is laid out, it seems like that's also going to be the case here, limiting its effective use in real-time games with certain FPS goals.
A radiance cascade? At this time of year, at this time of day, on this side of the border world, localized entirely within our facility?
May I see it?
No.
Amazing video. Thanks Simon.
Thanks so much for the website, it's so cool!
I'm not qualified in that field at all, but it's always interesting to learn about new things.
I've also seen Gaussian Splatting (GSplat) techniques, which could also provide quite interesting things for the game industry. Like preprocessing the whole environment + lighting into a GSplat, which consumes way less compute power, can have lifelike graphics, and also takes way less space on the hard drive.
I don't know how Radiance Cascades compete next to GSplat though; that would actually be an interesting subject to discuss (from a professional).
I was gonna read that PDF he released about the technique. And now I don't have to :) Thanks!
Commenting mainly for the algorithm, but thank you for the video, please keep it up!
For some reason, even though you explained everything it's doing, the end result looks better than what I would have imagined if you hadn't shown it. Like, I'd expect worse artifacts from this.
Excellent presentation. Thank you!
Awesome! Thanks!
Would love to see a 2.5D tutorial to integrate the raymarching to populate the probes.
Been waiting for someone else to validate this technique. It's really cool to hear promises, but always even better when others get to compare the results.
Would have loved a comparison with some other technique, though since you've only implemented it in 2D, I guess you can't really compare it with your 3D ray-traced model.
Awesome video, however I have one suggestion. In this video, even at 1080p, YouTube's video compression and low bitrate are extremely noticeable, and there are a lot of artifacts all over the place the entire time. As a suggestion, could you upload videos like this at 1440p in the future? Even for people with a 1080p display, this can make a massive difference in how clean the video looks because of the better bitrate.
It could also be the background having a sorta high amount of detail?
Still can't overstate what Nvidia did in 2018. We still aren't fully there, but it put us so much closer.
Great video Simon - the projects page isn't showing that demo though, may need the caches clearing?
Yeah, let me know if it still isn't showing and I'll try to force an invalidation or something.
@@simondev758 I don't see it either.
I see:
"How do Major Video Games Render Grass?"
"How Big Budget AAA Games Render Clouds"
"I Tried Making an FPS Game in JavaScript"
"I made an EVEN BETTER Minecraft"
Radiance cascade? Mr Freeman is a highly trained professional!
1:02 the reflections of the walls
Maybe I'm missing something, but I think this only works in screen space, right? Therefore, it'll exhibit the usual disocclusion artifacts that such techniques have, such as SSAO, SSR.
NO, it can work in world space as well
Perfect video to watch while my PC renders my blender scene.
Love the voiceover, you sound like the cartoon character Archer haha
This is really beautiful! Well done.
These animations look top notch. Any chance of sharing what software you use to create them?
I animate them via code in shaders. I cover a lot of it in my shader course.
Honestly, PoE devs are brilliant. PoE1 has a lot of technical debt from what I recall, and there's a metric fuckton of things happening in the game; and it still performs extremely well, up to the point where you reach the upper limits of 32-bit integers. And they do that with god knows how many thousands of entities active at any given time.
You and Ange the Great would make some good collab material, imo.
Finally! Somebody has figured out how penumbras work in computer graphics! Every damn videogame ever has such crisp, delineated shadows, no matter how far away the object is. Birds flying overhead wouldn't cast shadows at all if they were high in the sky. But there they are, breaking your immersion.
This is so damn impressive. Graphics programming really does feel like magic sometimes.
This concept on minecraft shaders would be great for realistic colored light sources and shadows without bringing most GPUs to their knees.
Just solidifies in my mind what I've always said: graphics programmers are way smarter than me :) A good graphics programmer is like a unicorn, and whenever I can snatch one up for a project, I do!
14:56 It's still possible to get away with only the 4 samples: spin the sample directions each frame and accumulate the data over time. It's basically just a temporal version of the same thing, at roughly the cost of 4 samples, so you could get something like the quality of 6-7 samples for the compute cost of 4, depending on the method used. (This is a good method for 2D, but 3D would require more than 4 samples, so around 16 should be good enough.)
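A minimal sketch of the spin-and-accumulate idea (the golden-angle offset and blend factor are my own choices for illustration, not from the video):

```python
import math

def probe_directions(frame, count=4):
    """Ray directions for one 2D probe this frame: the same `count`
    evenly spaced rays, rotated by a per-frame golden-angle offset so
    successive frames fill in the angular gaps between samples."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # ~2.4 rad, low-repeat spin
    offset = (frame * golden) % (2.0 * math.pi)
    return [(2.0 * math.pi * i) / count + offset for i in range(count)]

def temporal_blend(history, current, alpha=0.1):
    """Exponential moving average: cheap temporal accumulation of the
    rotated samples, keeping the per-frame cost at `count` rays."""
    return history + alpha * (current - history)
```

Each frame you'd trace along `probe_directions(frame)` and feed the result through `temporal_blend` against the previous frame's radiance, the same way TAA-style accumulation works.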
Love it!
I bet this or some more practical example would be interesting to 2kliksphilip too