Some additional links from the video. Also, working discord link:
Discord: graphics-programming.org/
RC Experimental Testbed: www.shadertoy.com/view/4ctXD8
why do i only see this comment on mobile but not pc
nvm i see it now on pc
oh also does ROBLOX use radiance cascading (probably not)
Some critical voices say that radiance cascades work in 2D but are a non-starter in 3D. Is this true?
This feels like it's related to wavelet transforms. Like, DCT:Wavelet Transform::Spherical Harmonic Lighting:Radiance Cascade.
I don't see why it wouldn't work in 3d using cubemaps.
I never thought I'd see Radiance Cascades, let alone create one!
Now now, Simon doesn't need to hear all this. He's a highly trained professional. We've assured the PoE2 team NOTHING will go wrong.
Alright. Let's let him in.
We've just been informed that the lightbulb is ready, Simon. It should be coming up to you at any moment
_panics in scientist_
If you would be so good as to climb up and start the compilers. We can bring the Global Illumination Spectrometer to eighty fps and hold it there until the release date arrives.
Gordon doesn't need to hear all this, he's a highly trained professional!
Make Half-Life great again.
half life 3‼️‼️‼️
what
i dont understand
@@Monkeymario. resonance cascade, half life reference
That one statement @ 2:08 is precisely why I love this channel. Although I can't deny how much I need the maths in my life
frrrr tho, math so unreadable
Programming is a form of math
So this approach, but for audio, would be called a "resonance cascade"?
Isn't that what happened in Half-Life?
Gordon doesn't need to hear all this, he's a highly trained professional
Prepare for unforseen consequences.
As someone doing audio stuff, I can't imagine why you'd ever want a resonance cascade anywhere.
@@fonesrphunny7242 if implemented properly it might be able to be used as a spatial acceleration structure for spatial audio, example: ua-cam.com/video/M3W7m0QSX-8/v-deo.html though I'm not sure if it would be better quality or more performant than existing techniques.
I knew those PoE2 devs were up to something!
Yeah, they are a talented bunch!
Great work.
It's him! He's the PoE2 dev!
The man, the MYTH, THE LEGEND
They're smart cookies, definitely :)
We're in this weird place where I want to avoid work badly enough that I will sit through a college-level dissertation on lighting simulation. LoL
Great Video!!!
The Penumbra Condition sounds like a nice title for a game
if deltarune was made by sony
There's the Penumbra Collection
The Penumbra Collection includes Penumbra Overture, Black Plague, and the expansion Requiem.
A thrilling blend of puzzles with multiple solutions and horror that will have you screaming for more!
Full freedom of movement along with the ability to manipulate everything using natural gestures creates an immersive world.
penumbra mentioned 👹👹👹👹👹👹👹👹👹👹👹
Ever since ExileCon I've been waiting for someone to do a nice video breakdown of Radiance Cascades. I can see it becoming a mainstream technique in the upcoming years; so much potential
yeah the main limitation is that it's screen space, so it doesn't care about lights outside of the screen (lights behind the camera are the main issue; you can fairly trivially have cascades computed at ~1.5x1.5 resolution, i.e. 25% extra space on each side, then cropped down).
so it doesn't work well as-is for first-person or behind-the-shoulder third-person. (you can use world-space probes, but that's a bit more complex and not a neat constant time like SSRC)
but there are also a lot of games that are 2D or pseudo-2D where this would work really well (e.g. League of Legends/Dota, or side scrollers like Hollow Knight; city builders would also benefit greatly, as you could have individual home lights for free).
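For anyone wondering what that guard band works out to, here's a tiny sketch (the ~1.5x figure is from the comment above; nothing here is from the video):

```python
# Sketch: pad the cascade buffers so lights slightly off-screen still
# contribute, per the ~1.5x guard-band idea above. Illustrative only.

def padded_resolution(width, height, margin=0.25):
    """25% extra space on each side gives 1.5x total in each dimension."""
    pad_w = int(width * margin)
    pad_h = int(height * margin)
    return width + 2 * pad_w, height + 2 * pad_h

w, h = padded_resolution(1920, 1080)
print(w, h)  # 2880 1620 -> light at this size, then crop the central 1920x1080
```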
@@satibel The effect is not tied to screenspace; you could do it in screenspace, but it's usable with any grid of data. If you have a 3D grid of light probes in your world, you can use this: have probes that check 8 directions over a small area and place them every meter, for example. Then every 2x2 meters in worldspace, make probes that scan 64 directions further out, and so forth. Update these probes periodically; importantly, you really only need to update probes close to the player at any regular rate, and you don't need probes out to infinite distance. You could center a 32x32x32 grid of probes around the player, for example, and update the probe positions as the player moves.
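To make those numbers concrete, a minimal sketch of that layout (the spacings, direction counts, and grid size are this reply's examples; the 8x-per-level direction scaling is just extrapolated from them):

```python
import itertools

# Sketch of the world-space probe layout described above: cascade `level`
# has probes spaced 2**level meters apart in a 32-wide cube centered on
# the player, with more directions per probe at each level.

def cascade_probes(player_pos, level, grid_size=32):
    spacing = 2 ** level            # 1 m at cascade 0, 2 m at cascade 1, ...
    directions = 8 * (8 ** level)   # 8, 64, ... (extrapolated from the reply)
    # Snap the grid to the probe spacing so probes stay stable as the player moves.
    center = [round(c / spacing) * spacing for c in player_pos]
    half = grid_size // 2
    probes = [(center[0] + ix * spacing,
               center[1] + iy * spacing,
               center[2] + iz * spacing)
              for ix, iy, iz in itertools.product(range(-half, half), repeat=3)]
    return probes, directions

probes, dirs = cascade_probes((10.3, 0.0, -4.7), level=0)
print(len(probes), dirs)  # 32768 probes, 8 directions each at cascade 0
```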
@@DreadKyller how would the performance compare to screenspace?
@@kitsune0689 It depends; it all comes down to the amount of data you need to calculate, in other words, how you set up your grid. In 2D it should be smaller, since you cannot (should not, I guess?) go to a higher resolution than the pixel resolution. In 3D you have to make it a 3D grid that isn't tied to screen space, so there is no direct way to compare it to screenspace.
It all makes so much sense when you explain and show it to us.
Without your video, I would get lost in "paper" articles with just a few images, scrolling through equations and trying to get familiar with new terms.
Thanks for another great video SimonDev. 👍
Papers are always hard to read (for me).
@@simondev758 reminds me of the meme "I hate how research papers are written, so much yapping, just get to the point bro."
What’s next, Radiance Cascading Style Sheets?!
Quick, contact the Chrome devs!
LMAO XD
A wild CSS framework has appeared!
10x web developers: hey folks, here's my implementation of Radiance Cascades, written entirely in HTML+CSS!
NO! No God please no. No!
Nooooooooo!
I always know you're going to make me understand something new in the way that I need to understand it. I think we speak the same exact language; like a mixture of nothing-is-new-just-another-rehashed-version-of-the-same-stuff-we-already-did, and developer-that-wants-his-code-to-run-as-fast-as-possible. Thank you. Every time. Thank you for speaking my language.
You're welcome!
Looking through the comments, and I'm glad that I'm not the only one who thought the title said "Resonance Cascade"
Nice animations, and intuitive explanations, great video!
And thanks for consulting & mentioning the community at the end :D
This channel is a gold mine. Thank you.
"Most of us are programmers, not math people." -> that's a great quote.
Exactly. I program so the computer can do the maths I don't understand 😅
I became a mathematician at the age of 6. Then I became a programmer at the age of 8.
And at the age of 10, I learned that I had already been a programmer & mathematician at the age of 4, as I had fully grasped the mathematical concept of "Propositional Logic".
Every mathematician is a programmer. Many just do not know any computer programming languages. And every programmer is an expert mathematician in the field of logic.
As a dev who took 3 tries to pass Calculus I, I agree with this statement.
That was well explained. It made reading the paper on it so much simpler. Things are always easier with a clear picture and an overview to start. Thanks for putting out a concise explainer.
This video went from super simple to utterly incomprehensible in a span of seconds! I'm having whiplash! 😂
Hah
In all likelihood the issue here is that the verbal and symbolic explanations are "high frequency" while the animated visual explanations are "low frequency". There were many times in the video where I was waiting for more detailed animations, which never came.
The predominant example was the rendering equation, which could have been more fully elucidated by continued animations of each term (and possibly subterm) in the equation, but my critique extends to the rest of the video, where the animations were solid but stopped short of fully explaining what was being said and shown symbolically.
@@NotAnInterestingPerson this comment is big brain lol
So what it's sounding like is multiple resolutions of real-time light probes? You create a fixed grid of probes, and then occasionally precompute the incoming light from different directions for each point; then, when determining the light at any point, you interpolate the light between the points for each "cascade" of light and combine them together? At least that's what I'm gathering. This way, for each point, you're only computing the light from the nearest few cascade points, not the whole scene.
The most important point is that probes don't store radiance (rays that start at the probe); they store radiance intervals (rays that start at a certain distance away from the probe and connect into a continuous ray).
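That distinction is what makes the cascades composable; here's a minimal sketch of how two intervals along the same ray merge (the names and fields are mine, not the paper's notation):

```python
from dataclasses import dataclass

# Each probe direction stores a radiance *interval*: light gathered between
# t_near and t_far along the ray, plus whether the ray made it through.
# Merging a near interval with the matching far interval from the cascade
# above reconstructs one continuous ray.

@dataclass
class Interval:
    radiance: float       # light collected inside [t_near, t_far]
    transmittance: float  # 1.0 if nothing was hit, 0.0 if fully occluded

def merge(near: Interval, far: Interval) -> Interval:
    # Light from the far interval only arrives if the near interval didn't block it.
    return Interval(
        radiance=near.radiance + near.transmittance * far.radiance,
        transmittance=near.transmittance * far.transmittance,
    )

print(merge(Interval(0.0, 1.0), Interval(5.0, 0.0)).radiance)  # 5.0: far light visible
print(merge(Interval(0.0, 0.0), Interval(5.0, 0.0)).radiance)  # 0.0: blocked up close
```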
Such an intuitive explanation of a super cool rendering method. Awesome work! The only thing I would have loved to see in more detail is the actual implementation, especially: How does a point on the screen actually get its value? A raycast, I assume? How does the raycast avoid having to loop over every light source in the image to find a collision? Also, is your explanation only valid in 2D? Would it map into 3D by projecting all the points onto the nearest surface, or would it need a 3D grid of points everywhere? Some of this could perhaps have been clarified by a brief section detailing where this method can and cannot be used as presented. Other than these nitpicks / curious questions though, excellent intuitive explanation!
RC is compatible with any technique of casting rays: SDF raymarching, voxel tracing, etc. Even RTX, I guess. PoE2 uses just constant-step per-pixel screenspace raymarching. As for 3D, I suggest you read the paper, because there are a lot of nuances: you can make a full-on 3D grid of radiance probes, 2.5D screenspace probes with screenspace intervals, 2.5D screenspace probes with world intervals, etc.
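A toy version of that constant-step marching over one interval, to show the shape of it (the 2D scene encoding here is invented for illustration and is not how PoE2 stores its screenspace data):

```python
import math

# March one radiance interval [t_near, t_far] at a fixed step through a 2D
# grid whose cells hold (emission, opaque). Returns the interval's radiance
# and its transmittance, ready to be merged with a farther interval.

def march_interval(scene, x, y, angle, t_near, t_far, step=1.0):
    dx, dy = math.cos(angle), math.sin(angle)
    radiance, t = 0.0, t_near
    while t < t_far:
        cx, cy = int(x + dx * t), int(y + dy * t)
        if not (0 <= cy < len(scene) and 0 <= cx < len(scene[0])):
            break                 # ray left the screen, interval just ends
        emission, opaque = scene[cy][cx]
        radiance += emission
        if opaque:
            return radiance, 0.0  # hit: nothing behind this contributes
        t += step
    return radiance, 1.0          # clear: pass the far interval through

scene = [[(0.0, False)] * 8 for _ in range(8)]
scene[4][6] = (1.0, True)         # a small emissive occluder
print(march_interval(scene, 1.0, 4.0, 0.0, t_near=2.0, t_far=8.0))  # (1.0, 0.0)
```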
Keep in mind that (as @Alexander_Sannikov mentioned in his presentations) the screenspace techniques work well for PoE(2) due to the PoV limitations of the game... something that is undoubtedly familiar to players of the genre and PoE specifically but which may be lost on other folks. IMO the expansion of this technique beyond PoE's rendering purview is the next major area of research for Radiance Cascades.
this is such a well put-together explanation. you convey a difficult concept from ground 0 to implementation really smoothly and i understand more than i'd expect. hats off.
Thanks!
Thank you!
Crazy good idea and so simple in a way.
I really doubted if I should write the paper because of how obvious it seemed.
Thanks for helping give this awesome paper wider visibility! It's a fantastic insight.
13:18 Is it really live on your website? I don't see it, only Grass, Cloud, FPS Game and Minecraft projects.
Yeah some people seem to be getting older versions, let me know if it's still not showing up.
@@simondev758 didnt show up a minute ago but now it works, feels laggy but impressive either way
@@oscarelenius4801 Yeah, it's a stock implementation, with no optimizations whatsoever heh.
@@simondev758 Might be a caching issue? Reloading with Ctrl + F5 might work
let's hope to see this implemented in some open source engines, and especially Blender. this could be really good tech for at least previewing renders.
Do you think it could be used in full production games? (Not a programmer, just curious about the technology.)
@@TristanCleveland it already has. it was specifically invented for a game you see in the intro
This is such a great source of information, it explains Radiance Cascades so much better than other videos and papers, I finally managed to understand it! Thank you so much!
A radiance cascade? At this time of year, at this time of day, on this side of the border world, localized entirely within our facility?
May I see it?
No.
Okay…I’m on my fourth watch of this and I can feel myself *slowwwwwly* getting to grips with it, but even with a background in physics and maths (my degrees are in physics and electronic engineering) and a long career as a systems architect, I’ll be honest: I’m struggling.
It’s a testament both to the PoE developers for the original idea and to Simon (who I follow) that this is penetrating my thick skull at all. Definitely not for the faint of heart but it’s worth watching over and over until it clicks because the end result is fucking gorgeous. Thanks Simon (aka Bob from Bobs Burgers) ❤️
Seeing you reference Alexander Sannikov's paper is not something I was expecting :O
I read the paper months ago and got the basic gist but made a mental note to revisit it for better understanding. This DEFINITELY jogged my memory. Bravo to @SimonDev for exposing this wonderful research to a broader audience.
But Simrola,
what about the ring and ray artifacts?
Lowkey wanna suggest that the term that comes after "umbra, penumbra" should be called "bruh."
Thank you for linking the paper. For such complex topics I like to carefully read an article rather than just watch the video.
This is the answer I was looking for. Thank you for this fracking awesome video. You sir are appreciated.
I started creating my own game engine to learn how it works behind the scenes, all because of your videos. But since I only know JavaScript, I felt intimidated by WebGL and did everything in context2D. Your video on spatial hash grids helped me a lot to create my own version with dynamic ranges instead of fixed arrays. Watching this video, I realized my improvised lighting system in 2D is pretty humble lol.
Great to see the crazy graphics devs at GGG getting some love!
This is one of those videos that I'm going to have to watch like 3 times over before this gets hammered into my thick skull
1:50 Saving a timestamp for the next time I have to explain the difference between math and programming.
This is really cool. Thanks for explaining it in an easy to understand manner!
You are such a great teacher. Starting by building the intuition then it all makes sense. Thanks for posting this
7:03
I was distracted the whole time by the artefact on the left side.
Is this a computational error?
Excellent video. The paper was a bit too complex for me to understand, but this video explained it very well. I’ll probably go make my own now…
No idea what I just watched but still fascinated how clever people are.
The quality of presentation and the in depth knowledge u are able to explain in simple terms is awesome. Please keep it up I love your content. I would also love to have something focused on physics like gjk/epa for collision and response stuff.
I love the fact that the explanations in this video are really easy to understand. Great video!
I really liked the demo! If you add the possibility to upload an image from which to generate the lights/shadows, and the possibility to change the background, you could sell it/launch it as a tool for graphic designers!
I never thought I needed a young H. Jon Benjamin explaining lighting algorithms, yet here we are.
Great stuff. Although the project isn't in the projects list?
yeah i can't find it either
Should be there, if not, just go to my github.
@@simondev758 It's not there. The project is indeed on your Github but I can't get it working.
Thank you so much for sharing this knowledge! Super interesting video, as always
Honestly, PoE devs are brilliant. PoE1 has a lot of technical debt from what I recall, and there's a metric fuckton of things happening in the game; and the game still performs extremely well, right up to the point where you reach the upper limits of 32-bit integers. And they do that with god knows how many thousands of entities active at any given time.
for those curious, Cem Yuksel has a series of graphics videos that are very easy to understand, including a really intuitive explanation of the rendering equation. he does things very visually
Oh cool, didn’t know the PoE devs published this method! Thanks for the breakdown!
thanks for the laughter and learning in every video!
Awesome video. I thought it would be about realistic lightning bolts, which would also be interesting, since I've looked into that a bit but can't find much usable information on it.
This is really beautiful! Well done.
Thanks so much for the website, it's so cool!
Amazing video. Thanks Simon.
Cascade: The JPEG of Light Render.
I like it.
It is, isn't it?
Alexander, The Great!
I love this lighting - definitely an inspiration towards trying new things - you never know what might work!
I'm wondering how this could extend to 3D. Maybe we do something similar but for points on a UV-mapped surface? If you could do that, you could actually speed up ray tracing by a large margin, and allow a high degree of freedom for the hardware spec based on how many iterations you run. Something I may experiment with, but my main expertise is Blender shaders. GLSL and its equivalents are new to me.
glsl syntax is really easy
Excellent presentation. Thank you!
oh man i'm so hype to have bob belcher explain new and exciting graphics techniques to me
Hmm, I think you could also use lower resolution cascades the further away you are from the camera, to save on computation! :D
I'm definitely going to try working with this!!!
Amazing explanation and it looks awesome on the website!
Could you do a video about different shadow techniques? From basic shadow mapping using hard coded projection params [like in directional shadows ortho(left: -10, right: 10, bottom: -10, top: 10, near: -10, far: 10)], through tight projection math, normal bias, texel size world space, etc. to CSM and VSM?
Love the video! It'd be really great if you could make a video covering the 3d version and some of the fixes of the artifacts this technique has.
Cool, reminds me of voxel cone tracing with 3D clipmaps. It also has the same issues: light leakage, not good at perfect reflections but hopefully the new technique scales better and uses less vram. I'll have to look at the paper once it's released in its final form.
Edit: Btw, for 2D you can make cone tracing work quite well and fast for GI. I only implemented the 3D version, 8 years ago. I'm a little bit surprised that it was hardly adopted, since it can work quite well in certain types of games.
I think voxel cone tracing was used in CryEngine but nowhere else.
The live demo seems not to be available on your homepage yet.
I think there's some caching issues, I'll try invalidating and hopefully you can access it.
I really liked the visual approach of this, but I'll be honest, I got lost at the chapter "What does this get us?" from 10:20 on... what do the rays from the yellow dots mean? Can someone point me at what I'm missing?
They're samples of radiance, in the direction of the line.
@@simondev758 Ok, so an object at the top right would be fully lit from all sides? While if the object were moved to the bottom right it would only be lit from the bottom, and if it were moved to the bottom left corner of the screen it would not be lit at all? I guess it's just random values in this case, but it's hard for me to link this to actual scenes. I mean, how would the lights need to be positioned to result in something like this? I don't think it has any connection to the previously shown layout?
But ok maybe I'm getting it now. Thank you very much for taking the time to explain it.
I love learning about programming stuff from Archer.
Awesome video, however I have one suggestion. In this video, even at 1080p, UA-cam's video compression and low bitrate are extremely noticeable and there are a lot of artifacts all over the place the entire time. As a suggestion, could you upload videos like this at 1440p in the future? Even for people with a 1080p display, this can make a massive change in how clean the video looks because of the better bitrate.
It could also be the background having a sorta high amount of detail?
We are missing you! Don't disappear for that long(
Maybe I'm missing something, but I think this only works in screen space, right? Therefore, it'll exhibit the usual disocclusion artifacts that such techniques have, such as SSAO, SSR.
NO, it can work in world space as well
I would love to see a full comparison of this technique and full path tracing rendering the same scene, while also showing how long both take to compute. PT would be done in software, of course, to make it a fair fight.
9:27 It's so weird to see the multiplication symbol written as x; over the years I have gotten so used to the multiplication symbol being * or .
the legend is back with a new upload!
I am curious to see what the bias is like for large scenes, though. It reminds me a bit of "surfels", which were developed by EA if I remember correctly. It was an innovative technique, but it contributed a lot of bias to get real-time noise-free images. The way this method is laid out, it seems like that's also going to be the case here, limiting its effective use in real-time games with certain FPS goals.
Awesome! Thanks!
GorDon doesn' need to hear all this, he'sa highly trained propfessional. We've assurdly administrated that nothing-will-go-wrong.
Isn't there a way to use engine properties, like: check for emissive materials, get the size, position, and luminance of the object, and directly fire probe arrays at it? Maybe use the inverse square law and luminance to decide if it's even worth taking into account for further calculations. Just a quick thought about it 😅
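That culling idea is cheap to write down; a rough sketch (the threshold value and function names are hypothetical):

```python
import math

# Sketch of the inverse-square cull suggested above: skip a light for a
# probe if its attenuated luminance can't reach a visibility threshold.
# The eps threshold is a hypothetical tuning value.

def worth_sampling(luminance, light_pos, probe_pos, eps=0.01):
    d2 = sum((a - b) ** 2 for a, b in zip(light_pos, probe_pos))
    return luminance / max(d2, 1e-6) >= eps

def cull_radius(luminance, eps=0.01):
    # Equivalent precomputation: beyond this distance, don't even test.
    return math.sqrt(luminance / eps)

print(worth_sampling(10.0, (0, 0), (20, 0)))  # True: 10/400 = 0.025 >= 0.01
print(round(cull_radius(10.0), 1))            # 31.6 units
```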
I’m not a game dev and don't know anything about any of this. I watched the whole thing without skipping through. You’re a good presenter, even if I still don’t fully get it 😅
Thank you so much for sharing your valuable knowledge! :)
very interesting approach, seems to sit somewhere between light probe grids and surfels.
Commenting mainly for the algorithm, but thank you for the video, please keep it up!
This is insanely cool. I wonder how much more complicated something like that looks in a 3D environment. Do you just take the 2D code but apply it to the gbuffer? Obviously you now also need to shoot rays through hemispheres and do something (I'm not sure what) to avoid bleeding between objects (use the zbuf in a clever way?). I'm a Sunday graphics engineering enthusiast, but it seems like a promising way to do GI in 3D as well.
nice to see Alexander Sannikov's radiance cascades being used.
I actually theorized a way to use a similar thing for real-time physics calculations with fluid or fluid-like objects (e.g. Plague Tale's rats/huge armies).
the idea is that only the boundaries get true physics, and the others are moved by a vector field based on the population (i.e. they move from high population to low population).
and the physics need good angular resolution in the middle of the pack, but only good position on the outside.
These animations look top notch. Any chance of sharing what software you use to create them?
I animate them via code in shaders. I cover a lot of it in my shader course.
Grinding Gear Games is responsible for the greatest ARPG on the planet, Path of Exile. In a few weeks, after 13 years, they are about to release Path of Exile 2, FYI.
Not sure if you've already done a video on this or not, but could you do a video about transparency? As an artist, I'd like to understand what makes it expensive, draw-order issues when you have overlapping planes, etc.
Great idea, I'll keep it as a potential topic, but ultimately I let my patreon supporters do the final vote.
Fantastic video and Alexander's concept is incredibly intriguing. I'm trying to understand how it's applied in a screen-space context, as hinted at in Alexander's paper where he mentions they use a 'hierarchy of screen-space radiance probe cascades populated with screen-space ray-marching', - section 4.2, page 23. I noticed in your demo that you ray-march an SDF representing the scene's 'geometry' to populate the radiance cascades. I'm curious about what's being ray-marched in the screen-space implementation of say a 3d scene. Would it be similar to screen-space shadows; using the depth buffer to detect occlusion? I'm new to graphics programming, so any insights would be greatly appreciated. Thanks again for the content.
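For what it's worth, that guess lines up with how screen-space shadows usually test occlusion; a rough sketch of that idea follows (this is a generic depth-buffer test, not PoE2's actual implementation):

```python
# Generic depth-buffer occlusion along a screenspace ray, in the spirit of
# screen-space shadows: if the stored scene depth is closer to the camera
# than the ray at any sample, something blocks the ray.

def ray_occluded(depth_buffer, start_px, end_px, start_depth, end_depth,
                 steps=16, bias=1e-3):
    for i in range(1, steps + 1):
        t = i / steps
        x = int(start_px[0] + (end_px[0] - start_px[0]) * t)
        y = int(start_px[1] + (end_px[1] - start_px[1]) * t)
        ray_depth = start_depth + (end_depth - start_depth) * t
        if depth_buffer[y][x] + bias < ray_depth:
            return True
    return False

depth = [[1.0] * 4 for _ in range(4)]
depth[2][2] = 0.2  # something near the camera sits in the ray's path
print(ray_occluded(depth, (0, 0), (3, 3), 0.5, 0.5))  # True
```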
Reminds me of a video from several years back on Unity's Light Probes. I have VERY little understanding of how all this works though, so I'm not sure how similar they are, other than that they attempt to help solve the same issue of lighting.
14:56 It is still possible to get away with only the 4 samples; the method would just be: spin the samples and accumulate the data over time. It's basically a temporal method of doing the same thing, with roughly the same cost as 4 samples, so you could get away with something like 4 samples at an assumed compute cost of 6-7 depending on the method used. (This is a good method for 2D, but 3D would require more than 4 samples, so around 16 should be good enough.)
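A tiny sketch of that temporal spin (the golden-angle increment and blend factor are my guesses, not anything from the video):

```python
import math

# Rotate the 4 ray directions a little every frame and blend results over
# time, so 4 rays per frame approximate many more.

def frame_directions(frame, n_rays=4):
    offset = frame * 2.399963  # golden angle (radians) decorrelates frames
    return [offset + i * (2 * math.pi / n_rays) for i in range(n_rays)]

def temporal_blend(history, current, alpha=0.1):
    # Exponential moving average: converges toward the many-sample answer
    # while each frame only pays for n_rays samples.
    return history * (1 - alpha) + current * alpha

print([round(a, 3) for a in frame_directions(0)])  # [0.0, 1.571, 3.142, 4.712]
```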
I'm not qualified in that field at all, but it's always interesting to learn about new things.
I've also seen Gaussian Splatting (GSplat) techniques, which could also provide quite interesting things for the game industry. Like preprocessing all the environment + light inside a GSplat, which consumes way less compute power, can have lifelike graphics, and also takes way less space on the hard drive.
I don't know how Radiance Cascades compete next to GSplat though; that would be an interesting subject to discuss, actually (from a professional).
Perfect video to watch while my PC renders my blender scene.
Love the voiceover, you sound like the cartoon character Archer haha
Cant wait for unreal engine to pick up on this.
Very nice!
It's kind of sad and frustrating that innovative techniques like this get less attention, because many graphics cards now have specific hardware for raytracing. It's great to see them flourishing regardless.
hardware raytracing can actually be used with this technique. would love to see an implementation with it
Hardware raytracing, if anything, is going to make new techniques even more interesting.
Could just be me, but it felt like the video ended a bit soon. Like, I'm not sure how you get from "layers of probes at different resolutions sampling lighting" to what you show at the end. Also, is this global illumination? Because it looks more like direct illumination with soft shadows and emissive surfaces (which is admittedly impressive in its own right if it runs fast).
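On the first question, the usual missing step is a per-pixel gather after the cascades have been merged top-down: interpolate the nearest cascade-0 probes and average their directional radiance. A minimal sketch, with a made-up probe layout and spacing:

```python
# Per-pixel final gather: bilinearly blend the four nearest cascade-0
# probes (each storing merged radiance per direction) and integrate over
# directions. The probe layout and spacing here are invented.

def pixel_radiance(probes, probe_spacing, px, py):
    gx, gy = px / probe_spacing, py / probe_spacing
    x0, y0 = int(gx), int(gy)
    fx, fy = gx - x0, gy - y0
    total = 0.0
    for ix, iy, w in [(x0, y0, (1 - fx) * (1 - fy)), (x0 + 1, y0, fx * (1 - fy)),
                      (x0, y0 + 1, (1 - fx) * fy),   (x0 + 1, y0 + 1, fx * fy)]:
        dirs = probes[iy][ix]               # merged radiance per direction
        total += w * sum(dirs) / len(dirs)  # average = integrate over angles
    return total

probes = [[[1.0, 0.0, 0.0, 0.0] for _ in range(4)] for _ in range(4)]
print(pixel_radiance(probes, probe_spacing=8.0, px=10.0, py=10.0))  # 0.25
```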
1:50 I'm confused... I thought that was raytracing
Also remembered you can limit the max bounce count.