They released their first game with this system recently: EA Sports College Football 25. It has some impressive lighting. The second game will be "skate.", the upcoming successor to Skate 3.
Including skinned meshes, this is impressive. Can't you also save surfels by ignoring geometry with direct lighting? That is, not applying surfels to directly lit surfaces?
I thought the same, but then the surfaces close to those areas won't receive that bounce
@@NullPointer I get that. I'm not clever enough to have a creative solution for it. Besides, it was kind of arrogant of me to assume that during the (ongoing) development you hadn't thought of this. I love this btw, good work.
I'm curious how this compares to Lumen. Anyone willing to share their thoughts comparing the two?
If it supports VR, it beats Lumen. Otherwise, it's a nice alternative.
An irradiance cache is better. I'm not sure about that one, but Lumen does reflections too, and it's not ray tracing. In my opinion, VXGI / lightmaps / SVOGI / brute-force RTGI / photon-mapping diffuse GI are the best options for now. PS: real-time caustics via photon mapping on the GPU, with very good denoising/approximation, should be added to Unreal Engine soon. I'm really excited about that irradiance cache method; it should be a happy medium lol (balance of quality/speed/production time).
I had the exact same thought
In my testing, which has been extensive since it's linked to my job, Lumen is slow for big exteriors. Unusable for most professional applications.
This doesn't seem to be. But there's no way to know unless it becomes available to the general public.
I'm not an expert, but you're right to point out that Lumen and this are trying to solve roughly the same problem, and the high-level approach is somewhat similar as well. Both combine local probe points stuck to the surface of objects with a global grid of sample points, and both are using roughly similar approaches for ray steering.
The biggest difference in approach that I see is that Lumen's "local" sampling points are re-created from scratch each frame because they are strictly placed on a screen-space grid, while surfels stay alive as long as the camera hasn't moved too dramatically. That means Lumen needs to do temporal smoothing in screen space at the end of the pipeline, while surfels can do it earlier (and a little bit more flexibly). In theory, that means the best-case performance of surfels when the scene _isn't_ changing and the camera's not moving is significantly better, especially for high-resolution rendering. On the other hand, when the camera is moving, the surfel approach needs to do a lot more bookkeeping to move and update the surfels, so it seems likely to be more expensive in that case.
In practice, the big difference is that Lumen is much farther in development, and actually exists today, including lots of work hammering out edge cases and adding all the little tweaks required to get good real-world performance. Surfel-based GI is clearly earlier stage right now, so it's hard to say how good it will be when it's "done".
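To make the persistence difference above concrete, here is a minimal C++ sketch of the kind of bookkeeping a persistent-surfel cache implies. The names (Surfel, accumulate, recycleStale) and the distance-based recycling rule are illustrative assumptions, not Frostbite's actual implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// Hypothetical surfel record: position/normal anchor it to geometry,
// irradiance is the running estimate refined across frames.
struct Surfel {
    Vec3  position;
    Vec3  normal;
    float radius      = 0.5f;
    Vec3  irradiance;
    int   sampleCount = 0;
};

// Blend one new ray-traced sample into the running average. Because the
// surfel survives between frames, this temporal accumulation happens on
// the surfel itself rather than in screen space.
void accumulate(Surfel& s, const Vec3& sample) {
    float w = 1.0f / float(s.sampleCount + 1);
    s.irradiance.x = s.irradiance.x * (1.0f - w) + sample.x * w;
    s.irradiance.y = s.irradiance.y * (1.0f - w) + sample.y * w;
    s.irradiance.z = s.irradiance.z * (1.0f - w) + sample.z * w;
    s.sampleCount++;
}

// Part of the extra "bookkeeping": surfels that drift out of relevance
// (here, simply too far from the camera) have to be found and recycled.
void recycleStale(std::vector<Surfel>& cache, const Vec3& cameraPos, float maxDist) {
    cache.erase(std::remove_if(cache.begin(), cache.end(),
        [&](const Surfel& s) {
            float dx = s.position.x - cameraPos.x;
            float dy = s.position.y - cameraPos.y;
            float dz = s.position.z - cameraPos.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz) > maxDist;
        }),
        cache.end());
}
```

The point is that the temporal blend lives on the surfel itself and survives camera motion, whereas a screen-space probe grid has to restart (or reproject) its history every frame.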
Sooo, am I right in thinking it's kinda like Nvidia's hardware-raytracing-based global illumination, but instead of single-pixel samples with an AI noise filter, it's a softer, blobbier sample radius with far better performance?
RTX simply provides hardware acceleration of tracing the rays. That is, it makes it really fast to say "fire some rays from this point in this direction and tell me what they hit, how they bounce and what colour information they gather along the way". That's literally all it does. It's up to you to decide how to use that information and incorporate it into your GI shading.
This is basically another version of "fire as many rays as we can afford and accumulate the results over time until it looks realistic". Hardware raytracing could totally be used in this algorithm to make it "look good faster" by firing a lot more rays. The trick with this sort of solution (well, one of many, many tricks) is that you don't want to waste any work you've already done, but you also have limited memory.
I also don't think there's any AI noise filtering going on here. It's just regular procedural denoising unless I missed something.
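For anyone wondering what "accumulate the results over time" looks like in practice, here is a minimal sketch of progressive accumulation. traceRay is a stub standing in for whatever hardware (RTX) or software tracer you plug in, and the direction sampling is deliberately crude:

```cpp
#include <random>

struct Vec3 { float x = 0, y = 0, z = 0; };

// Placeholder tracer: in a real renderer this is where RTX (or a compute
// shader) would intersect the scene and gather bounced radiance.
Vec3 traceRay(const Vec3& /*origin*/, const Vec3& /*direction*/) {
    return Vec3{0.5f, 0.5f, 0.5f};  // constant grey "sky", for illustration only
}

// Progressive estimate: each frame spends whatever ray budget it can
// afford and folds the results into a running average, so the estimate
// keeps improving as long as nothing invalidates the history.
struct RadianceEstimate {
    Vec3 accumulated;
    int  totalRays = 0;

    void addFrame(const Vec3& origin, int raysThisFrame, std::mt19937& rng) {
        std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
        for (int i = 0; i < raysThisFrame; ++i) {
            Vec3 dir{dist(rng), dist(rng), dist(rng)};  // crude direction sampling
            Vec3 sample = traceRay(origin, dir);
            float w = 1.0f / float(totalRays + 1);
            accumulated.x = accumulated.x * (1.0f - w) + sample.x * w;
            accumulated.y = accumulated.y * (1.0f - w) + sample.y * w;
            accumulated.z = accumulated.z * (1.0f - w) + sample.z * w;
            ++totalRays;
        }
    }
};
```

Firing more rays per frame (e.g. with hardware acceleration) just means the running average converges in fewer frames; the accumulation logic stays the same.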
Isn't this essentially just photon mapping? Is there a difference, other than using surfels with depth functions instead of spheres? Photon mapping goes back several decades.
In realtime though?
So did they end up using this approach as a default for GI? Or do they use something else for new EA/Frostbite games?
There are some cool ideas here, but after watching this just once I'm not seeing an obvious advantage vs DDGI. This has very slow convergence times, and even the converged renders sometimes look a little blotchy in the demo. There's a lot of complexity that goes into handling skinned meshes etc. (and it still doesn't handle procedural geometry) that DDGI avoids by storing all of the information in the probe volume.
At the start they mention that they think it's better to calculate the GI on the surface, because that's where it's needed. That sounds sensible in theory, but I wouldn't say that anything here stood out as being visually better than DDGI in practice.
Is there something in the "pro" column that I've missed? I guess it doesn't suffer from DDGI's corner case when all eight surrounding probes are unreachable.
It's good for large open-world games. With DDGI, far objects fall back to a low-res probe grid due to its clipmap structure, whereas GIBS spawns the surfels from screen space, so the cost stays almost constant.
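If it helps, here is a rough sketch of what screen-driven surfel spawning could look like. The tile size, the linear coverage scan, and the G-buffer layout are illustrative assumptions, not how GIBS actually does it; the takeaway is only that the spawning loop runs over screen tiles, so its cost doesn't grow with world size:

```cpp
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// Minimal G-buffer view: world-space position and normal per pixel,
// stored as width * height arrays.
struct GBuffer {
    int width = 0, height = 0;
    std::vector<Vec3> position;
    std::vector<Vec3> normal;
};

struct Surfel { Vec3 position; Vec3 normal; float radius; };

// Count how many existing surfels already cover a given world position.
// (Illustrative: a real implementation would use a spatial hash, not a scan.)
int coverageAt(const std::vector<Surfel>& surfels, const Vec3& p) {
    int count = 0;
    for (const Surfel& s : surfels) {
        float dx = p.x - s.position.x, dy = p.y - s.position.y, dz = p.z - s.position.z;
        if (dx * dx + dy * dy + dz * dz < s.radius * s.radius) ++count;
    }
    return count;
}

// Screen-driven spawning: walk the frame in coarse tiles and add a surfel
// wherever visible geometry is not yet covered. The per-frame work depends
// on the resolution, not on the size of the world.
void spawnFromScreen(const GBuffer& gb, std::vector<Surfel>& surfels,
                     int tileSize, float surfelRadius) {
    for (int ty = 0; ty < gb.height; ty += tileSize) {
        for (int tx = 0; tx < gb.width; tx += tileSize) {
            int idx = ty * gb.width + tx;            // sample one pixel per tile
            const Vec3& p = gb.position[idx];
            if (coverageAt(surfels, p) == 0) {
                surfels.push_back({p, gb.normal[idx], surfelRadius});
            }
        }
    }
}
```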
That is awesome!
I can thank Coretex for telling me about surfels
Interesting solution but I'll take path tracing with radiance caching over this anyway.
Neat! This was made by EA though and so we have to troll them with jokes about how they are gonna start charging $0.01 per surfel
OH!
I thought you said "squirrels". Worst clickbait EVAR!
(I still enjoyed the video.)
Wow
I think that "surface circle" is a better description of what these are versus "surface element"
Surficle
@@nielsbishere Surcle
@@inxiveneoy sule
Since it’s also something that has to do with sinuses, “Snot on a wall”.
Oh god this is so jank. But it works!
You could shoot rays from all light sources, bounce them around, and keep a running average; then you get automatic global illumination. Just keep track of the real-time light maps, as if it's accumulated real ray tracing, i.e. real-time light baking.
paint the textures with light
you only need to fill the image pixels and no more
hows the light outside screen space
importance sample all objects
send more rays
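The idea in the comments above is essentially progressive lightmap baking: spend a fixed photon budget per frame and fold each hit into a texel's running average. A very rough sketch, with a dummy tracePhotonFromLight stub standing in for real photon transport (the lightmap layout and photon budget are arbitrary illustration):

```cpp
#include <random>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// One lightmap texel keeps a running average of the energy deposited on it,
// so the result refines over time instead of being recomputed from scratch.
struct Texel {
    Vec3 radiance;
    int  hits = 0;
    void deposit(const Vec3& energy) {
        float w = 1.0f / float(hits + 1);
        radiance.x = radiance.x * (1.0f - w) + energy.x * w;
        radiance.y = radiance.y * (1.0f - w) + energy.y * w;
        radiance.z = radiance.z * (1.0f - w) + energy.z * w;
        ++hits;
    }
};

// Placeholder: a real tracer would follow the photon through its bounces and
// return which lightmap texel it landed on plus the energy it carried.
struct PhotonHit { int texelIndex; Vec3 energy; };
PhotonHit tracePhotonFromLight(std::mt19937& rng, int texelCount) {
    std::uniform_int_distribution<int> pick(0, texelCount - 1);
    return {pick(rng), Vec3{1.0f, 1.0f, 1.0f}};   // dummy: uniform white hit
}

// Each frame spends a fixed photon budget; over many frames the lightmap
// converges, which is the "accumulated real ray tracing" idea above.
void bakeFrame(std::vector<Texel>& lightmap, int photonsPerFrame, std::mt19937& rng) {
    for (int i = 0; i < photonsPerFrame; ++i) {
        PhotonHit hit = tracePhotonFromLight(rng, int(lightmap.size()));
        lightmap[hit.texelIndex].deposit(hit.energy);
    }
}
```

The open question, as noted above, is handling light that never touches anything visible in screen space; that's exactly what world-space structures (probes or surfels) are there to cover.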
Pixar introduced this kind of rendering technique 15 years ago for offline rendering.
Did they? Wasn't it just regular offline raytracing?
@@clonkex No, it was point-cloud/brickmap-based, with harmonic filtering etc.
Quite beautiful results...
But Frostbite is an EA engine, and EA is not a nice company at all. Pay-to-win and microtransactions, surprise mechanics, taking advantage of kids, etc... So not really interesting.
Company and Engine have to be separated imo.
Who cares? They're also spending some of that money on advancing GI technology. We can benefit greatly from their research and still never touch Frostbite.