Radiance Caching for Real-Time Global Illumination

  • Published Nov 27, 2024

COMMENTS • 53

  • @rallokkcaz
    @rallokkcaz 1 year ago +36

    As a computer scientist, this stuff is almost pure magic. Congratulations to the team at Unreal for designing and implementing these algorithms and tools! This is amazing work.

  • @10minuteartist87
    @10minuteartist87 3 years ago +28

    Remembering the old days of "Mental Ray", when I saw GI for the first time and said wowww... ☺️

    • @michipeka9973
      @michipeka9973 2 months ago +2

      The marbled artifacts before filtering even remind me of Mental Ray renders with a poor quality fgmap... (Vietnam Flashback)

    • @alejmc
      @alejmc 1 month ago +1

      I wonder if actual commercial DCC GI renderers could implement any of this? Like Octane, or Cycles in Blender?
      Since they're not rasterized, though, maybe it wouldn't have as much benefit in the end, but it could be great at least for lightning-fast previz.

  • @Alexander_Sannikov
    @Alexander_Sannikov 3 years ago +41

    I see many more people are interested in Nanite than in Lumen (judging by the number of views). But 90% of Nanite's ideas had already been implemented in the 2008 paper that it's based on. Radiance caching in screen space, however, looks novel, and even though the idea is somewhat obvious once you name it, I have actually never seen it implemented anywhere (see the sketch after this thread).
    This is in my opinion the core idea of the entire approach, and it can be used for a much wider set of GI algorithms based on different acceleration structures.

    • @djayjp
      @djayjp 3 years ago +5

      Check out the video by EA about their Surfel GI technique used in Frostbite. I think it's using the same method fundamentally? Idk

    • @635574
      @635574 3 years ago +4

      @@djayjp It might achieve similar results, but we have no direct behind-the-scenes comparison for realtime motion through a scene, or for bidirectional dynamic-object GI in Lumen. There isn't as much temporal accumulation in Lumen, from what we have seen.

    • @wsqdawsdawdwad
      @wsqdawsdawdwad 2 years ago +3

      "But 90% of lumen ideas have already been implemented in the 2008 paper that it's based on", is this a typo? From the context I think you mean Nanite.

    • @Alexander_Sannikov
      @Alexander_Sannikov 2 years ago +2

      @@wsqdawsdawdwad correct, thanks for that. I edited the post.

    • @Revoker1221
      @Revoker1221 2 years ago +1

      @@Alexander_Sannikov Would you happen to have a link or name of the 2008 paper on hand? I'd love to take a deeper dive.
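
An aside on the screen-space radiance caching idea discussed in this thread: below is a minimal CPU sketch written under my own assumptions, not Lumen's actual code. traceRadiance and gbufferDepth are hypothetical stand-ins for the renderer's ray tracing and G-buffer. The cache gathers lighting at one probe per 16x16-pixel tile, then upsamples to full resolution with a depth-aware bilinear filter so probes don't leak light across depth edges.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    constexpr int W = 1920, H = 1080, TILE = 16;
    constexpr int PW = (W + TILE - 1) / TILE + 1;   // probe grid width
    constexpr int PH = (H + TILE - 1) / TILE + 1;   // probe grid height
    constexpr int RAYS_PER_PROBE = 64;              // amortized over the whole tile

    // Stubs standing in for the real renderer (assumptions for this sketch).
    Vec3 traceRadiance(int, int, int) { return {0.5f, 0.5f, 0.5f}; }      // fake sky
    float gbufferDepth(int x, int y) { return 10.0f + 0.01f * (x + y); }  // fake depth

    struct Probe { Vec3 irradiance; float depth; };

    void computeScreenSpaceGI(std::vector<Vec3>& outGI) {
        // 1. Trace rays at the sparse probes instead of at every pixel:
        //    roughly (W/TILE)*(H/TILE) gathers instead of W*H.
        std::vector<Probe> probes(PW * PH);
        for (int j = 0; j < PH; ++j)
            for (int i = 0; i < PW; ++i) {
                int x = std::min(i * TILE, W - 1), y = std::min(j * TILE, H - 1);
                Vec3 sum{0, 0, 0};
                for (int r = 0; r < RAYS_PER_PROBE; ++r)
                    sum = sum + traceRadiance(x, y, r);
                probes[j * PW + i] = {sum * (1.0f / RAYS_PER_PROBE), gbufferDepth(x, y)};
            }

        // 2. Every pixel interpolates its 4 surrounding probes, weighting each
        //    by bilinear proximity and depth similarity (edge-aware upsample).
        outGI.resize(size_t(W) * H);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                int i = x / TILE, j = y / TILE;
                float fx = float(x % TILE) / TILE, fy = float(y % TILE) / TILE;
                float d = gbufferDepth(x, y);
                Vec3 acc{0, 0, 0};
                float wsum = 0;
                for (int dj = 0; dj <= 1; ++dj)
                    for (int di = 0; di <= 1; ++di) {
                        const Probe& p = probes[(j + dj) * PW + (i + di)];
                        float wb = (di ? fx : 1 - fx) * (dj ? fy : 1 - fy);
                        float wd = 1.0f / (1.0f + 50.0f * std::fabs(p.depth - d) / d);
                        acc = acc + p.irradiance * (wb * wd);
                        wsum += wb * wd;
                    }
                outGI[size_t(y) * W + x] = acc * (1.0f / std::max(wsum, 1e-4f));
            }
    }

The point of the structure is the cost model: the expensive ray gather runs at probe resolution, while only the cheap, filterable interpolation runs per pixel.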

  • @prithvib8662
    @prithvib8662 3 years ago +13

    Really well explained, good stuff!

  • @DominikMorse
    @DominikMorse 3 years ago +7

    This is just pure genius.

  • @robbie_
    @robbie_ 3 years ago +6

    I didn't understand a single word of this. I'm still working on my Lambert shading algorithm.

    • @UserX03
      @UserX03 2 years ago +10

      That's alright bud, everyone starts out knowing nothing about a topic.

  • @eclairesrhapsodos5496
    @eclairesrhapsodos5496 3 years ago +14

    I wonder if it's possible to have semi-dynamic GI: lightmaps carrying extra data that lets them interact and blend with realtime GI, so that only the changing parts and the characters get realtime GI (see the sketch after this thread).

    • @doltBmB
      @doltBmB 2 years ago

      Would basically be a modern version of Quake's lighting system, so yeah, it should be possible.

    • @clonkex
      @clonkex 1 year ago +1

      Totally. I'd say they just didn't do that because it would be a fair bit more work, and their target audience (console game devs) would prefer fully dynamic lighting with no baking time.

    • @theneonbop
      @theneonbop 5 months ago +1

      How would you determine which parts of the lightmap are valid?
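
For what it's worth, the hybrid this thread is asking about can be sketched in a few lines: sample the baked lightmap where the bake is still valid, and crossfade to realtime GI in regions invalidated by moving objects or lights. Everything below is a hypothetical illustration (sampleBakedLightmap, sampleRealtimeGI, and dirtiness are made-up stand-ins, not any engine's API). One plausible answer to the validity question above is to rasterize the bounds of moved objects and changed lights into a coarse dirtiness mask over the lightmap.

    struct Vec3 { float x, y, z; };
    static Vec3 lerp(Vec3 a, Vec3 b, float t) {
        return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
    }

    // Stubs standing in for engine queries (assumptions for this sketch).
    Vec3 sampleBakedLightmap(float, float) { return {0.8f, 0.7f, 0.6f}; }
    Vec3 sampleRealtimeGI(float, float)    { return {0.4f, 0.5f, 0.7f}; }
    // 0 = bake still valid at this texel, 1 = fully invalidated (a moved
    // light or dynamic occluder overlaps its influence region).
    float dirtiness(float u, float v)      { return u > 0.5f ? 1.0f : 0.0f; }

    Vec3 shadeIndirect(float u, float v) {
        float t = dirtiness(u, v);
        if (t <= 0.0f) return sampleBakedLightmap(u, v);  // cheap, common path
        if (t >= 1.0f) return sampleRealtimeGI(u, v);     // fully dynamic path
        // Transition zone: crossfade so the baked/dynamic seam is not a hard edge.
        return lerp(sampleBakedLightmap(u, v), sampleRealtimeGI(u, v), t);
    }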

  • @caelanread3199
    @caelanread3199 3 years ago +16

    It looks as though, by the end, real-life graphics are achieved with 2 rays per pixel. I'm excited for the future.

    • @GeneralKenobi69420
      @GeneralKenobi69420 3 years ago +8

      People have been saying that for the last 20 years lol. It does look good, but trust me, there's still a lot that can be improved before it's actually indistinguishable from reality. Just compare it with any Disney or Pixar movie made in the last 3 years. Real-time graphics are still many years away from that, and even then movies will probably have gotten even better by then.

    • @TehIdiotOne
      @TehIdiotOne 3 years ago +8

      @@GeneralKenobi69420 I mean, modern graphics aren't even remotely comparable to graphics of 20 years ago dude...

    • @GeneralKenobi69420
      @GeneralKenobi69420 3 years ago +8

      @@TehIdiotOne i know, and yet you should have seen the reactions when Sony revealed that PS2 tech demo. "Omg so lifelike! It looks just like a movie!!"

    • @mnomadvfx
      @mnomadvfx 3 years ago +6

      @@GeneralKenobi69420 20 years ago real-time graphics were not even close to this level of advancement.
      As for feature CGI animation, at this point there is little left to make it LOOK better.
      The real advancements for offline rendering will come from:
      1) faster rendering of what they can already do, making 4K VFX targets for all films, regardless of shot quantity, a serious possibility.
      2) further improvements to FX simulations (water, fire, smoke) and tissue/muscle simulations for creatures and digital humans.
      For me, what is currently lacking is a believable volume-conserving model for facial animation that captures everything down to the small-scale tics and other unconscious movement constantly happening on the average human face.
      For example, it took the VFX guys a year to create the Blade Runner 2049 scene with the faux Rachael.
      Despite extensive reference video from the original film and skull scans from Sean Young to guide them, it still looks fundamentally off when her face moves, like parts of her face are simply asleep or affected by botox. (IMHO they also way overdid the subsurface scattering effect; her facial skin looked less like a fleshy volume over a skull than a light-diffusing film covering another translucent volume.)

    • @delphicdescant
      @delphicdescant 3 years ago +4

      @@GeneralKenobi69420 People have been saying it for 20 years, and every time it's been true. That's a great thing imo.
      When someone looked at Mario 64 when it came out and said "wow, that looks like real life," they weren't wrong.
      You can either look at the progression of graphics technology and think "we have never achieved, and never will achieve, 100% photorealism, because there will always be something better 10 years later," or you can recognize that human perception is a fuzzy and relative thing and say "we have achieved photorealism many times, and every time it stays new and exciting."

  • @diligencehumility6971
    @diligencehumility6971 3 years ago +11

    So what am I supposed to do with my 10 years of Unity experience now?

    • @iammichaeldavis
      @iammichaeldavis 3 years ago +5

      It’ll translate 🥰

    • @cosmotect
      @cosmotect 2 months ago +1

      Lighting is fun, but it doesn't make a game

    • @alejmc
      @alejmc 1 month ago

      An opportunity for a "Radiance Caching" package on the Unity Asset Store is there now; with SRPs it could be doable enough, or at least somewhat useful.
      Maybe not as powerful, though. These people at Unreal Engine blow my mind.

  • @raanoctive6092
    @raanoctive6092 3 years ago +2

    Excellent job👏👍

  • @arifahmed6610
    @arifahmed6610 3 years ago +1

    Beautiful video 😍

  • @635574
    @635574 3 years ago +1

    But does it even work on dynamic objects, or only on how the map interacts with them? EA's surfels can handle rigged meshes, but so far with only one sample per bone.

  • @djayjp
    @djayjp 3 years ago +6

    Anyone know how this differs from Frostbite's Surfels GI technique, in a nutshell?

    • @sporefergieboy10
      @sporefergieboy10 3 years ago +16

      One uses wizard magic the other one uses alien technology

    • @Sergeeeek
      @Sergeeeek 3 years ago +11

      They are similar, but Lumen uses grids for its screen-space cache, while GIBS uses surfels (surface elements) which lie directly on the geometry. Also, I don't think GIBS has static world-space probes like Lumen; they just put surfels far away and make them bigger. In a way they always use world-space probes, but place and size them dynamically.
      They do use a grid to accelerate searching for surfels, and the grid changes based on distance from the camera. Close up it's uniformly sized, but far away it becomes a view-aligned frustum grid, where each cell is stretched to align with the view (see the sketch after this thread). From the camera's perspective it looks the same, but in world space the cells stretch a lot, which makes it easier to cover a huge open-world area.
      Overall they seem to achieve a very similar result, but I personally think surfels are neater.

    • @djayjp
      @djayjp 3 years ago +1

      @@Sergeeeek Ah I had watched part of their respective presentations but couldn't quite determine how they might differ. Your explanation helps a lot! Thanks! I have to say that, simply based on the renders shown in each of their SIGGRAPH videos, Lumen appears to give more realistic results, closer to unbiased PT.

    • @Sergeeeek
      @Sergeeeek 3 years ago +3

      @@djayjp Watch the GIBS presentation fully too; I really liked the demo where a skinned character with a fully emissive material illuminated the scene while animating through it. Very impressive imo.

    • @djayjp
      @djayjp 3 years ago

      @@Sergeeeek Yeah that's a good point. I didn't see anything quite like that with Lumen. Also I haven't seen a scene with many dynamic lights in Lumen just yet.
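
A concrete way to picture the distance-adaptive grid described in this thread, as a froxel/clipmap-style illustration under my own assumptions (not GIBS's or Lumen's actual scheme; all parameter names are invented): cells are uniform near the camera and grow geometrically with distance, so the slice index is linear up close and logarithmic far away.

    #include <cmath>

    // Map distance from the camera to a grid slice index: uniform cells out
    // to nearRange, then each successive cell is `growth` times thicker, so
    // one modest grid covers an open world without wasting cells up close.
    int depthSlice(float dist, float nearRange = 20.0f,
                   float cellSize = 1.0f, float growth = 1.15f) {
        if (dist <= nearRange)
            return int(dist / cellSize);              // uniform region
        int uniformSlices = int(nearRange / cellSize);
        // Invert the geometric series: after n thick slices, the covered
        // distance is cellSize * (growth^n - 1) / (growth - 1).
        float s = (dist - nearRange) * (growth - 1.0f) / cellSize;
        return uniformSlices + int(std::log(1.0f + s) / std::log(growth));
    }

With these example numbers, depthSlice(10) is 10, but depthSlice(1000) lands around slice 55 rather than 1000: from the camera's point of view the far cells still look evenly sized, which matches the "stretched in world space, uniform from the camera" description above.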

  • @bananas4jim
    @bananas4jim 1 year ago

    Awesome job!!

  • @zooi.
    @zooi. 1 month ago

    All of this is very cool, but I think any method that is even slightly temporally incoherent is just not worth it. Everything I've seen so far depends on the last frame or last couple of frames, and it inevitably causes smearing and/or takes too long to accumulate. It's so distracting that I'd much rather stick to static lightmaps and other conventional methods.
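
The smearing being described comes from the temporal accumulation step itself; in its simplest form it is just an exponential moving average over previous frames. A toy sketch, not any particular engine's filter:

    struct Vec3 { float x, y, z; };

    // historyWeight ~0.9-0.95 is typical: each new frame contributes only
    // 5-10%, so noise is hidden but a sudden lighting change takes dozens
    // of frames to settle; that is the lag/smear vs. flicker trade-off.
    Vec3 temporalAccumulate(Vec3 history, Vec3 current, float historyWeight) {
        float w = historyWeight;
        return {history.x * w + current.x * (1 - w),
                history.y * w + current.y * (1 - w),
                history.z * w + current.z * (1 - w)};
    }

Real pipelines reduce the artifact by reprojecting the history with motion vectors and rejecting it on disocclusion, but some lag is inherent to the averaging.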

  • @yizhang7027
    @yizhang7027 1 year ago +2

    It'd be much clearer if you could state the problem prior to an explanation.

    • @blackrack2008
      @blackrack2008 6 months ago +3

      He did

    • @OrpheusSonOfCalliope
      @OrpheusSonOfCalliope 2 months ago +1

      The problem is how to make more realistic global/ambient shading that is efficient. Most objects in a scene do not move, and typically the light source does not move either, so the light reflecting off most surfaces is not changing and can be cached (stored) so that it only has to be calculated once.
      Part of the problem is that this is a technical conference, and many of the terms are expected to be known by the attendees. It's also true that researchers aren't always the clearest communicators and may introduce new terms without defining them.
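
Reduced to its essence, the caching described in the comment above might look like the sketch below. This is illustrative C++ under my own assumptions; real radiance caches store directional data, age their entries, and invalidate selectively rather than wholesale.

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>

    struct Vec3 { float x, y, z; };

    Vec3 computeIndirectLighting(Vec3) { return {0.2f, 0.2f, 0.25f}; }  // expensive part (stub)

    std::unordered_map<uint64_t, Vec3> gCache;

    // Quantize the shading position to a 0.5 m grid so nearby points share
    // one cache entry; pack the three cell coordinates into one 64-bit key.
    uint64_t cacheKey(Vec3 p, float cell = 0.5f) {
        auto q = [cell](float v) {
            return uint64_t(int64_t(std::floor(v / cell)) & 0x1FFFFF);  // 21 bits each
        };
        return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
    }

    Vec3 cachedIndirectLighting(Vec3 p) {
        uint64_t key = cacheKey(p);
        auto it = gCache.find(key);
        if (it != gCache.end()) return it->second;  // hit: reuse, nearly free
        Vec3 L = computeIndirectLighting(p);        // miss: pay the cost once
        gCache.emplace(key, L);
        return L;
    }

    // When a light or object moves, affected entries must be discarded;
    // clearing everything is the crudest possible invalidation policy.
    void invalidateCache() { gCache.clear(); }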

  • @thecrypt6482
    @thecrypt6482 2 years ago +2

    It's creepy that both Epic Games' and Electronic Arts' new GI systems are based on a 20-year-old method from Mental Ray, wow...

  • @roklaca3138
    @roklaca3138 3 months ago

    Pointless if it still needs a $2000 GPU to get 30 fps.

  • @LotmineRu
    @LotmineRu 1 year ago

    unusable

  • @prltqdf9
    @prltqdf9 3 years ago +4

    This presenter's narration suffers from slurred speech. It's oftentimes hard to understand what he is talking about.

    • @djayjp
      @djayjp 3 years ago +2

      Turn on the auto captions.