How Ray Tracing Works - Computerphile

  • Published 13 May 2024
  • Ray tracing is massive and gives realistic graphics in games & movies but how does it work? Lewis Stuart explains.
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com
    Thank you to Jane Street for their support of this channel. Learn more: www.janestreet.com

COMMENTS • 211

  • @RockLou 14 days ago +484

    "We should go outside" rarely uttered words by a computer engineer

    • @loc4725 14 days ago +50

      Yes, the bravery on show was impressive.

    • @n30v4 14 days ago +17

      At that moment I knew it's AI generated.

    • @tango_doggy 14 days ago +4

      you're thinking of the computer science side.. the electrical engineering side entrances people to fly across the world testing different countries' fault protection systems

    • @vinnyfromvenus8188 14 days ago +10

      literally a "touch grass" moment

    • @MePeterNicholls 14 days ago +5

      He struggled on the line “that’s how real life works” tho

  • @emanggitulah4319 14 days ago +147

    Awesome CGI... Looked so real having sunshine in Britain. 😂

  • @shanehebert396 14 days ago +73

    Way back in the day, one of our computer graphics assignments in college was to write a parallel ray tracer in C on a SGI Power IRIS server. It was a lot of fun.

    • @maximmk6446 14 days ago +3

      Is that sometime around the late 80s?

    • @sagejpc1175 14 days ago +1

      You poor soul

  • @broyojo 14 days ago +124

    first time on computerphile that we touched grass

    • @vitaly2432 14 days ago +14

      I'm new to this channel and I hope it is the first time (and the last) that we touched a monitor, too

    • @benwisey 12 days ago +1

      I think it may not be the first time touching grass or a monitor.

  • @phiefer3 14 days ago +38

    One thing that's sort of brushed over here is that while all the things he mentioned about rasterization sound more complicated than ray tracing, in the early days of computer graphics it was by far the simpler method (heck, in the early early days, things like having a light source weren't even a thing). Rasterization evolved from the fact that doing something like ray tracing for every single pixel was not even close to being practical, especially not in anything real time. Essentially, rasterization was a shortcut that allowed us to render graphics in a very simplified manner, because that was all the technology of the time was capable of.
    As technology improved, we then added more bells and whistles to rasterization to improve it, like lighting, shadow maps, depth maps, etc. These all made it a bit more complex to take advantage of improved hardware and software, but it was still far easier than ray tracing, which was still beyond what could be done in real time. And this was mostly how graphics technology improved over time: adding more and more bells and whistles to the shortcuts built on top of shortcuts on top of shortcuts that was rasterization.
    But in recent years we've reached a turning point where two key things have happened: first, modern technology is now capable of things like ray tracing in real time; and second, all the extra stuff that's been added to rasterization over the years to improve its quality is starting to approach the complexity level of ray tracing. That's why it now seems like ray tracing is such a big leap in quality for such a small difference in performance. The tradeoff is still there, but eventually we'll probably see a point where ray tracing is both faster and higher quality than rasterization-based graphics.

    • @yoyonel1808 14 days ago

      Very nice explanation, thank you 😊

    • @jcm2606 13 days ago +4

      Another thing is that rasterization is starting to reach an accuracy ceiling where it's becoming disproportionately harder to push the accuracy higher. The best example of this I can think of would be complex interactions between different lighting phenomena (like a surface diffusely reflecting another surface that is specularly reflecting a light source, leaving a pattern of light behind on the diffuse surface; think the really pretty patterns reflecting off of water on the underside of a boat hull, that's what I mean).
      To accurately reproduce those interactions you really need the ability to have light be simulated/approximated in any order (ie you need the ability to diffusely reflect a specular reflection AND the ability to specularly reflect a diffuse reflection AT THE SAME TIME), which is extremely difficult to do with rasterization in a performance-friendly way, because of how rasterization uses a well defined order in its approximations (you could use some tricks like reuse the output of intermediate passes from past frames, then reproject, validate and maybe reweight past frame outputs to make them match more closely with the current frame, but that introduces a bunch of errors which hurts accuracy).
      Raytracing, on the other hand, gets this basically for free as it's an inherently "recursive" algorithm in that any type of lighting interaction is naturally nested within any other type of lighting interaction (at least for path tracing or recursive raytracing, most games nowadays are using "raytracing" to refer to replacing specific lighting interactions with standalone raytraced variants, so "raytracing" in the context of most current games still has a well defined order like rasterization).

    • @AnttiBrax 13 days ago +4

      Are you sure about that historic part? Computerized ray tracing dates back to the late 60's. I think you might be only considering things from the real time graphics point of view.

    • @emporioalnino4670 8 days ago +1

      I disagree about the small impact on performance, it's pretty substantial. Most gamers choose to turn RT off to save frames!

  • @lMINERl 14 days ago +53

    How ray tracing works: make a line, then trace where it will go

    • @Hydrabogen 13 days ago +12

      One may even go so far as to call the line a ray

  • @mahdijafari7281 14 days ago +55

    "It's really easy!"
    Until you need to optimise it...
    Great video btw.

  • @IanFarquharson2 14 days ago +14

    1987, BSc computer science graphics course, same maths, but all night to render a teapot on a minicomputer. PS5 today doing cars in realtime.

  • @BrianMcElwain 14 days ago +22

    A golden opportunity was missed here to explain subsurface scattering par excellence via that 99.9% translucent skin of dear Lewis here.

    • @TheSliderW 13 days ago

      And bounce light contributing to object color and lighting, i.e. the green grass lighting him up on the left side :)

  • @yooyo3d 13 days ago +2

    I wrote my first ray tracer ~30 years ago on a 486 PC in MS-DOS. I used the Watcom C compiler to build 32-bit code and to use the FPU on the CPU.
    My friend and I developed a wide variety of math functions to calculate the intersection between a line and a triangle, sphere, quadratic surface, box, even Bezier surfaces (a sketch of the sphere case follows this comment). Then we developed math functions to describe materials and surfaces, and to calculate refraction and reflection. Our material system allowed us to define objects with multiple reflection and refraction indices. Then we developed procedural texturing, soft shadows, area lighting, CSG, "blobby" objects... We knew the slowest thing was a scene full of triangles, so we tried hard to describe scenes with more or less complex math functions.
    Part of the project was speeding up ray hits, so we tried various algorithms like octrees, bounding spheres and bounding boxes, and finally stuck with a unique approach: project the bounding boxes onto the world axes and then step through the entry/exit points, like open and closed braces.
    The scene was described in code itself. We didn't have any 3D editor. I tried to develop one but eventually gave up.
    This was long before we had internet access.
    Then we got internet access and we found POV-Ray.
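
  A minimal sketch of the sphere case of those intersection routines, in plain Python (illustrative, not the commenter's code; the ray direction is assumed to be normalized):

      import math

      def ray_sphere(origin, direction, center, radius):
          # Solve |o + t*d - c|^2 = r^2 for the ray parameter t.
          oc = [origin[i] - center[i] for i in range(3)]
          b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
          c = sum(v * v for v in oc) - radius * radius
          disc = b * b - 4.0 * c        # a == 1 for a unit-length direction
          if disc < 0.0:
              return None               # the ray misses the sphere entirely
          root = math.sqrt(disc)
          for t in ((-b - root) / 2.0, (-b + root) / 2.0):
              if t > 1e-6:              # nearest hit in front of the origin
                  return t
          return None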

  • @jalsiddharth 14 days ago +14

    OMG ITS MY TA FROM THE COMPUTER GRAPHICS MODULE, LEWIS!!!! LETS GOOO!

  • @AySz88 14 days ago +6

    Oof, the question from Brady at 14:10 about the "pixel" units, that's then glossed(ha) over, really does touch upon one of the hard subtle things about raytracing: avoiding infinite loops. If you're not careful, you can make lighting and reflections that don't "conserve energy" and end up unpredictably creating an infinite number of rays, imply an infinite amount of light, or (usually) both!
    So a ray can't simply be a "pixel" - you have to be pretty careful with precise units ("radiance" vs "radiosity", etc.), making it potentially less forgiving to create new effects with than rasterization shaders. Meanwhile rasterization can start out a lot more intuitive for artists that would like to think of the image like it's a canvas. Raytracing isn't all easier all the time.
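
  One standard way to guarantee the termination this comment worries about, sketched in plain Python (a hedged illustration: scene.intersect, scene.background and hit.scatter are assumed helpers, and light is a grayscale scalar for brevity). Cap the bounce depth and use Russian roulette, dividing surviving paths by the survival probability so the average stays correct:

      import random

      MAX_DEPTH = 16

      def trace(ray, scene, depth=0):
          hit = scene.intersect(ray)           # assumed helper
          if hit is None:
              return scene.background(ray)     # assumed helper
          result = hit.emission
          # Russian roulette: after a few bounces, kill the path with
          # probability 1 - p and divide survivors by p, so the estimate
          # stays unbiased while recursion is guaranteed to stop.
          p = min(0.95, hit.reflectance) if depth > 3 else 1.0
          if depth >= MAX_DEPTH or random.random() >= p:
              return result
          bounce = hit.scatter(ray)            # assumed helper: next ray
          return result + hit.reflectance * trace(bounce, scene, depth + 1) / p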

  • @chrischeetham2659 14 days ago +116

    Great video, but my brain couldn't cope with the monitor prodding 😂

    • @MelHaynesJr 14 days ago +13

      I am glad I wasn't the only one. I was screaming in my head

    • @AySz88 14 days ago +10

      I was curious and googled the model number. It's apparently a ~2012 monitor discontinued prior to ~2018. I still wouldn't approve, but I could imagine its continued existence being considered more bane than boon.

    • @mihainita5325 14 days ago +1

      Same, I wanted to yell "stop doing that!" :-)
      The only explanation (in my head) was that it is in fact a touch screen (the way it deformed it might be?). So touching it is then a very natural thing to do.

    • @TheGreatAtario 13 days ago +5

      @@mihainita5325 Touch screens don't do that light-squish-ripple effect. Also if it were a touch screen I would have expected the touches to be doing clicks and drags and such

  • @realdarthplagueis 14 days ago +18

    I remember the Persistence of Vision ray-tracer from the 90s. I was running it on a 486 Intel PC. Every frame took hours to render.

    • @user-zz6fk8bc8u 14 days ago +2

      Me too. It was awesome.

    • @philp4684 14 days ago +3

      I played with it on my Atari ST. A 320x200 image would take all night, and you'd have to use an image viewer that did clever things with palette switching in timer interrupts to make it display more than 16 colours on screen to even see the result.

    • @feandil666 14 days ago +1

      yep, and now a high end game on a high end GPU can raytrace a 4K image in 15 ms (cheating of course; there are things like DLSS that let the game raytrace at a much smaller resolution that is upscaled after)

    • @ryan0io 14 days ago +1

      Who remembers DKBTrace before there was POV-Ray? I ran it on a 386. Talk about needing patience.

    • @Zadster 14 days ago

      POV-Ray was incredible. I first used it on my 386SX20. Even rendering 80x60 pixel images needed a LOT of patience. When I got my first 24-bit video card it was mindblowing! That and FractInt really soaked up CPU cycles.

  • @DigitalJedi 12 days ago +3

    Great breakdown of how ray tracing works. I'd love to see another video comparing the usual ray-traced approach to things like cone tracing and the other shape-tracing ideas. Would be interesting to see what optimizations and tradeoffs each makes.

  • @DS-rd8ud 14 days ago +5

    16:01 Arnold warning sign telling people to not touch the robots or the table. The ultimate security measure.

  • @Sora_Halomon 14 days ago +4

    I know why most Computerphile videos are filmed indoors, but I really like seeing outdoor footage. It kinda reminds me of the early Numberphile videos filmed in the stadium and on roads.

  • @arrowtlg2646 14 days ago +8

    6:20 man seeing that campus is a throwback! Miss Nottingham!

  • @luispereira628 10 days ago +1

    Whenever someone who is very passionate about their work speaks, you know the video will be great! Great explanation and loved the passion 😊

  • @therealEmpyre 7 days ago

    In 1986, I wrote a program that used ray casting, a more primitive form of ray tracing, for a university project. It took several minutes for that 286 to render a simple scene at 320 by 200 by 256 colors in VGA.

  • @250bythepark 12 days ago +1

    I think you're a great addition to Computerphile, hope you make more videos, really interesting stuff!

  • @HarhaMedia 9 days ago

    Writing a bunch of raytracers as a hobbyist programmer really helped me understand vectors and matrices.

  • @jordantylerflores2993 13 days ago +3

    Thank you! This was very informative. Could you do a segment on the differences between Ray Tracing and Path Tracing?

  • @BruceZempeda 13 days ago +1

    Best computerphile video in a while

  • @Joseph_Roffey 14 days ago +17

    I was sad that he didn't address the question "do mirrors themselves get treated as potential light sources?"
    Because from the way he described it, it didn't seem like they would automatically do so, and also because in the final example I was surprised the shadow didn't seem to change shape when the mirrors were added, as surely the light would've been able to hit more of the shadow with the mirrors on either side.

    • @AnttiBrax 14 days ago +26

      Mirrors aren't light sources per se. Whenever the ray hits any surface you basically start the same calculation as you did when you shot the first ray and add that result to the original ray. So a ray that hits a mirror may eventually hit a light source. What was skipped here was that the ray can keep bouncing hundreds of times before it hits a light source and each hit affects the colour of the pixel. And that's why ray tracing is so slow.

    • @unvergebeneid 14 days ago +3

      Mirrors are just another surface in ray tracing but they are funnily enough the easiest surface to compute. Diffuse materials are much harder because each point needs to generate in theory infinitely many rays itself, whereas a perfect mirror needs only a single ray.

    • @SMorales851 14 days ago +1

      No, mirrors are not light sources. For the mirrors to have the effect you described, a less basic raytracer is necessary. Typically, the rays are not bounced directly towards the light (that's more of a classic rasterizer thing). Instead, the behavior depends on the type of surface. Smooth, mirror-like surfaces bounce the ray at one specific angle, like mirrors do in real life. Rough surfaces instead "split" the ray into smaller rays that shoot out in random directions; the color of the surface is then the sum of the colors returned by those rays (see the sketch after this thread). The more of those subrays you have, and the more times they are allowed to split and bounce recursively, the better the image quality (but performance suffers greatly). That random ray bouncing generates what's known as "indirect lighting", which is all light that doesn't come directly from a light source, but is instead reflected off of something else first.

    • @jcm2606 13 days ago

      @@SMorales851 To be pedantic, some raytracers do actually trace rays directly towards light sources. There's an entire optimisation technique called next event estimation where a subset of rays are specifically dedicated to being traced towards known light sources, then the returned energy value is weighted to conserve energy since you technically did introduce some bias to the algorithm by doing this. There's also another optimisation technique called reservoir importance sampling which generalises NEE (specifically as part of multiple importance sampling which combines NEE with BRDF importance sampling) to sample _pixels_ that are known to contribute meaningfully to the image, rather than specifically known light sources (this technique is commonly known as ReSTIR, though reservoir sampling is useful in other areas so ReSTIR isn't the only use of it).
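
  A sketch of the mirror-versus-rough branching described in this thread (illustrative Python; Ray, trace, reflect and random_hemisphere_dir are assumed helpers, and light is a scalar for brevity):

      def shade(ray, hit, scene, depth, n_subrays=8):
          if depth == 0:
              return hit.emission                    # recursion floor
          if hit.is_mirror:
              # smooth surface: one perfectly reflected ray
              d = reflect(ray.direction, hit.normal)
              return hit.emission + trace(Ray(hit.point, d), scene, depth - 1)
          # rough surface: "split" into random sub-rays and average them;
          # this averaged sum is the indirect lighting described above
          total = 0.0
          for _ in range(n_subrays):
              d = random_hemisphere_dir(hit.normal)
              total += trace(Ray(hit.point, d), scene, depth - 1)
          return hit.emission + hit.albedo * total / n_subrays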

  • @joaoguerreiro9403 14 days ago +1

    Computer Science is awesome!

  • @glitchy_weasel 5 days ago

    I think it would be fun to have more outdoor Computerphile episodes :)

  • @AlbertoApuCRC 14 days ago

    I followed a raytracing course a few years ago... loved it

    • @WeeeAffandi 14 days ago

      How different is Path Tracing?

  • @musthavechannel5262 14 days ago +11

    Obviously it's an oversimplified version of ray tracing, since it doesn't explain why shadows aren't pitch black

    • @trevinbeattie4888 14 days ago +1

      Ambient lighting :)

    • @evolutionarytheory 10 days ago

      It's implied. If the light source has a non-zero width it's implied. If you bounce the ray more than once it's also implied. But he didn't cover stochastic ray tracing, which would have made it more obvious.

  • @omegahaxors3306 14 days ago +1

    Minecraft still uses rasterization, the ray-tracing thing was a mod made by a graphics card company as part of a marketing campaign.
    They made mods for a bunch of other games too, from what I understood they had an API that let them hook into the lighting engine.

  • @paull923 13 days ago +1

    great video, thank you very much!
    Regarding 10:01 "How do we figure out what objects we've hit", can you make a video about that?

  • @Bugside 14 days ago +2

    Should have shown bounce light, ambient occlusion, and colors affecting other objects. I find those the coolest effects

  • @jonnydve 14 days ago

    I love that you are using (presumably old stock of) infinite paper. Growing up I used to draw lots on it because my grandfather had tons still lying around (I was born in '99)

  • @lolroflmaoization 14 days ago +5

    Honestly you would get a much better explanation of rasterization and its limitations as compared to raytracing by watching Alex's videos on Digital Foundry

  • @SeamusHarper1234 14 days ago +1

    I love how you got the green color all over your hands xD

  • @totlyepic 14 days ago +10

    16:22 Dude must want a new monitor the way he's jamming his finger into this one.

  • @toxicbullets10 14 days ago +2

    intuitive explanation

  • @ukbloke28 13 days ago +1

    Wait. Where did you get that oldschool printer paper? I used to draw on that as a kid in the 70s, you just gave me mad nostalgia. I want to get hold of some!

  • @Serjgap 14 days ago +3

    I am now understanding even less than before

  • @ocamlmail 6 days ago

    Tremendously cool, thank you!

  • @Roxor128 14 days ago +1

    And let's not forget the mad geniuses of the Demoscene who pulled off real-time ray tracing as far back as 1995!
    Recommended viewing: Transgression 2 by MFX (1996), Heaven 7 by Exceed (2000), and Still Sucking Nature by Federation Against Nature (2003). All of them have video recordings uploaded to YouTube, so just plug the titles into the search box.

    • @bishboria 13 days ago

      Heaven 7 was/is amazing

    • @yooyo3d 13 days ago

      The TGR2 by MFX intro didn't use any ray tracing. It was a very effective fake.

    • @Roxor128 13 days ago

      @@yooyo3d Citation? Better still, how about an annotated disassembly walking us through what it actually does?

  • @bengoodwin2141 14 days ago

    They mentioned Minecraft using Ray tracing, there is an experimental version of the game that uses it, but the main version of the game still uses rasterization. There are also fan made modifications that add ray tracing and/or extra shaders to make the game look nicer as well.

  • @frickxnas 14 days ago

    Ray tracing for rendering and ray picking for mouse are incredible. Been using them since 2014

  • @thenoblerot 14 days ago

    I started ray tracing with POV-Ray on my 386/387 with 2 MB of RAM. Hours or even days for a 320x240 image!

  • @Kane0123 14 days ago

    “Now you can see why that’s more efficient” - another line added to my CV.

  • @MKBlackbird 14 days ago +1

    Ray tracing in modern games is optimized by shooting fewer rays and then "guessing" how it would have looked with all the rays using AI. It is super cool how that makes ray tracing actually feasible for games. Now another interesting approach is bringing in a diffusion model, either dreaming up the final image from a color-coded (cheaply rendered rasterization) segmentation of the scene, or just adding a final touch on top of a normally rendered frame. I imagine diffusion models and other similar approaches will become increasingly fast and actually make this possible. It would be like SweetFX with style transfer.

    • @jonsen2k 14 days ago +2

      We're still using rasterization at the bottom as well, aren't we?
      I thought ray tracing was only used to get shadows, reflections and such more lifelike on top of the otherwise rasterized frame.

    • @MKBlackbird 14 days ago +1

      @@jonsen2k Yes, that's true.

    • @jcm2606 13 days ago +2

      AI was really only introduced recently with NVIDIA's ray reconstruction, and even that seems to be mostly a neural network performing the same work a traditional denoiser does. Outside of RR, games tend to use traditional denoisers like SVGF or A-SVGF, which don't use any AI at all. Typically they'll use an accumulation pre-pass, where raw raytracer outputs are fed into a running average spanning multiple frames (anywhere from half a dozen to possibly 100+ frames) to gather as many samples as possible across time; then they'll feed the output of the accumulation pre-pass into a series of noise/variance-aware spatial filters which selectively blur parts of the image that are considered too noisy (a toy version of the accumulation step is sketched after this thread).

    • @DripDripDrip69 12 days ago

      @@jonsen2k There are different levels of implementation. Some games only have a few ray traced effects like RT shadows and RT reflections; some have RT global illumination to replace light-probe-based rasterization GI (the most transformative RT effect imo), so they have ray traced effects slapped on top of a rasterized image. Some go all out with path tracing, like Cyberpunk and Alan Wake 2; those have very few rasterized components apart from the primary view, and if I'm remembering it right, in Portal RTX and Quake RTX even the primary view is ray traced, so there's no rasterization whatsoever.
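
  The accumulation pre-pass mentioned above is, at its core, a per-pixel running average; a toy numpy sketch (reprojection and validation, which real denoisers need for moving cameras, are deliberately omitted):

      import numpy as np

      def accumulate(history, current, alpha=0.1):
          # Exponential moving average: a small alpha averages over
          # roughly 1/alpha past frames, trading lag for less noise.
          return (1.0 - alpha) * history + alpha * current

      # per frame: history = accumulate(history, noisy_rt_output)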

  • @MatthewHoworko 14 days ago

    Can we just appreciate how you don't hear the wind in the audio for the outside demo section

  • @omegahaxors3306 14 days ago +1

    If you've ever played minecraft or a shooter you've used a ray trace, because that's how the game knows what you're targeting.
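
  That kind of targeting query is usually one ray tested against bounding boxes; a minimal slab-method ray/AABB test in plain Python (illustrative names):

      def ray_aabb(origin, inv_dir, box_min, box_max):
          # Slab test: inv_dir is 1/direction per axis, precomputed
          # (use +/-inf where a direction component is zero).
          t_near, t_far = 0.0, float("inf")
          for axis in range(3):
              t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
              t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
              t_near = max(t_near, min(t1, t2))
              t_far = min(t_far, max(t1, t2))
          return t_near if t_near <= t_far else None   # entry distance, or miss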

  • @unvergebeneid 14 days ago +1

    Would've been nice to compare the toy ray tracer he built as an undergrad to an actual ray tracer with bounce lighting. Just to illustrate how much the explanation in this video only barely scratches the surface.

  • @emwave100 14 days ago

    I vote for more computer graphics videos. I am surprised ray tracing hasn't been covered on this channel before

  • @nickthane 13 days ago

    Highly recommend checking Sebastian Lague’s video where he builds a raytracing renderer step by step.

  • @jonnypanteloni 14 days ago

    I can finally gather my steps and talk to hilbert about how I just can't stop dithering on this topic. Maybe I should get my buckets in order.

  • @karolispavilionis8901 14 days ago +2

    In the last comparison with mirrors, shouldn't the shadow on the box be smaller because of light bouncing off the mirror, therefore illuminating below the box?

    • @cannaroe1213 14 days ago

      Mirrors reflect light, but that doesn't make them special. A white surface will probably reflect MORE light, but a mirror keeps the light linear without scattering. So a mirror could reflect less light, because it makes a reflection, but it's darker. Does that make sense?
      So anyway, since everything reflects light, not just mirrors, with some average amount of "reflectiveness" based on the material properties, there's something that's the opposite of shadow maps, called light maps, which layers additional light over everything based on the math behind what's emitting light

    • @jcm2606 13 days ago

      In short, yes, it should. Light from the light source and the rest of the scene should specularly reflect off of the mirror and onto the floor below the box, illuminating the box's shadow. It's not doing that in the raytracer likely because the raytracer is simplistic and isn't handling multiple bounces correctly (if at all), but if it did then you'd naturally see what you're describing (especially with a path tracer, which is the big boy raytracer).

    • @jcm2606 13 days ago

      @@cannaroe1213 None of this makes any sense. Firstly, mirrors (or, rather, metals) are actually a little special since they're one of the few surfaces that can reflect almost the entirety of incoming light in any outgoing direction, whereas most other surfaces will generally lose some amount of light to diffuse transmission at perpendicular angles (ie angles where you're looking straight down at the surface).
      Secondly, because mirrors keep the light linear without much scattering (being real pedantic here but there will always be _some_ scattering due to debris on the surface and inner layers of the mirror, and imperfections in the mirror's surface), the light they reflect is typically actually _brighter_ for the outgoing direction since more of the incoming light is exiting in that outgoing direction (light source emits 100% incoming light; incoming light reflects off of a very rough surface, 95+% of the incoming light is scattered in different directions,

  • @zebraforceone 14 days ago +2

    A question about @14:20: I see the bounce towards a fixed number of point lights.
    How does this work in ray tracing with emissive surfaces?

    • @feandil666 14 days ago +1

      The same way: the surface is just considered a light source directly, so its emissivity is added to whatever light would normally be there

  • @ChadGatling 14 days ago +4

    My first thought is: how would ray tracing handle a scene where there is something next to a big red wall, where you really should be able to see some red reflections? If the ray just hits the car then goes to the sun, the ray will never see the wall and won't know to tint the car a bit red

    • @kevingillespie5242 14 days ago +1

      (i have no graphics experience but) My guess is you can track rays that bounce off the wall and hit the car. Perhaps construct some sort of graph that tracks all the values at each point a ray hits so you can track how much light from the car is supposed to reflect off the wall? But each feature like that will make it more expensive / memory intensive.

    • @mrlithium69 14 days ago +2

      wouldn't apply to this convo. Need additional calculations: specular and diffuse reflection

    • @sephirothbahamut245 14 days ago +6

      You do multiple ray bounces. More rays and more bounces = a more realistic image. For realtime rendering you mostly stick to 1 or 2 bounces. Stuff like rendering for cinema can easily go past 200 bounces, and hundreds of rays per pixel (see the sketch after this thread). That's why they can take days to render a scene.

    • @BeheadedKamikaze 14 days ago

      @@sephirothbahamut245 You're correct, but that process is typically called path tracing to make it distinct from ray tracing, which does not account for this effect.

    • @AySz88 14 days ago

      Ironically this is what the Cornell Box is supposed to test too (note the big red and green side walls). There's also an actual photo of a real life Cornell Box to compare to, where you see the effect you mention.
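
  In code, the "more rays, more bounces" trade-off from this thread is just two loop bounds; a sketch (camera.jittered_ray and trace are assumed helpers):

      def render_pixel(x, y, camera, scene, spp=4, max_bounces=2):
          # Average several jittered rays through the same pixel;
          # film renderers push spp and max_bounces into the hundreds.
          total = 0.0
          for _ in range(spp):
              ray = camera.jittered_ray(x, y)   # random sub-pixel offset
              total += trace(ray, scene, max_bounces)
          return total / spp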

  • @sbmoonbeam 14 days ago

    I think a modern game's rendering pipeline for something like Cyberpunk 2077 will be a blend of these techniques: a physically based rendering (PBR) pipeline using a rasterizing pathway, enhanced by ray tracing (via your RTX/compute pipeline pathway) to calculate reflection and refraction effects, rather than calculating every pixel in the render from first principles.

  • @shreepads 14 days ago +25

    The explanation feels unsatisfactory, probably in an attempt to keep things simple. E.g. a point occluded from the light source would be rendered black, but clearly it's picking up diffuse light from other objects

    • @jean-naymar602 14 days ago +4

      I don't think that's diffuse lighting in this specific implementation. It looks like the shadow color is set to some greyish black.
      You could set the shadow color to any color, it does not need to be black.

    • @mytech6779 12 days ago +2

      They call that ambient lighting, and it's just a preset level of light applied to every pixel.
      Doing true diffuse lighting via raytracing is not possible with current computing power. There are far too many re-reflection calculations.
      Even the hard shadows will not be accurate in his mirror-room example, because the rays would need to originate at the light source, not at the observer, to have any practical chance of finding all of the primary reflections that would be lighting the "shadow" area (let alone secondary and tertiary reflections). The reflected-lighting thing can be done to a limited extent, but rendering time increases by several orders of magnitude, so it is only used in pre-rendered scenes, not realtime gaming.

    • @shreepads 1 day ago

      Thanks!

  • @muhammadsiddiqui2244 13 days ago

    It's the first time I have seen a computer scientist "outside" 🤣

  • @user-lh6ig5gj4e 5 days ago

    When they went outside, I was expecting them to say that they had discovered that grass exists

  • @robertkelleher1850 14 days ago +6

    I'm surprised we can see anything with all the fingerprints that must be on that monitor.

  • @j7ndominica051 13 days ago

    His head is filling the Zed-buffer. Where does diffuse light come from, which has been reflected from other nearby objects?

  • @richardwigley 14 days ago +15

    Tell me you didn't pay for your monitor without telling me you didn't pay for your monitor...

  • @scaredyfish 11 days ago

    I’d like to know more about how modern game engines use ray tracing.
    As I understand it, it’s still rasterised and the ray tracing is an additional step. They do a low resolution ray trace of the scene that’s then denoised and used as a lighting pass - that’s how it can run at game frame rates. Is that understanding correct?

  • @morlankey 14 days ago

    Why is the ray-traced blue box casting a shadow below it but not lighter on top?

  • @bw6378 14 days ago +2

    Anyone else remember POVray from way back when? lol

    • @Roxor128 14 days ago

      Done a lot of mucking around with it. Even produced a few scenes that look nice.

  • @flippert0 7 days ago

    Plot twist: it's UK, so the brightly lit day outside was of course all CGI

  • @Gosu9765 14 days ago

    I don't think this video conveyed the subject well. What I got from it is that RT looks much better (though as explained it seemed like you can only get Minecraft-style graphics with rasterizers, which is far from the truth) and that RT is slower but simpler. Is it? All the ways to optimize BVH structures, accumulate traces across frames for real-time graphics, and then clean up the artifacts from all the shortcuts you took. Studios clearly struggle with this; it's probably one of the hardest things to pull off well in the gaming industry right now when it comes to graphics. What really should have been shown are the limitations of rasterization techniques, as that's the reason the industry is moving towards RT now: SSR occlusion artifacts, shadow map resolutions, light bleeding through objects, etc. (no need to explain those; they could simply be shown). The only thing that was shown was that RT can reflect things that are not in screen space, but even that wasn't explained as a limitation of rasterization. As a simple gamer I'm kinda disappointed, as I've seen other simple gamers explain this much better.

  • @jojox1904 7 days ago

    Can you do another video discussing the opportunities and dangers of AI? The last videos on this I saw from your channel were from 8 years ago, and I'd be curious about an updated view on this

  • @OnionKnight541 14 days ago

    can you please do a part II of this video using (say) apple's SceneKit / Vision frameworks, so that we as devs can see how that is implemented properly?

  • @badcrab7494 13 days ago

    In the future, with more powerful computers, would you do ray tracing the correct way round, with rays starting at the light source rather than the camera?

    • @Juansonos 13 days ago

      I believe we start at the camera to control how many calculations need to be done. Starting rays at the light source sends more rays to more places, and a lot of those would not be useful for rendering a scene from the camera's vantage point, leading to wasted computations that do nothing to make the result better.

  • @erikziak1249 14 days ago

    Why is the shadow grey and not black, then, if you don't render anything there?

  • @ares106 14 days ago

    6:30 but can you imagine actually going outside?

  • @RayRay-kz1ms 11 days ago

    This is under the assumption that light travels instantaneously, which is sometimes inaccurate

  • @mouhamadouseydoudiop8957 14 days ago

    Nice

  • @4.0.4 14 days ago

    I had to do a double take to make sure this video wasn't from 2018.

  • @DavidDLee 13 days ago

    How much of this printer paper do you still have?
    Do you still use a printer which takes them?

  • @Parax77 14 days ago

    In that last scene.. whilst the view was reflected in the mirror, the light source was not? How come?

    • @jcm2606 13 days ago

      Would need a better image to really know for sure since the angle of the camera wouldn't have allowed us to see the light source to begin with (it looked like the light source was at the center of the ceiling, whereas we could at best see the far left and right sides), but it could have also been that the raytracer just wasn't set up for it to begin with. The style of raytracing that he was describing will generally treat all light sources as infinitely small points, so at best you'll only get one or two pixels that represent the light source, which may have been why we couldn't see it. Generally in that case you need to either trace rays in random directions within a cone pointed at the light source (which simulates a spherical light source with a specific radius), or you need to have the shaders set up to handle glossy surfaces to "blur" the reflection of the light source out across the surface.

  • @jeromethiel4323 14 days ago

    What's being described here is what I have always heard referred to as "ray casting", because it eliminates a lot of unnecessary calculations that used to be done with ray tracing. Ray tracing, classically, traced the rays from the light source, which is inefficient, since a lot of those rays will never hit the "camera."
    I remember ray tracing software on the Amiga, and it was glacially slow, while a similar program using ray casting was much, much faster.

  • @agoatmannameddesire8856 14 days ago +1

    Do ambient occlusion next :)

  • @4santa391 14 days ago +11

    am I the only one getting triggered by the screen touching? 😆 16:15

    • @realdarthplagueis 14 days ago

      Agreed.

    • @rich1051414 14 days ago +1

      I am not sure how he has so much marker ink on his hands as well... There is something about this guy that triggers me 100 different ways, but I am trying to keep it bottled down.

  • @Scenesencive 14 days ago

    Field trip daaaayyyy!

    • @Scenesencive 14 days ago

      Interesting video. I'm fairly familiar with the subject, and felt like there was maybe just one interesting key point of ray tracing (or of light in general) missing, which is of course GI / indirect lighting: in theory we would like to cast, at every hit point, an infinite number of new rays, and again at every one of those hit points, and so on. Ultimately every surface is basically a very rough form of mirror, which is why the underside of the car doesn't end up completely black even though there's no direct ray to any light source, and why the wall next to a bright red cube gets a red-tinted ambient light. In games, other than reflections, this seems the most frequent use case for RT, especially when baking lightmaps is challenging, like in open world games. Probably the subjects you mention in the 1-hour director's cut! gg

  • @elirane85 14 days ago +3

    I remember, more than 20 years ago, I was learning to program video games, so I read a book about "real-time graphics".
    The first chapter was about ray tracing; it was only about 5 pages, and it basically covered most of the theory behind it and even had a full implementation which was less than 1 page long.
    But at the end of the chapter it said something like:
    "But this approach can take hours to render a single frame, so this technique is only good for pre-rendering on massive server farms, and the next 300 pages will teach you how to fake it" 😋

    • @AlmerosMusicCode 14 days ago

      That's a fantastic approach for explaining the subject! Must have been a great read.

  • @zxuiji 14 days ago

    5:50, it occurs to me at this point that the fragments could be a way of checking for objects in the way. If, for example, you've already calculated a fragment to be nearer to the camera than the one you're processing, but the one you're processing should have affected the fragment that's been chosen, you could just take on that fragment's details and apply the reduction in lighting before finally overriding the nearer fragment with the current fragment, which is now masquerading as the original. Naturally, if the fragment you're working on is nearer, you can just apply the lighting reduction normally and carry on.

    • @jcm2606 13 days ago +2

      This is an actual technique, called screen-space raytracing or screen-space raymarching. Basically, a ray is marched across the screen's depth buffer until it either reaches its destination or goes behind a pixel (determined by comparing the ray's depth to the pixel's), with different ways of handling each case depending on what it's being used for. The problem with this technique is that you don't know how "thick" the pixel is, so you don't know if you've _actually_ hit the object that the pixel belongs to or if you're sufficiently far behind it to not have hit it. You can approximate the thickness in a couple of different ways, but you'll always end up with false positives where the ray thinks it missed the object when it actually hit it (or vice versa), plus you don't know what other objects are behind the pixel, so you don't know if you've hit some other object. For that reason it's not the best technique for shadows (it can work sometimes, so some games do use it as a fallback to a main shadowing technique), but it _is_ commonly used for reflections since, for most surfaces, reflections are primarily visible at grazing angles where the limitations aren't as painful to deal with (doing reflections this way is called screen-space reflections; a toy version of the depth-buffer march is sketched after this thread).

    • @zxuiji 13 days ago

      @@jcm2606 Damn YT and its shadow deleting. I replied to this once and my post still isn't there. I basically said both problems are solvable: the pixel thickness by distance from the camera, and the multiple-object thing by a shadow pixel that accumulates pixel values to apply

    • @jcm2606 13 days ago

      *"The pixel thickness by distance from camera"*
      This is a very crude approximation of thickness, more crude than the industry standard of just having a fixed depth threshold. The thickness is meant to be measuring how long the object is along the screen's Z axis, so basing it off of the distance to the camera isn't correct as distance to the camera doesn't affect how long the object is along the screen's Z axis (ignoring perspective, which is already accounted for by coordinate space transformations).
      *"the multiple object thing by a shadow pixel that accumulates pixel values to apply"*
      This won't do anything at all as what we're looking for is what range of depths actually contain objects. Say we had a depth buffer that was in the range [0, 1]. We had three objects positioned at the current pixel, with the first object occupying depth range [0.1, 0.2], the second object occupying depth range [0.4, 0.6], and the third object occupying depth range [0.7, 1.0].
      What we'd like to know is if a ray has hit an object (ie is in one of those three ranges) or has missed all objects (ie is outside of those three ranges), but the problem is that the depth buffer can only store the _minimum_ value of all three of these ranges, which in this case is 0.1. Even though we know based on intuition that there's gaps between each of these ranges, the depth buffer can't store anything more than the minimum of all ranges, so the ray only ever sees that there's nothing in the depth range [0, 0.1] and something in the depth range [0.1, 1.0]. It has no idea what's behind depth 0.1.
      There are a few different techniques that try to address this, but none are perfect. The simplest technique would be deep depth buffers which allows you to store multiple depth values in a depth buffer as separate depth samples. This would let you at least store multiple minimums to get a better idea of the scene composition (especially if you were to combine deep depth buffers with a dedicated back face pass to store the depths of back faces in addition to front faces, letting you get both parts of each object's depth range), but it limits you to a specific number of depth values (2, 4, 8, 16, 32, etc) and each new depth value you add increases the memory footprint of the depth buffer by an additional 1x (ie 10 depth values = 10x memory footprint), so it's impractical for this alone (since deep depth buffers were intended to be used with transparent objects, so using them for screen-space raytracing would add even more memory usage).

    • @zxuiji 13 days ago

      @@jcm2606 Still reading your msg, but pixels do not need to know the object length. They're always a fixed size, and since normalisation simplifies logic (you can apply the camera dimensions after), it's always better to treat the pixels at the camera as 1x1x1 and scale down (+Z) or skip (-Z) based on distance from the camera

    • @zxuiji 13 days ago

      @@jcm2606 For the depth thing the shadow pixel DOES work. Remember, each pixel starts off assuming it has no objects in front of it. If it's further from the camera, it's hidden by the pixel in front, so its values just get added to the accumulator. If it's closer, then its values get added to the accumulator and the colour values are set with 0 light applied yet. Once all camera pixels and shadow pixels have been set, just the camera's pixels are looped over and the accumulated values are applied to the colour of each pixel. So if fragment 1 takes 0.1 light, 2 takes 0.3 and 3 takes 0.4, then there's 0.2 left to multiply against the colours. I'll move on to reading the last chunk of your msg now
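
  A toy version of the depth-buffer march described earlier in this thread (illustrative Python; a real implementation marches in clip space and handles perspective-correct depth):

      def screen_space_march(depth_buf, px, step_px, z, step_z,
                             n_steps=64, thickness=0.05):
          # Walk the ray across the screen, comparing its depth to the
          # depth buffer; larger z means farther from the camera here.
          x, y = px
          for _ in range(n_steps):
              x, y, z = x + step_px[0], y + step_px[1], z + step_z
              ix, iy = int(x), int(y)
              if not (0 <= iy < len(depth_buf) and 0 <= ix < len(depth_buf[0])):
                  return None              # ray left the screen: no hit info
              scene_z = depth_buf[iy][ix]
              # Count a hit only if the ray is behind the surface but within
              # its assumed thickness; that guess is the classic failure point.
              if scene_z < z <= scene_z + thickness:
                  return (ix, iy)
          return None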

  • @skyscraperfan 14 days ago +3

    Doesn't it get complicated, if a ray hits a rough surface and is diffused in many directions?

    • @user-zz6fk8bc8u 14 days ago +3

      Yes in practice (because real ray tracing is still more complicated than the shown examples), but in theory it's simple: just shoot more rays per pixel, and if you hit a surface you randomize the continuation path based on the roughness (see the sketch after this thread). This way you can even render stuff like foggy, half-transparent glass.

    • @skyscraperfan 14 days ago +1

      @@user-zz6fk8bc8u That sounds like a lot of computation per pixel, because if some of those rays hit another rough surface, you would need even more rays.

    • @CubicSpline7713 14 days ago +1

      @@skyscraperfan There is a cut off point obviously, otherwise it would never finish.

    • @SteelSkin667 14 days ago +3

      @@skyscraperfan That is why in practice rough materials are more expensive to trace against. In games where RT is only used for reflections there is often a roughness cutoff, and sometimes the roughness is even faked by just blurring the reflection.
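
  A common cheap version of that roughness-based randomisation (a sketch; dot, scale, add, normalize and random_unit_vector are assumed vector helpers): blend the perfect mirror bounce with a random direction.

      def scatter_direction(d_in, normal, roughness):
          # roughness 0 -> perfect mirror, roughness 1 -> near-diffuse;
          # a crude stand-in for sampling a real BRDF.
          mirror = add(d_in, scale(normal, -2.0 * dot(d_in, normal)))
          jitter = random_unit_vector()
          if dot(jitter, normal) < 0.0:
              jitter = scale(jitter, -1.0)   # keep it above the surface
          return normalize(add(scale(mirror, 1.0 - roughness),
                               scale(jitter, roughness)))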

  • @kenjinks5465 14 days ago

    Instead of rays, could we just trace the frustum? Look for plane/frustum intersections, fragment the frustum by the plane intersections, generate new frusta from the planes... and return vectorized output, not rasterized: much smaller?

  • @ishaan863 13 days ago

    11:11 something in the way

  • @TESRG35 13 days ago

    Wouldn't you need to also trace a line from the shadow back to the light source *through the mirror*?

    • @jcm2606 12 days ago

      Yes, though typically it's not framed that way. Typically with this style of raytracing (recursive raytracing) you "shift the frame of reference", so to speak, each time you start "processing" a new ray, so it's more like you tracing a line from the mirror to the floor just in front of the blue cube, then tracing a new line back to the light source and "passing the light" back to the mirror and eventually back to the camera.

  • @mettemfurfur7691 14 days ago

    cool

  • @zxuiji 14 days ago

    4:23, uh, couldn't you just check the fragment's position BEFORE deciding to colour it? You calculate the position, check against the buffer position, if it's not nearer you just move onto the next fragment, otherwise begin identifying what colour to give it.
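
  That is essentially the early depth test GPUs already perform; in toy rasterizer form (plain Python, illustrative):

      def submit_fragment(x, y, frag_depth, shade_fn, depth_buf, color_buf):
          # Depth-test first, shade only the survivors ("early-Z").
          if frag_depth >= depth_buf[y][x]:
              return                        # hidden: skip the expensive shading
          depth_buf[y][x] = frag_depth
          color_buf[y][x] = shade_fn()      # colour computed only when visible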

  • @HeilTec 14 days ago

    Reflected light is the hardest.

  • @Ceelvain 13 days ago

    Looks like this video raises a lot of emotions. I hope you see there's a content mine here. ^^

  • @megamangos7408 14 days ago

    Wait, is this the first Computerphile where they actually went outside to touch grass?

  • @cannaroe1213 14 days ago +1

    No yur a shado map.
    Rasterization for da nation!

  • @fttmvp 14 days ago +3

    Lol for a second I thought it was Lord Miles from a quick glance at the thumbnail.

  • @dinsm8re 14 days ago

    couldn’t have asked for better timing

  • @thalivenom4972 13 days ago

    so basically, ray tracing is 1 laser per pixel.

  • @paulmitchell2916 13 days ago

    Has anyone heard of ray tracing used for enhanced audio reverb?

  • @aquacruisedb 14 days ago

    Wonder if there's a computerphile old enough to remember what that continuous-feed dot matrix paper is actually for?! (other than marker-pen crazy idea sketches)

  • @fotoni0s 7 days ago

    Thumbs up for Rubber duck debugging :p

  • @michaelsmith4904 14 days ago

    how does this relate to ray casting?