coding a really fast De-Noising algorithm

  • Published 18 Feb 2024
  • in this video, I coded a denoiser for raytracers.
    It is really fast because all it does is blur an image (with a few extra steps).
    GitHub repo (improvements are welcome :D)
    github.com/marmust/raytracing...
    music:
    1 - Hotline Miami OST - Inner Animal - Scattle
    2 - Hotline Miami OST - Blizzard - Light Club
    3 - Throttle Up - Dynatron
    mentions:
    coding adventures guy:
    • Coding Adventure: Ray ...
    prob the best raytracing explanation ever:
    • How Ray Tracing (Moder...
    another good video that helped me:
    • I made a better Ray-Tr...
    NVIDIA comparison from:
    • Ray Tracing Essentials...
    thx for watching :)
  • Science & Technology

COMMENTS • 129

  • @peppidesu
    @peppidesu 4 months ago +278

    there is also a technique called manifold exploration, if you really hate your sanity.

    • @drdesten
      @drdesten 4 months ago +12

      lol. Is it still correct that no one except the researchers has implemented it?

    • @PolyRocketMatt
      @PolyRocketMatt 4 days ago

      @@drdesten Probably... The main reason being there are just way better alternatives available these days, since that original paper dates back to 2012... Remember, the original Metropolis algorithm also wasn't implemented by anyone except the original authors until Kelemen introduced primary sample spaces... It's all a matter of perspective...

  • @Looki2000
    @Looki2000 4 months ago +122

    The problem is that black pixels are not the only cause of noise in path tracers. Path tracers sample multiple rays per pixel; some of those rays find their way to the light and some don't. As a result, pixel colors computed from multiple averaged ray samples end up sometimes dimmer and sometimes brighter than the neighboring pixels. They may look like black pixels in some circumstances, but most of the time they actually aren't. Just look at the sides of the spheres where they are well lit by the big light.

    • @8AAFFF
      @8AAFFF 4 months ago +21

      Yeah ur right
      If I could somehow mark those out-of-place pixels for denoising, then it would probably work on this type of noise.
      But yes, probably an even bigger challenge is actually identifying the pixels that need to be smoothed out

    • @shadamethyst1258
      @shadamethyst1258 4 months ago +14

      @@8AAFFF It's unfortunately *really* hard to mark those out-of-place pixels; common tools like Blender provide heuristics to cut off rays that would be too bright, which does help in reducing the number of fireflies (pixels that are overly bright compared to the ground truth), but it does change how the image looks.
      With any Monte Carlo technique, we haven't yet found a generic, simple and efficient algorithmic approach to denoise the output. The best we have is to split the output into as many channels as possible, feed all of that information to a neural network that guesses the ground truth, and then apply some filters on top of the NN's result to clean up the image and account for different amounts of samples per pixel.

    • @locinolacolino1302
      @locinolacolino1302 4 months ago +4

      @@8AAFFF If you still plan to implement a denoiser with a path tracer, another interesting approach is spectral denoising: view the problem less as an image-manipulation problem and more as a signal-processing problem, with the wavelengths of the rays in the path tracer as the input to the denoiser instead of pixel colours. Temporal stability should also be better this way.
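The averaged-samples point above can be sketched with a toy Monte Carlo model in Python (the 30% light-hit probability is an arbitrary assumption, not anything from the video): averaging only a few random ray samples per pixel leaves visible pixel-to-pixel brightness variation, and the spread shrinks roughly as 1/sqrt(rays) rather than showing up as pure black pixels.

```python
import numpy as np

# Toy model of path-tracer noise: each ray either finds the light
# (contributing 1.0) or misses (0.0) with an assumed 30% hit rate;
# a pixel's colour is the average of its rays.
rng = np.random.default_rng(0)

def render_pixels(n_pixels, rays_per_pixel):
    hits = rng.random((n_pixels, rays_per_pixel)) < 0.3
    return hits.mean(axis=1)

few = render_pixels(10_000, 4)    # grainy: large spread around 0.3
many = render_pixels(10_000, 64)  # smoother: 16x the rays, ~4x less spread
```

Both estimates are unbiased around 0.3; more rays only narrows the spread, which is why this kind of noise looks like brightness grain rather than isolated black dots.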

  • @jimmyhirr5773
    @jimmyhirr5773 4 months ago +49

    The raw raytracing output looks like salt-and-pepper noise. A common way to remove salt-and-pepper noise is with a median filter: read N pixels around a center pixel and output the median pixel. Wikipedia also says that when there is only "pepper noise" (that is, random black pixels), it can be removed with a contraharmonic mean filter.
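A minimal pure-NumPy sketch of both filters the comment mentions (3x3 windows; `median_filter3` and `contraharmonic3` are made-up helper names):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: the classic remedy for salt-and-pepper noise."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    # Stack the 9 shifted views of the image, take the per-pixel median.
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def contraharmonic3(img, Q=1.5, eps=1e-12):
    """3x3 contraharmonic mean filter; Q > 0 targets pepper (dark) noise."""
    h, w = img.shape
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return (stack ** (Q + 1)).sum(axis=0) / ((stack ** Q).sum(axis=0) + eps)
```

Both replace an isolated black pixel with something close to its bright neighbours while leaving smooth regions untouched.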

  • @Landee
    @Landee 4 months ago +89

    Im really hype for the ultrakill bot

  • @wilsonwilson3674
    @wilsonwilson3674 4 months ago +19

    0:10 a common optimization in modern path tracers is Next Event Estimation, which means that a light source is sampled at every ray intersection point before proceeding to the next bounce. The idea is that, in most cases, a point on a surface will either 1) be exposed to at least one light source, or 2) be indirectly lit by a nearby surface that is. It's in a class of techniques called Multiple Importance Sampling, and it's a deeeeep rabbit hole if you wanna fall into it at some point lmao.
    Not sure how pertinent/interesting this info is to you but I figured I'd toss it your way.

  • @SomeRandomPiggo
    @SomeRandomPiggo 4 months ago +18

    Wow, this was so much better than I expected it to turn out!

  • @blacklistnr1
    @blacklistnr1 4 months ago +50

    Cool! An interesting thing to try:
    - Render just the edges of an image (or do a highpass in Photoshop/Krita to extract the edges)
    - Run this to fill the cells for a cool looking filter

    • @Pockeywn
      @Pockeywn 4 months ago +3

      omg i need to see this i might have to try this myself

    • @blacklistnr1
      @blacklistnr1 4 months ago +1

      @@Pockeywn Please do, I'll watch your video too :))
      I expect it to be somewhere between a median filter and some voronoi cells (like a stained glass filter with varying sizes), but I am curious about its specific artifacts and look

  • @cinderwolf32
    @cinderwolf32 4 months ago +8

    Interesting that Nvidia's denoiser was able to completely change the image with the horse!

  • @fbiofusa3986
    @fbiofusa3986 4 months ago +16

    Next video you should train a CNN to take the image and output the denoised image. Training would be really simple: generate a bunch of scenes with noise, save the image, then shoot more and more rays to get a denoised version, and train a CNN on the low- vs. high-noise pairs.
    You can do it all on the GPU, as it's just kernel functions and dot products!

    • @superyu1337
      @superyu1337 4 months ago

      That’s what NVIDIA's OptiX Denoiser does afaik
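A sketch of that training setup in PyTorch (an illustrative toy architecture, not what NVIDIA's OptiX denoiser actually uses; `TinyDenoiser` and `train_step` are made-up names):

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy residual CNN: noisy RGB image in, denoised RGB image out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a correction and add it to the noisy input (residual learning).
        return x + self.net(x)

def train_step(model, opt, noisy, clean):
    """One supervised step on a (low-sample, high-sample) render pair."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
    return loss.item()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.rand(2, 3, 16, 16)   # stand-in for low-sample renders
clean = torch.rand(2, 3, 16, 16)   # stand-in for high-sample renders
loss = train_step(model, opt, noisy, clean)
```

The residual formulation (`x + net(x)`) is a common choice for denoisers because the network only has to learn the noise, not reproduce the whole image.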

  • @UCFc1XDsWoHaZmXom2KVxvuA
    @UCFc1XDsWoHaZmXom2KVxvuA 4 months ago +2

    Brooo this video looks and sounds mesmerizing 😵‍💫😵‍💫 loove it

  • @griffinschreiber6867
    @griffinschreiber6867 4 months ago +7

    I really like where this channel is going!
    Edit: training a neural network to do denoising might be interesting.

    • @gorgolyt
      @gorgolyt 4 months ago +2

      That's what Nvidia does.

    • @griffinschreiber6867
      @griffinschreiber6867 4 months ago

      @gorgolyt I know, I just thought it might be an interesting project.

  • @Deeepansh
    @Deeepansh 4 months ago

    30 seconds into the video and i liked and subscribed at the same time, great video 🙌....

  • @cameronkhanpour3002
    @cameronkhanpour3002 4 months ago +2

    Great video! Cool to see someone making their own denoising algorithm instead of doing the shortcut of importing scikit :). Maybe try running IQA metrics like MSE/PSNR or SSIM to help quantify to the viewers how good your image enhancement is.
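For reference, MSE and PSNR are only a few lines of NumPy (SSIM is more involved; `skimage.metrics.structural_similarity` provides a ready implementation):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images (assumed same shape)."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```

Comparing each denoiser version against a high-sample "ground truth" render would put numbers on the improvements shown in the video.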

  • @squirrelcarla
    @squirrelcarla 4 months ago

    really amazing, i learned so much from this video, thank you

  • @chaosminecraft3399
    @chaosminecraft3399 4 months ago +1

    Damn, that is quite the denoising work you did 😳

  • @Fatherlake
    @Fatherlake 4 months ago

    i like the results, the slight blur makes it look dreamy

  • @RalphScott-wu8ei
    @RalphScott-wu8ei 4 months ago

    This looks awesome!

  • @FractalIND
    @FractalIND 4 months ago +1

    thanks a lot for the explanation, im working on my own 3d renderer without an API like OpenGL or Vulkan, and i searched a lot for an idea for an algorithm, but every website explains it in words normal people can understand without an exact step-by-step guide for how it works

  • @gazehound
    @gazehound 4 months ago +1

    I love "funny Eve Online moments" as a description for intergalactic alien space war

  • @WalnutOW
    @WalnutOW 4 months ago +1

    Cool. This is kind of like morphological dilation

  • @starplatinum3305
    @starplatinum3305 4 months ago +3

    bro made me cry bc this video's good af 😭😭😭

  • @Tobiky
    @Tobiky 4 months ago +1

    looks sick, thanks dude

  • @ThylineTheGay
    @ThylineTheGay 4 months ago

    Amazing 'neighbours having a party at 11pm' vibes to the music 😅

    • @8AAFFF
      @8AAFFF 4 months ago +1

      Yeah if you played buckshot roulette its also similar

  • @MrTomyCJ
    @MrTomyCJ 4 months ago +2

    The look produced by this algorithm reminded me of how shadows look on rtx games. That made me wonder if the denoising algorithm in some real time raytracing applications is somewhat similar to this one.

  • @shadamethyst1258
    @shadamethyst1258 4 months ago +1

    Hmm, what you're describing sounds a lot like the Voronoi cell pattern over the nonzero pixels, with the Manhattan/taxicab metric. Reading up on OpenCV's documentation, what you've been trying to implement can be done with the `dilate` operation, together with some simple masking. Alternatively, you could have taken any blurring convolution filter, then computed `image = image + blur(image) / blur(mask) * mask`, where `mask[x, y]` is 1 when the pixel is black.
    In both cases you wouldn't need to then normalize the entire image, which I believe is the cause of the weird artifacts you were getting: a black pixel surrounded by white pixels anywhere in the picture would cause the normalization step to divide the brightness of the entire convolved image by 8.
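One way to realize the masked-blur idea above in pure NumPy is normalized convolution: blur the known pixels and a validity weight separately, then divide, so each black hole receives a weighted average of its non-black neighbours only. Function names here are made up for illustration:

```python
import numpy as np

def box_blur(img, r=2):
    """Simple (2r+1)x(2r+1) box blur via edge padding and window summing."""
    h, w = img.shape
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += p[i:i + h, j:j + w]
    return out / (k * k)

def fill_black_pixels(img, mask, r=2, eps=1e-8):
    """Normalized convolution: mask is 1.0 at holes (black pixels), 0.0 elsewhere."""
    known = 1.0 - mask
    # Denominator counts how much valid neighbourhood each pixel actually has.
    filled = box_blur(img * known, r) / (box_blur(known, r) + eps)
    # Known pixels stay untouched; only the holes receive the blurred estimate.
    return img * known + filled * mask
```

Because the denominator tracks the valid weight, an isolated black pixel no longer drags down the brightness of the rest of the image, which matches the comment's diagnosis of the artifacts.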

  • @mauriciodanielromano7001
    @mauriciodanielromano7001 22 days ago +1

    Go go gadget pixel enhancer

  • @lyagva
    @lyagva 4 months ago

    Dynatron - Throttle Up.
    I wasn't expecting this song to play in... Well... Any video...

  • @Leo_Aqua
    @Leo_Aqua 4 months ago

    Very nice video. I might try this too

  • @stio_studio
    @stio_studio 4 months ago +8

    Next time you can use something called Voronoin't. It does what you do on the first iteration but only goes through the image once

    • @8AAFFF
      @8AAFFF 4 months ago +3

      i looked it up and yes
      its pretty much a better version of the first implementation

  • @raconvid6521
    @raconvid6521 4 months ago

    0:21 From experience this might not be the case, since noise can still be seen with ray marching without any reflections.
    My theory is that the noise is actually caused by the bit limit essentially giving objects a rough surface, so some rays get stuck in the tiny crevices.
    I haven't looked into blender's source code specifically, so I'd take this with a grain of salt.

  • @davutsauze8319
    @davutsauze8319 4 months ago

    8AAFFF: I fear no man,
    but that thing...
    *whatever demon is editing their videos*
    it scares me.

  • @meinlet5103
    @meinlet5103 4 months ago

    now I know why an image sensor in dark places is noisy

  • @mirabilis
    @mirabilis 4 months ago

    My eyes see noise in dark areas IRL.

  • @lyagva
    @lyagva 4 months ago +1

    As I remember, the noise appears for a different reason.
    IRL, when light bounces off a mirror it has the same angle before and after. But every other, rougher object works a bit differently: light still bounces at the same angle, but the angle is measured against a rough (very slightly curved) surface, which in approximation makes light bounce in a random direction (the randomness is relative to the object's roughness).
    As for ray tracing/path tracing, rendering mirrors is a piece of cake, as it requires shooting only one ray per pixel. But things get pretty hard when working with rough materials: we have to shoot many, many rays, randomize their bounce angle every time, then take the average color of all the rays to get the output.
    We get the noise exactly because IRL light comes from the light source at a pretty high resolution, but in RTX we shoot rays from the camera and can't calculate all of the light falling on an object, so we have to simulate many bounces and approximate the color.
    (Correct me if I'm wrong)

    • @sloppycee
      @sloppycee 4 months ago +1

      With ray tracing, illuminated points should never be black, since at each bounce you need to calculate each light source's direct contribution by shooting rays at each light.
      Black noise is typically seen in path tracing, where the stochastic nature of light source direction can result in some points just randomly not receiving a ray from the light source.

  • @notapplicable7292
    @notapplicable7292 4 months ago

    Hand-written techniques are great, but this is one of the few times AI techniques are genuinely unparalleled. I highly recommend looking into even the most basic AI denoising techniques, which are absurdly effective

  • @CharlesVanNoland
    @CharlesVanNoland 4 months ago

    It's normal for dark areas (dark because of a lack of light, not because of material color) to be blurrier, because they will have fewer ray intersections. This is the situation with all denoisers.

  • @jakemeyer8188
    @jakemeyer8188 4 months ago

    I wanted to fork this a month ago, but got tied up with an emergency work project. I'm not sure if you're still working on it, but I definitely want to have a look at the code and see if I can contribute.

  • @kerojey4442
    @kerojey4442 4 months ago

    Thanks, that was very educational.

  • @honichi1
    @honichi1 4 months ago

    i mean it probably cant keep up with nvidia's denoisers in blender, but this looks better than a lot of other stuff ive seen

  • @TeamDman
    @TeamDman 4 months ago

    Nice sfx!

  • @bubbleboy821
    @bubbleboy821 4 months ago

    I wish you would have gone into convolution at 3:27! Maybe make a separate video on those?

  • @AlisterChowdhuryX
    @AlisterChowdhuryX 4 months ago +1

    This looks a lot like pull-push (OpenImageIO calls it push-pull), a reasonably common algorithm from 1999.
    It's used for denoising and filling holes in textures.
    You create a mipchain down to 1x1, then merge back up until you reach your original resolution (unpremultiplying the alpha along the way). The idea is that you preserve detail where you have it and fill in the missing data with blurred neighbours.
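A toy recursive version of that idea for square, power-of-two grayscale images (a sketch only, not OpenImageIO's implementation; `alpha` is assumed to be 1.0 where a pixel is valid and 0.0 where it is a hole):

```python
import numpy as np

def pull_push(img, alpha):
    """Fill holes by pulling premultiplied averages down a mipchain,
    then pushing coarse values back up into the invalid pixels."""
    if img.shape[0] == 1:
        return img

    def down(x):  # 2x2 box downsample
        return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

    a2 = down(alpha)
    # Unpremultiply: average over valid pixels only (0 where a block is all holes).
    v2 = np.divide(down(img * alpha), a2, out=np.zeros_like(a2), where=a2 > 0)
    coarse = pull_push(v2, (a2 > 0).astype(np.float64))
    # Push: nearest-neighbour upsample fills only the invalid fine pixels.
    up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return img * alpha + up * (1.0 - alpha)
```

Detail survives wherever `alpha` is 1; holes get progressively blurrier estimates the larger they are, which matches the "blurred neighbours" behaviour described above.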

  • @cube2fox
    @cube2fox 4 months ago +3

    This gives me an idea: Modern generative image models like Stable Diffusion support "inpainting". That is, they can complete missing parts of a given image. This suggests the diffusion models could simply inpaint all the missing (black) pixels from the noisy image. This would be quite slow but the resulting quality should be very high.

    • @somdudewillson
      @somdudewillson 4 months ago +3

      It's generally wayyy more effective to use a much smaller, specialized denoising neural network. However, you are technically correct - generative image models like Stable Diffusion are actually denoising networks that are so absurdly good at their job that they can 'denoise' a high-quality image from literal pure noise.

    • @cube2fox
      @cube2fox 4 months ago

      @@somdudewillson Yeah diffusion models should be able to handle much heavier noise than specialized models.

  • @bbrainstormer2036
    @bbrainstormer2036 4 months ago

    It looks almost dreamlike. It could be used in a stylistic way, rather than in a realistic one. Also, it wouldn't have been difficult to generate some noisy images with blender, and I'm kind of curious to see how well it works "in the field"

  • @mehvix
    @mehvix 4 months ago

    v solid video
    small nit: render matplotlib w/o axis
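For the axes nit, something like this hides them entirely (the Agg backend keeps it headless; `render.png` is an arbitrary filename for this sketch):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt

img = np.random.rand(64, 64, 3)  # stand-in for a rendered frame

fig, ax = plt.subplots()
ax.imshow(img)
ax.axis("off")  # hide ticks, labels and the frame
fig.savefig("render.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```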

  • @owencmyk
    @owencmyk 4 months ago

    Using a shader instead of a convolution would fix the problems with it not looking right. Also can't wait for the ULTRAKILL bot

  • @roborogue_
    @roborogue_ 4 months ago

    this looks so cool

    • @roborogue_
      @roborogue_ 4 months ago

      this happens to be a random interest of mine and it’s very cool to come across it being covered so thank you

  • @erin34uio5y32
    @erin34uio5y32 4 months ago +3

    wouldn't this algorithm fail given an actual raytracer? most of the noise comes from the randomisation of the direction of a diffused ray, meaning different pixels bounce a different number of times even in the same region, which changes their overall brightness. and in most cases where a ray doesn't hit a light source, it samples the environment texture anyway

    • @8AAFFF
      @8AAFFF 4 months ago +2

      thats true
      i don't have any ideas on clearing out the more subtle noise from rays that fly off "diffuse" materials (besides blurring)
      but maybe it can help with fireflies type noise that happens when a ray flies into a light source by accident too early
      (tho idk how the raytracer would mark such pixels for denoising)

    • @erin34uio5y32
      @erin34uio5y32 4 months ago +1

      @@8AAFFF fair enough, i guess half the problem with a denoiser is figuring out what is noise and what isnt haha, its still a great video, im really impressed by how well your method fills in the detail

  • @accueil750
    @accueil750 4 months ago

    Ahh my ULTRAKILL neurons are firing

  • @Povilaz
    @Povilaz 4 months ago

    Very interesting!

  • @hugomatijascic5778
    @hugomatijascic5778 4 months ago +7

    Hello,
    Really interesting approach!
    Maybe you could correct the unwanted blurring effect by applying a sharpening kernel convolution to the noisy patch areas after the denoising algorithm runs?
    Idk if that would help to get better results...

    • @8AAFFF
      @8AAFFF 4 months ago +3

      That might work
      I can even try modifying the kernel responsible for the blurring to also try preserving edges so its not just a normal gaussian blur
      Cool idea tho :)

  • @Johnsonwingus
    @Johnsonwingus 4 months ago

    actually now all you need is a sharpening algorithm and then youll have comparable quality to the nvidia dev channel

  • @tonas3843
    @tonas3843 4 months ago +1

    i read the title as "coding is a really fast de-noising algorithm" and it kinda makes sense

  • @kipchickensout
    @kipchickensout 4 months ago +1

    until we have AI filling in the missing pixels

    • @gorgolyt
      @gorgolyt 4 months ago +1

      Literally how denoisers work already.

    • @kipchickensout
      @kipchickensout 4 months ago

      @@gorgolyt Oh, I mean more of that tho, with levels of SUPIR

  • @rodrigoqteixeira
    @rodrigoqteixeira 4 months ago

    Is it just my phone, or does the video have no sound??

  • @mysticdraguns
    @mysticdraguns 4 months ago

    Awesome video, we love you, keep working

  • @vandelayindustries2971
    @vandelayindustries2971 4 months ago +1

    Awesome video! Maybe some feedback: the audio volume is really low :)

    • @8AAFFF
      @8AAFFF 4 months ago +1

      thanks :D
      ill increase it next time

  • @gorgolyt
    @gorgolyt 4 months ago

    Nvidia's denoisers use deep learning trained on pairs of noisy images and original images; you ain't gonna outperform those.
    Your account of raytracing noise seems somewhat erroneous or incomplete (to my limited understanding). The noise usually comes from the finite random sampling used to scatter rays. It's not about failing to hit a light source. In fact, if you fail to hit a light source, that's information you want to use rather than try to repair.

  • @kuklama0706
    @kuklama0706 4 months ago

    Try applying a minimum and then a maximum filter; that's faster than a median filter.
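A pure-NumPy sketch of the min/max idea. One caveat: min-then-max (a morphological opening) removes bright specks; for the black-pixel "pepper" noise in the video the order would be max-then-min (a morphological closing), which is what this sketch does. Both are cheaper than a median because they only compare values instead of sorting:

```python
import numpy as np

def window_reduce(img, r, fn):
    """Apply fn (np.minimum or np.maximum) over a (2r+1)^2 window."""
    h, w = img.shape
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    out = p[0:h, 0:w].copy()
    for i in range(k):
        for j in range(k):
            out = fn(out, p[i:i + h, j:j + w])
    return out

def close_pepper(img, r=1):
    """Morphological closing (max then min): removes isolated dark pixels
    while roughly preserving larger dark regions."""
    dilated = window_reduce(img, r, np.maximum)
    return window_reduce(dilated, r, np.minimum)
```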

  • @madghostek3026
    @madghostek3026 4 months ago

    Cool video. I wonder what would happen if the raytracing engine, instead of rejecting a ray that ran out of bounces, took the colour it had gathered but at lower intensity, and maybe used that as a base for the blur filter so the black islands aren't so black. After all, it carries some information.

    • @8AAFFF
      @8AAFFF 4 months ago

      yeah i think that's also how they add ambient glow :)

    • @AntonioNoack
      @AntonioNoack 4 months ago

      That introduces a systematic error, and is therefore a biased algorithm.
      Photorealistic rendering tries to stay unbiased, though.

  • @mikkelens
    @mikkelens 4 months ago

    write it in something other than scratch/js if thats so much faster lol. Even Lua would be a giant leap for making it "really fast". You can do an order of magnitude more work in the same time and with less memory, especially in a compiled language. If you're using indices for access/mutation of arrays, then Rust is a decent choice for this, because it looks similar to Python (and can interface easily with your actual scripts).

  • 4 months ago

    now in a video

  • @sandded7962
    @sandded7962 4 months ago +1

    I am edging to this rn. I was never this close to Bussin... Looking forward to the continuation, that edging session would be wonderful

  • @jayrony69
    @jayrony69 4 months ago +1

    the voice is so quiet

  • @ThatOneUnityGamedev
    @ThatOneUnityGamedev 4 months ago

    I've used scratch for 3 years and it's VERY slow. But yes, it's a lot faster at certain things than python is

  • @memetech-
    @memetech- 4 months ago

    Can’t you just force alpha max post blur / use average of non-black neighbours?

  • @Beatsbasteln
    @Beatsbasteln 4 months ago +2

    that was extremely fascinating. you're great at visualizing the concepts you wanna describe. however, your voice sounded pretty dull compared to the sound effects. if i were you i'd consider slapping a fast compressor or an exciter on those vocals to bring more speech intelligibility out of the high end

    • @8AAFFF
      @8AAFFF 4 months ago +2

      thanks :)
      i checked out your channel, u have some great audio tips
      ill def take them into account

  • @marcellonovak7271
    @marcellonovak7271 4 months ago

    give your editor a raise

    • @8AAFFF
      @8AAFFF 4 months ago

      i am the editor thanks XD

  • @fayenotfaye
    @fayenotfaye 4 months ago

    Couldn’t you use an anti aliasing style method?

  • @xorlop
    @xorlop 4 months ago

    I didn't catch it, how long does the final algorithm take per image?

    • @8AAFFF
      @8AAFFF 4 months ago +1

      depends on the resolution and how bad the noise is (worse noise takes more steps to denoise), but i calculated that on a 1920x1080 image it can run at around 250-300 FPS on an RTX 2060

  • @int16_t
    @int16_t 4 months ago

    How do we deal with fireflies though?

    • @8AAFFF
      @8AAFFF 4 months ago

      if i could detect them (maybe using some algorithm to detect very rapid brightness changes) i could add them to the "mask" and they would be filled in.
      ur right tho, i didn't think about them
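One simple heuristic for marking fireflies along these lines, in pure NumPy (the 4x-median threshold is an arbitrary guess that would need tuning per scene):

```python
import numpy as np

def firefly_mask(img, r=1, threshold=4.0):
    """Flag pixels much brighter than the median of their neighbourhood."""
    h, w = img.shape
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    # Per-pixel median over the (2r+1)^2 window, via stacked shifted views.
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(k) for j in range(k)])
    med = np.median(windows, axis=0)
    # The floor on the median avoids flagging everything in near-black regions.
    return img > threshold * np.maximum(med, 1e-3)
```

The resulting boolean mask could be merged into the denoiser's existing black-pixel mask so fireflies get filled in the same way as the holes.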

  • @drdca8263
    @drdca8263 4 months ago

    Edit: I should have watched until the end of the video before commenting. Silly me. You even said “before you ask”!
    [strikethrough]Something that might make sense: what if the rays that time out and don't reach a light source, instead of being black, returned a not-a-color value?
    That way you could distinguish between pixels that are black because they show a completely absorbing surface, and pixels that ran out of bounces.[/strikethrough]
    Another idea: what if each surface could be treated as a light source, but only when running out of bounces, with the light-source properties of the surface based on how it is generally illuminated? (Like, maybe based on the total brightness of the rays that hit that surface first in a previous frame, divided by its surface area?)

  • @FreakyWavesMusic
    @FreakyWavesMusic 4 months ago +1

    interesting approach, but your gain is too low; please bring your voice up to at least -3 dB

  • @nassinger3365
    @nassinger3365 4 months ago

    so cool

  • @pax5072
    @pax5072 4 months ago

    Nvidia might be exaggerating their image; they're known for doing that.

  • @FishSticker
    @FishSticker 3 months ago

    Okay is scratch ACTUALLY FASTER THAN PYTHON or are you fucking with me

  • @laurensvanhelvoort3921
    @laurensvanhelvoort3921 4 months ago

    Cool!

  • @that_guy1211
    @that_guy1211 4 months ago

    ah yes, there are images that are made to "poison" AI image generators; now with this, we can noise and then de-noise images to un-poison our AI image gens! Great stuff, 8AAFFF!!!

  • @Scratchfan321
    @Scratchfan321 4 months ago

    I can confirm that Scratch can indeed often outperform python

    • @mikkelens
      @mikkelens 4 months ago +2

      Not a very difficult feat for most programming languages. Scratch (javascript) is pretty fast with V8.

  • @xskii
    @xskii 4 months ago

    1:42 tbh idk either editor

  • @theshuman100
    @theshuman100 1 month ago

    love the video. but i feel like someone who has no experience with the 3D pipeline shouldn't be optimising it. you get weird assumptions, like that noise in raytracing only comes in black

  • @alexdefoc6919
    @alexdefoc6919 4 months ago

    Question : What if we cast shadows and invert the colors? Basically light everywhere and inverse?

    • @AntonioNoack
      @AntonioNoack 4 months ago

      You can happily try that, but you'd need the maximum value for the light at a point.
      Unfortunately, you can imagine a theoretical focusing lens around every single point individually, with a set of mirrors if it's on the backside.
      Every pixel (on its own) could be extremely bright.
      That would always make the image way too bright, and we'd have to apply denoising in reverse: reducing too-bright pixels.

  • @VioletGiraffe
    @VioletGiraffe 4 months ago +2

    This is amazing, your 3rd version works so well, I'd never guess such a simple algorithm (conceptually) would be so good.
    Of course, nowadays image generation with neural networks is all the rage in tasks like this.

  • @adansmith5299
    @adansmith5299 4 months ago

    "raytracing lore" lmao

  • @legreg
    @legreg 3 months ago

    How not to make a denoiser :D

  • @Idiot354
    @Idiot354 4 months ago

    holy shit youtube fkd the volume of this video

    • @AntonioNoack
      @AntonioNoack 4 months ago

      YouTube doesn't adjust the volume of videos afaik.

  • @besusbb
    @besusbb 4 months ago

    lol

  • @vatyunga
    @vatyunga 4 months ago

    If we don't know the color of the current pixel, can't we take the average of its neighbours?

    • @AntonioNoack
      @AntonioNoack 4 months ago

      Smear all over your image, and your noise will be gone. Your features, too ;).
      No pixel is known to be correct, so everything needs to be blurred ^^.

  • @KX36
    @KX36 4 months ago

    you invented CSI's "enhance" function

  • @adicsbtw
    @adicsbtw 4 months ago

    I imagine it's already been implemented, as it would just make sense to my brain, but could you not take rays that bounce off into the void and perform some check to see what percentage of the light hitting that spot would bounce toward the last surface the ray hit, back in the direction it came from? Or is that an operation that's just too expensive to compute within a reasonable amount of time?
    Perhaps if you could bake some information, it might be easier to perform, and it could help with realtime raytracers at least, by accepting a bit of extra error in exchange for possibly significant quality improvements at similar sample counts.
    Also, most raytracing denoisers have access to far more data than just the color.
    For example, they usually have access to a normal map of the entire camera view in tangent space (relative to the camera's perspective), so you know roughly where the edges of objects should be, which can help make object differentiation much cleaner.
    There's also usually some depth information, which helps with this as well.

  • @hwstar9416
    @hwstar9416 4 months ago

    why are you even using python?

    • @8AAFFF
      @8AAFFF 4 months ago

      Through PyTorch with GPU
      So it's technically running on C++

  • @Kyoz
    @Kyoz 4 months ago

    🤍