Illumination Tutorial for Software 3D Rendering (2/2+) [c++20]

  • Published 30 Nov 2024

COMMENTS • 349

  • @Bisqwit
    @Bisqwit  4 роки тому +32

    All the source code of this series can be downloaded from: iki.fi/bisqwit/jkp/polytut/ It also includes, as patches, fixes for _all_ the bugs I mentioned in this video.
    *A reminder of what I said at **31:25**: Do not reply to **_this post_** if you want me to see your comment. Post your words as a new comment (the line for that is above), not as a reply, unless you are addressing the contents of **_this comment_** specifically. YouTube does not show creators new replies; it only shows new comments. If you reply here **_I will not see it_** unless I manually check for it.* If you are addressing a comment someone wrote, then you _should_ reply to it, though.
    Note: Luxels are also sometimes called _lumels._

  • @awesomekling
    @awesomekling 4 роки тому +212

    I love how this series is teaching me what all those game settings I’ve tweaked in my life actually mean :)

    • @sZenji
      @sZenji 4 роки тому

      so u live with vsync?

    • @MESYETI
      @MESYETI 4 роки тому +22

      Hello SerenityOS person ;)

    • @Mad3011
      @Mad3011 4 роки тому +6

      Ayy, you're here too.

    • @luckylove72
      @luckylove72 4 роки тому

      Or maybe you could have read some books on graphics instead of wasting your time on YouTube.

    • @sharoyveduchi
      @sharoyveduchi 4 роки тому +5

      Andreas Kling and Bisqwit collab when?

  • @unevenprankster
    @unevenprankster 4 роки тому +39

    I think this has truly been the greatest series of videos on the channel so far. No one else has explained these concepts better in a video format before, and hopefully this will pave the way for other creators to cover the topic better. Thank you very much. Onto the future we go!

  • @logins
    @logins 4 роки тому +15

    I am amazed at how you can pick up any topic in programming and just implement it, especially when it comes to graphics lighting. And after all that, you are also able to explain it! Great work.

    • @Bisqwit
      @Bisqwit  4 роки тому +10

      You have to remember I only do videos about topics I know about.

  • @ashleyoliver3193
    @ashleyoliver3193 4 роки тому +3

    Watching those lightmaps get progressively recomputed as you move the lights around is absolutely fascinating.

  • @gsuberland
    @gsuberland 4 роки тому +9

    Absolutely loving this series. I know some superficial information about 3D rendering but it's great to see the actual details and mathematics broken down in a practical sense. Much of this topic is often presented in a manner that makes it feel very daunting to even begin, but you've really done it justice here.

  • @ricardo.mazeto
    @ricardo.mazeto 4 роки тому +51

    1 - You're finally taking the time to explain your code! And you're explaining it well! Big thumbs up for that!
    2 - Wouldn't it be faster and better if, instead of putting a camera on every single pixel of the light map and rendering a small image to set the light accordingly, you checked whether the lines between the center of the pixel and the light sources are intersected by any objects?

    • @Bisqwit
      @Bisqwit  4 роки тому +31

      That would only account for direct lighting, and is essentially the same as raytracing. It would not create indirect lighting. For example, the tunnel near the ceiling (which I apparently did not traverse in this video) which has no light sources, would be pitch-black - which is not realistic. It should still receive _indirect_ (reflected) lighting from walls that are illuminated.
      Also raytracing towards the center of light sources creates ugly razor sharp shadows, as if the light source is very far away (like the sun) or tiny and directly pointed at the object.
      You _can_ add indirect lighting by also casting a few hundred rays in random directions (not just towards light sources) and getting whatever pixel color the ray hits - and this is in fact exactly what I did when generating the lightmaps for the OpenGL video - but then you’ve lost any performance advantages over the method I described in this video.
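      A minimal sketch of that "few hundred random rays" idea, as an editor's illustration rather than the video's code: cast cosine-weighted random directions over the hemisphere above a luxel and average what they hit. Vec3 and TraceRay are assumed placeholder names; the stub TraceRay must be replaced by the engine's actual ray/scene intersection.

        #include <cmath>
        #include <random>

        struct Vec3 { float x, y, z; };
        Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
        Vec3 operator*(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }

        // Placeholder: replace with the engine's own "what color does this ray see" query.
        Vec3 TraceRay(Vec3 /*origin*/, Vec3 /*dir*/) { return {0.1f, 0.1f, 0.1f}; }

        // Build an orthonormal basis (t, b, n) around the surface normal n.
        static void Basis(Vec3 n, Vec3& t, Vec3& b)
        {
            t = std::fabs(n.x) > 0.9f ? Vec3{0,1,0} : Vec3{1,0,0};
            float d = t.x*n.x + t.y*n.y + t.z*n.z;                 // remove the component along n
            t = {t.x - n.x*d, t.y - n.y*d, t.z - n.z*d};
            float len = std::sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
            t = {t.x/len, t.y/len, t.z/len};
            b = {n.y*t.z - n.z*t.y, n.z*t.x - n.x*t.z, n.x*t.y - n.y*t.x};
        }

        // Average the color seen along ~200 cosine-weighted directions above the luxel;
        // scaling the result by the surface albedo gives the indirect (bounced) contribution.
        Vec3 GatherIndirect(Vec3 position, Vec3 normal, int samples = 200)
        {
            static std::mt19937 rng{12345};
            std::uniform_real_distribution<float> uni(0.f, 1.f);
            Vec3 t, b; Basis(normal, t, b);
            Vec3 sum{0,0,0};
            for(int i = 0; i < samples; ++i)
            {
                float u1 = uni(rng), u2 = uni(rng);
                float r = std::sqrt(u1), phi = 6.2831853f * u2;    // cosine-weighted hemisphere sample
                Vec3 dir = t*(r*std::cos(phi)) + b*(r*std::sin(phi)) + normal*std::sqrt(1.f - u1);
                sum = sum + TraceRay(position, dir);
            }
            return sum * (1.f / samples);
        }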

    • @GoldenThumbs
      @GoldenThumbs Рік тому

      @@Bisqwit To add to this (two-year-old) answer: this is also not some random concept he came up with either... Or maybe it is... But it *is* also an existing technique for generating lightmaps. It's called "radiosity", or "radiosity lightmapping" if you prefer. It's been used in video games for decades at this point. The first game *I* personally know of that used it is Quake (1996), but also more recent games such as Half-Life (1998), Half-Life 2 (2004), and Portal (2007) (you probably get where I'm going with this; it's part of Source's mapping tools lol... Unity (the game engine) also used it, in the form of a third-party library called "Enlighten"). There are several papers about this algorithm; one that I see cited a lot but have a hard time tracking down is by Hugo Elias. There's *also* the open-source... library? I guess I'll call it a library... lightmapper (github.com/ands/lightmapper), which is a single-file C & OpenGL implementation of this effect.

  • @duuqnd
    @duuqnd 4 роки тому +46

    Oh boy, I'm gonna have to watch this a few times

  • @vegardertilbake1
    @vegardertilbake1 3 роки тому +1

    I love the dry, witty humor you manage to throw in. Truly a master at work!

  • @timetravelbeard3588
    @timetravelbeard3588 3 роки тому +1

    Thank you so much for these videos. Your voice and personality are so comforting and your coding is inspiring. Your solution to lighting here is oddly elegant even though it's processor heavy. Using cameras at every surface is something I wouldn't even consider even though it solves every lighting problem at once. To solve the leaking light between polygons I would add one pixel to the rasterizer x and y loops, as in "for(x=left,x

  • @henryso4
    @henryso4 4 роки тому +22

    GI is always fun - I've been doing it in "real time" (runs at like 20 fps right now which isn't very acceptable) using a voxel structure - by cone tracing with slightly randomized cones aligned with any diffuse surface's normal. The results are surprisingly decent but I'm still working on optimization and reducing flickering from voxelization

    • @DoctorGester
      @DoctorGester 4 роки тому +1

      You can check out the Handmade Hero series for voxel-based GI running in real time (implemented from scratch).

    • @chetana9802
      @chetana9802 4 роки тому

      Can you share a link to your work?

  • @SuperSpeed52
    @SuperSpeed52 4 роки тому +1

    Your mental strength to come up with these kinds of solutions is what motivates me to keep going and never think I have learned enough. It's also quite depressing that my university doesn't push our potential more with projects like this; in my context it is either impossible or really, really hard, both mathematically and in putting it all together in code.
    You're basically what my peers and I aspire to be; I'm very thankful for this showcase of pure math and coding skill.
    I love your work and the Lufia music you fit in :)

  • @3DSage
    @3DSage 3 роки тому +1

    28:30 I like the 64x64 pixels! It's like the 3D Minecraft I programmed on the GBA.

  • @kruruneiwyn2107
    @kruruneiwyn2107 4 роки тому +12

    I watch a lot of stuff here on YouTube, but nothing on here can or ever will match one of your uploads.
    I've been a subscriber for a while, and I don't work with C++ or even anything remotely close to game-related libraries or what have you... But thank you so much for making these videos! I always look forward to watching this stuff when I see your uploads in my feed. Even though I may not work with C++ or graphics libraries, I'll always learn something, which is always good.
    Tonight this video was accompanied by a pizza, after waking up at ~6pm.
    Continue being awesome!

  • @MissNorington
    @MissNorington 4 роки тому +5

    29:47 "It's a bug in my engine". If you have the book Michael Abrash's Graphics Programming Black Book Special Edition, go to page 1066, chapter 57 Figure 57.1: "Gaps caused by mixing fixed-point and all-integer math". And if you know it is not your polygon edge interpolation math that is causing it, then you might be running into the other problem: Texture sampling, interpolating from outside the texture, or reading outside the texture (clamp texture edge doesn't solve everything). Many games use textures that expand the borders of the polygon, so that when being rendered from far away, will still look okay. Even Michael Abrash admitted in his book that he completely skipped over his own advice of polygon rendering, and had to go back and fix his code in order to solve the problems.

  • @Je3f0o
    @Je3f0o 4 роки тому +4

    Amazing work. I'm amazed at how hard you worked on this video. Many years ago I watched your NES emulator video and it inspired me to learn C++. Now I'm already a fairly decent programmer across a wide range of computer science. A few years ago I tried to create my own 3D game engine using OpenGL. I learned a lot about rendering, some game physics, lighting, etc., but I haven't figured out how to animate characters yet. In the last few years I haven't written any game-engine-related code (busy working in JavaScript all the time), but 3D game engines are still my favorite subject in computer science. They are very challenging and teach me a lot about math, physics, and computer science. That is why I love it so much. By the way, great work again. You are still inspiring me to grow more and work harder like you. :P Thank you.

  • @ipaqmaster
    @ipaqmaster 4 роки тому +4

    I love your videos and they're very entertaining!
    Especially upgrading from a single rendering thread to full 48-core CPU multithreading in the middle of the video!

  • @Acuebed
    @Acuebed 4 роки тому +3

    Hey Bisqwit! Just wanted to say thank you for posting all these crazy high-quality videos. I'm not in the same domain, or of the same caliber as you at this, but it's motivated me to document myself writing code as well.
    Best wishes, and cannot wait to see more of your content!

  • @frahohen
    @frahohen 2 роки тому

    You are so good that you can explain it simply and keep it interesting. I swear to god I miss people like you on YouTube, who take the hard topics, break them down into simple, understandable pieces, and present them in a visually amazing way.

  • @mateusarruda23
    @mateusarruda23 4 роки тому +4

    You explain things in a way that is easy and pleasant to understand. I hope you continue to make these amazing videos. Congrats!

  • @therewasblood
    @therewasblood 4 роки тому

    Google/YouTube must not know me very well. I saw this channel for the first time after years of watching programming, hacking, and gaming channels, and they didn't even realize THIS is my new favorite channel. Google, get your act together! Friggin' awesome, amazing channel, Bisqwit! Subscribed.

  • @ihspan6892
    @ihspan6892 4 роки тому

    I LOVE how deep you go with each topic. I salute you!

  • @AcmeMeca
    @AcmeMeca 4 роки тому +6

    nomenclature-wise i prefer to think of it as "pixel" = picture element, and "texel" = texture element.

  • @HugRunner
    @HugRunner 4 роки тому +2

    So glad you take the time to create these videos and projects. It's truly amazing and very inspiring. Hearing about (real time) global illumination these days, it's really hard not to think about Unreal Engine 5 and the demo they showed there. You talk about your code not being optimized etc. and being CPU intensive and I understand that's not the prime purpose of this series, but it could be really interesting if you could make a video trying to describe what kind of techniques or differences it would take for your project to obtain similar result to the light demoed in UE5. Thanks a lot, no matter if you have time to make something like that!

    • @Bisqwit
      @Bisqwit  4 роки тому +4

      An engine such as UE5 uses a combination of dozens of different techniques to achieve its result. I could not hope to catch up with that. However, I do try to keep doing this series and covering progressively more complex themes. The next thing that I will cover will probably be HDRi, and after that, maybe portal rendering. But before I get there I may need to take a short break and do a less demanding video first so I don’t burn out.

  • @de_generate
    @de_generate 4 роки тому +2

    Big fan of the practical light experiment, thanks for the effort!

  • @SomeRandomPiggo
    @SomeRandomPiggo 8 місяців тому

    Great video! I finally understand how lightmaps are calculated, even if, as you said, it isn't the most efficient method. I might try calculating them with an FBO on the GPU instead. Thank you very much!

  • @Arti9m
    @Arti9m 4 роки тому +1

    Videos like this motivate me to keep working on my own rather complicated projects. Thank you!

  • @janmroz348
    @janmroz348 3 роки тому

    Wow, that's a great video about lightmapping!
    Some time ago I tried to bake ambient occlusion for 3D models using a very similar technique. I faced the same problem as described at 28:37, because I was baking a 32x32 texture with a single camera shot with a 170-degree FOV (with fisheye remapping), but I solved it by using 8x MSAA on the bake texture plus a lightmap denoising algorithm.
    The 32x32 texture also worked really well, because it fit into my GPU cache nicely, so the averaging was done on the GPU almost without any cache misses and without using the CPU-GPU bus. With this approach I could bake a high-quality 4K AO map and still measure the bake time in seconds, not minutes!
    From this I've learned that lightmapping is not so much about writing the lightmapper logic, but mainly about fixing small details and fighting for milliseconds of optimization. Still, if you are a graphics programmer, I highly recommend giving it a shot; it's a great journey.
    It's great that someone created such an intuitive explanation, keep it up!
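    A minimal sketch of one possible "fisheye remapping" like the commenter describes: an equidistant projection that maps a direction in tangent space (+Z = surface normal) to UV coordinates of a single square bake image with a 170-degree field of view. The exact projection the commenter used is not stated, so this is an assumption, and FisheyeUV is a hypothetical helper name.

      #include <algorithm>
      #include <cmath>
      #include <utility>

      // Map a unit direction (dx,dy,dz) to UV in [0,1]^2; the image centre looks along the normal.
      std::pair<float,float> FisheyeUV(float dx, float dy, float dz, float fovDegrees = 170.f)
      {
          float theta    = std::acos(std::clamp(dz, -1.f, 1.f));      // angle away from the normal
          float maxTheta = 0.5f * fovDegrees * 3.14159265f / 180.f;   // half of the field of view
          float r        = theta / maxTheta;                          // equidistant: radius grows linearly with angle
          float phi      = std::atan2(dy, dx);
          return { 0.5f + 0.5f * r * std::cos(phi),                   // falls outside [0,1] if theta > maxTheta
                   0.5f + 0.5f * r * std::sin(phi) };
      }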

  • @mikec3412
    @mikec3412 4 роки тому

    Wonderful video Bisqwit! You have always been one of my favorite programmers to watch. Never afraid to dive deep and explore different ideas.

  • @TheSenorTuco
    @TheSenorTuco 4 роки тому

    Actually, your series motivated me to give up on C++17 and start learning C++20.

  • @vitaliyforij5205
    @vitaliyforij5205 11 місяців тому

    Thanks for explaining)
    I searched for a long time for an explanation of lightmaps )
    You are amazing)

  • @PietroNardelli
    @PietroNardelli 4 роки тому +7

    It is a blast to see how the quality of your videos has improved; I remember when you could not record your screen directly. I am very happy for you. Your content is among the best regarding programming. It is a bit weird hearing you say hello and not shalom. Could you do more videos on obfuscated programming?

    • @Bisqwit
      @Bisqwit  4 роки тому +4

      If I get ideas in that area, maybe.

  • @petacreepers23
    @petacreepers23 4 роки тому

    Extremely nice video; I'd probably have to watch it again to fully comprehend it, as I am not an expert programmer. It is really good that this content is created, since most "tutorials" are just basic stuff and the really complicated things that are written in the underlying libraries are rarely explained. Really nice content, again.

  • @GIJOEG36
    @GIJOEG36 4 роки тому +6

    I like the new "intro animation"/"transition animation"

  • @captainshitbrix7271
    @captainshitbrix7271 4 роки тому +1

    i love a good graphics lecture/video essay/knowledge explosion from bisqwit

  • @sirpizza2044
    @sirpizza2044 4 роки тому

    The amount of knowledge this man has is insane

  • @BRUXXUS
    @BRUXXUS 4 роки тому

    This is absolutely incredible!
    I've recently gotten into the Demoscene and love seeing what people can do with code.
    I've also been mapping in Source since HL2 was released. My skills definitely lean more toward the actual design and mechanics side than coding, although I really wish I knew how to code.
    Someday I will learn, and watching videos like this really inspires me to finally just get started.

  • @l3p3
    @l3p3 4 роки тому +1

    Nice video. The shift from voiceover to live footage felt really weird. A good reason for keeping that style in future videos.
    My understanding: We need to loop over all those luxels over and over again since the processor can only look at one luxel at a time. Over the number of iterations, it gets closer and closer to the ideal value. (Like the man racing the turtle.) We don't have many other choices here. I was reading about analog computing over the last few days and I thought: how could we model this problem so it is solved without atomic stepping and looping? Then I thought about using light in a room and a camera. I feel stupid now, because what I actually concluded was to build the scene physically and take a photograph of it. Nice solution Bisqwit, try it out! Perfect and realtime, programmed and offered by the best programmer ever.

  • @starc0w
    @starc0w 4 роки тому +1

    Very, very impressive, Bisqwit! Keep going! It's very interesting and entertaining!

  • @ians.2349
    @ians.2349 4 роки тому

    Thank you for this series, I've loved it. I may try to implement some of the techniques from this series in a C based software renderer sometime in the future.

  • @yigitpolat
    @yigitpolat 4 роки тому +6

    Hello Bisqwit, thanks for the beautiful content!
    A normal map is not the same thing as a bump map. A bump map encodes bumps as a map in which each pixel value corresponds to the elevation at that particular location.

    • @Bisqwit
      @Bisqwit  4 роки тому +5

      I see.

    • @abcxyz5806
      @abcxyz5806 4 роки тому +4

      These names are a complete mess. These elevation maps are also often called displacement maps and I have seen normal maps referred to as bump maps

    • @emperorpalpatine6080
      @emperorpalpatine6080 3 роки тому

      Yeah, the naming convention is a mess...
      I always considered the bump map to be a normal map, whereas I know the elevation map as the height map instead.
      I think there's a way to compute a normal map from a height map (a sketch of that conversion follows this thread), but not the other way around.

    • @yigitpolat
      @yigitpolat 3 роки тому

      @@emperorpalpatine6080 you can, you would lose precision though.

    • @MrDavibu
      @MrDavibu 3 роки тому

      This isn't true.
      Bump maps and normal maps are the same thing.
      What you are referring to are height maps or displacement maps.
      Normal maps create the illusion of bumps, thus the name.
      I just read the Wikipedia article, and they refer to bump mapping as a category of texture mapping, but I personally have never read this in actual CG literature. Even then, they explicitly say bump mapping doesn't change the geometry of the object, so height maps still wouldn't be considered bump mapping either way.
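      As a minimal sketch of the height-map-to-normal-map conversion mentioned earlier in this thread: take central differences of the height field and build a tangent-space normal from the two slopes. The function name, array layout, and the strength factor are illustrative assumptions, not anything from the video's code.

        #include <algorithm>
        #include <array>
        #include <cmath>
        #include <vector>

        // height: W*H grayscale values in [0,1], row-major. Returns tangent-space normals (x,y,z).
        std::vector<std::array<float,3>> HeightToNormals(const std::vector<float>& height,
                                                         int W, int H, float strength = 2.f)
        {
            auto at = [&](int x, int y)
            {
                x = std::clamp(x, 0, W-1); y = std::clamp(y, 0, H-1);      // clamp at the borders
                return height[y*W + x];
            };
            std::vector<std::array<float,3>> normals(W * H);
            for(int y = 0; y < H; ++y)
                for(int x = 0; x < W; ++x)
                {
                    float sx = (at(x+1,y) - at(x-1,y)) * 0.5f * strength;  // slope along X
                    float sy = (at(x,y+1) - at(x,y-1)) * 0.5f * strength;  // slope along Y
                    float len = std::sqrt(sx*sx + sy*sy + 1.f);
                    normals[y*W + x] = { -sx/len, -sy/len, 1.f/len };      // unit normal, Z points out of the surface
                }
            return normals;
        }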

  • @szymoniak75
    @szymoniak75 4 роки тому +35

    20:17 lol, actually using gamma symbol in code. looks strange

    • @Bisqwit
      @Bisqwit  4 роки тому +16

      Yeah, C++ allows plenty of Unicode characters in identifiers. en.cppreference.com/w/cpp/language/identifiers#Unicode_characters_in_identifiers As does C since C99. This page does not mention it, but the feature was introduced in C++11. However, compiler support has been incomplete for a long time. Only as recently as GCC 10 was support added for writing those symbols verbatim in UTF-8 encoding, rather than having to type them as escapes, such as \u03B3 for γ.
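      A tiny illustration of the point above, assuming a compiler with the verbatim UTF-8 support mentioned (GCC 10 or newer, e.g. g++ -std=c++20); the function name Degamma is made up for the example.

        #include <cmath>

        double γ = 2.2;                                               // identifier written verbatim in UTF-8
        double Degamma(double v) { return std::pow(v, \u03B3); }      // the same identifier, typed as an escape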

  • @TripleBla
    @TripleBla 4 роки тому

    bang on, looks great in 4K .. I want to start creating content in 4K .. all I need is a camera, can't wait to see more of your new 4K content. Keep up the great work!

  • @MKVideoful
    @MKVideoful 4 роки тому +53

    Every programmer should create a compiler, a 3D graphics renderer, a voice synthesiser, and an AI... Then the programmer will probably ascend to another dimension. =D

    • @islilyyagirl
      @islilyyagirl 4 роки тому +1

      i'm currently coding a CNN

    • @SuperSpeed52
      @SuperSpeed52 4 роки тому +1

      I have only done the compiler part; my university won't even touch anything related to graphics or voice synthesisers, and AI only comes near the end of the degree. Guess I will just have to ascend to another dimension in satanic ways.

    • @peacefulexistence_
      @peacefulexistence_ 4 роки тому

      add to it an MPM simulator and an OS, the list is too short lol

    • @mariobrother1802
      @mariobrother1802 4 роки тому

      Don't forget emulators!

  • @az0r22
    @az0r22 3 роки тому

    Amazingly well-made video! You are the best, Bisqwit. Your videos are so nice to watch.

  • @taza99
    @taza99 4 роки тому

    Looks good. It's heavy to run, of course, but it does look good. The texture explanations at the beginning were also easy to follow; I knew what they meant beforehand, of course, but it was well explained, and surely others will understand it too.

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      Thanks. In fact, half of the performance goes into that gamma correction alone. pow() is not an efficient function by any means…

  • @ni.ko3869
    @ni.ko3869 3 роки тому

    Your video made lightmapping understandable to a rube like me and got me thinking about how to implement a lightmapping algorithm myself.

  • @matthewconte
    @matthewconte 4 роки тому

    [cool music intensifies]
    Love your videos and dedication, Bisqwit!

  • @田中くま-f1i
    @田中くま-f1i 4 роки тому

    From my point of view you are the image of power. I want to be like you in the future.

  • @terraria9934
    @terraria9934 4 роки тому

    i love this series so far. keep going dude you are doing amazing.

  • @shire7949
    @shire7949 4 роки тому +1

    Amazing video, thank you Bisqwit as always for such valuable knowledge

  • @alexpaww
    @alexpaww 4 роки тому

    Bisqwit, you can use RenderDoc to change the OpenGL state in applications. It's normally used to debug things, but you can ofc also change things like texture filtering on a per-texture basis

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      Thank you. It won’t help this video anymore, but I will keep that in mind, and study how to use it.

    • @alexpaww
      @alexpaww 4 роки тому

      @@Bisqwit Yea, I just wanted you to know it exists. It's a nice tool to have in one's toolbox.

  • @skaruts
    @skaruts 2 роки тому

    One good thing about the Source Engine (I suppose the same can be said of GoldSrc and the Quake engine) was that you could specify the lightmap resolution for each surface separately while editing the maps. I don't see this possibility in Godot, and I suspect it's not in Unity or Unreal either, though I could be wrong about the latter two. Basically, an HL2 map defaulted to low-resolution lightmaps all around, and you specified the surfaces where higher resolutions were needed.

  • @Legnog822
    @Legnog822 2 роки тому +1

    29:22 unreal engine 5 Lumen but slower haha, pretty cool!

    • @Bisqwit
      @Bisqwit  2 роки тому

      Thanks! Using the CPU for rendering is far slower than using the GPU, but it can be useful for teaching the concepts.

  • @therealdutchidiot
    @therealdutchidiot 4 роки тому

    The bug you ran into sounds very much like a note in the Quake III engine, where, as a fix, some vertices are drawn next to each other to prevent seams from showing.

  • @Bleenderhead
    @Bleenderhead 4 роки тому

    If you wanted to get real time dynamic lighting, rather than constantly running what amounts to path tracing in a background thread, you could use the hemisphere cameras to precompute the radiosity form factor matrix, which encodes the (cosine-weighted) visibility from every element to every other element. Then computing the global illumination amounts to solving a sparse linear system, and the form factor matrix does not need to be recomputed if you change the emission of various surfaces. It wouldn't handle moving lights, though. So I guess it's only dynamic with respect to which surfaces are glowing.
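    A minimal sketch of the idea in this comment (the commenter's proposal, not the video's implementation): once the form-factor matrix F has been precomputed, the classic radiosity system B = E + rho*F*B can be relaxed iteratively, and only E changes when the emissive surfaces change. F is stored densely here for brevity; in practice it would be sparse.

      #include <cstddef>
      #include <vector>

      // F[i][j]: cosine-weighted visibility of element j from element i (precomputed once).
      // E: emission per element, rho: diffuse reflectivity per element (each < 1).
      std::vector<float> SolveRadiosity(const std::vector<std::vector<float>>& F,
                                        const std::vector<float>& E,
                                        const std::vector<float>& rho,
                                        int iterations = 50)
      {
          const std::size_t n = E.size();
          std::vector<float> B = E, next(n);
          for(int it = 0; it < iterations; ++it)               // Jacobi relaxation of B = E + rho*F*B
          {
              for(std::size_t i = 0; i < n; ++i)
              {
                  float gathered = 0.f;
                  for(std::size_t j = 0; j < n; ++j) gathered += F[i][j] * B[j];
                  next[i] = E[i] + rho[i] * gathered;          // add one more bounce of light
              }
              B.swap(next);
          }
          return B;                                            // converges because rho times the row sums of F stays below 1
      }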

  • @ERINDIGR
    @ERINDIGR 4 роки тому +8

    The white lines are probably not gaps but light leaking from another surface on the UV map. You should have a gap of at least 16 pixels between UV polygons.

    • @Bisqwit
      @Bisqwit  4 роки тому +3

      That is a good theory, but it is unfortunately wrong; adding padding between the lightmaps does not fix the problem, I tried it. As far as I understand, there are two causes that operate in parallel.
      1. First is the workaround in polygon_draw that I made to address the problem pointed out in ua-cam.com/video/hxOw_p0kLfI/v-deo.htmlm53s . This error causes the rightmost column of texels to sometimes not be rendered. If I disable the workaround for that problem, the artifacts on the wall remain but the seam glitch is gone.
      2. Rounding errors in clipping. Clipping sometimes produces seams between adjacent polygons. This is apparent especially in this video when the 64x64 view is shown. There is an inexplicable dark diagonal line on the wall at 28:37. I don’t know why it happens, but it occurs only when the edge of the polygon is clipped by the frustum. I don’t know why the clipping causes different rounding in different polygons even though they share the same end vertices. EDIT: I fixed this particular problem; the patch is included on the webpage that is in the description. However, the problem with artifacts on the wall remains.
      There is also the issue that a white stripe of light appears on the edge of some polygons, such as the left side of the screen at 30:24. This is, I think, because the lightmap camera is not positioned perfectly on the luxel, but sees out of bounds. This, too, is not fixed by adding padding between lightmaps. The game _Mirror’s Edge_ also suffers from this problem in some locations that are not intended to be visible to the player, although the cause is slightly different (interpolation between a visible luxel and an out-of-bounds luxel).

    • @Prithvidiamond
      @Prithvidiamond 4 роки тому

      Isn't it basically simulating diffraction of light through a single tiny slit?

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      Diffraction has nothing to do with it. This program does not simulate light waves, or waves of any sort.

    • @l3p3
      @l3p3 4 роки тому

      @@Bisqwit maybe you slipped in some quantum physics by accident, causing tunneling, you better look into the code again to rule that theory out.
      Edit: Could also be some AI persons drawing some kind of art. Make sure there is no brain simulation in there. Almost missed this theory.

    • @Prithvidiamond
      @Prithvidiamond 4 роки тому

      @@Bisqwit I know, but the result looks very similar to diffraction patterns... So I suggested that...

  • @humanman951
    @humanman951 4 роки тому

    This demo looks fantastic! I guess bounce lighting could be done by repeating this process a couple of times while reading the light map calculated previously. Then all surfaces can become light emitters. Keep up the fab work

    • @Bisqwit
      @Bisqwit  4 роки тому

      I’m not sure how what you are describing differs from what I am already doing in this episode. This technique already does radiosity perfectly. That is, surfaces that are only illuminated _indirectly_ by other walls that are lit.

    • @humanman951
      @humanman951 4 роки тому

      Bisqwit Ah, so will it eventually converge on a total light level, or will it continue to get brighter forever, given that each quad gets more and more light each iteration?

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      It converges on the total light level. The total sum of light reflected by all walls can never exceed the brightness of the lightsource times its surface area, or something to that effect. One particular factor that makes this true is how the weightmap in lightmap rendering is normalized to 1. That is, unless you get full brightness of the light on _every possible pixel_ in the lightmap camera view, the brightness on the wall will always be less than the brightness of the light source. If even _one_ of those pixels does not see the light source, or sees just its reflected light from a wall (that is already dimmed), the luxel will be dimmed too.
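      A minimal sketch of the normalisation described above: build a per-pixel weight map for the small lightmap-camera render and scale it so the weights sum to 1, so a luxel can never come out brighter than the brightest pixel it sees. Only the cosine (Lambert) term is shown here; the real weights would also account for the camera projection, so treat this as a simplified assumption.

        #include <cmath>
        #include <vector>

        // Weight map for a size x size camera image looking along the surface normal (90-degree FOV).
        std::vector<float> MakeWeightMap(int size)
        {
            std::vector<float> w(size * size);
            float total = 0.f;
            for(int y = 0; y < size; ++y)
                for(int x = 0; x < size; ++x)
                {
                    float dx = (x + 0.5f) / size * 2.f - 1.f;                  // pixel direction is (dx, dy, 1)
                    float dy = (y + 0.5f) / size * 2.f - 1.f;
                    float cosine = 1.f / std::sqrt(dx*dx + dy*dy + 1.f);       // cosine between that ray and the normal
                    total += (w[y*size + x] = cosine);
                }
            for(float& v : w) v /= total;                                      // normalise so the weights sum to exactly 1
            return w;
        }
        // The luxel is then the sum of weight[p] * renderedPixel[p] over all pixels p of the small render.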

  • @RahulJain-wr6kx
    @RahulJain-wr6kx 4 роки тому +1

    Awesome series. I learnt some good things from this. Do you plan to explain colorspaces like sRGB and the conversions as well? It could be a good follow-up 😀

  • @sznio
    @sznio 4 роки тому

    Could you do something like an edge-detect on the lightmap to find areas with streaking and decide to increase the camera resolution for them? This could also be used for optimization: run a bad low quality render, if that render is completely uniform then most likely increasing the resolution won't add more detail, if the render is noisy/streaky then discard the result and increase the resolution. This also would increase resolution around shadow boundaries, and reduce resolution where it's not as needed.

    • @Bisqwit
      @Bisqwit  4 роки тому

      It would need a custom storage format for bitmaps that have varying resolution in various parts of the bitmap. I don’t know any approach to do that efficiently, neither in writing nor in reading.

  • @szymoniak75
    @szymoniak75 4 роки тому +1

    are you planning on making a video where you would rewrite the code to use the GPU, using CUDA for example?

    • @Bisqwit
      @Bisqwit  4 роки тому

      It is not in plans at the moment.

  • @SimulatedWarfare
    @SimulatedWarfare 2 роки тому

    This guy is a genius

  • @Mautar55
    @Mautar55 4 роки тому

    one of the most beautiful tutorials!

  • @tiagotiagot
    @tiagotiagot 3 роки тому

    For the fisheye light-probe for diffuse lighting, what if you do it with rectilinear projection, but temporarily apply a distortion factor to the coordinates of the vertexes just for the light-probes, matching the approximate look of the true fisheye rendering?

  • @Ariel16283
    @Ariel16283 4 роки тому +12

    The antibitangii... I mean the antibitangnar... Dammit! I mean the antibitango... Nope I quit.

  • @kadiyamsrikar9565
    @kadiyamsrikar9565 4 роки тому +1

    Great work please keep doing it

  • @lusiaa_
    @lusiaa_ 4 роки тому

    I think that on 22:48 you wanted to implement something like a probability density function, I used PDFs (like Lambertian distribution for completely matte surfaces or GGX distribution for rough/glossy surfaces) in my path tracer so I could "weigh" how important would a ray be in a calculation. I might be completely wrong about this though, so please take this with a grain of salt, this might be completely irrelevant for your project 😅

  • @suarezvictor77
    @suarezvictor77 2 роки тому

    Awesome series! I'd like to know why the triangles aren't rendered with antialiasing, particularly prefiltering antialiasing, to avoid the high overhead of supersampling.

    • @Bisqwit
      @Bisqwit  2 роки тому +1

      Antialias is difficult to implement because it involves transparent pixels (reading what’s underneath and modifying the pixel such that its new color is something between the old color and new color), and transparency is sensitive to rendering order.
      For example, suppose that there is a red polygon and a blue polygon that share an edge, and first the red polygon is drawn. Its edge pixels are a mixture of black (background) and red, i.e. darker shades of red. Then the blue polygon is drawn. Its edge pixels are a mixture of those dark-red pixels and blue pixels, even though they should be a mixture of red and blue. This effectively means that the black leaks through. If the polygons are drawn in opposite order, then the edge pixels would be a mixture of red and dark-blue. Different result, but still wrong. It is difficult to avoid this problem.
      Additionally, antialias requires drawing more pixels. An aliased line from (1,1) to (2,2) would be two pixels. An antialiased line would be four pixels: a square with bright pixels in two corners and dark pixels in other corners.
      The mathematics of drawing antialiased polygons are heavy: One needs to calculate the bounding box of the triangle with rounding up and down for all corners, and the blending proportion of color for every edge pixel and its neighbor and perform the blending (read-modify-write) for each of those edge pixels.
      Supersampling, such as drawing the entire screen at 2x size, and then downscaling, is a mathematically simple way to solve all these problems.
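      A minimal sketch of the supersampling route mentioned above: render at twice the width and height, then average each 2x2 block into one output pixel (a box filter). The 0xRRGGBB pixel packing is an assumption for the example, not necessarily the video's format.

        #include <cstdint>
        #include <vector>

        // big is (2*W) x (2*H) pixels; the result is W x H.
        std::vector<std::uint32_t> Downscale2x(const std::vector<std::uint32_t>& big, int W, int H)
        {
            std::vector<std::uint32_t> out(W * H);
            for(int y = 0; y < H; ++y)
                for(int x = 0; x < W; ++x)
                {
                    unsigned r = 0, g = 0, b = 0;
                    for(int dy = 0; dy < 2; ++dy)
                        for(int dx = 0; dx < 2; ++dx)
                        {
                            std::uint32_t p = big[(2*y + dy) * (2*W) + (2*x + dx)];
                            r += (p >> 16) & 0xFF;  g += (p >> 8) & 0xFF;  b += p & 0xFF;
                        }
                    out[y*W + x] = ((r/4) << 16) | ((g/4) << 8) | (b/4);       // average of the four samples
                }
            return out;
        }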

    • @suarezvictor77
      @suarezvictor77 2 роки тому

      @@Bisqwit I agree with your explanation, and thank you for it. I'm trying to write some rasterizing code to run on an FPGA and I plan to sort out those problems. One possible solution I see to the blue+red polygon case is to use a fourth byte to store alpha for each polygon's pixels, and then blend the colors using the alpha generated from each polygon; this should solve the mixing with black that you explained, since the blending is not done first. I plan to use some of your really nice code to test that. Hopefully there's interest in improving the rendering while avoiding supersampling.

  • @EnriquePage91
    @EnriquePage91 Рік тому

    Mr. Bisqwit also known as “Render Daddy” 😎

  • @gustavoandrade58
    @gustavoandrade58 4 роки тому

    your smartness is frightening

  • @carlospaz8564
    @carlospaz8564 4 роки тому

    Bisqwit, I always watch your videos to get inspired!! :)

  • @LittleRainGames
    @LittleRainGames 4 роки тому +1

    Why not use true color × intensity?
    And rays instead of cameras? Wouldn't that be cheaper, to just send a ray from each texel to each light, instead of a camera in 5 directions?
    Love your work btw. I just found out a few weeks ago that you also had a big part in SNES development; I just started delving into that.

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      You may have to elaborate a little on your proposal.
      EDIT: As for rays, that would only account for direct lighting, and is essentially the same as raytracing. It would not create indirect lighting. For example, the tunnel near the ceiling (which I apparently did not traverse in this video) would be pitch-black, because none of the light sources are directly visible from it. It should still receive indirect (reflected) lighting from walls that are illuminated.
      You can add indirect lighting by also doing a couple hundred lines in random directions (not just towards light sources) and getting whatever pixel color the ray hits - and this is in fact exactly what I did when generating the lightmaps for the OpenGL video - but then you’ve lost any performance advantages over the method I described in this video.

  • @franciscoaguilar8234
    @franciscoaguilar8234 4 роки тому

    Bisqwit, your content is amazing, however, your handwriting skills using just a mouse are overwhelming!

  • @Jonnio
    @Jonnio 4 роки тому

    Awesome, makes me wanna do something lighting stuff too.

  • @catfree
    @catfree 2 роки тому

    Be sure to implement ACES 2065-1 (AP0) in your project for very accurate colours

    • @Bisqwit
      @Bisqwit  2 роки тому

      Can you TL/DR it?

  • @voxbine4005
    @voxbine4005 4 роки тому

    So difficult for my brain but very spectacular to my eyes.

  • @zerdagdir1988
    @zerdagdir1988 4 роки тому +1

    would this be faster if you used frustum culling?

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      Already done. ua-cam.com/video/hxOw_p0kLfI/v-deo.htmlm41s
      A significant loss of performance actually happens in the gamma correction. pow() is a rather slow function, and calling it three times for every pixel at 1280x720 is not exactly efficient.
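      A minimal sketch of a common way around that per-pixel pow() cost (not the video's code): precompute a 256-entry lookup table once, so gamma-encoding an 8-bit channel becomes a single table read. This assumes the channel values are already 8-bit and a fixed gamma of 2.2; a float pipeline would need a larger table or interpolation.

        #include <array>
        #include <cmath>
        #include <cstdint>

        inline std::uint8_t GammaEncode(std::uint8_t linear)
        {
            static const std::array<std::uint8_t,256> table = []
            {
                constexpr float gamma = 2.2f;                       // assumed display gamma
                std::array<std::uint8_t,256> t{};
                for(int i = 0; i < 256; ++i)                        // pow() runs only these 256 times, once
                    t[i] = static_cast<std::uint8_t>(255.f * std::pow(i / 255.f, 1.f / gamma) + 0.5f);
                return t;
            }();
            return table[linear];                                   // per-pixel cost: one array lookup per channel
        }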

  • @Centurion256
    @Centurion256 4 роки тому +1

    Great video, as always, Bisqwit.
    On a side note, how familiar are you with the topics of memory consistency and lock-free programming? I find them quite intriguing, however, there doesn't seem to be nearly enough high quality content on these topics, especially lock-free programming, and I don't feel qualified enough to produce any myself. In case that you are familiar with them, would you perhaps consider making a brief video series about this sometime in the future?

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      Not very familiar to be honest. I study when I need something, and I haven’t much needed to delve into complex thread-safety topics. The whole c++20 memory_order thing is still an unexplored land to me, for instance.
      But in case I do get intimate with the topic, it may make it into a new video some day.

  • @95vinoth
    @95vinoth 4 роки тому +7

    How many years of learning does one need to achieve this level of knowledge?

    • @josedejesuslopezdiaz
      @josedejesuslopezdiaz 4 роки тому +5

      You can learn it; you may need to invest more time in the right things, but you can do it if you really want to.

    • @Bisqwit
      @Bisqwit  4 роки тому +23

      As I’ve written before, IQ has nothing to do with it. Different people just have brains working differently, with talent for different things. For example, I am _very_ dumb when it comes to learning by observing and repeating. I am a dance teacher, but unlike most of my pupils, _I_ cannot learn dances by repeating what others are doing. If there are no explanatory words involved, in most cases I cannot learn it. I have to process it in words, even if just in my mind, to learn it. Another example is that I cannot throw a ball very far. It perplexed me to no end when I was a child how my peers could throw a snowball to the topmost floors of a six-floor apartment building, while I could hardly make it reach the second one. I never figured out the trick. Yes, I know the theory of assisting the motion with your whole upper body. Nope, not getting it.

    • @95vinoth
      @95vinoth 4 роки тому +1

      @@Bisqwit Thanks for the reply. I really admire your work and knowledge.

    • @birdbrid9391
      @birdbrid9391 4 роки тому +2

      @@Ljosi blunt but correct

    • @stickfigure42
      @stickfigure42 4 роки тому +10

      @@Ljosi IQ is widely recognized as basically worthless at predicting anything at all besides how good you are at taking IQ tests.

  • @007LvB
    @007LvB 2 роки тому

    You are a great teacher!

  • @szymoniak75
    @szymoniak75 4 роки тому

    oh the music is the same you used in the DOS OpenGL video

  • @theboy181
    @theboy181 4 роки тому

    Still love your voice, and your videos!

  • @Rek-55
    @Rek-55 4 роки тому

    Interesting... Why does UE4 not support dynamic emissive lights?

  • @thefastjojo
    @thefastjojo 4 роки тому

    Amazing video like always, thank you Bisqwit, take care of yourself man

  • @arrangemonk
    @arrangemonk 4 роки тому +2

    antibitangent! also how comfortable are shiny spandex longsleeves?

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      Pretty nice. Not ideal for hot weather though.

  • @nosuchthing8
    @nosuchthing8 4 роки тому

    One million thumbs up

  • @jasdeepsinghgrover2470
    @jasdeepsinghgrover2470 4 роки тому

    Really amazing video man!!

  • @Legnog822
    @Legnog822 2 роки тому

    Love the accent!

  • @Hatsune_Miku
    @Hatsune_Miku 2 роки тому

    I love
    Bisqwits!

  • @LambOfDemyelination
    @LambOfDemyelination 4 роки тому +1

    Looking pretty good!

  • @codeslasher
    @codeslasher 4 роки тому

    Hello Bisqwit. Thank you for such a great video. I'd like to implement something like this for my own engine. Can you share some of the research links, documents, and papers you used to implement the approach of using a camera along the 5 axes? I'd like to gain more understanding.

    • @Bisqwit
      @Bisqwit  4 роки тому

      I don’t think I have concealed any information in this series.

  • @the-guy-beyond-the-socket
    @the-guy-beyond-the-socket 5 місяців тому

    Hi! What is the blending filter for the lightmap? The way it overlays on the texture and changes the color. Is it the same as "soft light" in Photoshop?

  • @panupentikainen953
    @panupentikainen953 4 роки тому

    Yeah... Having taken all the advanced math courses, I too should make an engine like this with a week of pondering, but things have been pretty quiet. Some people do; others here settle for watching the video you made about the topic. :)

  • @alexandru-florinene4173
    @alexandru-florinene4173 4 роки тому +1

    What font are you using for the editor? (i've always wondered).

    • @Bisqwit
      @Bisqwit  4 роки тому

      The editor does not deal with fonts at all. It’s a terminal program. It only deals with inputs and outputs. Visual representation is entirely the terminal’s job. Within the terminal, various fonts are used at different times.

    • @Bisqwit
      @Bisqwit  4 роки тому

      Followup: Answered in ua-cam.com/video/uITpN-OZcuo/v-deo.html

  • @rlenclub
    @rlenclub 4 роки тому +1

    Bisqwit: we are going to write a graphics engine with global illumination and raytracing
    me in unity: well it only took 5 hours to figure out how delegates work

  • @chromosundrift
    @chromosundrift 4 роки тому

    Bisqwit, the dynamic lighting that John Carmack made for the id Tech 4 engine (en.wikipedia.org/wiki/Id_Tech_4) contained many of these features, and it ran on the GPU hardware of the day at playable frame rates. This was fascinating to read about during its development. I'm sure you would like it if you haven't already read about it.

  • @Sturmtreiben
    @Sturmtreiben 4 роки тому

    Why did the goatee have to leave?
    Great video!

  • @HA7DN
    @HA7DN 4 роки тому

    Damn I love this series. And you just mentioned a raytracing one... I won't watch it for now, at least before I try doing that on my own.
    Have you tried doing electronics? I can imagine you having lots of fun with digital electronics ESPECIALLY FPGA stuff...

    • @Bisqwit
      @Bisqwit  4 роки тому

      I have electronics education from vocational school, and I deal with embedded programming for my work, but I haven’t really done much with electronics. This was maybe the most complex electronics project I have done. ua-cam.com/video/FYXRK5P0qJ4/v-deo.html It is a NES music player running on a PIC16F628A, which has 128 bytes of EEPROM memory, 224 bytes of RAM, and 3.5 kilobytes of program flash. It has no signal generator hardware suitable for this purpose, so the program generates the audio as PCM. I also wrote an emulator for it. ua-cam.com/video/P82Zf31joPk/v-deo.html
      I have never done FPGA stuff. I would probably just need some getting-started material, but aside from reading through the entire VHDL specification in 1996 or so and skimming through a couple of VHDL/Verilog source codes in the years, I have absolutely zero experience about FPGA programming.

    • @HA7DN
      @HA7DN 4 роки тому

      @@Bisqwit Wow!
      Impressive project as always the case with you. Electronics is very fun!

  • @tissuepaper9962
    @tissuepaper9962 2 роки тому

    Bisqwit, forgive me this nitpick.
    In English we often make a voiced/voiceless distinction between two words that are spelled the same, compare e.g.:
    refuse (v.) - to deny receipt of something, voiced s
    refuse (n.) - trash, rubbish, i.e. that which has been refused, voiceless s
    To the point, diffuse (adj.) (the one you are using in this video) has a voiceless "s", diffuse (v.) has a voiced "s". You seem to say both with a voiced s.

    • @Bisqwit
      @Bisqwit  2 роки тому

      In general, I err to the side of voiceless sibilants, because my native language, Finnish, does not have voiced sibilants at all. In fact, it took me years of conscious effort to even begin to notice them. Nowadays, I pick them up case-by-case by listening, if I pay enough conscious attention, and duplicate that same phenomenon, if I consciously remember to do so.

    • @tissuepaper9962
      @tissuepaper9962 2 роки тому

      @@Bisqwit Hey, no worries, we're all still learning; that's why I made my comment in the first place. I hoped that with my comment I could fill a gap I often find in my own language learning, namely finding native speakers who are willing to spend their time teaching me the finer points. You spend so much time sharing your domain-specific knowledge; ideally you'll see this as me returning that favor and not me acting like your grade-school teacher lol.

  • @destinydonut
    @destinydonut 4 роки тому +1

    Can you try out the Vulkan API?

    • @Bisqwit
      @Bisqwit  4 роки тому +1

      It is a frequent request, but _so far_ I have been putting it off, because Vulkan is an epitome of boilerplate. You need like 200 lines of code to do even the equivalent of “hello world”. It is _extremely_ dull reading, and doesn’t have ingredients for a good video in my opinion.