Hello Bisqwit. Thank you for such a great video. I'd like to implement something like this for my own engine. Can you share some of the research links, documents, or papers you used to implement the approach of using a camera along the 5 axes? I'd like to gain more understanding.
Yeah... Having taken all the advanced math courses, I too should be building an engine like this after a week of pondering, but things have been pretty quiet. Some people build things; the rest of us here settle for watching the video you made about the topic. :)
The editor does not deal with fonts at all. It’s a terminal program. It only deals with inputs and outputs. Visual representation is entirely the terminal’s job. Within the terminal, various fonts are used at different times.
Bisqwit: we are going to write a graphics engine with global illumination and raytracing
Me in Unity: well, it only took 5 hours to figure out how delegates work
Bisqwit, the dynamic lighting that John Carmack made for the id Tech 4 engine (en.wikipedia.org/wiki/Id_Tech_4) contained many of these features, and it ran on the GPU hardware of the day at playable frame rates. This was fascinating to read about during its development. I'm sure you would like it if you haven't already read about it.
Damn I love this series. And you just mentioned a raytracing one... I won't watch it for now, at least before I try doing that on my own. Have you tried doing electronics? I can imagine you having lots of fun with digital electronics ESPECIALLY FPGA stuff...
I have electronics education from vocational school, and I deal with embedded programming for my work, but I haven’t really done much with electronics. This was maybe the most complex electronics project I have done. ua-cam.com/video/FYXRK5P0qJ4/v-deo.html It is a NES music player running on a PIC16F628A, which has 128 bytes of EEPROM memory, 224 bytes of RAM, and 3.5 kilobytes of program flash. It has no signal generator hardware suitable for this purpose, so the program generates the audio as PCM. I also wrote an emulator for it. ua-cam.com/video/P82Zf31joPk/v-deo.html I have never done FPGA stuff. I would probably just need some getting-started material, but aside from reading through the entire VHDL specification in 1996 or so and skimming through a couple of VHDL/Verilog source codes in the years, I have absolutely zero experience about FPGA programming.
Bisqwit, forgive me this nitpick. In English we often make a voiced/voiceless distinction between two words that are spelled the same, compare e.g.: refuse (v.) - to deny receipt of something, voiced s refuse (n.) - trash, rubbish, i.e. that which has been refused, voiceless s To the point, diffuse (adj.) (the one you are using in this video) has a voiceless "s", diffuse (v.) has a voiced "s". You seem to say both with a voiced s.
In general, I err to the side of voiceless sibilants, because my native language, Finnish, does not have voiced sibilants at all. In fact, it took me years of conscious effort to even begin to notice them. Nowadays, I pick them up case-by-case by listening, if I pay enough conscious attention, and duplicate that same phenomenon, if I consciously remember to do so.
@@Bisqwit hey, no worries, we're all still learning, that's why I made my comment in the first place. I hoped that with my comment I could fill a gap I often find in my own language learning, namely: finding native speakers who are willing to spend their time teaching me finer points. You spend so much time sharing your domain-specific knowledge; ideally you see this as me returning that favor and not me acting like your grade-school teacher lol.
It is a frequent request, but _so far_ I have been putting it off, because Vulkan is an epitome of boilerplate. You need like 200 lines of code to do even the equivalent of “hello world”. It is _extremely_ dull reading, and doesn’t have ingredients for a good video in my opinion.
All the source code of this series can be downloaded from: iki.fi/bisqwit/jkp/polytut/ It also includes, as patches, fixes for _all_ the bugs I mentioned in this video.
*A reminder of what I said at **31:25**: Do not reply to **_this post_** if you want me to see your comment. Post your words as a new comment (the line for that is above), not as a reply, unless you are addressing the contents of **_this comment_** specifically. UA-cam does not show creators new replies, it only shows new comments. If you reply here **_I will not see it_** unless I manually check for it.* If you are addressing a comment someone wrote, then you _should_ reply to it, though.
Note: Luxels are also sometimes called _lumels._
I love how this series is teaching me what all those game settings I’ve tweaked in my life actually mean :)
so u life with vsync?
Hello SerenityOS person ;)
Ayy, you're here too.
Or maybe you could have read some books on graphics instead of wasting your time on UA-cam.
Andreas Kling and Bisqwit collab when?
I think this has been truly the greatest series of videos in the channel so far. No one else has explained better these concepts on a video format before, and hopefully this will pave the way for creators to touch the topic better. Thank you very much. Onto the future we go!
I am amazed on how you can keep up any topic in programming and just implement it, especially when talking about graphics lighting. And after all that, you are also able to explain it! Great work.
You have to remember I only do videos about topics I know about.
Watching those lightmaps get progressively recomputed as you move the lights around is absolutely fascinating.
Absolutely loving this series. I know some superficial information about 3D rendering but it's great to see the actual details and mathematics broken down in a practical sense. Much of this topic is often presented in a manner that makes it feel very daunting to even begin, but you've really done it justice here.
Yeah super easy to understand.
1 - You're finally taking the time to explain your code! And you're explaining it well! Big thumbs up for that!
2 - Wouldn't it be faster and better, instead of putting a camera on every single pixel of the light map, rendering a small image, and setting the light accordingly, to check whether lines between the center of each pixel and the light sources are intersected by any objects?
That would only account for direct lighting, and is essentially the same as raytracing. It would not create indirect lighting. For example, the tunnel near the ceiling (which I apparently did not traverse in this video), which has no light sources, would be pitch-black - which is not realistic. It should still receive _indirect_ (reflected) lighting from walls that are illuminated.
Also raytracing towards the center of light sources creates ugly razor sharp shadows, as if the light source is very far away (like the sun) or tiny and directly pointed at the object.
You _can_ add indirect lighting by also casting a few hundred rays in random directions (not just towards light sources) and getting whatever pixel color the ray hits - and this is in fact exactly what I did when generating the lightmaps for the OpenGL video - but then you’ve lost any performance advantages over the method I described in this video.
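For illustration only, here is a minimal sketch of the ray-based approach being discussed (not the method used in the video): one shadow ray per light for the direct part, plus a few hundred random hemisphere rays for the indirect part. TraceRay, Hit and Light are hypothetical placeholders for whatever the scene representation provides.

```cpp
// Sketch of raytraced direct + randomly-sampled indirect lighting for one luxel.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
inline Vec3  operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
inline Vec3  operator*(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
inline float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Light { Vec3 position, color; };
struct Hit   { bool anything; Vec3 radiance; float distance; };
Hit TraceRay(Vec3 origin, Vec3 dir);   // assumed to exist elsewhere in the engine

Vec3 GatherLuxel(Vec3 point, Vec3 normal,
                 const std::vector<Light>& lights, unsigned indirectRays = 256)
{
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> uni(-1.f, 1.f);
    Vec3 result{0,0,0};

    // Direct part: a single shadow ray towards each light centre (hard shadows).
    for(const Light& l: lights)
    {
        Vec3 d{ l.position.x-point.x, l.position.y-point.y, l.position.z-point.z };
        float len = std::sqrt(Dot(d,d));
        Vec3 dir = d * (1.f/len);
        Hit h = TraceRay(point, dir);
        if(!h.anything || h.distance >= len)               // nothing blocks the light
            result = result + l.color * std::max(0.f, Dot(normal, dir));
    }

    // Indirect part: random rays over the hemisphere; whatever they hit contributes.
    Vec3 indirect{0,0,0};
    for(unsigned n=0; n<indirectRays; ++n)
    {
        Vec3 dir;
        do { dir = { uni(rng), uni(rng), uni(rng) }; } while(Dot(dir,dir) > 1.f);
        if(Dot(dir, normal) < 0) dir = dir * -1.f;          // flip into the hemisphere
        dir = dir * (1.f / std::sqrt(Dot(dir,dir)));        // normalise
        Hit h = TraceRay(point, dir);
        if(h.anything) indirect = indirect + h.radiance * Dot(normal, dir);
    }
    return result + indirect * (1.f / indirectRays);
}
```
As noted above, the indirect loop is what erases the performance advantage over the lightmap-camera method.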
@@Bisqwit To add to this (two-year-old) answer: this is also not some random concept he came up with... or maybe it is... but it *is* also an existing technique for generating lightmaps. It's called "radiosity", or "radiosity lightmapping" if you prefer. It's been used in video games for decades at this point. The first game *I* personally know of that used it is Quake (1996), but more recent games such as Half-Life (1998), Half-Life 2 (2004), and Portal (2007) used it too (you probably get where I'm going with this; it's part of Source's mapping tools lol... Unity (the game engine) also used it, in the form of a third-party library called "Enlighten"). There are several papers about this algorithm; one I see cited a lot but have a hard time tracking down is by Hugo Elias. There's *also* the open-source... library? I guess I'll call it a library... the open-source library lightmapper (github.com/ands/lightmapper), which is a single-file C & OpenGL implementation of this effect.
Oh boy, I'm gonna have to watch this a few times
I love the dry, witty humor you manage to throw in. Truly a master at work!
Thank you so much for these videos. Your voice and personality are so comforting and your coding is inspiring. Your solution to lighting here is oddly elegant even though it's processor heavy. Using cameras at every surface is something I wouldn't even consider even though it solves every lighting problem at once. To solve the leaking light between polygons I would add one pixel to the rasterizer x and y loops, as in "for(x=left,x
GI is always fun - I've been doing it in "real time" (runs at like 20 fps right now which isn't very acceptable) using a voxel structure - by cone tracing with slightly randomized cones aligned with any diffuse surface's normal. The results are surprisingly decent but I'm still working on optimization and reducing flickering from voxelization
You can check out Handmade Hero series for voxel-based GI running in realtime (implemented from scratch)
Can you share link to your work?
Your mental strength to come up with these kinds of solutions is what motivates me to keep going and never think I have learned enough. It's also quite depressing that my university doesn't push our potential with projects like this; in my context it's either impossible or really, really hard, both mathematically and in putting it all together in code.
You're basically what my companions and I aspire to be; I'm very thankful for this showcase of pure math and code skill.
I love your work and the Lufia music you fit in :)
28:30 I like the 64x64 pixels! It's like the 3D Minecraft I programmed on the GBA.
I watch a lot of stuff here on UA-cam but nothing on here can or ever will match one of your uploads.
Been a subscriber for a while & I don't work with C++ or even anything remotely close to game-related libraries or what have you... But thank you so much for making these videos! Always look forward to watching this stuff when I see your uploads in my feed. Even though I may not work with C++ or graphics libraries, I'll always learn something, which is always good.
Tonight this video was accompanied with a pizza, after waking up at ~6pm.
Continue being awesome!
29:47 "It's a bug in my engine". If you have the book Michael Abrash's Graphics Programming Black Book Special Edition, go to page 1066, chapter 57 Figure 57.1: "Gaps caused by mixing fixed-point and all-integer math". And if you know it is not your polygon edge interpolation math that is causing it, then you might be running into the other problem: Texture sampling, interpolating from outside the texture, or reading outside the texture (clamp texture edge doesn't solve everything). Many games use textures that expand the borders of the polygon, so that when being rendered from far away, will still look okay. Even Michael Abrash admitted in his book that he completely skipped over his own advice of polygon rendering, and had to go back and fix his code in order to solve the problems.
Amazing work. I'm so amazed at how hard you worked on this video. Many years ago I watched your NES emulator video, and it inspired me to learn C++. Now I'm already a fairly decent programmer across a wide range of computer science. A few years ago I tried to create my own 3D game engine using OpenGL. I learned a lot about rendering and some game physics, lighting, etc., but I haven't figured out how to animate characters yet. And for the last few years I haven't written any game-engine code (busy working in JavaScript all the time). But 3D game engines are still my favorite subject in computer science. They are very challenging and teach me a lot about math, physics, and computer science. That is why I love them so much. By the way, great work again. You are still inspiring me to grow more and work harder like you. :P Thank you.
I love your videos and they're very entertaining!
Especially the upgrade from a single rendering thread to full 48-core multithreading in the middle of the video!
Hey Bisqwit! Just wanted to say thank you for posting all these crazy high-quality videos. I'm not in the same domain, or of the same caliber as you at this, but it's motivated me to document myself writing code as well.
Best wishes, and cannot wait to see more of your content!
You are so good that you can explain it simply and make it interesting. I swear to god I miss people like you on YouTube, who take hard topics, break them down into simple, understandable pieces, and present them in a visually amazing way.
You explain things in a way that is easier and pleasant to understand. I hope you continue to do these amazing videos. Congrats!
Google/UA-cam must not know me very well. I saw this channel for the first time after years of watching programming, hacking, and gaming channels, and they didn't even realize THIS is my new favorite channel. Google, get your act together! Friggen awesome, amazing channel, Bisqwit! Subscribed.
I LOVE how deep you go with each topic. I salute you!
nomenclature-wise i prefer to think of it as "pixel" = picture element, and "texel" = texture element.
So glad you take the time to create these videos and projects. It's truly amazing and very inspiring. Hearing about (real time) global illumination these days, it's really hard not to think about Unreal Engine 5 and the demo they showed there. You talk about your code not being optimized etc. and being CPU intensive and I understand that's not the prime purpose of this series, but it could be really interesting if you could make a video trying to describe what kind of techniques or differences it would take for your project to obtain similar result to the light demoed in UE5. Thanks a lot, no matter if you have time to make something like that!
An engine such as UE5 uses a combination of dozens of different techniques to achieve its result. I could not hope to catch up with that. However, I do try to keep doing this series and covering progressively more complex themes. The next thing I will cover will probably be HDRi, and after that, maybe portal rendering. But before I get there I may need to take a short break and do a less demanding video first so I don’t burn out.
Big fan of the practical light experiment, thanks for the effort!
Great video! I finally understand how lightmaps are calculated, even if, as you said, it isn't the most efficient method. I might try calculating them with an FBO on the GPU instead. Thank you very much
Videos like this motivate me to keep working on my own rather complicated projects. Thank you!
Wow, that's a great video about lightmapping!
I tried some time ago to bake ambient occlusion for 3D models using a very similar technique. I faced the same problem as described at 28:37, because I was baking a 32x32 texture with a single camera shot with a 170° FOV (with fisheye remapping), but I solved it by using 8x MSAA on the bake texture plus a lightmap denoise algorithm.
Also, the 32x32 texture worked really well, because it fit nicely into my GPU cache, so the mean computation was done on the GPU almost without any cache misses and without using the CPU-GPU bus. With this approach I could bake a high-quality 4k AO map and still measure the bake time in seconds, not minutes!
From my approach I've learned that lightmapping is not really about writing the lightmapper logic, but mainly about fixing small details and fighting for milliseconds in optimization. But I highly recommend - if you are a graphics programmer - giving it a shot; it's a great journey.
It's great that someone created such an intuitive explanation, keep it up!
Wonderful video Bisqwit! You have always been one of my favorite programmers to watch. Never afraid to dive deep and explore different ideas.
Actually your series motivated me to give up on C++17 and start re-learning with C++20
Thanks for explaining)
I searched for a long time for an explanation of lightmaps )
You are amazing)
It is a blast to see how the quality of the videos has improved; I remember when you could not record your screen directly. I am very happy for you. Your content is some of the best there is on programming. It is a bit weird hearing you say hello and not shalom. Could you do more videos on obfuscated programming?
If I get ideas in that area, maybe.
Extremely nice video; I'll probably have to watch it again to fully comprehend it, as I am not an expert programmer. It is really good that this content is created, since most of the "tutorials" out there are just basic stuff, and the really complicated things written in the underlying libraries are rarely explained. Really nice content, again
I like the new "intro animation"/"transition animation"
i love a good graphics lecture/video essay/knowledge explosion from bisqwit
The amount of knowledge this man has is insane
he's like a walking library lol
This is absolutely incredible!
I've recently gotten into the Demoscene and love seeing what people can do with code.
I've also been mapping in Source since HL2 was released. My skills definitely lean more to the actual design and mechanics side rather than coding, although I really wish I knew how to code.
Someday I will learn and watching videos like this really inspire me to finally just get started.
Nice video. The shift from voiceover to live footage felt really weird. A good reason for keeping that style in future videos.
My understanding: we need to loop over all those luxels over and over again, since the processor can only look at one luxel at a time. Over the iterations, the result gets closer and closer to the ideal value (like the man racing the tortoise). We don't have many other choices here. I was reading about analog computing over the last few days and I thought: how could we model this problem so it is solved without discrete stepping and looping? Then I thought about using light in a room and a camera. I feel stupid now, because I had essentially concluded we should build the scene physically and take a photograph of it. Nice solution Bisqwit, try it out! Perfect and realtime, programmed and offered by the best programmer ever.
Very, very impressive Bisqwit! Keep going! It's very interesting and entertaining!
Thank you for this series, I've loved it. I may try to implement some of the techniques from this series in a C based software renderer sometime in the future.
Hello Bisqwit, thanks for the beautiful content!
A normal map is not the same thing as a bump map. A bump map encodes bumps as a map in which each pixel value corresponds to the elevation at that particular location.
I see.
These names are a complete mess. These elevation maps are also often called displacement maps and I have seen normal maps referred to as bump maps
Yeah, the naming convention is a mess...
I always considered the bump map to be a normal map, whereas I know the elevation map as the height map instead.
I think there's a way to compute a normal map from a height map, but not the other way around though.
@@emperorpalpatine6080 you can, you would lose precision though.
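For what it's worth, here is a minimal sketch of that height-to-normal conversion using central differences; the function name and the 'strength' factor are made up for illustration. Going the other way requires integrating the slopes back into heights, which is where the precision loss comes in.

```cpp
// Sketch: derive a tangent-space normal map from a grayscale height map.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Normal { float x, y, z; };

// heights: row-major elevations in [0,1], w*h values.
std::vector<Normal> HeightToNormals(const std::vector<float>& heights,
                                    std::size_t w, std::size_t h, float strength = 2.f)
{
    auto at = [&](std::size_t x, std::size_t y)
    {
        return heights[std::min(y, h-1)*w + std::min(x, w-1)];   // clamp at the borders
    };
    std::vector<Normal> normals(w*h);
    for(std::size_t y=0; y<h; ++y)
        for(std::size_t x=0; x<w; ++x)
        {
            // Slope of the height field in x and y, from neighbouring texels.
            float dx = (at(x+1, y) - at(x ? x-1 : 0, y)) * strength;
            float dy = (at(x, y+1) - at(x, y ? y-1 : 0)) * strength;
            float len = std::sqrt(dx*dx + dy*dy + 1.f);
            normals[y*w + x] = { -dx/len, -dy/len, 1.f/len };    // tangent-space normal
        }
    return normals;
}
```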
This isn't true.
Bump maps and normal maps are the same thing.
What you are referring to are height maps or displacement maps.
Normal maps create the illusion of bumps, thus the name.
I just read the Wikipedia article, and it refers to bump mapping as a category of texture mapping, but I have personally never read this in actual CG literature. Even then, it explicitly says bump mapping doesn't change the geometry of the object, so height maps still wouldn't be considered bump mapping either way.
20:17 lol, actually using the gamma symbol in code. Looks strange
Yeah, C++ allows plenty of Unicode characters in identifiers. en.cppreference.com/w/cpp/language/identifiers#Unicode_characters_in_identifiers As does C since C99. This page does not mention it, but the feature was introduced in C++11. However, compiler support has been incomplete for a long time. Only as recently as GCC 10 was support added for those symbols written verbatim in UTF-8 encoding, rather than having to type them as escapes, such as \u03B3 for γ.
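For illustration, a tiny example of such an identifier (illustrative code, not from the video):

```cpp
// Builds with g++ -std=c++17 or later; GCC 10+ accepts the UTF-8 spelling below,
// while older versions only accept the escaped form \u03B3.
#include <cmath>

float GammaCorrect(float value)
{
    constexpr float γ = 2.2f;          // 'γ' is a legal identifier here
    return std::pow(value, 1.0f / γ);
}
```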
bang on, looks great in 4K .. I want to start creating content in 4K .. all I need is a camera, can't wait to see more of your new 4K content. Keep up the great work!
Every programmer should create a compiler, a 3D graphics renderer, a voice synthesiser, and an AI. ... Then the programmer will probably ascend to another dimension. =D
i'm currently coding a CNN
I have only done the compiler part; my university won't even touch anything related to graphics or voice synthesisers, and AI only comes at the very end of the degree. Guess I will just have to ascend to another dimension in satanic ways
add to it an MPM simulator and an OS, the list is too short lol
Don't forget emulators!
Amazingly well-made video! You are the best, Bisqwit. They are so nice to watch.
Looks good. It is heavy to run, of course, but it does look good. The texture explanations at the beginning were also easy to follow; admittedly I already knew what they meant, but they were well explained, and surely others will be able to follow them too
Thanks. Actually, half of the performance sinks into that gamma correction alone. pow() is not an efficient function at all…
your video made lightmapping understandable to a rube like me and got me to think of how to apply a lightmapping algorithm myself
[cool music intensifies]
Love your videos and dedication, Bisqwit!
From my point of view you are the image of power. I want to be like you in the future.
i love this series so far. keep going dude you are doing amazing.
Amazing video, thank you Bisqwit as always for such valuable knowledge
Bisqwit, you can use RenderDoc to change the OpenGL state in applications. It's normally used to debug things, but you can ofc also change things like texture filtering on a per-texture basis
Thank you. It won’t help this video anymore, but I will keep that in mind, and study how to use it.
@@Bisqwit Yea, I just wanted you to know it exists. It's a nice tool to have in ones toolbox.
One good thing about the Source Engine (I suppose the same can be said of GoldSrc and Quake Engine) was that you could specify the lightmap resolution for each surface separately, while editing the maps. I don't see this possibility in Godot, and I suspect neither in Unity and Unreal. Though I could be wrong about the latter two. Basically, a HL2 map defaulted to low resolution lightmaps all around, and you specified the surfaces where higher resolutions were needed.
29:22 Unreal Engine 5 Lumen but slower haha, pretty cool!
Thanks! Using the CPU for rendering is far slower than using the GPU, but it can be useful for teaching the concepts.
The bug you ran into sounds very much like a note in the Quake III engine, where, as a fix, some vertices are drawn next to each other to prevent seams from showing.
If you wanted to get real time dynamic lighting, rather than constantly running what amounts to path tracing in a background thread, you could use the hemisphere cameras to precompute the radiosity form factor matrix, which encodes the (cosine-weighted) visibility from every element to every other element. Then computing the global illumination amounts to solving a sparse linear system, and the form factor matrix does not need to be recomputed if you change the emission of various surfaces. It wouldn't handle moving lights, though. So I guess it's only dynamic with respect to which surfaces are glowing.
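For illustration, here is a rough sketch of the solve step being described, assuming the form factors F[i][j] have already been captured with the hemisphere cameras; all names are made up and this is not the engine's code.

```cpp
// Sketch: iterate B = E + R*F*B until convergence, where F[i][j] is the fraction
// of light leaving patch j that arrives at patch i.
#include <cstddef>
#include <vector>

// Per-patch radiosity solve, one colour channel shown; E = emission, R = reflectance.
std::vector<float> SolveRadiosity(const std::vector<std::vector<float>>& F,
                                  const std::vector<float>& E,
                                  const std::vector<float>& R,
                                  unsigned iterations = 50)
{
    const std::size_t n = E.size();
    std::vector<float> B = E, next(n);
    for(unsigned it=0; it<iterations; ++it)
    {
        for(std::size_t i=0; i<n; ++i)
        {
            float gathered = 0.f;
            for(std::size_t j=0; j<n; ++j) gathered += F[i][j] * B[j];
            next[i] = E[i] + R[i] * gathered;     // emission + one more bounce
        }
        B.swap(next);
    }
    return B;   // changing E alone never requires recomputing F
}
```
A Jacobi-style iteration like this is just the simplest choice; Gauss-Seidel or a proper sparse solver would converge faster on a real scene.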
The white lines are probably not gaps but light leaking from another surface on the UV map. You should have a gap of at least 16 pixels between UV polygons
That is a good theory, but it is unfortunately wrong; adding padding between the lightmaps does not fix the problem, I tried it. As far as I understand, there are two causes that operate in parallel.
1. First is the workaround in polygon_draw, that I made to address the problem pointed out in ua-cam.com/video/hxOw_p0kLfI/v-deo.htmlm53s . This error causes the rightmost column of texels to sometimes not be rendered. If I disable the workaround for that problem, the artifacts on wall remain but the seam glitch is gone.
2. Rounding errors in clipping. Clipping sometimes produces seams between adjacent polygons. This is apparent especially in this video when the 64x64 view is shown. There is an inexplicable dark diagonal line on the wall at 28:37. I don’t know why it happens, but it occurs only when the edge of the polygon is clipped by the frustum. I don’t know why the clipping causes different rounding in different polygons even though they share same end vertices. EDIT: I fixed this particular problem; the patch is included on the webpage that is in the description. However, the problem with artifacts on wall remain.
There is also the issue that a white stripe of light appears on the edge of some polygons, such as the left side of screen at 30:24. This is, I think, because the lightmap camera is not positioned perfectly on the luxel, but it sees out of boundaries. This, too, is not fixed by adding padding between lightmaps. The game _Mirror’s Edge_ also suffers from this problem in some locations that are not intended to be visible to the player, although the cause is slightly different (interpolation between visible luxel and an oob luxel).
Isn't it basically simulating diffraction of light through a single tiny slit?
Diffraction has nothing to do with it. This program does not simulate light waves, or waves of any sort.
@@Bisqwit maybe you slipped in some quantum physics by accident, causing tunneling, you better look into the code again to rule that theory out.
Edit: Could also be some AI persons drawing some kind of art. Make sure there is no brain simulation in there. Almost missed this theory.
@@Bisqwit I know, but the result looks very similar to diffraction patterns... So I suggested that...
This demo looks fantastic! I guess bounce lighting could be done by repeating this process a couple of times while reading the light map calculated previously. Then all surfaces can become light emitters. Keep up the fab work
I’m not sure how what you are describing differs from what I am already doing in this episode. This technique already does radiosity perfectly. That is, surfaces that are only illuminated _indirectly_ by other walls that are lit.
Bisqwit ah, so it will eventually converge on a total light level or will it continue to get brighter forever? Given each quad will get more and more light each iteration.
It converges on the total light level. The total sum of light reflected by all walls can never exceed the brightness of the lightsource times its surface area, or something to that effect. One particular factor that makes this true is how the weightmap in lightmap rendering is normalized to 1. That is, unless you get full brightness of the light on _every possible pixel_ in the lightmap camera view, the brightness on the wall will always be less than the brightness of the light source. If even _one_ of those pixels does not see the light source, or sees just its reflected light from a wall (that is already dimmed), the luxel will be dimmed too.
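As an illustration of that normalisation property (not the exact weighting used in the video), here is a sketch where the per-pixel cosine weights of one lightmap-camera view are scaled to sum to 1, so a luxel can never end up brighter than the brightest thing its camera sees.

```cpp
// Sketch: build a weight map for one lightmap-camera view, normalised to sum to 1.
#include <cmath>
#include <vector>

std::vector<float> MakeWeightMap(unsigned size /* e.g. a 16x16 view */)
{
    std::vector<float> w(size*size);
    float sum = 0.f;
    for(unsigned y=0; y<size; ++y)
        for(unsigned x=0; x<size; ++x)
        {
            // Direction of this pixel relative to the camera axis, in [-1,1].
            float u = (x + 0.5f)/size*2.f - 1.f;
            float v = (y + 0.5f)/size*2.f - 1.f;
            float cosine = 1.f / std::sqrt(u*u + v*v + 1.f);  // cosine of the off-axis angle
            w[y*size + x] = cosine;
            sum += cosine;
        }
    for(float& value: w) value /= sum;   // normalise: the weights now sum to exactly 1
    return w;
}
```
Because the luxel is a weighted average under these weights, the iteration converges rather than growing without bound.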
Awesome series. I learnt some good things from this. Do you plan to explain color spaces like sRGB and their conversions as well? That could be a good follow-up 😀
Could you do something like an edge-detect on the lightmap to find areas with streaking and decide to increase the camera resolution for them? This could also be used for optimization: run a bad low quality render, if that render is completely uniform then most likely increasing the resolution won't add more detail, if the render is noisy/streaky then discard the result and increase the resolution. This also would increase resolution around shadow boundaries, and reduce resolution where it's not as needed.
It would need a custom storage format for bitmaps that have varying resolution in various parts of the bitmap. I don’t know any approach to do that efficiently, neither in writing nor in reading.
are you planning on making a video where you would rewrite the code to use the GPU, using CUDA for example?
It is not in plans at the moment.
This guy is a genius
one of the most beautiful tutorials!
For the fisheye light-probe for diffuse lighting, what if you do it with rectilinear projection, but temporarily apply a distortion factor to the coordinates of the vertexes just for the light-probes, matching the approximate look of the true fisheye rendering?
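If one wanted to try that, a minimal sketch of such a vertex-level remap might look like the following; the function name and the equidistant mapping are assumptions, and note that the edges between remapped vertices still rasterise as straight lines, so the approximation degrades for large polygons near the view edge.

```cpp
// Sketch: remap a rectilinear-projected vertex position toward an equidistant-fisheye radius.
#include <cmath>

struct Vec2 { float x, y; };

// ndc: rectilinear position in [-1,1]; fov: full field of view in radians (must be < pi).
Vec2 FisheyeRemap(Vec2 ndc, float fov)
{
    float r = std::sqrt(ndc.x*ndc.x + ndc.y*ndc.y);
    if(r < 1e-6f) return ndc;                          // centre of view stays put
    float theta = std::atan(r * std::tan(fov * 0.5f)); // actual angle of this direction
    float rFish = theta / (fov * 0.5f);                // equidistant fisheye radius
    float scale = rFish / r;
    return { ndc.x * scale, ndc.y * scale };
}
```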
The antibitangii... I mean the antibitangnar... Dammit! I mean the antibitango... Nope I quit.
Great work please keep doing it
I think that at 22:48 you wanted to implement something like a probability density function. I used PDFs (like the Lambertian distribution for completely matte surfaces or the GGX distribution for rough/glossy surfaces) in my path tracer so I could "weigh" how important a ray would be in a calculation. I might be completely wrong about this though, so please take this with a grain of salt; it might be completely irrelevant for your project 😅
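For anyone curious, here is a minimal sketch of the cosine-weighted (Lambertian) sampling the comment mentions, via Malley's method: sample a unit disc, project up to the hemisphere, so directions near the normal are drawn more often. The tangent frame (t, b, n) is assumed to be available from elsewhere.

```cpp
// Sketch: draw a direction on the hemisphere with probability proportional to cos(theta).
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

Vec3 SampleCosineHemisphere(Vec3 n, Vec3 t, Vec3 b, std::mt19937& rng)
{
    constexpr float kPi = 3.14159265358979f;
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    float u1 = uni(rng), u2 = uni(rng);
    float r   = std::sqrt(u1);             // radius on the unit disc
    float phi = 2.f * kPi * u2;
    float x = r * std::cos(phi), y = r * std::sin(phi);
    float z = std::sqrt(1.f - u1);         // proportional to cos(theta): the PDF weighting
    return { t.x*x + b.x*y + n.x*z,        // rotate from tangent space into world space
             t.y*x + b.y*y + n.y*z,
             t.z*x + b.z*y + n.z*z };
}
```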
Awesome series! I'd like to know why the triangles aren't rendered with antialiasing, particularly prefiltering antialiasing to avoid the high overhead of supersampling
Antialias is difficult to implement because it involves transparent pixels (reading what’s underneath and modifying the pixel such that its new color is something between the old color and new color), and transparency is sensitive to rendering order.
For example, suppose that there is a red polygon and a blue polygon that share an edge, and first the red polygon is drawn. Its edge pixels are a mixture of black (background) and red, i.e. darker shades of red. Then the blue polygon is drawn. Its edge pixels are a mixture of those dark-red pixels and blue pixels, even though they should be a mixture of red and blue. This effectively means that the black leaks through. If the polygons are drawn in opposite order, then the edge pixels would be a mixture of red and dark-blue. Different result, but still wrong. It is difficult to avoid this problem.
Additionally, antialias requires drawing more pixels. An aliased line from (1,1) to (2,2) would be two pixels. An antialiased line would be four pixels: a square with bright pixels in two corners and dark pixels in other corners.
The mathematics of drawing antialiased polygons are heavy: One needs to calculate the bounding box of the triangle with rounding up and down for all corners, and the blending proportion of color for every edge pixel and its neighbor and perform the blending (read-modify-write) for each of those edge pixels.
Supersampling, such as drawing the entire screen at 2x size, and then downscaling, is a mathematically simple way to solve all these problems.
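A minimal C++ sketch of that supersampling approach (the pixel format and buffer layout are assumptions, not the series' actual code): render the frame at twice the width and height, then average each 2x2 block into one output pixel.

```cpp
// Box-filter downscale from a 2x supersampled buffer to the final resolution.
#include <vector>

struct Color { float r, g, b; };

void Downscale2x(const std::vector<Color>& src, unsigned srcW, unsigned srcH,
                 std::vector<Color>& dst)
{
    unsigned dstW = srcW / 2, dstH = srcH / 2;
    dst.resize(dstW * dstH);
    for(unsigned y = 0; y < dstH; ++y)
        for(unsigned x = 0; x < dstW; ++x)
        {
            // Average the four supersampled pixels covering this output pixel.
            const Color& a = src[(2*y  )*srcW + 2*x  ];
            const Color& b = src[(2*y  )*srcW + 2*x+1];
            const Color& c = src[(2*y+1)*srcW + 2*x  ];
            const Color& d = src[(2*y+1)*srcW + 2*x+1];
            dst[y*dstW + x] = { (a.r+b.r+c.r+d.r) * 0.25f,
                                (a.g+b.g+c.g+d.g) * 0.25f,
                                (a.b+b.b+c.b+d.b) * 0.25f };
        }
}
```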
@@Bisqwit I agree with and appreciate your explanation. I'm trying to write some rasterizing code to run on an FPGA, and I plan to sort out those problems. One possible solution I can think of for the blue+red polygon case is to use a 4th byte to store alpha for each polygon's pixels, then blend the colors considering the alpha generated from each polygon; this should solve the mixing with black you explained, since the blending is not done first. I plan to use some of your really nice code to test that. Hopefully there's interest in improving the rendering and avoiding supersampling.
Mr. Bisqwit also known as “Render Daddy” 😎
your smartness is frightening
Bisqwit, I always watch your videos for inspiration!! :)
Why not use true color × intensity?
And rays instead of cameras? Wouldn't that be cheaper, to just send a ray from each texel to each light, instead of a camera in 5 directions?
Love your work btw, just found out a few weeks ago that you also had a big part in SNES development; I just started delving into that.
You may have to elaborate a little on your proposal.
EDIT: As for rays, that would only account for direct lighting, and is essentially the same as raytracing. It would not create indirect lighting. For example, the tunnel near the ceiling (which I apparently did not traverse in this video) would be pitch-black, because none of the light sources are directly visible from it. It should still receive indirect (reflected) lighting from walls that are illuminated.
You can add indirect lighting by also casting a couple hundred rays in random directions (not just towards light sources) and taking whatever pixel color each ray hits - and this is in fact exactly what I did when generating the lightmaps for the OpenGL video - but then you've lost any performance advantages over the method I described in this video.
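A rough, self-contained C++ sketch of that "rays in random directions" approach (the names and the stubbed ray intersection are assumptions for illustration, not the code from the video series):

```cpp
// Gather indirect light at a luxel by casting rays in random directions and
// averaging the colors of whatever they hit.
#include <cmath>
#include <random>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Placeholder: a real implementation would intersect the ray with the scene and
// return the color of the (already lit) surface it hits.
static Color TraceRayColor(const Vec3& /*origin*/, const Vec3& /*dir*/)
{
    return {0.2f, 0.2f, 0.2f};
}

static Vec3 RandomDirection(std::mt19937& rng)
{
    // Rejection-sample a point inside the unit sphere, then normalize it.
    // A real lightmapper would restrict this to the hemisphere above the surface.
    std::uniform_real_distribution<float> uni(-1.f, 1.f);
    for(;;)
    {
        Vec3 v{uni(rng), uni(rng), uni(rng)};
        float len2 = v.x*v.x + v.y*v.y + v.z*v.z;
        if(len2 > 1e-6f && len2 <= 1.f)
        {
            float inv = 1.f / std::sqrt(len2);
            return {v.x*inv, v.y*inv, v.z*inv};
        }
    }
}

Color GatherIndirect(const Vec3& luxelPos, unsigned rays = 200)
{
    std::mt19937 rng{12345};
    Color sum{0, 0, 0};
    for(unsigned n = 0; n < rays; ++n)
    {
        Color c = TraceRayColor(luxelPos, RandomDirection(rng));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    // The average of the samples approximates the indirect light at this luxel.
    return { sum.r / rays, sum.g / rays, sum.b / rays };
}
```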
Bisqwit, your content is amazing, however, your handwriting skills using just a mouse are overwhelming!
Awesome, makes me wanna do some lighting stuff too.
Be sure to implement ACES 2065-1 (AP0) in your project for very accurate colours
Can you TL;DR it?
So difficult for my brain, but very spectacular to my eyes.
Would this be faster if you used frustum culling?
Already done. ua-cam.com/video/hxOw_p0kLfI/v-deo.htmlm41s
A significant loss of performance actually happens in the gamma correction. pow() is a rather slow function, and calling it three times for every pixel at 1280x720 is not exactly efficient.
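One common way to avoid calling pow() per pixel (a general technique, not necessarily what the series ends up doing) is a precomputed lookup table. A small C++ sketch, assuming 8-bit channels:

```cpp
// Precompute a 256-entry gamma table once, then gamma-correct each channel with
// a single array index instead of a pow() call.
#include <array>
#include <cmath>
#include <algorithm>

std::array<unsigned char, 256> MakeGammaTable(float gamma = 2.2f)
{
    std::array<unsigned char, 256> table{};
    for(unsigned v = 0; v < 256; ++v)
        table[v] = static_cast<unsigned char>(
            std::clamp(std::pow(v / 255.f, 1.f / gamma) * 255.f + 0.5f, 0.f, 255.f));
    return table;
}
// Usage per pixel: r = table[r]; g = table[g]; b = table[b];
// This replaces three pow() calls per pixel with three table lookups.
```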
Great video, as always, Bisqwit.
On a side note, how familiar are you with the topics of memory consistency and lock-free programming? I find them quite intriguing, however, there doesn't seem to be nearly enough high quality content on these topics, especially lock-free programming, and I don't feel qualified enough to produce any myself. In case that you are familiar with them, would you perhaps consider making a brief video series about this sometime in the future?
Not very familiar to be honest. I study when I need something, and I haven’t much needed to delve into complex thread-safety topics. The whole c++20 memory_order thing is still an unexplored land to me, for instance.
But in case I do get intimate with the topic, it may make it into a new video some day.
How many years of learning does one need to achieve this level of knowledge?
You can learn it. You may need to invest more time in the right things, but you can do it if you really want to.
As I’ve written before, IQ has nothing to do with it. Different people just have brains working differently, with talent for different things. For example, I am _very_ dumb when it comes to learning by observing and repeating. I am a dance teacher, but unlike most of my pupils, _I_ cannot learn dances by repeating what others are doing. If there are no explanatory words involved, in most cases I cannot learn it. I have to process it in words, even if just in my mind, to learn it. Another example is that I cannot throw a ball very far. It perplexed me to no end when I was a child how my peers could throw a snowball to the topmost floors of a six-floor apartment building, while I could hardly make it reach the second one. I never figured out the trick. Yes, I know the theory of assisting the motion with your whole upper body. Nope, not getting it.
@@Bisqwit Thanks for the reply. I really admire your work and knowledge.
@@Ljosi blunt but correct
@@Ljosi IQ is widely recognized as basically worthless at predicting anything at all besides how good you are at taking IQ tests.
You are a great teacher!
Oh, the music is the same as you used in the DOS OpenGL video.
Still love your voice, and your videos!
Interesting... Why does UE4 not support dynamic emissive lights?
Amazing video like always, thank you Bisqwit, take care of yourself man
Antibitangent! Also, how comfortable are shiny spandex long sleeves?
Pretty nice. Not ideal for hot weather though.
One million thumbs up
Really amazing video man!!
Love the accent!
I love
Bisqwits!
Looking pretty good!
Hello Bisqwit. Thank you for such a great video. I'd like to implement something like this for my own engine. Can you share some of the research links, documents, or papers you used to implement the approach of using a camera along the 5 axes? I'd like to gain more understanding.
I don’t think I have concealed any information in this series.
Hi! What is the filter for the lightmap? The way it overlays on the texture and changes the color. Is it the same as "soft light" in Photoshop?
Multiplication.
@@Bisqwit oh ok, thanks!
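A tiny C++ sketch of what multiplicative blending means here (illustration only, not the series' code; 8-bit channels assumed with 255 as full brightness):

```cpp
// Each channel of the texture texel is scaled by the corresponding lightmap
// value. 255 in the lightmap leaves the texel unchanged; 0 turns it black.
struct RGB { unsigned char r, g, b; };

RGB ApplyLightmap(RGB texel, RGB light)
{
    return { static_cast<unsigned char>(texel.r * light.r / 255),
             static_cast<unsigned char>(texel.g * light.g / 255),
             static_cast<unsigned char>(texel.b * light.b / 255) };
}
```

Unlike Photoshop's "soft light", which follows a more complicated curve, this is a plain per-channel multiply.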
Yeah... Having taken all the advanced math courses, I too ought to make an engine like this with a week of pondering, but it's been pretty quiet on that front. Some people make things; others here content themselves with watching the video you made about the topic. :)
What font are you using for the editor? (I've always wondered.)
The editor does not deal with fonts at all. It's a terminal program. It only deals with inputs and outputs. The visual representation is entirely the terminal's job. Within the terminal, various fonts are used at different times.
Followup: Answered in ua-cam.com/video/uITpN-OZcuo/v-deo.html
Bisqwit: we are going to write a graphics engine with global illumination and raytracing
Me in Unity: well, it only took 5 hours to figure out how delegates work
Bisqwit, the dynamic lighting that John Carmack made for the id Tech 4 engine (en.wikipedia.org/wiki/Id_Tech_4) contained many of these features, and it ran on GPU hardware of the day at game-playing frame rates. This was fascinating to read about during its development. I'm sure you would like it if you haven't already read about it.
Why did the goatee have to leave?
Great video!
Damn I love this series. And you just mentioned a raytracing one... I won't watch it for now, at least before I try doing that on my own.
Have you tried doing electronics? I can imagine you having lots of fun with digital electronics ESPECIALLY FPGA stuff...
I have electronics education from vocational school, and I deal with embedded programming for my work, but I haven’t really done much with electronics. This was maybe the most complex electronics project I have done. ua-cam.com/video/FYXRK5P0qJ4/v-deo.html It is a NES music player running on a PIC16F628A, which has 128 bytes of EEPROM memory, 224 bytes of RAM, and 3.5 kilobytes of program flash. It has no signal generator hardware suitable for this purpose, so the program generates the audio as PCM. I also wrote an emulator for it. ua-cam.com/video/P82Zf31joPk/v-deo.html
I have never done FPGA stuff. I would probably just need some getting-started material, but aside from reading through the entire VHDL specification in 1996 or so and skimming through a couple of VHDL/Verilog source codes over the years, I have absolutely zero experience with FPGA programming.
@@Bisqwit Wow!
Impressive project as always the case with you. Electronics is very fun!
Bisqwit, forgive me this nitpick.
In English we often make a voiced/voiceless distinction between two words that are spelled the same, compare e.g.:
refuse (v.) - to deny receipt of something, voiced s
refuse (n.) - trash, rubbish, i.e. that which has been refused, voiceless s
To the point, diffuse (adj.) (the one you are using in this video) has a voiceless "s", diffuse (v.) has a voiced "s". You seem to say both with a voiced s.
In general, I err on the side of voiceless sibilants, because my native language, Finnish, does not have voiced sibilants at all. In fact, it took me years of conscious effort to even begin to notice them. Nowadays, I pick them up case-by-case by listening, if I pay enough conscious attention, and duplicate that same phenomenon, if I consciously remember to do so.
@@Bisqwit Hey, no worries, we're all still learning; that's why I made my comment in the first place. With my comment I hoped I could fill a gap I often find in my own language learning, namely finding native speakers who are willing to spend their time teaching me the finer points. You spend so much time sharing your domain-specific knowledge; ideally you'd see this as me returning that favor and not me acting like your grade-school teacher lol.
Can you try out the Vulkan API?
It is a frequent request, but _so far_ I have been putting it off, because Vulkan is the epitome of boilerplate. You need something like 200 lines of code just to do the equivalent of "hello world". It is _extremely_ dull reading, and it doesn't have the ingredients for a good video in my opinion.