Triangle Rasterization

  • Published 17 Jun 2024
  • This video is an introduction to how triangle rasterization works.
    We'll start by discussing a parallel algorithm for polygon rasterization based on an article written by Juan Pineda in 1988. We'll review the basic algorithm and implement a simple version using C & SDL.
    But more than just a simple rasterizer, we'll cover some other important ideas in this video (a minimal sketch of the core inside-the-triangle test follows this list):
    - Rasterization rules (top-left fill convention)
    - Subpixel precision
    - Interpolation using barycentric coordinates
    - Some simple rasterizer optimizations
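    A minimal sketch of that inside test in C (illustrative names and winding
    assumptions; not necessarily the video's exact code):

      typedef struct { int x, y; } vec2_t;

      // Signed parallelogram area spanned by (b - a) and (p - a); its
      // sign tells which side of edge a->b the point p lies on.
      int edge_cross(vec2_t a, vec2_t b, vec2_t p) {
          return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
      }

      // With vertices in clockwise order in y-down screen coordinates,
      // p is inside when all three edge functions are non-negative.
      int is_inside(vec2_t v0, vec2_t v1, vec2_t v2, vec2_t p) {
          return edge_cross(v0, v1, p) >= 0 &&
                 edge_cross(v1, v2, p) >= 0 &&
                 edge_cross(v2, v0, p) >= 0;
      }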
    Chapters:
    00:00:00 Introduction
    00:05:48 Scanline rasterizer
    00:11:07 Pineda's rasterization algorithm
    00:11:24 Sources & inspiration
    00:13:34 High-level overview of our rasterization algorithm
    00:18:30 Initial code overview
    00:31:44 Compiling our code
    00:34:12 Defining if a point is inside a triangle
    00:57:34 Fill convention (top-left rasterization rule)
    01:11:56 Barycentric coordinates
    01:31:23 Avoiding computing the edge function per-pixel
    01:50:32 Rotating our triangles
    01:56:58 Subpixel precision
    02:11:37 Conclusion & next steps
    Download the initial boilerplate code:
    github.com/gustavopezzi/sdl-r...
    Triangle rasterizer code (integer):
    github.com/gustavopezzi/trian...
    Triangle rasterizer code (float):
    github.com/gustavopezzi/trian...
    Triangle rasterizer code (16.16 fixed-point):
    github.com/gustavopezzi/trian...
    Juan Pineda's parallel rasterization article:
    www.cs.drexel.edu/~david/Clas...
    Fabian Giesen's article on triangle rasterization:
    fgiesen.wordpress.com/2013/02...
    Kristoffer Dyrkorn's article on triangle rasterization:
    kristoffer-dyrkorn.github.io/...
    Bastian Molkenthin's article on triangle rasterization:
    www.sunshine2k.de/coding/java/...
    Gabriel Gambetta's article on triangle rasterization:
    gabrielgambetta.com/computer-...
    For comprehensive courses on computer science, retro programming, and mathematics, visit: www.pikuma.com.
    Also, don't forget to subscribe to receive updates and news about new lectures and tutorials:
    / @pikuma
    Enjoy!
  • Science & Technology

COMMENTS • 86

  • @Felipekimst
    @Felipekimst 4 months ago +7

    This guy is simply the best at what he does on YouTube.

  • @sng6392
    @sng6392 1 year ago +11

    I was so sad when the previous video was removed, but now it's back with more stuff! Thank you!!!

  • @ulysses_grant
    @ulysses_grant 1 year ago +2

    This is like someone explaining my childhood and adolescence back to me: playing games that I could copy onto floppy disks and play on my friends' computers, because I had none back in the day.
    I'm definitely gonna cry and hug people after this.

  • @yuriorkis_scream
    @yuriorkis_scream 20 days ago +1

    Great work! Thank you for the detailed explanation of such an important thing for all the people doing stuff in computer graphics!

  • @Yuvaraj-pd6ng
    @Yuvaraj-pd6ng 1 month ago +2

    The best explanation of rasterization on YouTube.

  • @Bunny99s
    @Bunny99s 3 months ago +3

    I did most of that stuff 25 years ago, so this video didn't really teach me anything new. However, I have to agree with all the other commenters: this has to be the best and most comprehensive video on rasterization. Most videos skim over it since they target an actual graphics API like OpenGL or DirectX, which does all of that for you. So they usually focus just on the vector and matrix stuff, and many even mix up some terminology (NDC and clip space are what most confuse, and when the homogeneous divide actually happens). I'm sure many will struggle sitting through this video, as some concepts are explained in quite a tight format. Though I think you really mentioned every little detail that is necessary, even providing some visual and mental hints for some concepts (barycentric coordinates, for example), which is certainly helpful for many.
    So I'm really impressed that you managed to pack all this into one video.

  • @braveitor
    @braveitor 1 year ago +11

    Really good stuff. If my college math teachers had taught these kinds of equations and formulas that way, I'd have loved math eagerly. I understood everything in it, and I hope I can show my son this video when the time to study this subject comes. Thank you, you're a wonderful communicator. :)

  • @hapboyman
    @hapboyman 3 months ago +2

    It was an impeccable performance. I'm so grateful for the motivation you've given me to study.

  • @mandasartur
    @mandasartur 9 months ago +6

    By far the best video on triangle rasterization I have seen, professionally made and explained. It made me rework, in the middle of the night, my naive and slow hybrid Bresenham/scanline solution into this one based on half-planes, which is in fact theoretically simpler. Fantastic tutoring skills; I will most probably buy the course when I'm done with my current pile of shame.

    • @pikuma
      @pikuma  9 months ago +2

      Hahaha. Thanks.
      PS: We all have our pile of shame. 😅

  • @martincohen28
    @martincohen28 1 year ago +6

    Yaaay! It's back!

  • @tenthlegionstudios1343
    @tenthlegionstudios1343 1 year ago +1

    Epic walkthrough! Thanks for linking all the articles as well!

  • @paulooliveiracastro
    @paulooliveiracastro 1 year ago +4

    I just bought the full course because of this amazing video. I'm very glad I've found it. I was reading the book "Computer Graphics from Scratch" and although they have different approaches to the subject, they go very well together. I hope one day you make a lecture on Raytracing from scratch as well. Thank you :)

  • @MissPiggyM976
    @MissPiggyM976 7 months ago +1

    Very well done, many thanks!

  • @undofix
    @undofix 11 months ago +2

    The HUGEST thanks for this absolutely comprehensive tutorial! I've spent a lot of time writing rasterizers and always wanted to make a gapless rasterizer, but couldn't because of the lack of information on this topic. Your video finally solves the problem! You explained everything as clearly as possible!

    • @pikuma
      @pikuma  11 months ago +1

      Thanks for the kind words. It's an extremely fun topic to study. 🙂

  • @mrtitanhearted3832
    @mrtitanhearted3832 10 months ago +1

    That was really awesome and useful!!! 😄👍

  • @harshguptaxg2774
    @harshguptaxg2774 1 year ago +2

    Awesome video Gustavo sir

  • @Plrang
    @Plrang 6 months ago +1

    Great stuff. Took me some time to make it work on the Commodore Plus/4, but it was an awesome ride.

    • @pikuma
      @pikuma  6 months ago

      Oh that's great! Pics please. 🙂

  • @jamessouza6927
    @jamessouza6927 1 year ago +4

    Sensational!!!!

  • @tylervandermate6818
    @tylervandermate6818 10 months ago +1

    This is FANTASTIC!!! Thank you! holy moly insta-sub

  • @HTMangaka
    @HTMangaka 1 month ago +1

    Most of these concepts work on a GPU as well, with a bit of mathematical finagling. My current hobby is coding crazy efficient GPGPU kernels in CUDA. ^^

  • @araarathisyomama787
    @araarathisyomama787 1 year ago +3

    Instantly subscribed! I have functionally rewritten the PS1 GPU in C, so I felt called out when you mentioned how the PS1 handled rasterization ;). Even with multithreading, I still had performance problems on weaker devices like the PS Vita. Calculating barycentric coordinates "the proper way" for every pixel is just out of the budget.
    This video solved most of my problems with this, though I may add some empty-space skipping later if the profiler says so, but I want to avoid divisions somehow... Speaking of which, at 1:25:49 you could've calculated invArea instead of area at line 68. That way you could replace the divisions at lines 86-88 with multiplications (sketched below).
    Now I just have to understand the bit-arithmetic sorcery the Duckstation project has in its `GPU_SW_Backend::ShadePixel` function (GitHub) and maybe I can finally squeeze out the performance I need for this thing... if I can understand it. I recommend checking out some projects with software renderers (especially emulators targeting weaker devices); some of those are literal gems and maybe you'll come up with more video ideas. There is very little good content on YT, and the internet in general, regarding that topic.
    My two cents. Keep up the great work you do!
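
    A sketch of that suggestion, assuming the video's area/w0/w1/w2
    variables (this is not the repo's actual code):

      // Per triangle: one division up front...
      float inv_area = 1.0f / area;   // area = edge function of v0, v1, v2

      // ...then per pixel: three multiplications instead of three divisions.
      float alpha = w0 * inv_area;
      float beta  = w1 * inv_area;
      float gamma = w2 * inv_area;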

    • @pikuma
      @pikuma  8 months ago

      Loved your comment. Thanks for the tips on that division. Divisions really were 'bad hombres' back in the day. :)

  • @rafaelsantana9946
    @rafaelsantana9946 4 months ago +1

    Thanks, Gustavo! Your video is being used for my class here at SFSU. Congrats!

    • @pikuma
      @pikuma  4 months ago

      How cool! Which course is it?

    • @rafaelsantana9946
      @rafaelsantana9946 3 months ago +1

      @@pikuma COMPUTER GRAPHICS SYSTEM

  • @krakulandia
    @krakulandia 1 year ago +2

    You can scan-convert the edges using floating point without issues if you just use an algorithm which ensures that connected edges of two triangles are calculated the same way. Then you won't get any black pixels at all. Back in the 486/Pentium days, I used to do the edge scan conversion so that both left and right edges were calculated simultaneously, which forced the algorithm to keep track of which is the left/right edge. A month ago I wrote a new polygon filler algorithm and decided that the benefits of doing those things on a modern CPU are minimal. So these days I simply use edge buffers: 2 floats per row --> the left X and right X coordinate of the polygon. Now I can render real polygons instead of triangles, and the algorithm itself is simpler than if I were drawing triangles only. And the speed is really good, and there are never any overlapping pixels or holes between polygon edges. No biases of any kind are needed.

    • @pikuma
      @pikuma  1 year ago +2

      That's interesting! Thanks for taking the time to explain. Now that you mention it, I see many programmers writing engines that work with quads (and polys), and they all mention the same benefits you did. My engines usually work with tris, but I will give this poly approach a try soon. Just one question: do you always keep track of triangles in "pairs" of left-right? How do you reason about their connectivity?

    • @krakulandia
      @krakulandia 1 year ago +1

      @@pikuma You don't need to keep track of which triangles are connected. You only need to make sure you calculate the edge X coordinates for each line the exact same way for both triangles/polygons which share that edge. The easiest way to do this is to take points P1 and P2. Before you calculate the edge P1-->P2 X coordinates for each line on screen, just sort those vertices (P1 & P2) by their Y coordinates so that P1.Y < P2.Y.
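
      A sketch of the span-buffer idea described above (my own naming and
      structure, not the commenter's actual code; clipping omitted):

        #include <math.h>

        #define SCREEN_H 480
        // Reset per polygon: span_left[] to +INFINITY, span_right[] to -INFINITY.
        float span_left[SCREEN_H], span_right[SCREEN_H];

        // Scan-convert edge p1->p2 into the spans. Sorting the endpoints
        // by y first guarantees that two polygons sharing this edge step
        // it the exact same way, so no holes or overlaps between them.
        void scan_edge(float x1, float y1, float x2, float y2) {
            if (y1 == y2) return;              // horizontal edge: no rows
            if (y1 > y2) {                     // sort so that y1 < y2
                float t;
                t = x1; x1 = x2; x2 = t;
                t = y1; y1 = y2; y2 = t;
            }
            float slope = (x2 - x1) / (y2 - y1);
            for (int y = (int)ceilf(y1); y < (int)ceilf(y2); y++) {
                float x = x1 + ((float)y - y1) * slope;
                if (x < span_left[y])  span_left[y]  = x;
                if (x > span_right[y]) span_right[y] = x;
            }
        }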

  • @andrew_lim
    @andrew_lim 6 months ago +2

    Note that the diagrams at 47:29-47:54 only work for y-up Cartesian coordinates and if the vertices are defined and passed to the cross() function in counter-clockwise (CCW) order. They do not work for y-down screen coordinates. However, the edge_cross() function in the C code works because the vertices are passed in clockwise (CW) order, which is okay for y-down screen coordinates, so the w0, w1, w2 >= 0 test works.
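
    A tiny self-contained check of that point (a sketch, reusing the same
    edge function):

      #include <stdio.h>

      typedef struct { int x, y; } vec2_t;

      int edge_cross(vec2_t a, vec2_t b, vec2_t p) {
          return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
      }

      int main(void) {
          // y-down screen coordinates, clockwise winding on screen.
          vec2_t v0 = {0, 0}, v1 = {4, 0}, v2 = {0, 4};
          vec2_t p  = {1, 1};                          // clearly inside
          printf("%d %d %d\n", edge_cross(v0, v1, p),  // 4
                               edge_cross(v1, v2, p),  // 8
                               edge_cross(v2, v0, p)); // 4
          // All three are >= 0, so the w0, w1, w2 >= 0 test holds for CW
          // vertices in y-down coordinates. Flipping y AND the winding
          // negates the cross product twice, so y-up/CCW also works;
          // mixing conventions flips the sign once and breaks the test.
          return 0;
      }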

    • @pikuma
      @pikuma  6 months ago +1

      Yes! Thank you. My life is a constant tug of war between traditional y-up math notation and formulas that work in screen coordinates with y-down.

  • @cryptogaming9052
    @cryptogaming9052 1 year ago +1

    Thanks! I'm a technical artist and this is gold.

  • @laminak1173
    @laminak1173 28 days ago +2

    It reminds me of the time of demomakers in the '90s.

  • @PrecisionzFPS
    @PrecisionzFPS 11 months ago +1

    thank you!

    • @pikuma
      @pikuma  11 months ago +1

      You're welcome! 🙂

  • @legeorgelewis3530
    @legeorgelewis3530 4 months ago

    Something fun you can do is implement this in a compute shader and render normal vertices with textures and all that.

  • @normaalewoon6740
    @normaalewoon6740 10 months ago +1

    1:59:20 Taking this a step further, you can offset the rasterization point randomly for every pixel and every frame, as long as it stays inside the pixel area, instead of using only the pixel centers. Done in real time, this turns jagged edges into a noisy approximation of an endlessly supersampled image. Compared to SSAA, MSAA, FXAA, DLAA and TSR, this could be the cheapest and most detail-preserving way of doing anti-aliasing in gaming. Blending the current frame with previous frames takes place inside our eyes due to a phenomenon called persistence of vision, which suppresses the noise by a lot, depending of course on the framerate. There is a GitHub project called gaussian anti-aliasing that does this. I have implemented it in a ray-marching shader and it works really well. Now the gaming industry just needs to pick it up.
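
    A minimal sketch of that per-pixel jitter (rand() used for brevity; a
    real renderer would use a faster per-pixel hash):

      #include <stdlib.h>

      // Sample position for pixel (x, y) this frame: a random point inside
      // the pixel instead of the fixed center (x + 0.5, y + 0.5).
      void sample_point(int x, int y, float *px, float *py) {
          *px = (float)x + (float)rand() / (float)RAND_MAX;
          *py = (float)y + (float)rand() / (float)RAND_MAX;
      }
      // Evaluate the edge functions at (*px, *py) as usual; over time the
      // noisy edges average toward a supersampled result.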

    • @pervognsen_bitwise
      @pervognsen_bitwise 10 months ago +2

      This is called stochastic rasterization in the literature and you are drastically overstating its virtues. Even if it was a perfect solution to jaggies (it isn't but it's a useful tool), the major aliasing problems for the last 15 years in games and real-time graphics are primarily about lighting, shadows and material shading. That's why TAA/TSR has won--it integrates samples over time, so it naturally filters temporal aliasing, and combined with intentional temporal subpixel jittering (similar idea to stochastic rasterization) you can turn spatial aliasing into temporal aliasing and filter that too. And it's a big hammer that can be used to address all the major sources of aliasing, not just jaggies.
      Jaggies just haven't been top of mind for graphics programmers for a long time and for a good reason (1080p -> 1440p -> 2160p). There's a reason Apple got rid of subpixel antialiased text rendering when they moved to Retina/1440p displays. In games, the shift towards deferred shading made MSAA unavailable/impractical and TAA picked up the slack. The one case where I think MSAA/anti-aliased rasterization is a big win is in VR because of how visible edge aliasing can be with the low relative resolution. But that's a niche. Outside of that, it's too far down the list of aliasing problems to be a major concern.

    • @normaalewoon6740
      @normaalewoon6740 10 months ago

      @@pervognsen_bitwise Thanks for your comment. The literature on stochastic rasterization looks quite complicated to me, but as far as I can tell, its main focus is multi-pixel blur. Besides the gaussian anti-aliasing project, I can't find any literature on random-rasterization anti-aliasing.
      Other than that, I think it all comes down to personal preference. Real-time rendering has its limitations, so even with random rasterization points you won't hide aliasing artefacts. That is not the primary goal of it, though, which is showing more accurately what is going on inside the pixel, while preserving the finest detail during movement, without additional cost. Lots of people would rather disable anti-aliasing than use TAA. As do I, but I noticed that jagged edges stand out the most in stills and slow camera movement, especially on grass fields with a high polygon/edge density. Faster camera movement looks a lot better to me, as the jagged edges are pretty much random, as well as texture undersampling of course. Random rasterization can emulate this at any time. If the noise gets unbearable because there is too much sub-pixel detail, then this should be addressed with LODs or texture mipmaps. TAA can only blur the noise away, together with a lot of precious detail. After all, TAA doesn't see a difference between the two. DLAA is an improvement, but significantly more expensive and still not as crisp as seeing the picture as it is.
      There is also foveated adaptive resolution, which works with deferred rendering. If you have an eye tracker in VR goggles, or goggle mounts without glasses to look at a regular monitor, you can render at lower resolution in the periphery of vision to improve performance. It also allows for supersampling in the center of vision. This reduces random-rasterization noise to very acceptable levels. Still not noise-free, but lots of people don't care too much, myself included. It's always possible to include reprojection, but the player should have full control over the most recent frame's contribution.
      Then there is the problem of effects relying on TAA to look smoother (at the cost of washing out details). These effects often use dither patterns to emulate translucency, mostly by turning off the opacity mask or by pushing pixels forward so part of them is hidden behind other objects. I'd rather use a random number generator instead. Without reprojection, noise looks a lot better to me than dither patterns. I have made a swamp-water shader with random pixel depth offsets in Unreal Engine. This is not only a noisy approximation of translucency, but it projects volumetric shadows inside the water and it looks really awesome. It also works properly with cloud shadows, unlike real translucency. Other than that, I tend to disable smooth LOD changing and object blending as soon as I can, as unnecessary and problematic as they are. I really don't mind small changes in geometry and hard edges between objects.
      There is an even bigger problem than TAA blur though: sample-and-hold motion blur, due to the way modern monitors work. Even at 240 Hz with 1 ms response time or less, you see every frame for 4.2 ms. This produces a significant amount of motion blur during eye tracking, as the picture doesn't track your eye movement in that time. At 120 and 60 Hz it gets even worse. OLED won't save us from this. At this time, only a CRT monitor or a strobing-backlight LCD has a pixel visibility time small enough for sharp movement. This makes TAA blur even more obvious during movement, so I can confidently say that I don't need reprojection anymore. Nor variable refresh rates, which went away overnight as they aren't compatible with backlight strobing. A constant framerate is always the smoothest and most predictable, unlike multiplying in-game movements by the previous frametime. To reach the target framerate all the time, it's quite possible to do runtime view-distance optimizations based on GPU utilization, if you can get it.

  • @said-rv1er
    @said-rv1er 6 months ago +1

    Thanks a lot! So actually we are doing a cross product with 3D vectors (with the z component being 0) and only care about the *sign* of the z component of the resulting vector.
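
    Spelled out as a sketch (only the z component is ever needed):

      typedef struct { float x, y, z; } vec3_t;

      vec3_t cross(vec3_t a, vec3_t b) {
          vec3_t c = { a.y * b.z - a.z * b.y,
                       a.z * b.x - a.x * b.z,
                       a.x * b.y - a.y * b.x };
          return c;
      }
      // With a.z == b.z == 0, c.x and c.y vanish and only
      //   c.z = a.x * b.y - a.y * b.x
      // survives: exactly the 2D edge function, so the inside test just
      // checks the sign of c.z.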

  • @TheBitProgress
    @TheBitProgress 1 year ago +1

    Can you compare it to the scanline algorithm? -Is it slower? At first look it should be slower because of the math.-
    My bad, I've now watched that part of the video.
    Brilliant stuff! Thank you!

    • @pikuma
      @pikuma  1 year ago +1

      🙂👍❤️

  • @giggles8593
    @giggles8593 9 months ago +1

    Hello sir, I was wondering what font you are using in your text editor?

  • @Felipe_f
    @Felipe_f 7 months ago +1

    I did something like this. My project is a piece of 3D rendering software; I'm using a version of the scanline algorithm. The program is ready to run, but it's not finished.

  • @DiThi
    @DiThi 1 year ago

    There's another solution for this bias besides fixed-point numbers: if all your floats are positive, you can just reinterpret them as integers for the comparison, and the bias can be just -1 like before.
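
    A sketch of that trick (it relies on positive IEEE-754 floats having
    bit patterns that sort in value order):

      #include <stdint.h>
      #include <string.h>

      int32_t float_bits(float f) {
          int32_t i;
          memcpy(&i, &f, sizeof i);   // reinterpret without aliasing issues
          return i;
      }

      // Fill rule: bias 0 for top-left edges, -1 (one ulp below) otherwise.
      int edge_covers(float w, int32_t bias) {
          return float_bits(w) + bias >= 0;
      }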

  • @GonziHere
    @GonziHere 1 year ago +1

    Building up for your own version of Nanite? :D

  • @KafkaesqueCthulhu
    @KafkaesqueCthulhu 1 month ago

    Hi Gustavo! First of all, thank you so much for the content! I've always dreamed of learning computer graphics and, who knows, maybe working in the field, and from what I can tell from this video, your courses are going to be my starting point for that. Seriously, thank you! But I'm stuck on something. Could you help me, please?
    I got up to the fill-convention part and, in theory, understood everything. The only thing I'm unsure about is the sign of w0, w1, and w2. Shouldn't they come out negative instead of positive? Following the animation you showed of the cross product growing and shrinking (at 42:42), when the moving vector (let's say vector b) is on the left side of vector a (the one standing still), the cross product is positive; if it were on the right side, the cross product would be negative. For the triangle we want to fill, vector b (which would go, for example, from point v2 to the point p inside the triangle) would be on the right side of vector a (which goes from v2 to v0), so in principle the cross product should come out negative, since we're doing a x b, not b x a; the problem is that in the video the opposite happens, hence the confusion. Could you tell me what I'm missing?
    Again, thank you so much for the content! I know I'll be spending my university break taking your 3D course. :)

    • @KafkaesqueCthulhu
      @KafkaesqueCthulhu 1 month ago

      After finishing a university assignment and a few other little things, I got back to the problem. It's 1:01 AM and I have to be up at 6:00 AM for class, but I finally found the reason! I forgot that y on the screen, unlike in the Cartesian plane, grows from top to bottom. (I did it several times using the Cartesian plane and w always came out negative.) Wow, what a *silly* detail!
      Anyway, despite the little headache this problem caused, it's been a while since I was this excited about something. I hope the break comes soon!
      Big hug! :)

  • @paulooliveiracastro
    @paulooliveiracastro 1 year ago +1

    @pikuma What's the performance difference between this algorithm for filling triangles and the flat-top/flat-bottom one that you teach in the paid course?

    • @pikuma
      @pikuma  1 year ago

      Compared to that implementation (the scanline rasterizer), this one is faster! You can easily replace the triangle_fill() function with this one and measure it on your machine. Since it's just a simple addition per pixel, it's better than having to compute the slope and the start/end points per scanline (sketch below).
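
      A sketch of that incremental idea, assuming an edge_cross(a, b, p)
      function and a bounding box (x_min..x_max, y_min..y_max) that are
      already computed:

        // Evaluate each edge function once at the box's top-left corner...
        int w0_row = edge_cross(v1, v2, p_min);
        int w1_row = edge_cross(v2, v0, p_min);
        int w2_row = edge_cross(v0, v1, p_min);

        // ...then step it: for edge a->b, moving one pixel right changes
        // the value by (a.y - b.y), and one pixel down by (b.x - a.x).
        int dw0_dx = v1.y - v2.y, dw0_dy = v2.x - v1.x;
        int dw1_dx = v2.y - v0.y, dw1_dy = v0.x - v2.x;
        int dw2_dx = v0.y - v1.y, dw2_dy = v1.x - v0.x;

        for (int y = y_min; y <= y_max; y++) {
            int w0 = w0_row, w1 = w1_row, w2 = w2_row;
            for (int x = x_min; x <= x_max; x++) {
                if ((w0 | w1 | w2) >= 0)     // all three non-negative
                    draw_pixel(x, y, color);
                w0 += dw0_dx; w1 += dw1_dx; w2 += dw2_dx;
            }
            w0_row += dw0_dy; w1_row += dw1_dy; w2_row += dw2_dy;
        }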

    • @paulooliveiracastro
      @paulooliveiracastro 1 year ago

      @@pikuma Any tips on how to optimize this? Maybe I'm not being reasonable, but I was expecting to reach 60fps with this when drawing a few thousand triangles on screen. In reality I'm achieving ~38fps (even with backface and frustum culling turned on).
      I tried to skip to the next line when going out of the triangle, and I pre-computed the inverse of the area to avoid divisions per pixel, but that only got me so far.

    • @paulooliveiracastro
      @paulooliveiracastro 1 year ago

      I just compiled with the -O2 flag and... surprise! 140fps. Those compiler optimizations are dope.

    • @lt_henry820
      @lt_henry820 6 months ago

      @@paulooliveiracastro Software rasterizers are bound by fill rate rather than triangle count. It will bottleneck if you are using a high resolution and the model is near the near clip plane. A single triangle filling the screen will strain your CPU more than several thousand triangles far away from the camera in a small 64-pixel square.
      Knowing this... 140fps is a lot for a textured Sponza model, but you should achieve 200-300 fps for a small rendering area.

  • @johnhajdu4276
    @johnhajdu4276 3 months ago

    On GitHub, the int version of the triangle rasterizer's main.c is wrong: the "edge_cross" calls are outside the two for loops, so it cannot check individual pixels.

  • @demon_hunter7905
    @demon_hunter7905 11 months ago

    At 1:26:00, biases are added to the w's. Does this not affect the results of alpha, beta, and gamma?

    • @pikuma
      @pikuma  11 months ago +1

      Hm, good question. I'll have to do some proper thinking about this, but off the top of my head I'd say it respects what we consider inside or outside.
      For example, changing the w's by a bias modifies which points we consider inside or outside. So when we compute alpha, beta, and gamma, we are computing the barycentric coords for a point that we consider inside the triangle. Again, I'll revisit the code and think about this properly, but that's my initial quick thought.

    • @demon_hunter7905
      @demon_hunter7905 11 months ago +1

      @@pikuma Thank you for replying! Maybe texture mapping something using alpha, beta, gamma would make things clearer...?

  • @nikefootbag
    @nikefootbag 9 months ago

    I'm a Windows user and not able to compile. The "make" command is not recognized; I have gcc/MinGW installed but can't seem to work out what I need to resolve.
    If I just run the gcc command, it complains about sdl2-config: No such file, and unrecognized commands '--libs' and '--cflags'.
    I've been tempted to get your full 3D graphics programming course, but am wondering if it's more comprehensive about the project setup than this video?
    I've also recently followed another video of yours about setting up SDL in a Visual Studio project on Windows and feel like that might be what I'm missing here, but I don't seem to have the experience to combine this video's source code with a Visual Studio project set up with SDL.
    Your videos are great, no doubt, but any help getting this example running on Windows would be greatly appreciated!

    • @pikuma
      @pikuma  9 months ago +1

      I don't have a Windows machine with me, but if I recall correctly MinGW comes with an executable called "mingw32-make", which should behave similarly to make on Linux.
      But my suggestion would be to simply use Visual Studio. Not only does it have a better build process (as I show in my SDL+Windows video), but you get a great debugger with it as well.

    • @pikuma
      @pikuma  9 months ago +1

      Using Visual Studio also means you don't need a Makefile (or make).
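
      For reference, a typical MinGW invocation for this kind of SDL2
      project (the include/lib paths below are placeholders; point them at
      your SDL2 install, and check the repo's Makefile for the exact flags):

        mingw32-make
        gcc main.c -o main.exe -IC:\SDL2\include -LC:\SDL2\lib -lmingw32 -lSDL2main -lSDL2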

    • @nikefootbag
      @nikefootbag 9 months ago

      @@pikuma thanks for the reply! I'll try again with Visual Studio

  • @anthonypace5354
    @anthonypace5354 1 year ago +2

    I like your vids, but this barycentric approach is actually a bit slow.
    The better approach really is to get the lines first: find the leftmost and rightmost edge per triangle in 2-slot buckets, using the bounding box so your y buckets can start at 0, and LERP the colour in a scanline approach from the leftmost point to the rightmost point for each row (a row-fill sketch follows this comment).
    The barycentric technique, having to calculate the cross product with multiple multiplications per point, is much slower than a properly written Bresenham, which limits its operations to a few branches prior to the main loop, which itself has only 1 branch and a + or - operation; that means finding the edges first can outpace the barycentric approach completely. Also, finding the edges first allows you to skip 1/3 of the edges for successive connected triangles, and lets you know the top-left immediately. In either technique, that is 1/3 less computation right away if you properly memoize edges to be shared with surrounding triangles and render outward; thus finding the edges first helps significantly, and it helps parallelize too.
    Not only can multiple triangles be done per thread, but you can break aspects of the triangle into threads too. For very large triangles, each line can be sent to a different thread, and so can bucket comparisons for line segments; and when you have your 2-slot buckets of leftmost and rightmost points figured out, you can segment/subdivide the triangle and have a texture rendered or colours LERPed by multiple threads per division of rows too.
    Both approaches, of course, benefit from sharing a cache for texture/fill application, and from discarding ranges of triangles that would be covered or showing a backface right at the beginning, before doing any rendering at all.
    What I do like about your vid is that you can extrapolate some of the concepts you were teaching to other applications, less specific to rendering.
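
    A sketch of the row fill that pairs with those left/right buckets
    (names assumed; draw_pixel() is hypothetical):

      #include <stdint.h>

      // Fill one row between the bucketed endpoints, LERPing one colour
      // channel across the span: one addition per pixel, no cross products.
      void fill_row(int y, int x_left, int x_right,
                    float c_left, float c_right) {
          if (x_right < x_left) return;
          float dc = (x_right > x_left)
                   ? (c_right - c_left) / (float)(x_right - x_left)
                   : 0.0f;
          float c = c_left;
          for (int x = x_left; x <= x_right; x++) {
              draw_pixel(x, y, (uint8_t)c);
              c += dc;
          }
      }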

    • @pikuma
      @pikuma  1 year ago +2

      Thank you so much for this breakdown. You're correct. If I were creating a software renderer, I'd probably approach it from this angle. I guess my idea was to give students an overview of how GPUs see this problem, and currently barycentric coords play a part in the modern pipeline.

    • @anthonypace5354
      @anthonypace5354 1 year ago +2

      @@pikuma Well, I do agree that it is smart to teach what the current pipeline is, and what you are teaching is the common technique out there; yet what is popular is not always the most efficient. Scanline rasterization, finding the edges first, can lead to a giant boost in performance. Segmentation is easy given the constrained boundaries, requiring less work, and it's very efficiently balanced.
      But I'm not expecting you to take the word of a rando; I did a Google search and found that work has been done on this and it's about 2.5x faster than current popular techniques. E.g., an interesting paper about efficient GPU path rendering using scanline rasterization, by Kun Zhou, came right up.

    • @pikuma
      @pikuma  1 year ago +2

      @@anthonypace5354 Great stuff, Anthony. Agreed! 🙂👍

    • @lt_henry820
      @lt_henry820 6 months ago

      @@anthonypace5354 This approach is known as Pineda's algorithm. It is known to have been used, at least, in early 3dfx GPUs. Last decade, Intel tried a sort of CPU-based GPU, and this algorithm was selected instead of Bresenham's.
      I also implement this algorithm in my rasterizers because it makes side clipping easy (and faster). Isn't Kun Zhou's paper about glyph rasterization on modern GPUs? It seems kind of off topic.

  • @DeafMan1983
    @DeafMan1983 6 months ago

    Hello, great idea, but I use something similar with "uint32_t inter_color = (a

    • @freemasry-gr8hw
      @freemasry-gr8hw 4 months ago +1

      I did it that way :) It's called type punning:
      Color color = {r,g,b,a};
      DrawPixel(x, y, *((u32*)&color));

  • @Felipekimst
    @Felipekimst 4 months ago +1

    But how do you turn the alpha, beta, and gamma for a given point P into UV coordinates?

    • @pikuma
      @pikuma  4 months ago +1

      You multiply each weight by the corresponding vertex UV and add them up:
      u = alpha * u0 + beta * u1 + gamma * u2
      v = alpha * v0 + beta * v1 + gamma * v2
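
      The same thing as a code sketch, with the usual caveat (my note, not
      the video's):

        float u = alpha * u0 + beta * u1 + gamma * u2;
        float v = alpha * v0 + beta * v1 + gamma * v2;
        // This is affine interpolation; for perspective-correct texturing,
        // interpolate u/w, v/w, and 1/w instead and divide per pixel.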

    • @Felipekimst
      @Felipekimst 4 months ago +1

      @@pikuma
      Haha, thanks for taking a while to reply... I was trying to figure out what you said, but I kind of failed, and I didn't want to make you explain it once again haha.
      But considering another part of my algorithm: I was trying to use that approach with quadrilaterals. Do you think it is possible to use the cross product just like you did to find the correct weights for the 4 vertices?
      I just need to know if that is possible; if it is, I'll try to figure out how to interpolate them haha, so don't feel obligated to answer that one again haha

  • @patrickpeer7774
    @patrickpeer7774 1 year ago +2

    It says "for beginners", but while I did generally understand the visualized concept of rasterizing, I didn't understand the code overview part too well. I think I'm missing certain prior knowledge. Is there a video or course here that is "more for beginners"? 😅

    • @1u8taheb6
      @1u8taheb6 7 months ago +1

      To understand the code you need to be a little bit familiar with programming languages like C. There's no specialist knowledge related to this specific topic of rendering that you need in order to understand this code - it's functionally quite simple. You just need to be more familiar with C-like languages in general and their syntax and then you'll be able to follow along much easier. Code always looks more complicated than it is because all the keywords and boilerplate distract the untrained eye from the actual relevant bits.

  • @ashwithchandra2622
    @ashwithchandra2622 11 months ago

    Are you using OpenGL or what?

    • @pikuma
      @pikuma  11 months ago +1

      No OpenGL, just a window with a framebuffer of pixels to be painted. The source code is in the description. I use SDL to create the operating-system window.

  • @johnhajdu4276
    @johnhajdu4276 3 months ago

    At 1:14:27 you are using Greek letters to denote areas, which is misleading. By general math convention, Greek letters are used for angles (degrees or radians).

    • @pikuma
      @pikuma  3 months ago

      What about PI? Or delta? 🤔
      I've seen alpha, beta, and gamma used for areas in one book and I always liked that. Feel free to call them whatever you want though. 👍🙂

  • @colonthree
    @colonthree 6 months ago

    OwO

  • @tocatocastudio
    @tocatocastudio 2 months ago +1

    Are you Brazilian?

    • @pikuma
      @pikuma  2 months ago

      Yes

    • @tocatocastudio
      @tocatocastudio 2 months ago +1

      @@pikuma It's really hard to find good computer graphics content in Portuguese; then I found your video and I can understand everything. Congratulations, you're really good at what you do.

  • @mrdkaaa
    @mrdkaaa 27 days ago +1

    OK, for whatever reason, instead of tracking the edges, you're looping through all the pixels in the bounding box (and repeating ten times what a bounding box is; everything is so insanely long-winded) and wasting time on the cross product. Then, when you get to the optimization with a fixed step, you still keep going through all the X's. It's so painful to watch.