Great to see more work being done with Manim. It's a pure pleasure watching all the animations.
Came here to say that exact thing. I've been trying to set up Manim, could you point me in the right direction?
This is very well done. I've seen many articles that were either too technical to understand or buried in excessive detail. The visuals helped a lot with understanding the algorithm.
Great explanation. I did something like this in high school, but only had to code it recently. It's very close to what I was thinking of doing, and watching your vid helped clarify things in my mind to the point where I was able to make further optimizations. It is an excellent starting point, and the animation was exactly what I needed. More of these, please.
I was so dumb until I watched this. A long time ago I wanted to make a scanline renderer and used some weird, complex math to get the texture coordinates; this is much easier. I'd like to know about perspective correctness for 3D triangles.
- Well done. Thx.
- Clear, concise, engaging.
Wow this guy has a really nice voice! I wish I was that cool!
And the music as well :D
Erotic voice
yeah he's fucking impressive
It’s aight
thank you very much
your explanation is almost perfect
and the graphics were very helpful
unfortunately this is another example of „what can be explained in 8 minutes, my professor can't manage to explain in around 20 hours total“
I'm glad there are videos like this
Simplest and most understandable explanation. Thank you!!!
Such a great explanation and what a nice voice! Perfect 5/7
Just to add to the discussion: some textbooks use a DDA-based approach and maintain an active edge list, which stores all of the edges that cross the current scanline (I can't describe the algorithm in full here, but just to give you an idea).
By exploiting the coherence properties of scanlines (all pixels bounded by the same 2 edges in a scanline must have the same "insideness"), one can efficiently compute the "insideness" of every pixel along the scanline. I am pretty sure this approach can easily generalise to support color interpolation across a scanline as well.
A distinct advantage of this seemingly inefficient and counter-intuitive approach is that it can handle overlapping polygons. By also maintaining a polygon identifier inside the active edge list, we can effectively perform visibility testing right at the rasterisation stage (without needing a z-buffer) and write directly to the frame buffer.
I might be wrong here. Please correct me if I am.
After thinking about it more, I realise the DDA-based approach turns out to be more space-efficient, since the exact pixel coordinates of the lines do not have to be stored. Also, running Bresenham's upfront, before the actual scanlining, would incur an overhead.
Given that the primitive is supplied in the form of an edge list, the DDA-based approach should be the better choice.
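A minimal sketch of that active-edge-list idea, assuming the polygon arrives as a list of edges ((x0, y0), (x1, y1)) with integer y and that a set_pixel(x, y) callback exists; the field names y_max / x / dx are my own illustration, not anything from the video:

```python
def scanline_fill(edges, set_pixel):
    # Build an edge table keyed by each edge's lower y coordinate.
    table = {}
    for (x0, y0), (x1, y1) in edges:
        if y0 == y1:                      # skip horizontal edges
            continue
        if y0 > y1:                       # orient so (x0, y0) is the lower endpoint
            (x0, y0), (x1, y1) = (x1, y1), (x0, y0)
        inv_slope = (x1 - x0) / (y1 - y0)
        table.setdefault(y0, []).append({"y_max": y1, "x": float(x0), "dx": inv_slope})

    if not table:
        return
    y_end = max(e["y_max"] for es in table.values() for e in es)
    active = []
    for y in range(min(table), y_end):
        active += table.get(y, [])                        # edges that start on this scanline
        active = [e for e in active if e["y_max"] > y]    # retire edges that have ended
        xs = sorted(e["x"] for e in active)
        for x_left, x_right in zip(xs[0::2], xs[1::2]):   # parity rule: fill between pairs
            for x in range(round(x_left), round(x_right) + 1):
                set_pixel(x, y)
        for e in active:
            e["x"] += e["dx"]                             # DDA step: advance x by 1/slope
```

Color interpolation would presumably work the same way: store a per-edge color increment next to dx and step it along with x.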
Amazing video! Reminds me of 3b1b. keep it up!
It's because he's using Manim, which is made by 3b1b.
Basically, lerp colors between each vertex on a line, then fill lerped colors between each pixel!
I hope I am right and this helped anyone who wanted a quick explanation ":D!
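In code, that two-level lerp might look something like this (a rough sketch with made-up helper names, not the video's actual implementation):

```python
def lerp(c0, c1, t):
    """Linearly interpolate between two RGB tuples, t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

def shade_span(y, x_left, x_right, color_left, color_right, set_pixel):
    """Second-level lerp: blend the two edge colors across one scanline."""
    width = max(x_right - x_left, 1)
    for x in range(x_left, x_right + 1):
        set_pixel(x, y, lerp(color_left, color_right, (x - x_left) / width))

# The first-level lerp (along each triangle edge, producing color_left and
# color_right for a given scanline) uses the same lerp() with t = (y - y0) / (y1 - y0).
```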
This is so interesting! Thanks for making this video. It helps me approach binary image representation without feeling intimidated when I'm trying to code out an idea.
Great! Manim revolutionized visual explanation videos.
Beautiful video ❣️ truly amazing 💯🔥
Beautiful and simple, thank you for this very helpful and easy explanation.
Wow, one of the best explanations I've heard, thank you!
Great job. BTW, wouldn't it be more efficient if we skip drawing lines with Bresenham's algorithm and just do scanlines? For example, we find the minimum and maximum Y over all 3 points, in this case 1 and 10. Then we loop from 1 to 10, find the points on both sides, and draw a line between those two points. I see only one drawback: we need to divide the triangle into 2 parts, the first with left side v1-v2 and the second with left side v0-v1; the right side is the same for both parts. But in the end we need only 1 loop over every point in the triangle and some calculation for each point.
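A rough sketch of what that could look like (my own illustration, assuming integer pixel coordinates and a set_pixel(x, y) callback; not the video's code):

```python
def edge_x(p0, p1, y):
    """x where the edge p0->p1 crosses scanline y (requires p0.y != p1.y)."""
    (x0, y0), (x1, y1) = p0, p1
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)

def fill_triangle(v0, v1, v2, set_pixel):
    # Sort by y so v0 has the smallest y and v2 the largest.
    v0, v1, v2 = sorted((v0, v1, v2), key=lambda p: p[1])
    if v0[1] == v2[1]:
        return                                  # degenerate, zero-height triangle
    for y in range(v0[1], v2[1] + 1):
        xa = edge_x(v0, v2, y)                  # the "long" side v0-v2
        if y < v1[1]:                           # upper part: other side is v0-v1
            xb = edge_x(v0, v1, y)
        elif v1[1] != v2[1]:                    # lower part: other side is v1-v2
            xb = edge_x(v1, v2, y)
        else:
            xb = v1[0]                          # flat edge at the final scanline
        for x in range(round(min(xa, xb)), round(max(xa, xb)) + 1):
            set_pixel(x, y)
```

The if/elif split is exactly the "two parts" drawback mentioned above; the v0-v2 side is shared by both parts.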
Best explanation video.
Simplest explanation ever, Big like
Beautifully done. Masterfully explained, thanks so much for sharing.
Nice! It would be great to see Bresenham's algorithm for circles.
THANK YOU!!! Great teaching!
I wish I had found this video a while ago when I was trying to do basically this for a project that uses software rendering.
I did end up doing something similar to this, but at first I was using barycentric coordinates to interpolate. And then it dawned on me: "Why don't I just interpolate the interpolations!? duh!"
The only thing I did differently was that I tried to avoid the "sorting" that you did here, since I would need extra memory to hold that info. Instead I would do the horizontal lines while I was doing the triangle's edges. That did require extra logic to make sure the y-values were lined up, but it worked. Though I'm sure I could've made it faster by not doing those checks and simply drawing out the edges and sorting the pixels as you did here.
Thank you.
Nice, short, sweet video, although I wish you would've talked about Bresenham's algorithm. I assume it's just calculating y for every x value and rounding it, where x runs over whole numbers, but it would've been nice to have at least a high-level understanding of it.
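For what it's worth, the "round y for each x" intuition is close, but the classic integer-only form avoids floating point by tracking an error term and also handles steep lines. A generic textbook sketch (not the video's code):

```python
def bresenham(x0, y0, x1, y1):
    """Yield the pixels of the line (x0, y0)-(x1, y1) using only integer math."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # running error term
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                   # error says: step in x
            err += dy
            x0 += sx
        if e2 <= dx:                   # error says: step in y
            err += dx
            y0 += sy
```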
Thanks very much, understandable
this made it so clear. thank you so much
Very nice video, i'm just wondering what is exactly the formula for linear interpolation.. How did you use it in order to interpolate colors between vertices? Thanks
I'm guessing the bresennam algorithm will give you a sorted array of points to represent the line between each two vertices of the triangle. For each position i in this size n array, the color will be (i/n)*color1 + (n-i/n)*color2. Does it make sense?
Shouldn't the first linear equation with given point be y=x+2?
For texture-mapped triangles, however, you need to use a perspective divide instead of plain interpolation.
Perspective divide doesn't have anything to do with interpolation. In 3D graphics pipelines, that occurs in the vertex shader, before the fragment shader ever runs on the interpolated values.
@@npip99 What I meant is: yeah, you need the perspective divide when processing vertices, and then you interpolate the w component to get the correct texture coords. On the PS1 they didn't, and just lerped the texture coords' xy components instead. Probably too slow to transform vertices into normalized space.
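A minimal sketch of the perspective-correct version being described, interpolating u/w, v/w and 1/w linearly in screen space and dividing back afterwards (variable names are my own illustration):

```python
def perspective_correct_uv(uv0, uv1, w0, w1, t):
    """Texture coords between two projected vertices.

    uv0/uv1 are (u, v) at the endpoints, w0/w1 their clip-space w values,
    t in [0, 1] the screen-space interpolation factor."""
    inv_w    = (1 - t) / w0 + t / w1                        # lerp 1/w
    u_over_w = (1 - t) * uv0[0] / w0 + t * uv1[0] / w1      # lerp u/w
    v_over_w = (1 - t) * uv0[1] / w0 + t * uv1[1] / w1      # lerp v/w
    return u_over_w / inv_w, v_over_w / inv_w               # divide back out

# The PS1-style "affine" mapping mentioned above would instead just compute
# (1 - t) * uv0 + t * uv1, which visibly warps textures seen at an angle.
```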
This reminds me of 3blue1brown.
somebody got inspired
He did mention it's made using Manim, an open-source animation engine made by 3b1b himself.
@@henryzt Great open source engine
Thank you so much. This is very easy to understand!!
Very nice video! Good job.
Beautiful, thank you and keep it up.
Great explanation
This video was awesome
Great work
Shoouldn't the first linear equation with given point be y=x+2?
Such an in depth and clear explanation!
All of this is done by dedicated hardware (ROPs) in sub-millisecond time; modern GPUs are amazing. If you want to know your GPU's raster speed, check its number of ROPs (and memory bandwidth too).
1:40 why does the line between the green and blue pixels not look like a staircase of 3 equal steps of length 3?
Nice video, but why isn't the triangle, after being projected from world space to screen space, converted to its parametric form, and then for each pixel it is checked whether both parameters are within the 0-1 range? I'm no programmer, but solving the same equation for every pixel seems faster than storing so many things in lists and sorting values.
Well... if you propose checking every pixel, you are proposing a brute-force quadratic solution (O(n^2)), while sorting algorithms are way more efficient (O(n log n)).
Space in a computer is cheap; what you optimize is the number of operations performed. Check out dynamic programming to see more examples of the space-operations trade-off.
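For completeness, the per-pixel test the question describes could look like the sketch below, using edge functions over the triangle's bounding box (equivalent to checking that the barycentric parameters lie in the 0-1 range). This is a hypothetical illustration assuming counter-clockwise vertex order and integer coordinates, not what the video implements:

```python
def edge(a, b, p):
    """Signed area term; its sign tells which side of the edge a->b the point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def fill_by_coverage(v0, v1, v2, set_pixel):
    xmin, xmax = min(v[0] for v in (v0, v1, v2)), max(v[0] for v in (v0, v1, v2))
    ymin, ymax = min(v[1] for v in (v0, v1, v2)), max(v[1] for v in (v0, v1, v2))
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            w0 = edge(v1, v2, (x, y))
            w1 = edge(v2, v0, (x, y))
            w2 = edge(v0, v1, (x, y))
            # Inside if all edge functions agree in sign (counter-clockwise winding).
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                set_pixel(x, y)
```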
yeah his explanation kinds suckzzz
Very good animation, well done
Nice explanation with great animation thx
an awesome video.
passing 3 vertices to the gay peyo
Thanks! This helped me.
Hey, you have outstanding explanation skills (just in case you don't know) ... You should keep uploading new videos, or better, start a playlist on some topic 🔥
Well done.
Would Bresenham's algorithm work for vertical lines?
nice video dude!
is this equivalent to using barycentric coordinates?
Excellent!!! Congratulations!!!
At 3:28, why are there two 1s in y3??
Awesomeeee
wow nice triangle
I believe the image coordinates are wrong in this.
0, 0 is in the top left corner.
There is no standard for NDC coordinates. Both are correct.
Ok, I get it. I admit defeat. No further comment.
Thanks, I'm building a small graphics engine to run on an ESP32 :)
awesome!
nice!
Hi, the GitHub link does not work anymore.
Perfect expo
3blue1brown fan?
Hi. It's not clear from @3:30 onwards why y[3] repeats the 1, and similarly why y[1] repeats the 9.
Same reason as at @4:10: the corners have their position stored twice (also, while calculating the linear equations, the corners overlap).
Why'd you delete the source code??? Can you reupload it please?
Yes! Did you hear any response?
y3 = [1,1,2,9] why 1 twice?
I'd guess it's because drawLine() takes two x-values, so if we end up with more, we need to run the function once per pair within a scanline. Therefore, if we end up having an odd amount of values, we need to duplicate one, so no value is being left out and left unprocessed by drawLine()
Jurgen Klopp EROTIC VOICE
YOU???
🤔
why is y_3 = [1, 1, 2, 9] and not [1, 2, 9]?
same with [7, 8, 9, 9] not being [7, 8, 9] for y_1
and [8, 8] not being just [8] for y_10 (although this one can kinda make sense?)
Exactly for the same reason that y_10 is [8,8]: when we implement this algorithm, we can always guarantee that a y_set holds pairs of points, so we don't have to check for an edge case in drawLine every time.
The reason y_3 = [1,1,2,9] and y_1 = [7,8,9,9] is the overlap of the lines at their endpoints; we leave the overlap in intentionally, exactly for this case.
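If it helps, here is a small hypothetical sketch of that pairing logic, assuming each y_set holds the x-values the three rasterized edges produced on that scanline and that drawLine(x_start, x_end, y) fills the span between them (names are guesses based on the video, not its actual code):

```python
# Hypothetical sketch: y_sets maps a scanline y to the x-values collected from the
# three rasterized edges. Because endpoint pixels are stored twice, every list has
# an even number of entries, so pairing them up never hits an edge case.
def fill_from_y_sets(y_sets, draw_line):
    for y, xs in y_sets.items():
        xs = sorted(xs)
        for x_start, x_end in zip(xs[0::2], xs[1::2]):
            draw_line(x_start, x_end, y)

# e.g. fill_from_y_sets({3: [1, 1, 2, 9]}, draw_line) calls
# draw_line(1, 1, 3) and draw_line(2, 9, 3), covering x = 1..9 on that scanline.
```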
Thank you!
ty
Nice video! How did you make the animation?
Manim. It's a mathematical animation engine. You can find it on GitHub
@@tuna3977 thank you so much
Thank you for the video. The link to the source code is broken. Can you fix it? I would like to see the code.
Okay let's go, I am going to develop the killer of VulkanAPI ! 👹👹😈😈
Nice background music
This is cool
The GitHub link cannot be found?
Great
how did you get manim to render this?
Pretty video but this algorithm would be extremely slow.
lol are you using 3blue1brown's rendering library?
Why did you use the word "LOL"? Is it funny? I find it COOL!!
@@int16_t Yeah, I am mildly amused at the observation that he is using 3blue1brown's rendering library. It's pretty cool too
this is the new 3blue1brown
3b1b?
Imagine if GPUs used a *flood fill* algorithm instead lmao
great!
Rasterization
Way too few subscribers :(
Your first equation is wrong
The best thing about this was the graphics. This was otherwise terrible.
First off, he doesn't show how to implement Bresenham's algorithm. You don't solve the linear equation. That's the whole point of the algorithm. Second, when you do use it, you only use it for wire-frame graphics. For polygon graphics you only need to generate the pixel pairs for each raster, the start and finish. As you can see, this linear equation doesn't achieve that.
Thank you very much for the feedback!
Sorry that I didn't explain Bresenham's algorithm. The video was made with the intent that it would serve as an additional explanation alongside our computer graphics script, in which Bresenham's algorithm is explained in detail. As for the point that I didn't solve the linear equation: I assume that people who watch this video can already solve a linear equation.
Unfortunately, I don't quite understand what you mean/are trying to explain in your second point. I applied the scanline algorithm in my own 3D renderer, and it handled that just fine.
If you want to see my own 3D renderer, where I use the scanline algorithm for polygons, you can clone my "3D CPU renderer project" on GitHub (C#):
github.com/cedi-code/VertexProject
Wow this is slow lmao