--> receives free lesson --> shamelessly complains about audio quality... Seriously? Thank you Martijn for this super useful video!
i think they fried their speakers, the audio quality is great
I replicated this in Blender with the node editor. It is probably the most impractical thing I've made, but it was fun. Thanks for the great explanation/tutorial!
How did you deal with the raymarching for loop in the node editor?
Looks like Christmas came early! This is the best explanation of ray marching I have come across. Thank you!!
Swizzles make a lot more sense now that I see an actual use case for them.
I agree !
Thank you for this amazing tutorial! I successfully implemented my first ray marching shader on my renderer. It looks awesome!
I was watching a series of videos on ray marching lately and they all skipped over many details, focusing only on cool effects and fractals. Thankfully I found this video to help me understand all the basic elements.
Amazing work!
This is such an amazing explanation of the math behind raymarching - thanks so much! Everyone should sign up for the Patreon!
Great introduction to ray marching.
Small nitpick: while your function for computing normals works for your sphere case, I recommend using central difference instead of the forward difference you used when computing the surface gradient. You also shouldn't normalize, but instead divide your gradient by twice the epsilon. In most non-trivial cases the actual gradient is a better surface approximation than the renormalized vector. So you get vec3(f(x+eps,y,z)-f(x-eps,y,z), f(x,y+eps,z)-f(x,y-eps,z), f(x,y,z+eps)-f(x,y,z-eps)) * (1./(2.*eps)).
It is also useful for people to know that because surface normals are done with gradients, it is extremely important that the distance function is smooth everywhere. Sharp corners, discontinuities and stuff like that have a nasty tendency to mess up lighting. It is sometimes helpful to render the gradients as colors for debugging.
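A minimal Python sketch of the central-difference gradient described above (the function names and sphere parameters are mine for illustration, not from the video; for a true signed distance function the gradient already has unit length, which is why the comment says you shouldn't need to renormalize):

```python
import math

def sphere_sdf(x, y, z, cx=0.0, cy=1.0, cz=6.0, r=1.0):
    # Signed distance to a sphere; illustrative stand-in for GetDist
    return math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

def gradient_normal(f, x, y, z, eps=1e-4):
    # Central difference per axis: (f(p + eps) - f(p - eps)) / (2 * eps)
    return (
        (f(x + eps, y, z) - f(x - eps, y, z)) / (2.0 * eps),
        (f(x, y + eps, z) - f(x, y - eps, z)) / (2.0 * eps),
        (f(x, y, z + eps) - f(x, y, z - eps)) / (2.0 * eps),
    )

# At the surface point (0, 1, 5) the outward normal is (0, 0, -1),
# and the gradient comes out with length ~1 without normalizing
n = gradient_normal(sphere_sdf, 0.0, 1.0, 5.0)
```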
Thanks for the reply. I personally have never had any issues with forward difference normals and it saves two GetDist calls.
I have to try out your normalization comment... thanks!
@@TheArtofCodeIsCool It's a good general solution but if you have an expensive distance function the 3 extra ray march calls needed to get the surface derivatives are going to quadruple the computation needed.
@@SerBallister Only if the raymarch only took 1 step, which, for complex scenes especially, is very rare. Lets say it takes 30 steps, then the 3 extra calls make it 10% more expensive.
@@TheArtofCodeIsCool Good point
Hello, first of all, thank you Martijn for this amazing tutorial :D I understood it all, well, almost all of it. I don't quite understand how you get the normals. I get that if you compute the gradient with the forward difference method, you get the tangent or binormal (I'm not quite sure which), but wouldn't you have to take the negative reciprocals to get the normal instead of the tangent? (like in this video: ua-cam.com/video/_9x2cqO7-Ig/v-deo.html&feature=emb_title)
The way you explained it makes a lot of sense and I feel like I understand *how* everything works, so thank you for that! You're a great teacher!
Thank you! This is one of the best tutorials for anything I have seen in my life. Simple enough to grasp as a beginner, yet it's simple without glossing anything over so you really get to understand everything you learn here. An excellent starting point for anyone interested in ray marching!
Thanks! And thanks for watching!
Awesome! This sorta stuff is a bit beyond me but I like knowing the general gist.
This was so well made I really feel like I do know the gist now, thanks!
that keyboard rumble caused my glass to be bounced from the table by the thundering keystrokes coming from my sub
I'm impressed what Ray Marching can do! Feels magical
The one thing that never ceases to amaze me is how little code is needed to create wonderful graphics.
This really is for dummies. Thanks for the amazingly clear explanation! Would love to see more stuff coming out :)
Hehe, yeah I try to explain it as simply as possible. Glad you like it and thanks for watching!
damn you're crazy... explained such a complex topic, that even I could understand everything you said with only one time viewing the video... Big props
The fact that you can create something like this out of small samples of code and some very neat core ideas, makes me believe that it could actually be possible to simulate a world so realistic people don't notice right away they are in one
That is where we are headed, absolutely.
Just found this tutorial and even though I'm not doing it in a shader (I wanted to make it in python) I find this very useful so a big thank you!
Had been meaning to try this technique for years, but only just got around to it. This video was an amazingly good sanity check. Thank you so much for making it!
This is an amazing explanation of ray marching. Awesome!
This channel fits like a glove. Thank you for the work !
Distance functions are where it gets really fascinating! Coming up with distance functions for interesting objects is fun.
Yep :)
Yeaayaa, omg I'm so glad you exist! Not knowing how to program shaders is a very subtle restriction like.... sheeesh describing how it feels to be limited by it is tough...
Ok, it's like loving to work on cars, but you can't touch the electrical systems. You can master whatever as long as wired parts remain a mystery. The longer you do it, the more apparent it becomes that simple fixes can become ugly kludges, large-scale improvements rarely feel perfect, and when you're done... there are haunting moments when you just gotta hope the lights come on, the car will start, and it won't die in front of anyone later on.
Haha, that could be the foreword to the shader coding book I'll write one day ;)
@@TheArtofCodeIsCool Wow..., that is without a doubt the nicest thing anyone has ever said about one of my comments.... thank you.
= )
have you tried tbos? thebookofshaders.com
it's mostly generative focused but you can apply a lot of the same techniques to image sampling too
Awesome explanation. You make some top quality content, good sir. Much appreciated!
Awesome. The way you explain the thought process behind the inner workings. The logic, just wow!
Excellent video covering a lot of the basics! Should be the first video in any Raymarching playlist ;)
One of the best explanations i've ever come across, thank you so much!
Superb! Really dig the code being written as you go along.
I guess I am a dummy then...
Do we need to register somewhere?
(if you have the urge to reply with a political joke, then just hold your breath until the feeling subsides or you get a scene fade to the next scene)
((nothing wrong with politics, I just like to keep it apart from my programming))
@@erikjohnson9112 your sentence looks like it turned into lisp.
@@dariusduesentrieb I had trouble with Lithp, I mean Lisp. But I do like having thoughts on thoughts (and sometimes commenting on thoughts (and overdoing it at times)).
Thank you so much for this series. You help me experience joy of all that ❤️
It can be optimized.
If you first use a classical projection approach, you can segment the screen into fragments belonging to different objects.
Then, processing each segment separately, you can march several rays at once: if the central ray has enough empty space around it, nearby rays won't intersect the object either.
Yes there are certainly some optimizations that could be done.
That's great stuff - thank you for sharing this, gotta say that I love your presentation style - you hit the nail on the head for my preferred style! Thanks again man; I am really looking forwards to checking out your other content!
This is exactly what I was looking for - please continue this path!
Your teaching method is the best. Thank you very much
This was what I was looking for
Thank you for putting it out and dumbing it down
Amazing video. Shadows too! The only thing bugging me is that I think the vector between two points on a surface is a tangent, not a normal . You definitely treat it like it’s a normal (since you dot it with the light direction), but I can’t see why it works (and it clearly does!). What am I missing?
Great point, I could have perhaps expounded on this a bit more. Turns out we are both right: what we compute is the gradient of the distance field, and at the surface the gradient IS the normal. It is the direction in which the distance increases the most.
I've been waiting for this video for years! You cannot comprehend how happy I was to see this in my subscription feed today! Haha thanks so much!
Awww.. you are welcome!
Awesome tutorial, so easy explained, that even a 9th grader (me) can understand it! Thank you so much!
Awesome!
Absolutely amazing! Now I finally get the idea of raymarching! And it is surprisingly simpler than I thought! Thanks mate! Keep up the good work!
Thank you for the clear explanation of an otherwise complicated topic!
very helpful, thank you! btw the keyboard shortcut to run the shader is alt-enter
After reading dozens of articles and watching videos, I finally found one that I can understand. Thanks a ton! m(_ _)m
I'm addicted to your videos. Your newer ones don't have this issue, but the keyboard thumps that get picked up by the mic here are really distracting. Seems you've been more mindful of that since then though.
Thank you so much for uploading this stuff; will support when I can!
Thanks for the feedback. I'm trying a lapel mike next, hopefully it will get better!
Awesome vid. Never really knew how raymarching worked
You made it again. Great entry explanation of raymarching. Concise, simple, easy to understand. Neat neat job, congratulations and thanks for your work.
holy shit the sub bass in this is wild!! lol anyway, thank you so much for this amazing walkthrough. extremely clear and organized. thank you!!!!
Great work! I tried to understand it by myself, looking at examples on Shadertoy, but I failed :D Your explanation is so easy to understand! Thanks!
Thank you so much. Your teaching skills are wonderful!
It's a good thing you're a great explainer because reading the code is horrid with those variable names.
i feel you man
Thank you! Very well explained video. Great pace for a newbie to ray marching. Thumbs up!
Ooooo, thanks! btw - we can't hear you at the end as the music drowns you out. But great - I'm fired up!
WAY cool my man ! I find this extremely interesting, exactly what I was hoping you'd eventually cover. Long time no see. It seems I wasn't getting alerted, damn ty notifications. I got a few vids to catch up on anyways. Thanks for sharing, you're so awesome at explaining these things.
Great explanation. Your graphics really helped get the point across.
This chapter is really super useful, thank you again!
Really interesting and well-made video, thanks a lot!
Wow, this is really nice! I learned so much! For the shadow, why don't you march from the light to the hitpoint? It avoids the n*SURF_DIST trick I would think? Groetjes.
That should work too, though I forget why I didn't do it here. I think I ran into some other issue.
Excellent tutorial Martijn, bedankt!
I think instead of clamping the dot(n, l), we could've done:
float dif = dot(n, l) * 0.5 + 0.5;
It would provide mapped lighting instead of clamped lighting which in my opinion is truer (if that's even a word) and looks a bit better too.
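For reference, the difference between the two in a plain Python sketch (the remapped version is usually called "half Lambert"; the function names here are mine):

```python
def clamped_diffuse(ndotl):
    # clamp(dot(n, l), 0.0, 1.0): faces turned away from the light go fully black
    return min(max(ndotl, 0.0), 1.0)

def half_lambert(ndotl):
    # dot(n, l) * 0.5 + 0.5: remaps [-1, 1] to [0, 1], so back faces keep some light
    return ndotl * 0.5 + 0.5

lit_away = (clamped_diffuse(-0.5), half_lambert(-0.5))    # (0.0, 0.25)
lit_toward = (clamped_diffuse(0.5), half_lambert(0.5))    # (0.5, 0.75)
```

Worth noting this is a stylistic choice rather than a "truer" one: it's the half-Lambert trick Valve used in Half-Life to keep back faces from going flat black, and it deliberately brightens the whole scene.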
Sure, you can do that too :)
@@TheArtofCodeIsCool Amazing video and amazing channel, by the way. I've learnt a lot from it.
When I do it, the shader looks broken
OMG this is magical. Thank you for making this
Why is GetNormal the slope of the line intersecting the surface? I thought the normal should be the line perpendicular to that.
Thanks for the video. One minor piece of feedback. The way you visualized the world space normals at 24:07 suffers from clamping. While it's good enough for a quick look to see if normals are pointing in the correct direction, all the z values are clamped to 0 because every normal you can see has a negative z value. This problem is really apparent if you do col = GetNormal(p).zzz, at which point the entire scene is black. Since normals can point in any direction you should map the range of normals [(-1,-1,-1), (1,1,1)] to RGB colors [(0,0,0), (1,1,1)] with col = (1. + GetNormal(p)) / 2.;
Thanks for watching! Yes, if you want to visualize the normals by themselves then what you suggest is of course better as it allows you to see the negative vector components as well. What I did was just a quick thing to verify that the normals are indeed pointing in the right direction. For this you don't need to remap everything into the visible range.
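The remap being discussed, as a tiny Python sketch for anyone following along ((1 + n) / 2 is the same thing as the common GLSL form n * 0.5 + 0.5):

```python
def normal_to_color(n):
    # Map each component from [-1, 1] into the displayable [0, 1] range
    return tuple((c + 1.0) / 2.0 for c in n)

# A normal facing the camera (negative z) is no longer clamped to black:
col = normal_to_color((0.0, 0.0, -1.0))  # (0.5, 0.5, 0.0)
```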
I love your videos, truly. Excellent explanations! I do feel like they would be easier to follow with actual variable names. I realize most shadertoy authors go for a more codegolf approach, but it'd be nice. Keep it up!
I feel you and I will try to be a bit more descriptive in certain cases.
Having said that, a lot of the short vars are almost convention; everyone uses the same names. This way, you can get used to the names which makes it easier reading other peoples shaders. Also, it makes it faster for me to type (my vids are long enough as it is), and easier to use in formulas.
thank you so much !!!!
The shadow at the end felt like a hack. A better method would have been to ray march in the opposite direction: start from the light source and march in the direction of your point. If the distance is less than the distance from the light source to the point, you know your point is in shadow.
Yeah, that's how I did it at first but for some reason it yielded shadow artifacts, so I traced it the other way around ;)
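The shadow test from the video, sketched in Python (the raymarcher is passed in as a parameter here, and all names are mine for illustration; the key detail is the n*SURF_DIST offset the thread is debating):

```python
import math

def in_shadow(ray_march, p, n, light_pos, surf_dist=0.01):
    # Start slightly off the surface along the normal; otherwise the march
    # terminates immediately on the very surface it started from
    start = [p[i] + n[i] * surf_dist * 2.0 for i in range(3)]
    to_light = [light_pos[i] - start[i] for i in range(3)]
    dist_to_light = math.sqrt(sum(c * c for c in to_light))
    l = [c / dist_to_light for c in to_light]
    # If the march stops before reaching the light, something blocks the ray
    return ray_march(start, l) < dist_to_light
```

Marching from the light toward the point, as suggested above, is the same comparison with the roles of the endpoints swapped.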
Thank you for your videos! Maybe you could look into the topic of reflections in ray marching?
Thanks for watching. Reflection is certainly something we'll look at in the future.
I'm having trouble understanding the getNormal function. You have a point p that is already "on the surface" of the object, why calculate getDist from that point p? Won't it be very small? Then you create 3 more points at small fixed offsets from p, one for each axis, and getDist for each of those points, which are themselves distances to the closest object. So what does putting those distances into a vector3 give us? I fail to understand why this has anything to do with the slope, let alone the normal.
For those who have already understood the ray marching mechanics in the video but are too lazy to type the code, just copy from here:
#define MAX_STEPS 100
#define MAX_DIST 100.0
#define SURF_DIST 0.01
float GetDist(vec3 p)
{
vec4 s = vec4(0, 1, 6, 1);
float sphereDist = length(p-s.xyz) - s.w;
float planeDist = p.y;
float d = min(sphereDist, planeDist);
return d;
}
float RayMarch(vec3 ro, vec3 rd)
{
float dO = 0.0;
for(int i=0; i<MAX_STEPS; i++)
{
vec3 p = ro + rd*dO;
float dS = GetDist(p);
dO += dS;
if(dO > MAX_DIST || dS < SURF_DIST) break;
}
return dO;
}
vec3 GetNormal(vec3 p)
{
float d = GetDist(p);
vec2 e = vec2(0.01, 0);
vec3 n = d - vec3(
GetDist(p-e.xyy),
GetDist(p-e.yxy),
GetDist(p-e.yyx));
return normalize(n);
}
float GetLight(vec3 p)
{
vec3 lightPos = vec3(0, 5, 6);
//move the light
float moveSpeed = 2.0;
lightPos.xz += vec2(sin(iTime), cos(iTime)) * moveSpeed;
vec3 l = normalize(lightPos-p);
vec3 n = GetNormal(p);
float dif = clamp(dot(n, l), 0.0, 1.0);
//producing shadow
float d = RayMarch(p+n*SURF_DIST*2.0, l);
if(d < length(lightPos-p)) dif *= 0.1;
return dif;
}
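For anyone without ShaderToy handy, the marching loop from that code also works as a CPU-side Python sketch (same scene constants as the shader; the function names are mine, and this traces just one ray rather than one per pixel):

```python
import math

MAX_STEPS = 100
MAX_DIST = 100.0
SURF_DIST = 0.01

def get_dist(p):
    # Scene: a unit sphere at (0, 1, 6) and a ground plane at y = 0
    sphere = math.sqrt(p[0]**2 + (p[1] - 1.0)**2 + (p[2] - 6.0)**2) - 1.0
    plane = p[1]
    return min(sphere, plane)

def ray_march(ro, rd):
    # Step along the ray by the distance to the nearest surface each time
    dO = 0.0
    for _ in range(MAX_STEPS):
        p = [ro[i] + rd[i] * dO for i in range(3)]
        dS = get_dist(p)
        dO += dS
        if dO > MAX_DIST or dS < SURF_DIST:
            break
    return dO

# Camera at (0, 1, 0) looking straight down +z hits the sphere at z = 5
d = ray_march([0.0, 1.0, 0.0], [0.0, 0.0, 1.0])  # ~5.0
```

A ray that misses everything (e.g. pointing straight up) returns a value past MAX_DIST, which is how the main function tells hits from sky.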
Thank you ! your skecth/diagram helped me a lot. Great explanations
*Starts explaining..*
me - *instant like*
Thanks for the awesome video. I'd love to see a video on how you debug when you face an issue.
Myeah debugging is hard sometimes. But the good thing about it is that you really have to understand your code at every step of the way. The only thing you can really do to debug is to output values and interpret the output color as a number. What I do in many videos is to visualize UVs to assure myself that they are correct before moving on.
Subscribed without wasting a second
Great video (and series). Though I feel like still, at a fundamental level, I'm not understanding what is being returned by RayMarch() that can change the colors of specified pixels on the sphere and plane. I would expect there to be some kind of condition, like: "if ray < distance, color = 'black' , else color = 'white' " ...something along those lines. I might just be too used to dealing with vertices, faces, coordinate systems, etc, i.e. not directly programming fragment shaders. Like what is a 'distance field' exactly, a list of 2d coordinates, 3d coordinates, or a list of boolean T/F's mapped to pixels?
What the raymarch function does is, for every pixel, return how far the scene is from the camera. It finds a depth value. If that depth value is smaller than MAX_DIST, then we know we hit something. In the main function there is an if statement for that. Inside of it, that depth value is used to calculate a 3d world position. And from that we calculate material properties.
@@TheArtofCodeIsCool Thanks for the reply! I think I understand now. I'm seeing that with the MAX_DIST = 100, RayMarch will return a 100 when the pixel does not hit anything, and a lower number if it does, so dividing d by numbers like 20, 50, we start to see the sphere and the plane. I noticed that dividing d by 200, the white 'sky' becomes grey (corresponding to [.5, .5, .5])
Very helpful! Thanks for taking the time to make this.
Very clear and interesting. Dank je wel.
Great video! Came here from Coreteks
Great video, would love to see texturing added.
thanks for this cool tutorial. really well done
Congratulations! Great work.
For a long time I thought these renderings are done with black magic.
Now I see it isn't!
you mean vortex math or deep learning
I finally got round to doing this. I've watched this before so I wrote it myself without reference to the video, and while the sphere worked fine, the plane needed a small amount added to work: `float plane_dist = rv.y+0.001;` Otherwise it washed out the whole screen. Even when I dumped the sphere... I'd originally been lazy and just hard-coded the normals, so I changed them to your cross product method and it made absolutely no difference. I'm at a complete loss as to why my implementation needed a tiny amount added to the y component distance on the plane. Baffling...
Btw, is there a reason why raymarching is the standard with shadertoy? Surely we could just make a vertex array and just multiply the vert vecs by a transform matrix to get screen coordinates? You could do it in blender and save it as a .obj if you wanted and just loop through it. Obviously you'd need a depth buffer and dot product to check for collisions but it would be a more memory / less computation rasterisation type approach. I'm just curious why nobody seems to use matrix transforms.
Shadertoy is purely fragment shader. It doesn't support meshes. That is why we have to raymarch, or ray trace, to get 3d objects.
@@TheArtofCodeIsCool Ah, that explains it. Thanks man.
Awesome video, super easy to understand.
Thanks for the video, it seems so simple now
tnx for making this :)
Powerful stuff. Thanks for sharing.
Thank you for your tutorial. It really helps me a lot!
Glad it helped!
very well explained, thank you!
I understand the concept of sampling around p to get the normal, but I don't understand the implementation of it at 21:35. I have 2 specific questions:
1. We need to sample around p. How do you choose the sampling points? In this case, you chose p - (0.01, 0, 0), p - (0, 0.01, 0) and p - (0, 0, 0.01). Why do we choose those specific sampling points? Could we also choose a triangle around the hit point, for example?
2. We subtract: n = d - vec3(GetDist(p1), GetDist(p2), GetDist(p3)). But d is a float. I don't understand why this makes a normal vector.
Any clues from anybody?
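A numeric answer to question 2, sketched in Python (sphere as in the video, other names mine): the float d is subtracted componentwise, so each component of n is a forward-difference slope of the distance field along one axis, and the three slopes together form the gradient, which points along the surface normal.

```python
import math

def get_dist(p):
    # Sphere of radius 1 centred at (0, 1, 6), as in the video
    return math.sqrt(p[0]**2 + (p[1] - 1.0)**2 + (p[2] - 6.0)**2) - 1.0

def get_normal(p, eps=0.01):
    d = get_dist(p)
    # d - GetDist(p - eps along an axis) asks: how fast does the distance
    # grow in that direction? That is one component of the gradient.
    n = [
        d - get_dist([p[0] - eps, p[1], p[2]]),
        d - get_dist([p[0], p[1] - eps, p[2]]),
        d - get_dist([p[0], p[1], p[2] - eps]),
    ]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# Point on the sphere nearest the camera; expected normal is (0, 0, -1)
n = get_normal([0.0, 1.0, 5.0])
```

The offsets are axis-aligned because the gradient is defined per axis; a triangle of samples would work too (some shaders use a tetrahedron of four samples), as long as you can recover the three partial derivatives from it.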
I think this part is really messy. Multiple logical steps are skipped. First of all, we are not calculating the "slope" but the normal; moreover, why we need to recalculate the distance for p was not explained either.
This is truly incredible.
Thanks. I'm glad you like it!
thank you for a very consistent explanation :)
I really like that you continue to make these very helpful videos ... Thank You :)
Fantastic lesson!
Very nice and clear explanation. Cheers
My life is a lie, I thought that I knew about rendering at least a little but nope.
This video was way too good to only have 371 likes @_@
Thanks! this is a great tutorial. I have a question about the get normal function, I could see that being inefficient if you have lots of objects because the get distance function will have to again go through all the objects to get the min distance. Not sure if I misunderstand or maybe you deal with this later...
The getnormal function makes a few extra getdist calls. It's not cheap, but in the scheme of things, not that expensive either. It's equivalent to having the raymarch loop three iterations longer.
Great introduction. Thanks!
That poor poor keyboard (great video btw)
I'm gonna learn so much from you, it's crazy
Thank you for making great stuff!
I'll learn many things from you, thanks man
Great to hear!
As an interesting exercise I wrote this in software... I mean it turned out identically but it was still fun...
Cool! If it's not in a shader language it must be slow, no?