Hi Simon! I love your videos, the topics, the way you explain things, the graphics and the general flow of the video as well. The only thing I can recommend is a better microphone or improved sound quality for your voice! With all those high-quality elements in your videos, imo the sound quality is the only thing that is distracting at the moment. Much love, and please keep doing what you are doing
Yes, they are. Inigo Quilez has excellent tutorials about it. You can see hundreds, if not thousands, of SDF samples on Shadertoy; the platform was partially created by him as well.
Hey Simon, fantastic explanations and excellent visuals to go along with! You mentioned at the end that the code is up on your github, but I wasn't able to find it. Am I looking in the wrong spot? Cheers!
@@simondev758 At each point/iteration we have a different set of scanning angles around the point, right? So what scanning resolution per sample point is typical?
I was always wondering about some of the effects in Inque - Ooze. Now I think they used a lot of ray marching to make the visuals in that demo. Coding is not my strong suit, but I'll have to learn if this can be integrated into Tooll3.
Patrons can now vote for the next video! Thank you for your support.
❤ Support me on Patreon: www.patreon.com/simondevyt
🌍 Live Demo + Courses: simondev.io
For the first time I understood what this magic "ray marching" is and how it works. You are good at explaining things
Even if I ever understood before, this might be the time it actually sticks :) for some reason I always mixed it up with tracing, or maybe something to do with section searches
@@johanngambolputty5351 well technically you're still tracing rays, right?
Certainly it's ray casting, the term used in 3D games back in the 90s to collision-check projectile trajectories and such, which is just one ray.
[ray tracing] is a routine that casts a ray for every angle your "caster" can "see".
In 3D rendering the caster is often a camera, so the camera settings dictate the position and the angles the caster limits itself to; the pixels it traces are projected into angles depending on the camera's FOV/perspective values, so they basically ARE the set/array of angles to process.
It's still raycasting too, raycasting a lot in fact; raycasting is still the integral part that does the work.
(A 1080p image is about 2 million pixels, so ~2 million rays with just one ray per pixel; modern path tracing does many, many more per pixel.)
There are also bounces for light of course, but for the sake of simplicity let's keep it at one bounce.
[path tracing] is a more complex form of ray tracing that can also do reverse tracing for things like caustics, uses far more rays for modern PBR rendering methods, and feeds back z-distance information for post-processing; essentially ray tracing on crack.
[ray marching] I consider a variant/derivative of ray tracing. I'm not sure how disputed that is, but I'd put it under that moniker.
But it is smart/dynamic: rather than checking everything, it makes bigger jumps, and you can reduce the number of cycles by a great deal just by setting the minimum length to a higher value. The downside is that very small object intersections aren't "seen" because the ray jumps over them; this disadvantage is exploited as an advantage in the demos in this video by interpolating and blending the missing parts together.
The "looking over it" principle is also used in metaballs, but metaballs calculate a lot of stuff along the way; this method actually drops data and fills in the gaps. It has a bit less control, but you could put extra data in vectors and use those with proximity for all kinds of effects too.
I don't think the vector lengths (jump distances) strictly have to be dynamic for it to be ray marching; they could be static,
just jumping the same distance every time as a crude version until it hits something.
It's not ray marching if you only shoot one ray either; strictly taken that's just grid-snapped ray casting, but the distribution is implied.
There may be some differentiation in terminology these days, but that was also understood as ray marching back in the day.
Ray marching is a de facto collision detection algorithm, as is ray tracing, because they're all ray casting and the rays are looking for something to bounce on; that is precisely what collision detection is: finding an intersection.
As with much code, ray marching differs a lot between CPU-based (serial) and GPU-based (parallel) implementations; they need quite different approaches.
The crude version of ray marching basically is ray tracing, but the ray casting it does is snapped to a grid,
with the grid's rotation relative to the individual ray's angle so it always traces over a straight line (which is exactly what "local coordinates" are).
Short story long: I consider ray marching a variant of ray tracing; I'd define them all as "distributed raycasting algorithms".
When in doubt, use the term ray casting. That's the underlying principle for all of them, including ray casting itself obviously, so you can never be wrong then ;)
Even simple pixel-based detection is essentially just crude ray casting with the vectors snapped to integer values, using global coordinates, and dropping all the benefits of unit vectors, when you think about it, right?
lol, I just had an epiphany; I'd never even thought of it like that myself.
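The crude constant-step version described above is easy to sketch. A minimal Python sketch (not from the video; the inside-test, step size, and limits are made-up illustration values):

```python
def fixed_step_march(origin, direction, inside, step=0.05, max_dist=20.0):
    # Crude ray marching: walk the ray in constant increments and stop
    # at the first sample that lands inside the shape.
    t = 0.0
    while t < max_dist:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        if inside(p):
            return t   # approximate hit distance (can overshoot by < step)
        t += step
    return None        # no hit within max_dist
```

Features thinner than `step` can be skipped entirely, which is exactly the "jumping over" weakness described above.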
@@dutchdykefinger True, it's a clever sort of tracing that you can do if you use distance fields instead of polygons, especially since the step size is not just dynamic but optimal (the largest step guaranteed not to hit anything). I'm not familiar with this stuff, so I just imagined constant step sizes for tracing (against polygon primitives)... though I'm sure it can get way more clever
I've actually been working on a graphics engine using signed distance fields as primitives instead of triangles for ~2 years now. Super hyped to see it become more and more mainstream recently :D
Ooooh sounds cool!
Do you have any demos or code bases you can share?
@@keyb not yet unfortunately.
The first year was mainly reading papers, understanding the heaps of theory, and finding a sensible way of actually making it performant. There were a bunch of challenges I wanted / had to solve first:
- how do I make dynamic scenes with SDFs instead of scenes hardcoded into a shader?
- how do I avoid as many branches as possible?
- how do I completely avoid the default render pipeline?
- how would instancing work?
- how can I drastically improve performance?
- etc.
And the second year was mostly occupied by learning Vulkan tbh. But I've been making some major strides recently and I'm hoping to finish a 2D version before the end of the year.
That will allow me to make some tools to drastically accelerate my dev process, and it will all snowball from there. End goal is a full 3D engine for real-time rendering :D
I can bookmark this video and let you know in this thread when something showable is available if you want? :)
Super cool! Got a demo I can watch of it in action?
@@simondev758 Not yet unfortunately :D
See comment above yours hahaha
I'd be happy to send you a demo as soon as something showable is ready, hopefully near the end of this year :)
@@Chribit Yes please!
It sounds like a ton of work (especially in the sense of optimizations) so you still probably have a while to go.
Whenever you do get a working model please let me know!
Great video, but a small correction on a common misconception: ray marching is just ray tracing done numerically (instead of having an equation tell you how far you need to go, you go in small steps and check whether you are inside a shape at any point), and it doesn't necessarily involve SDFs (e.g. screen-space reflections also use ray marching, but not SDFs). The special name for the SDF-driven variant is sphere tracing.
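To make the distinction concrete, here is what the SDF-driven variant (sphere tracing) looks like. A hedged Python sketch, with an illustrative sphere SDF rather than anything from the video:

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to a sphere's surface.
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    # Sphere tracing: the SDF value at the current point is the largest
    # step guaranteed not to pass through any surface, so we step by it.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:        # close enough: call it a hit
            return t
        t += d             # adaptive step straight from the distance field
        if t > max_dist:
            break
    return None            # miss
```

For a unit sphere centered 5 units down the view axis, a ray from the origin reports a hit at roughly t = 4.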
yeah, the naive ray marching function steps a point towards the target and returns when it ends up inside an entity/object
Do you mean sphere tracing is just ray marching with SDFs?
@@diodin8587 Basically, yes; if you open up the Wikipedia page on ray marching there's a section on it.
I've seen a lot of videos about raymarching before, but this is the first time I've "gotten it". The graphics you used to explain it make so much sense.
I remember this blend mechanic being used in a Valve tech demo from around 2003 where they made blobs able to combine and melt together like here.
It was later used for Portal 2. Insane that they came up with this back then.
I believe they used a 2D version for text on signs in Team Fortress 2. I vaguely recall reading about it back in the 2000s.
Nice video! I love playing with SDFs too.
One point I think is worth mentioning, which I'm sure you understand but perhaps some viewers won't: it's a misconception that using modulo to repeat the space, and get seemingly infinite copies of your SDF, is free. From a performance point of view it's far from free, because more rays get slowed down, and more frequently so, by passing near surfaces. It is a neat trick though, and if your SDF is fast anyway then it's not an issue.
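For readers who haven't seen the modulo trick: folding space with mod makes one SDF evaluation stand in for infinitely many copies. A small Python sketch (illustrative names and numbers, not code from the video):

```python
import math

def repeat_coord(x, period):
    # Fold one axis into a cell centered on 0, giving endless copies.
    return (x + 0.5 * period) % period - 0.5 * period

def repeated_sphere_sdf(p, period=4.0, radius=1.0):
    # Evaluate a single unit sphere, but in folded coordinates, so the
    # field describes an infinite grid of spheres spaced `period` apart.
    q = tuple(repeat_coord(c, period) for c in p)
    return math.sqrt(sum(c * c for c in q)) - radius
```

As the comment above points out, it's not free at render time: a ray travelling down a corridor of repeated shapes keeps grazing nearby surfaces, so its marching steps stay small.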
Agreed, the performance cost is definitely real. I was mostly thinking in terms of implementation cost for a lot of these effects, but yeah the ray marching loop can't early out as easily.
I have started the glsl course a couple weeks back and am learning so much! Thank you!!
Awesome!
Amazing video, well done on the explanations! The 3D scene really helps understanding; maybe explaining the code part a bit more would be even better.
Yeah, I think if I were to redo it, I'd add some animations around computing things like the normal/shadows in the same way that I did for the raymarching. Next time!
I was really envisioning this for an attack in my 3D platforming game, as an obstacle. Thx
for smoothmin(a, b, k), using smoothmax(a, b, -k) also works and is possibly slightly easier to write in code.
This is quite cool, I've added these in Unreal Engine and you can even blend with the mesh-generated SDF.
Now THIS is how you sell a course.
Great content!
Spore-like games would benefit heavily from ray marching; it's always something I've thought about. You could build a cell and have the individual cell parts combine using ray marching, or make cellular animations of two spheres dividing, or even have creature parts be put together using ray marching. Ray marching gives endless opportunities.
I just coded my first ray marcher and I found it way easier than a raytracer, plus you can do a lot more amazing stuff.
this was the single coolest video I've seen on YouTube for a while
It's great to see more content like this. Loved your explanation of the central differences method.
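Since the central differences method gets a mention: the surface normal of an SDF is just its normalized gradient, and central differences estimate that gradient numerically. A hedged Python sketch (the step size h is an illustrative choice):

```python
import math

def sdf_normal(sdf, p, h=1e-4):
    # Estimate the normal as the normalized gradient of the SDF,
    # using a central difference along each of the three axes.
    grad = []
    for axis in range(3):
        lo, hi = list(p), list(p)
        lo[axis] -= h
        hi[axis] += h
        grad.append((sdf(hi) - sdf(lo)) / (2.0 * h))
    length = math.sqrt(sum(g * g for g in grad))
    return tuple(g / length for g in grad)
```

For a unit sphere at the origin, the estimated normal at (0, 0, 1) comes out as roughly (0, 0, 1), matching the analytic radial normal.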
CodeParade's marble marcher uses ray marching, if you haven't heard of it, it's really cool, you roll a marble along a fractal which is sometimes morphing around in real time
In VRChat there is a famous world called Treehouse in the Shade that uses ray marching. It's really beautiful!
I won't ever try to write actual code, yet this video was still very interesting and understandable and it might teach me what I can use smooth min and max for in Blender.
I am learning shader programming and it's really interesting. While learning about Shadertoy and its implementation of multipass buffers, I came up with an idea for a very naive physics simulation overnight and managed to implement it the following day.
But now I've been stuck for over a week trying to do a squish kind of motion between a soft and a hard surface. Think of a bouncy ball.
The more I try and sketch, the more I think you can't do such a squish motion based on just two distance fields. It would be easier with two shapes to blend between, plus their center positions.
Wonderful Simon, came here from another one of your shader vids, mind-blown that you made the actual video using ray-marching, now picking jaw up from table
I like the explanation! I'm working on my own ray marching based graphics engine right now and, while your demo here is far more polished, I'm getting places.
The game called "Dreams" on the PS4 works exactly like this, except it has a "flake" effect to the surface to give it a painterly look.
Don't give up mate, that was my first day using the software and I will work on it for a long time!
I love your editing & teaching style! Thank you for making such informative and fun content for free ^_^
Nice stuff, nice explanation, nice voice. Love it
This is the best content on youtube! Thank you!
This is awesome! The first thing I’m gonna do (once I get a job) is buy your shader course!
I would love to see an interactive VR experience that delves into ray marching.
There are worlds in VRChat featuring ray marching shaders that you can experience in VR. It's quite fascinating. One of the worlds was called "Treehouse In The Shade".
@@KillFrenzy96 thank you so much!
Amazing stuff! Now I'm curious about how SDFGI works!
Me too!
@@simondev758 just found this
ua-cam.com/video/ARlbxXxB1UQ/v-deo.html
When I look at a subject crooked with a thinking face, I know I've learned something.
I love my recommended, what an amazing video! I feel like there's a good chance this would work well with Geometric Algebra, but I don't know enough of either subjects to really say something about it.
3:07 That wireframe cube is your clue to doing super cheap 2D ray tracing on 3D environments, and you only need to do it on one side; the rest just need that side's ray positions multiplied by either 1 or -1.
I purchased your course. You really explain things well!
Your vids are so incredibly good! I always get excited when a new one comes out. Pure gold!
Great video and explanations/visuals, makes me want to mess around with it.
It would be nice to talk about the limitations and why it's not as popular as the usual rendering techniques.
Maybe a game like Portal that does weird things with visuals could benefit from this.
Dreams on PS4/PS5 uses this technique extensively.
As soon as we went 3D everything started going over my head 😅
Fun visuals tho :)
You can't help but love listening to Bob from Bob's Burgers explain programming concepts.
That's an amazing video Simon, thanks a lot for sharing your knowledge with us!
Really great video! Might implement this into my devlog game somehow 🙂 Great explanations and awesome visuals.
Let me know if you do!
Fascinating, especially when the shapes merge into each other.
Great video, thanks. Made me realize that I need a distance function
Wow this was so interesting! You also mentioned csg ❤
I think my brain fell over half way on this one
Basically this is what "Dreams" for the PS4/PS5 is made of
This is a really cool video; the way objects transition between frames is just amazing! Wondering how that could be made.
Every single sentence is worth writing down in my notebook. Thanks
An awesome, well-made video. Love it
Been waiting for someone to make a game engine that renders everything using SDF marching for a decade now. I know that evaluating many combined distance functions merged together to model something can get expensive, but utilizing simple bounding boxes as proxy geometry for the initial ray march origin of each object should help speed things up.

I also think that extremely complex stuff could programmatically be reduced down into fewer distance functions that approximate it, particularly for an LOD-type scheme. I don't know about an entire world being described using distance functions; maybe they could be compiled down into static 3D textures that are raymarched instead, which would likely be faster than evaluating tons of distance functions every ray step for every pixel for every frame.

I just feel like now's the time that this sort of thing could be realized if someone just sat down and actually pursued it.
100%. I feel like as GPUs get more powerful and general purpose, it's opening up some interesting avenues that previously weren't feasible.
Very cool video. I would have love to have it a bit longer and more in depth though.
I love you Simon. You’ve turned me into a smartass. Thank you 😃
Hah
This is the rendering engine for PS4/5 Dreams. It’s amazing.
you record with hardware outside of the program. Great tutorial btw it was very detailed but still just right for beginners.
That's actually perturbing to watch. Feels like an old Outer Limits episode! :P You adjust the horizontals, the verticals, you're a wizard! 📺
I loved that show!
@@simondev758 Hi, while you're around, I have a question. I'm a noob; can I use that method to project a simple parallax from an animated strip and its animated mist pass? 2 MKV files, basically.
@@SaintMatthieuSimard I'm not super sure what you're asking, can you explain it again?
@@simondev758 What I'm trying to achieve is some kind of a deep picture frame that contains an animation which contains its own mist pass, for scene depth. I imagine the infinite possibilities of raymarching... Imagine if it was possible with a series of pictures containing their own depth map or estimated depth map, and making a snow globe out of it with accurate geometry from all angles, in Blender... That would be so great!
Yet right now I'm really just after a method to create detailed pseudo-geometry off of low poly models.
And the lowest factor is that I'm trying to cast a 60 frames scene into a cube, a picture frame, which contains depth information, and have an accurate representation of depth, without extra geometry nor multi-res modifier.
Oh, is that even Blender that you used? I think you mentioned several other software, I'll rewatch just now.
@@simondev758 Yeah I realized on 2nd watch that's not python nor blender :P
I heard that with vector displacement mapping, certain textures can create creases in all directions and not just X and Y. I saw an old model that would create the geometry of an ear, then a nose, just out of a single texture. Tho I still haven't really figured it out. Learning that would clearly help me achieve what I've been trying to do for a while, which is making a deep picture frame containing an animated scene rendered from pre-rendered material. An MKV file for depth, one for color diffusion, one for light, one for reflection... Yeah... That seems so far away for me right now x) For such a simple artistic project.
phew. That is a cool thing, I will have a deeper look later. Thanks
The videos only get better
Wonderful! I know how to do lighting/reflections/combining geometry but I think my code quality could use help for putting together larger more complex scenes. I'll take a look at your course :)
This is a really good explanation!
2:43 "it's really that simple"
Meanwhile me having to pause the video for 10 minutes in order to understand what was going on and why it even worked
-_-
This is really cool! A lot can be imagined with math
There's a software called MagicaCSG that's so easy and intuitive to use to make 3D models.
Very nice video!
I would like to use this technique in a game!
I'd love to see you making some Non-euclidean worlds in javascript
Great idea!
I feel intrigued to code a ray marcher…!
Mindblown again
..you had me on radius
fire video, thanks bro
This should be at the #SoME2
oh yeah i heard codeparade talking about this in his marble marcher videos
wow very good explanation, made it easy!!
There's a channel called Inigo Quilez who paints with SDFs; he made some really pretty character models. I recommend checking it out
Awesome site.
Using the software, can't wait to get my hands on it.
THIS IS WHAT I NEEDED BRO, thank you for taking the time and doing this for most of us that are starting with this beautiful thing called
Thank you so much, I'm learning this in quarantine and you made it very simple, I really appreciate it, thank you for going over every little
Wow! Thanks for the video :)
"Until next time, cheers"
Such a neat video!
I love this software so, so, so much!
Yoo thanks dude, everything works. I LIKE IT
Question: Can the minecraft clone you made be remade completely in glsl? Like the terrain generatjon and stuff
I think it could, but that'd be a major challenge. Maybe for a day when I'm super bored heh
I figured that instead of rendering a sphere mesh, rendering the sphere with ray marching costs the same or less, and it looks 1000x better
With instancing of course, so there's no use of the `min` function: just render a cube mesh as a sphere with a ray marching shader
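The idea in the comment above can be sketched on the CPU (Python here for readability; in practice this would live in a GLSL fragment shader): march a ray against a sphere's signed distance function instead of rasterizing a tessellated sphere mesh, so the surface the ray finds is mathematically exact with no triangle faceting. Function names are illustrative, not from the video's code.

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside."""
    dx, dy, dz = (p[0] - center[0], p[1] - center[1], p[2] - center[2])
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def march(origin, direction, sdf, max_steps=100, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step along the ray by the distance the SDF reports."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sdf(p)
        if d < eps:          # close enough to the surface: call it a hit
            return t
        t += d               # safe step: nothing in the scene is closer than d
        if t > max_dist:
            break
    return None              # ray escaped the scene

# A ray fired straight at a unit sphere 5 units away hits at t = 4.
hit = march((0.0, 0.0, -5.0), (0.0, 0.0, 1.0),
            lambda p: sphere_sdf(p, (0.0, 0.0, 0.0), 1.0))
```

In the instanced-cube version the comment describes, each cube fragment would run this loop with the ray origin at the camera and early-discard on a miss.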
Best video ive seen today. yosh
Awesome! :D
Hi Simon! I love your videos, the topics, the way you explain things, the graphics, and the general flow of the video as well. The only thing I can recommend is a better microphone or improved sound quality for your voice! With all those high-quality things in your videos, imo the sound quality is the only thing that is distracting at the moment. Much love, and please keep doing what you are doing
Thanks, I think I need to find a better recording location too heh
@@simondev758 Yeah, that could work as well 😅
god damn it that is cool - lemme watch it 20 more times so I can understand it lol
Watched it twice. Need a lay down and a cup of tea now.
Simon, are these transitions, intersections, adds, subs being performed purely in the shader?
100% done in shaders yes
yes, they are. Inigo Quilez has excellent tutorials about it. You can see hundreds, if not thousands of samples of SDFs on ShaderToy. The platform was created by him partially as well.
@@simondev758 think you just sold me on the course. Awesome.
@@AntonioNoack thanks! I will check it out
@@brennonwilliams9181 Heh awesome. And yes, definitely check out Inigo's site, it's in the description, amazing resource for ray marching & sdf's.
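For anyone curious what those shader-side booleans look like: the transitions, intersections, adds and subs discussed in the thread above all reduce to taking a min or max of two distance values, plus Inigo Quilez's polynomial smooth minimum for the blended seams. A minimal CPU-side sketch (Python for readability; in a shader this would be a few lines of GLSL):

```python
def sdf_union(d1, d2):
    """Union: the closest of either surface."""
    return min(d1, d2)

def sdf_intersect(d1, d2):
    """Intersection: inside only where inside both shapes."""
    return max(d1, d2)

def sdf_subtract(d1, d2):
    """Subtraction: carve shape 2 out of shape 1."""
    return max(d1, -d2)

def smooth_union(d1, d2, k):
    """Polynomial smooth minimum (Inigo Quilez's smin):
    blends the seam between two shapes over a region of width k."""
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# With two nearby surfaces at distances 0.3 and 0.5:
u = sdf_union(0.3, 0.5)          # 0.3
i = sdf_intersect(0.3, 0.5)      # 0.5
s = sdf_subtract(0.3, 0.5)       # 0.3
b = smooth_union(0.3, 0.5, 0.4)  # slightly less than 0.3: the blend bulges
```

The smooth union returning a value below the plain `min` is exactly what produces the rounded "melting" joints between shapes in the video.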
my old professor made an efficient fluid simulator using signed distance fields
Mindblowing 😍
Why do I feel like I've seen this before... But like it was uploaded 7 days ago
Really amazing, informative, technical and beautiful tutorial, thank you very much, enjoyed it a lot! subbed :)
Hey Simon, fantastic explanations and excellent visuals to go along with! You mentioned at the end that the code is up on your github, but I wasn't able to find it. Am I looking in the wrong spot? Cheers!
I probably forgot to upload it, lemme go find the code and do that.
@@simondev758 Not to come off as rude, but did you upload it yet? I also couldn't find it and I'd be super interested...
@@simondev758 Just casually dropping by to remind you to upload this code. Cheers :)
Thanks for the intuitive video. May I know how to determine the necessary number of points to render a distance field of a scene?
Most implementations set a max # of iterations; usually about 100 or so gets you a good result, sometimes a lot less depending on the scene.
@@simondev758 at each point/iteration, we have a different set of scanning angles around the point, right? So how about the scanning resolution per sample point usually?
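To illustrate the iteration cap mentioned above: in standard sphere tracing each pixel fires a single ray (there is no fan of scanning angles per sample point), and the cap only bounds how many steps that one ray may take. A small Python sketch with a toy scene and illustrative names, showing why rays that graze a surface take ever-smaller safe steps and spend more of the budget than head-on rays:

```python
import math

def steps_to_hit(origin, direction, sdf, max_steps=100, eps=1e-3, max_dist=100.0):
    """March one ray; return the number of iterations spent, or None."""
    t = 0.0
    for step in range(1, max_steps + 1):
        p = tuple(o + d * t for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return step        # hit: report how much of the budget was used
        t += dist              # safe step: nothing is closer than dist
        if t > max_dist:
            return None        # ray left the scene
    return None                # budget exhausted before hitting anything

unit_sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0

# A head-on ray converges in a couple of steps; a ray grazing the
# sphere's edge creeps up on the surface and needs far more iterations.
head_on = steps_to_hit((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), unit_sphere)
grazing = steps_to_hit((0.0, 0.99, -5.0), (0.0, 0.0, 1.0), unit_sphere)
```

This is why the "right" max iteration count depends on the scene: silhouettes and near-misses dominate the cost, not the surfaces viewed straight on.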
I was always wondering about some effects in Inque - Ooze. Now I think they used a lot of ray marching to make the visuals in that demo.
Coding is not my strong suit, but I'll have to if this can be integrated into Tooll3.
This reminds me dreams on Ps4
Looks like a new face is joining Inigo Quilez on the graphics Mt. Rushmore!
Nah, that dude is king.
and I will find my way back there too!
Great Stuff!