So. I open blender up, delete the default cube......then what?
Then delete default light and then default camera. Finally, delete Blender. Start again.
Then you add a new cube.
Simple, he can’t make any more videos because, well… you deleted him :(
then you rewatch the whole tutorial again step by step
@@sicfxmusic 🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣
he's just using blender to teach you linear algebra
dont tell them
@@DefaultCube You better knock that shit off..!
It's working!
DAMNIT ALGEBRA you fooled me once again
I understood this about as well as my linear algebra class, taught by the nuttiest Ukrainian ever to enter the US. I still don’t know what an eigenvalue is. Oh, and 20 years of engineering later… I never needed to.
9:05 and _this_ is why I want Repeat Zones in the Shader Editor
I'm not smart enough to figure out how to fix this repetition problem myself. How would you actually do it without adding all those step nodes manually? This parallax effect is so mind-blowing I really want to incorporate it in my work
@@samfellner honestly you're better off just using actual displacement instead of doing all this lol
this is only useful if you're really serious about cutting down render times
To replicate this with displacement mapping I guess I'd need one triangle per pixel. That scales reaaaally fast: a 1K×1K texture would need a million triangles, multiplied further if the texture repeats.
Yeah, sure, I probably wouldn't need one triangle per pixel in most cases. But even outside of games, for real-time backgrounds on volume stages and the like, I would love a more user-friendly approach to occlusion mapping, collapsed to a single node. It would save so much effort. :)
@@Denomote displacement can really eat up your VRAM/RAM usage and destroy render times. This is really useful. I'm thinking there's probably a way to implement it using OSL; that way you can just use a for loop.
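For readers who want to try the loop idea, here is a minimal CPU-side Python sketch of the layered parallax march that the node tree unrolls by hand (the same loop you could write in an OSL script node). `height_at` is a hypothetical sampler standing in for the image-texture lookup, and the tangent-space view vector is assumed given:

```python
def parallax_uv(height_at, uv, view_ts, depth_scale=0.1, layers=20):
    """Steep parallax mapping sketch: march along the view ray through
    'layers' depth slices and return the offset UV where the ray first
    dips below the height field.

    height_at(u, v) -> float in [0, 1] (1 = surface, 0 = deepest)
    view_ts: view direction in tangent space (x, y, z), with z > 0
    """
    vx, vy, vz = view_ts
    # UV shift per layer, projected from the view direction
    step_u = vx / vz * depth_scale / layers
    step_v = vy / vz * depth_scale / layers
    u, v = uv
    for i in range(layers + 1):
        ray_depth = i / layers              # 0 at surface, 1 at bottom
        surf_depth = 1.0 - height_at(u, v)  # depth stored in the map
        if ray_depth >= surf_depth:         # ray went below the surface
            return (u, v)
        u -= step_u
        v -= step_v
    return (u, v)

# Flat height map at full height: no offset, first sample already hits
print(parallax_uv(lambda u, v: 1.0, (0.5, 0.5), (0.3, 0.2, 0.9)))
```

Each iteration of this loop corresponds to one of the manually duplicated step-node groups in the video, which is why a real for-loop (OSL, or a future shader repeat zone) would collapse the whole tree.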
Think how useful that could be: you could set it to use the view distance to determine the number of "layers" for the parallax. I bet that sort of dynamism could be great for optimization. I already use view distance to scale the amount of detail in procedural materials, which I at least *think* increases performance.
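The distance-based layer count suggested above is a simple linear remap; a hedged sketch (function name and near/far defaults are made up for illustration):

```python
def layers_for_distance(distance, near=1.0, far=50.0,
                        max_layers=32, min_layers=4):
    """Scale the parallax layer count with view distance:
    full quality up close, cheap far away."""
    t = (distance - near) / (far - near)
    t = max(0.0, min(1.0, t))               # clamp to [0, 1]
    return round(max_layers + t * (min_layers - max_layers))

print(layers_for_distance(1.0), layers_for_distance(100.0))
```

In node terms this is just a Map Range node clamped between the two layer counts; the hard part in Blender is that the layer count can't actually change the number of unrolled steps, only blend between precomputed ones.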
Extra tip: use a white noise texture instead of the discrete depth checks. It reduces the number of nodes and gets rid of the stepping, at the cost of looking a bit noisier.
Classic “dithering”. Just bear in mind, for anyone who does this: it's NOT a blur method.
Dithering can mess up displacement and height map outputs.
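For concreteness, the white-noise trick above amounts to jittering each sample's depth by up to one layer, so the hard stair bands turn into noise the eye reads as smooth. A minimal sketch (names hypothetical, `rng` injectable for determinism):

```python
import random

def dithered_depth(layer_index, layers, rng=random.random):
    """Instead of sampling at fixed slice depths i/layers, jitter the
    sample by up to one slice. Banding becomes unstructured noise."""
    jitter = rng() / layers          # white noise in [0, 1/layers)
    return min(layer_index / layers + jitter, 1.0)

# Fixed slices would be 0.0, 0.25, 0.5, 0.75; each gets nudged randomly
print([round(dithered_depth(i, 4), 3) for i in range(4)])
```

As the replies note, this only disguises the quantization; it does not reconstruct the true intersection, so it is unsuitable for data outputs like height or displacement.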
😮 I remember this trick in making good looking refraction.
Couldn't you improve that further with a gradient texture?
GOAT of blender still on the block
"get it...? BLOCK?"
I finally understand how parallax occlusion mapping works.
Thanks for motivation. I was planning to delete blender. Now I did it.
kkkkkkkk🤣🤣🤣🤣
🤣
I wanna cry. I understood nothing!! But, I really wanna use parallax occlusion, because my PC isn't strong enough for displacement maps!
This was still a very detailed tutorial, so I will probably get it after watching it a few times. It's easier to understand things when I have no choice but to understand them to complete a project.
This is something I ALWAYS wanted. In real-time rendering engines you can have "per-pixel displacement mapping", but in Blender, for you to actually see the displacement, you need to subdivide the heck out of your mesh, since displacement works per vertex. I always found that silly and inefficient, so this technique is AMAZING for when subdividing a plane ad infinitum is just not ideal
Well, you can just enable adaptive subdivision, which is per-pixel
@Concodroid That still creates geometry though; this genuinely does not. You get real depth with no added geometry, which is great for large scenes. For example, this is how video games fake the interiors behind windows in large cities
@@sebastiangudino9377 No, I know; it's just that this approach has limitations too. Glancing angles suffer with this technique. It does best when you're looking top-down at a flat plane; adaptive subdivision works best with something like landscapes
WHOOAAA !!!
But seriously, great tutorial! Relatively deep subject but well explained; even though we might have to take a few steps back a few times to figure things out correctly, you gave us a precise and concise explanation. I just watched a video on the colour perception of jumping spiders, and all to say, it is quite a wonder the things we can manage to do with the information we can manage to perceive :)
Doesn't that break when you rotate the plane? You need to transform the incoming vector into texture space, i.e. take the dot product of the incoming vector with the normal, tangent, and bitangent. Although I'm sure you know that; I'm guessing there is a part 2 🙂 Be warned the technique doesn't work on curved surfaces anymore: Blender clamps the normal so it can't point into the plane's surface, because that messes up EEVEE Next. That caused me such a headache trying to figure out what was wrong!
Yep, just multiply by that matrix - the one I linked in the description has that
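The transform discussed in this thread is just three dot products, i.e. multiplying by the transposed TBN matrix. A minimal sketch, assuming the tangent frame vectors are unit length and orthogonal:

```python
def to_tangent_space(v, tangent, bitangent, normal):
    """Project a world-space vector onto the surface's tangent frame.
    Each component is a dot product with one basis vector, which is
    the same as multiplying by the transposed TBN matrix."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# For an axis-aligned plane the tangent frame is the identity basis,
# so the vector passes through unchanged:
print(to_tangent_space((0.3, 0.2, 0.9),
                       (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```

In nodes this is three Dot Product nodes combined into a vector, fed by the Tangent input and a cross product for the bitangent.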
After I plugged the Geometry node's Incoming output into the vector input, I went no further; that was pretty cool-looking just doing that. Thanks as always for teaching, and I will try to finish once I watch it again and again.
A couple of years ago I followed a similar tutorial showing how to make fake windows with rooms behind them. I ended up with a project to build the rooms (with wall decorations, lighting, etc.), which put out a node set, and a shader template to modify with the node set.
Thank you so much I’ve been trying to find out how to do this forever!!!
BEST Parallax Occlusion Tutorial ever!
The first time I saw parallax mapping was in F.E.A.R., in the decals on damaged walls. It was one of the coolest things to see, because it was so much detail for something that used to be a black dot.
This guy is the best blender youtuber
Would love to see your take on shell texturing, been messing around with ways to do it in geometry nodes.
oh finally a great quality tutorial! I am joking you are the best.
Man, that's just so much you've given us. Thanks a lot; I'll have to rewatch it several times to fully get it, I think
I was waiting for this video
'the man, the myth, the legend, the mathematical wizard'
Wow, amazing stuff! Thanks a lot for sharing it! A big ciao from Italy and Long life to Blender! :)
Have you seen anisotropic cone step mapping? I would be surprised if it's possible to make in nodes, but it is supposed to often be both faster and higher quality. It uses scaled cones centered on each pixel instead of vertical layers, and requires a preprocessing step.
There's also NVIDIA's relaxed cone step mapping, which looks similar except it lets the cones intersect the geometry and adds a binary search at the end.
You forgot to set the normal and roughness maps to Non-Color instead of sRGB (I assume you know to do that but just forgot in the moment). Normal maps and roughness maps are not interpreted correctly when set to sRGB, so the rocks at the end look a bit weird. Cool video though; I wish Blender just had POM support by default, like most game engines, where you just plug in a height map.
Great explanation for something I considered sorcery when I saw it used in 3D.
I like how my brain turned off for every single thing except the skillshare ad.
I wonder if you could do a binary search instead of buckets.
I don't know Blender well, so I can't say, but it would help mitigate the blockiness.
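The binary-search idea is the standard refinement step in full parallax occlusion mapping: after the linear march brackets the slice where the ray crosses the height field, bisect between the last two samples instead of adding more layers. A hedged Python sketch, with `height_at` and `uv_of` as hypothetical samplers:

```python
def refine_hit(height_at, depth_above, depth_below, uv_of, steps=5):
    """Bisect between the last sample above the surface and the first
    one below it. Each step halves the error, so 5 steps give the
    precision of 32 extra linear layers."""
    for _ in range(steps):
        mid = 0.5 * (depth_above + depth_below)
        u, v = uv_of(mid)                     # UV along the view ray
        if mid >= 1.0 - height_at(u, v):      # still below the surface
            depth_below = mid
        else:
            depth_above = mid
    return 0.5 * (depth_above + depth_below)

# Flat floor at depth 0.5: the bisection converges toward 0.5
d = refine_hit(lambda u, v: 0.5, 0.0, 1.0, lambda t: (0.5, 0.5))
print(round(d, 3))
```

A pure binary search from the start (as a later comment suggests) can miss thin features that a linear march would catch, which is why the usual recipe is coarse march first, bisect second.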
First thing that pops to mind: convert the height map into something akin to a 3D SDF, which would optimise the number of steps needed for each fragment as well as the accuracy.
Man, I was looking for this tutorial, and I'm also looking for a lot of other tutorials by you on geometry nodes, like how to randomize hair thickness
I was following the steps until 6:52, when a group called Depth suddenly appeared, and I'm not sure how that was created. Help!
It made my head hurt but it was well worth it. The longest part for me was the 7 hours I invested in ZBrush to create that texture lol
Instead of doing the "drill check", wouldn't a binary search be better? Start at 0.5, then go half the distance in the direction it hints.
Once again, blow my mind
Thanks for posting this! That forum is a wealth of knowledge
How do you add more than one POM texture in a .blend file, since multiple materials share the same height map group?
This method has some limitations, but for some cases it's enough. Thanks for sharing
Is there a way of blurring the edges of an image texture into each other to make a fake seamless texture with nodes? Might be a cool experiment
So let me see if I'm understanding this correctly, because the difference between parallax mapping and displacement mapping is confusing.
Traditional displacement mapping actually displaces the geometry. I'm not sure about Cycles/EEVEE, but in Octane Render it displaces the surface at render time rather than actually displacing the polygons themselves. That's how you can have high-quality displacement with a low-poly model. I've always seen this as a great way to get height detail without overloading a scene.
It seems parallax mapping imitates displacement mapping without actually displacing anything, kind of like how bump/normal maps create the illusion of surface detail, but when you look at the edges of the model it's still smooth and flat. This parallax method would similarly break down when you're looking along the tangent of a surface. I'd imagine it's less computationally expensive though, so it'd render way faster. Interesting. Nice tutorial!
Funny, for me using a displacement texture is heavier than this method. Is that just because displacement uses experimental subdivision?
It looks kind of wonderful, but I have serious concerns about how much longer renders will take with a setup like that. If only it were a sort of low-level processor-instruction node...
I totally got everything covered here!
Your ability to teach is phenomenal. Thank you for being an inspiration.
I really want fragment shader kinda setup in blender. So then I can for loop through all the iteration easily.
Really cool. It got me wondering though: could we get rid of the "layered" effect somehow? Like calculating what the normal should be in between, based on the previous and next layers?
That's what parallax occlusion mapping does; it's also the difference between POM and steep parallax mapping. I'm not sure if he implemented this.
Looks great... Now... Is there a plugin that gives me that with a single node? 😅
I mean, if that node tree works for any plane, then it should be collapsible for ease of use, minimizing the risk of user error?
Also, doing that operation 20 times makes me wonder: is there no way to do for-loops in Blender?
I’ve spent some time trying to use this technique to represent the windows of a building, similar to the ones in the Spider-Man games. Any ideas or pointers you’d be able to share?
Hey, I saw that your Blender speed while making animations is very quick and has good quality. What laptop or PC do you use?
There is a lot of talk about Z-axis coordinates here; does that mean this only works on surfaces that are flat and horizontal?
This madlad got 'i can remap your life' kind of energy
❤nice tutorial!
This is the equivalent of "yeah, I'm a visual learner" in math class when learning about vectors
Can you try to release a .blend file of the node setup?
Bro you're a genius
Awesome as always, man!
damnn thats like octane ggs dude!
Must say this is quite genius! amazing thnx
You are a god who walks among us.
So Enjoyable
Okay so, what's the actual use? Does it save performance?
Okay, that's nice, but how can we write this data to the depth buffer?
At 8:36 he did not connect the 0.8; he jumped from 0.6 to 1 in the comparisons
Uhh, I'm sorry, I can't understand: what is the "Depth" group node? How do you make that?
So you've achieved POM; now how about PDO with shadows, so it interacts with other objects and doesn't look flat upon intersection?
Self-shadows should be possible, but I don't think you can have correct shadows from other objects without depth offset
This is nice, thank you
Amazing man, I crave node shenanigans
Doing loops in Blender's node editor is a nightmare. We have the OSL script node available, but then we lose the ability to run on the GPU 😅
😮 Why use a bump or normal map instead of this parallax?
big heart for jordy
I've used Blender for ages and never checked if this was possible. Granted, the setup is too complex for something I'd use normally. It would be great if this were built in!
Finally you upgraded the tutorial..
Hard to tell whether the guy in the video is Jon Snow from GoT or Pedro Pascal from The Mandalorian
Does anyone know if the Ray Portal node can be used to offset a texture's pixels, like UE5's pixel depth offset?
Ray marching shader next?
Very cool technique!
i got whiplash at 9:04
I only understood the skillshare ad😢😢
Bravo sir!
Doing stuff in steps like this seems weird.
I think we need the node equivalent of Calculus.
Does it affect render time compared to using a height map with lots of geometry?
nice explanation, thanks 🙏
That was amazing
Imagine someone decides to start Blender and watches this video first lol
great! Thank you!
I have tried to do this well so many times in blender
Neat
😅 One breath at a time. Thanks a lot, saviour!! 🎉
something is wrong with this implementation i think? it seems to change too drastically when camera view angle changes?
We really need the ability to loop in shader nodes like in geometry nodes, that and the ability to pass a texture parameter into a node group
Bravo
This just screams for a loop. Isn't there a Loop Node for Shaders like the one in the Geometry Nodes?
Why did you delete your realistic ice tutorial? Post it back 😭
how do you know? i have been searching for this a few days now
You could say this is like a ray marching algorithm but with Blender geometry nodes
How can we bake/export it?
One question: Do you speed up your videos?
Thx for sharing!
thank you!
It only works on eevee right?
Thank you very much :D
For those that want to listen to this normal speed, change to .75 playback.