This is just on another level compared to what most people teach. I wish your channel would grow ASAP, man. Pure, massive skill.
Thank you :D Much appreciated
I like your warning about the spiking complexity of this tutorial in the middle of the video lol
Thank you for sharing your experience with us!
One of the best channels on shader programming and Unreal I've seen. Thanks for the huge amount of information.
very clear explanation of difficult concepts, great work!
I know this is very deep, esoteric content, but for those of us interested in the area, this is invaluable learning material. I think you should create a separate tier for Patreon that includes example files, for all the obvious reasons.
Now I tend to always have the packed content on Gumroad :)
At the time I couldn't be bothered xD
@@VisualTechArt what is the value add for something like this compared to exponential height fog?
This looks great, thank you for sharing this method!
It's a pleasure for me :)
@@VisualTechArt I am looking forward to learning more from you.
Another note on polar coords for folks learning from this extraordinary tutorial:
There is still a very sharp seam where polar UV position 0 meets 1.
Is there a way to soften it, or to add a polar rotation that turns all the spheres so this sharp seam isn't disturbing for our camera or scene (for example, when everything along the +Y axis is backstage and not important)?
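For anyone wanting to try the seam-rotation idea, here's a rough sketch in Python of what spinning the polar mapping looks like (the function name, parameter, and rotation axis are illustrative, not nodes from the video):

```python
import math

def spherical_uv(p, seam_angle=0.0):
    """Map a direction vector to spherical UVs.

    seam_angle (radians) spins the mapping around the vertical axis,
    so the U=0/U=1 seam lands along a chosen world direction (e.g.
    towards the 'backstage' area where nobody will look).
    """
    x, y, z = p
    # Rotate around Z before converting to polar coordinates
    xr = x * math.cos(seam_angle) - y * math.sin(seam_angle)
    yr = x * math.sin(seam_angle) + y * math.cos(seam_angle)
    r = math.sqrt(x * x + y * y + z * z)
    u = math.atan2(yr, xr) / (2.0 * math.pi) + 0.5  # seam where atan2 wraps
    v = math.acos(max(-1.0, min(1.0, z / r))) / math.pi
    return u, v
```

With seam_angle = 0 the seam sits along -X (u jumps from 1 back to 0 there); feeding a different angle moves that jump wherever it's least visible. It doesn't remove the discontinuity, it only hides it.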
Absolutely right :)
Your videos make me question reality itself, whether we live in 2 dimensions or otherwise. And it frightens me.
Maybe we live in 2 almost completely identical 2D worlds at the same time, which we perceive through each individual eye, and our brain tries to make sense of it by hallucinating one 3D world
I mean... just, wow! Take a bow, Sir. Amazing job.
Really cool! Great you pulled that off! Looks great! I'm interested in the shader complexity though 😁
Great tutorial, very clear explanation of your thought process
this video keeps educating after 2 years. just pure gold! Do you have any solution for light shafts, in the directional light for path tracing? You are the only one out there that can provide a solution...Thanks again for your great tutorials!
No idea about light shafts, sorry xD
This is some great stuff, Thank you!
Thanks man! This shader was a new thing for me too :D
Thank you for sharing this tutorial. it was helpful for me.
Looks pretty cool! But I have some concerns. The number of texture fetches and the amount of instructions for the UV computation mean we can only use this effect on PC, not mobile. In that case, why not just sample from a 3D noise texture? I assume blending three noise octaves would be enough.
Fantastic solution ! Thanks !
More math, more math and explanation what is what for !
It will come! :D
astonishing!!
god damn it dude this is some forbidden dark magic :O
This is a really high quality tutorial, thank you. I also have a question: at 24:11 you multiply the sphere scale by 0.6667 and I don't know why, or where you got that number
Thanks! I have to be honest, I don't remember very well what that was about... From what I can recall, it was the value that gave the best overlap between adjacent spheres :)
I've been looking for a low resource volumetric fog alternative for VR. This might be the solution I'm looking for.
magnificent !!
Ahhhh nice. Will give it a try within the next few days. It's holiday season and I have plenty of time now :D
Thanks for the video!
Edit: I have finished the video and I accept it as voodoo magic :D
One question...
can we also make requests for upcoming videos?
Ahahahah you killed me xD Trust me, this stuff is absolutely doable by anyone ;)
Of course you can make requests! I'm really looking into building a nice community around this channel. I have a rough list of the topics I want to talk about, but I'm super open to suggestions :)
@@VisualTechArt Would be great to have a Discord Server for discussions, tutorial requests or questions^^ Fantastic tutorial :D
@@QuakeProBro That's a great idea :) I'll think about something
Yep, it's possible but still - way above my head.
I can copy your work, but not figure it out myself. Maybe one day. I'm not a technical artist, but I am interested in what you guys are doing :)
About the request. Currently I'm working on a city and, to breathe some life and depth into it, I added fake interiors to the windows. Nothing special so far, and this part isn't hard at all since there are already predefined functions in Unreal, but the problem starts when you want to stretch one fake interior over multiple windows (instances).
For example: one building is built out of one instance, copied over and over again, and the glass has the interior cubemap shader on it.
The problem is that every window now displays a unique room of its own, but in reality, especially for high-rise buildings, one room has multiple windows.
So the question is: how can we manipulate the UVs per instance so that the fake interiors make sense?
@@aukehuys Ah, I actually worked on some fake interior stuff recently, and I was already planning to discuss it sooner or later. I'll get back to your specific question later, since I have no idea how the built-in function in UE is done (so far I've always built the function from scratch, to fit the specific requirements of the game), but it shouldn't be anything too crazy.
Please make one more tutorial on this fog, where we can learn to convert it to height-based fog, and how to make the fog interact with all light types.
There is also an issue with the sphere projection: when I look straight up or straight down, the fog UVs seem pinched at a single point.
I'm not sure what you mean by converting it to height fog, to be honest xD
Yes, the spherical mapping has problems at the poles, you can fix that by blending it with a planar mapping just in that section
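The pole fix can be sketched like this (illustrative Python; the blend threshold is a made-up value, and blending the U coordinate this naively would still misbehave near the seam — it only shows the idea):

```python
import math

def pole_safe_uv(d):
    """Spherical UVs that fade into a planar (top-down) projection
    near the poles, where the spherical map pinches to a point."""
    x, y, z = d
    r = math.sqrt(x * x + y * y + z * z)
    u_sph = math.atan2(y, x) / (2.0 * math.pi) + 0.5
    v_sph = math.acos(max(-1.0, min(1.0, z / r))) / math.pi
    # Planar mapping: project the direction onto the XY plane
    u_pln = x / r * 0.5 + 0.5
    v_pln = y / r * 0.5 + 0.5
    # Blend weight: 0 away from the poles, ramping to 1 close to them
    pole = abs(z / r)
    w = max(0.0, (pole - 0.94) / (1.0 - 0.94))
    return ((1.0 - w) * u_sph + w * u_pln,
            (1.0 - w) * v_sph + w * v_pln)
```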
@@VisualTechArt I was thinking about replacing these spheres with boxes.
@@VisualTechArt By height fog I mean something similar to Unreal Engine's official Exponential Height Fog, which lets us limit the height of the fog by decreasing the height falloff.
@@VisualTechArt What do you think about light interaction with the fog?
Sadly in UE you don't have direct access to light data, so the only thing you can do is something very bespoke to your project at best; you couldn't make something general enough for everyone
Just one note for keeping the noodles clear: you can use a node named "Add Named Reroute Declaration Node" to avoid many long connections, intersections and one-to-many noodle splits.
That's something they added only in UE5 I'm afraid :D
@@VisualTechArt 4.27 has it too
@@unumpolum Oooh I need to have a look then, I didn't realize it, thanks!
Great solution, your math is strong! XD The only thing I dislike is the amount of texture samplers, which makes it too expensive for my needs.
In my defense, there's no such thing as a volumetric effect that isn't expensive xD
I'd love to hear some ideas about how to fake the interaction with the scene's lights, I need to find a way to do exactly that for my current project!
That's pretty complex I think 🤔 I mean, you could pass the lights data to the Post Process and try to approximate their effect on the fog
@@VisualTechArt Hmm, yeah, you could have a blueprint track the colour/position/intensity of a light and feed that into parameters in the fog post process material, but that's not really scalable beyond probably one or two point lights... if you or anyone else has any thoughts on how this could be done let me know!
I have an issue with scene depth and translucent materials. They get ignored and look strange because they end up more transparent as a result. Is there a workaround for that? Setting the blendable location to Before Translucency works, but then translucent objects don't get any fog applied
Translucents are always a pain in the ass :D Since they get rendered in a separate pass after the rest of the scene, and they (of course) can be transparent, they don't write to the depth buffer, hence the problems. An expensive solution could be to calculate the very same fog in the translucent materials, so they match the rest of the scene. A less expensive one could be to just give them a flat tint that matches it and hope it's good enough that nobody notices ahahahaha :D
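The "same fog in the translucent material" idea, reduced to its core (a plain exponential falloff is assumed here for illustration; the video's actual fog is fancier, this only shows the matching principle, and all names are mine):

```python
import math

def fog_tint(base_color, fog_color, distance, density):
    """Blend a translucent surface's colour towards the fog colour
    using the same falloff the post-process fog uses, so the surface
    matches the fogged scene behind it."""
    f = math.exp(-distance * density)  # 1 at the camera, 0 far away
    return tuple(f * b + (1.0 - f) * fc
                 for b, fc in zip(base_color, fog_color))
```

Each translucent material would evaluate this with its own pixel depth, paying the extra cost per material; the cheap alternative in the reply above is just a constant tint.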
Ah ok thank you
I have a problem with objects vibrating when antialiasing is enabled, but the scene looks ugly when I disable it
Try to set the Post Process to Before Tonemapping
Wonderful! The project you're applying this fog to, is it a custom project or one from the Marketplace?
You can find it in the Marketplace :) it's called Medieval Game Environment or something like that
Thanks for the detailed tutorial. I'm trying to apply the fog effect before the tone mapper, because it has to work with the cel shader that I made, but it seems Niagara particles don't get affected before the tone mapper.
I'm working in UE5 with the Lumen lighting system, and I have 2 more questions regarding the post-process material.
Q1: I'm trying to get the Sun's illuminance value with the built-in function (Atmosphere Sun Light Illuminance On Ground), but this function only provides a global value. Is there any way to occlude completely closed areas that sunlight isn't reaching (for example a completely closed room)?
Q2: I'm unable to get the illuminance value for emissive and other light types like point, spot, and rect. Is it possible to get these values?
For the Niagara Particles it may be because they are Translucent, so they get rendered after your PP :)
For Q1: In a deferred renderer there's no way to get the shadow projection data, but for a case like a completely closed room you could just (if you have a finite and known number of them) define some boxy distance fields to mask out the fog.
For Q2: For emissive there's a trick you can do, but it's prone to artefacts, plus it's impossible to explain here. For lights, you could export all their data to a MaterialParameterCollection and, again, define distance fields with their shape to directly add them to the PP... To get an idea of how to do this, watch the second part of my video on Cel Shading :)
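To make the "boxy distance field" idea concrete, here's a minimal sketch (Python for readability; every name and the falloff value are illustrative — in the material this would be a few distance nodes fed by a MaterialParameterCollection):

```python
def box_mask(p, center, half_size, falloff):
    """Soft mask that is 1 inside an axis-aligned box and fades to 0
    over `falloff` units outside it."""
    # Distance from p to the box surface (0 when p is inside)
    d_sq = 0.0
    for pc, c, h in zip(p, center, half_size):
        q = abs(pc - c) - h
        if q > 0.0:
            d_sq += q * q
    d = d_sq ** 0.5
    return max(0.0, 1.0 - d / falloff)

# Carving fog out of a closed room (all values hypothetical):
#   fog_density *= 1.0 - box_mask(world_pos, room_center, room_extent, 50.0)
# or adding a fake light: fog_color += light_color * box_mask(...)
```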
SICK
While building this post-process material I can see an edge-shaking problem when it's applied to the PostProcess Volume.
What causes it (an AA problem)? Do I have to set something up in the material properties?
FIXED: Material Properties -> Blendable Location: Before Tonemapping
I would definitely have gone with a simpler approach on account of having less mathematical knowledge. I would have had two spheres, where as one fades out the other fades in, and when one's fully faded out its size becomes smaller, then grows as you walk forward or shrinks as you walk backward. For strafing I would have used a dot product to detect the angle of motion compared to the camera's direction and just added an offset to their rotation. I don't know how it would look and can't exactly test it at the moment, but I'm sure it wouldn't turn out as well as the results in the video.
You found some clever solutions with the grid and the interpolation between cells. I think I would've tried to use a 3D noise function and transform the sphere coordinates into world space to sample the noise with. For the parallax effect you could rotate each sphere, offset it in 3D space, and even multiply the translation by a factor so it moves at a different speed than your actual movement
Thanks! I thought about that too, but I think with this sphere thing it's cheaper to just have a texture... A good 3D noise is usually quite expensive and I would need to sample it for each adjacent cell anyway. I kept the texture because I saw that you can't notice the repetition, and even if you could, you can use a cell noise to get random data for each cell to offset/flip/whatever the texture and add more variation :)
@@VisualTechArt I see! Thx for the detailed answer! But I have to wonder if so many texture lookups really are cheaper. That's where shaders get tricky to me, it's hard to measure that stuff and I might very well be biased by my intuition
@@Teflora It depends on a lot of things ahahahaah it IS confusing! For example, if you want to evaluate a very expensive function (like a good quality 3D noise), it may be cheaper to just sample a texture, because there is a threshold where the wait for the fetch becomes smaller than the time the GPU takes to calculate the noise output. Moreover, the GPU doesn't fetch just one pixel at a time, but loads a block of them into memory (16x16 I think, I'm not sure), so if your UVs are unbroken and the texture is small, you actually have a good chance that the pixels you want to fetch are already in memory, which can make the fetch pretty fast.
I have to say that in this case I didn't do any performance checks and I have no idea if using a texture is better than using a noise function. I just stuck with the texture to have a chance to explain spherical mapping ;)
That fog is amazing. I watched the whole video and it's really great. That brings me to something: I'm about to write the thesis for my degree (European country), and I'm looking for interesting things to write a thesis about. Do you have any suggestions for me? Thank you!
Thanks! Thesis on which subject? Math?
@@VisualTechArt something related to computer graphics/real-time rendering stuff. It can be something in a game engine as well, for example Unity
@@VisualTechArt I forgot: I am a computer scientist, so I am a coder.
@@mr-fluffy469 Well then... I thought about it a bit. One option could be to create a renderer that uses RayCasting from scratch, to play directly with the math you studied; it can be a never-ending topic and you can decide where to stop. For example, you could transition from RayCasting to RayTracing to PathTracing, render a scene made of analytical objects, play with material models, etc. That could include transformation matrices for the camera and objects, and quaternions for rotations. You could go a bit fancier and talk about RayMarching, which opens the super cool topic of signed distance fields, volumetric effects, and parallax mapping with a lot of different applications like Cone Step Mapping. Maybe compare RayCasting and RayMarching with pros and cons, or do a mix of both for optimization; there's no limit :D
Another cool topic could be kernels: image post-processing with different filters, to do edge detection, effects such as blurs and sharpens, and feature recognition too. That could take you to convolutions, which are used a lot in Machine Learning, or to Photogrammetry, for the creation of the point cloud needed to triangulate the shots.
Let me know if any of this sparks your interest, I can try to think about something else eventually.
When I multiply the fog color by 1500 my screen is just white. If I lower the value to 0.something it seems darker, or further away; if I multiply by 10 I only see about 2 meters in front of me and the rest is bright white. Any idea why?
It depends on the lighting you have in the scene, you have to balance it out for your specific case :)
@@VisualTechArt ok thx
Great Tut! Is there any way you can share the fog material for the rest of us to try / test out ?
Thank you! If I do that, I don't usually make it free though :D
@@VisualTechArt IMMA PAY FOR ITTTTTTTTTTTTTTTT
I couldn't even repeat it after you, even though I did everything the same way and understood half of it. But when I look up or down, the fog narrows to a point and behaves incorrectly
No, that's expected: it's an artefact of the spherical projection :) It can be corrected by adding planar caps
I have one issue, and that is when looking up you can see the textures converge at the cap of the sphere. Is there any way to get around this? I am not the most adept at math, though I am trying to learn.
You can avoid that by using another type of mapping for the poles :) Maybe my video about triplanar mapping can give you some hints ;)
As a beginner, I found it exhausting just to copy it once 🤣👍 Deeply inspired.
Can you please tell me what the Round expression is doing in this function? I know what Round means, but not what it's doing for the material
It does what a standard Round does: it floors the value if the decimal part of the number is < 0.5, otherwise it does a ceil. In this specific case I chose this instead of Floor or Ceil to get a specific positioning of the grid cells.
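A tiny sketch of the difference, assuming Round behaves as described above (floor below .5, ceil otherwise):

```python
import math

def round_ue(x):
    """Round as described above: floor if the fractional part is < 0.5,
    otherwise ceil (unlike Python's built-in round(), which rounds
    halves to the nearest even number)."""
    f = math.floor(x)
    return f if x - f < 0.5 else math.ceil(x)

# Floor snaps a coordinate to the cell whose *corner* sits at the
# origin; Round snaps to the cell whose *centre* sits there, which
# shifts the whole grid by half a cell:
#   math.floor(0.6) -> 0
#   round_ue(0.6)   -> 1
```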
I've been trying to figure out how the formula is derived from the equations for 2 days, and really need a further explanation, especially about what C represents 😭 THX A LOT
Oh noooooo! C is the centre of the sphere, I wrote the initial system wrong! D: The second equation of the system should be (P-C).(P-C)=r^2 !!! Then I believe you can figure out the rest, I'm just solving for 't'
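For anyone stuck on that derivation: substituting the ray P = O + tD into the corrected equation (P-C)·(P-C) = r² gives a quadratic in t. A small sketch, assuming D is normalised (the names are mine, not the video's):

```python
import math

def ray_sphere_t(O, D, C, r):
    """Solve (O + t*D - C).(O + t*D - C) = r^2 for t.
    Expanding gives t^2 + 2*D.(O-C)*t + (O-C).(O-C) - r^2 = 0
    (the t^2 coefficient is D.D = 1 for a unit direction).
    Returns (t_near, t_far), or None if the ray misses the sphere."""
    oc = [o - c for o, c in zip(O, C)]
    b = 2.0 * sum(d * v for d, v in zip(D, oc))
    c = sum(v * v for v in oc) - r * r
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    s = math.sqrt(disc)
    return ((-b - s) / 2.0, (-b + s) / 2.0)
```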
So I recreated this in UE 5.0.3 and it's giving me an error about the input Position Offset (Vector3) from the Material Function.
Can you add a timestamp to give me a reference of where you're getting the error?
@@VisualTechArt The error is on the empty Input Position Offset at 26:37, connected from Sphere Scale 1. This error also occurs in Unreal 5.1
"Missing function input 'Input Position Offset'"
BTW Really great tutorial, appreciate all the time and effort that went in to making it :)
I just added a value node of 0 and the error is gone. It looks to be working correctly, but I'm not sure if this is the right way to do it
I tried adding a value of 0 and 1. Does this work with Lumen? My output was extremely white even though my color value was very close to black.
Also, your tutorial helped me understand how to implement formulas in the material editor. Coming from a non-math background, this really made me understand how to do it correctly. Really appreciate it.
Haven't tested it in UE5 and GI, sadly can't answer :(
In my country the math curriculum doesn't look like that: it's 90% algebra. They did some xyz stuff, but it's pretty basic, and without (z) depth, just x and y in 2D. So when it comes to technical stuff like this I'm a super dummy, even though it's basic. Thanks for clearing things up a little bit, and have a great day, nice person!
find this on 10k/hours +1 subs man!
Are these raycast sphere UVs cheaper than volume textures?
Haven't done a performance check, but they may be, depending on the context :)
Can it modify the distance of the fog?
Yes
wow maths for video games, respect it
im interested in the maths for videogames!!
As an exercise it's definitely interesting. It's just way too expensive to be a viable solution for actual production. You'd be way better off using a volume texture instead.
I thought that too at first, but you have to consider that in this case we have a known and consistent number of texture samples, avoiding loops and branches, which always tend to kill GPU performance. I'm not sure that raymarching through a volume texture would result in better performance for sure. Moreover, here I could easily keep the quality while using quite a small texture (512 or less), which increases the likelihood of fetching already-cached pixels, for example... I don't know, I would give it the benefit of the doubt ahahahah what do you think?
What object is this fog shader assigned to?🤣
No specific object, it's a Post Process material :)
@@VisualTechArt ok,thanks ,good work
@@VisualTechArt Thx very much!!!
we're not worthy of you lol
Doesn't work in UE 5
Extremely valuable stuff, thank you very much!!
Glad to read that :)