How can I render the game at a higher resolution so there are no blurry pixels?
Increase the screen percentage
@@VisualTechArt That's under the three-bars menu in the viewport, right? Does that apply to a packaged game?
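(On the packaged-game question: the viewport slider maps to a console variable, which you can also bake into the project's config. A sketch of the usual approach; worth verifying against your engine version:)

```ini
; DefaultEngine.ini -- render at 150% resolution and downsample to the screen
[SystemSettings]
r.ScreenPercentage=150
```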
At @3:08 I have the same three-colored image in the viewport as you have. But when I connect the ConstantBiasScale -> SceneTexture:PostProcessInput0 -> Mask (R G B), I get a tiny picture. The cube/sphere in the viewport is only about 10 pixels wide. What should the parameters be for the second ConstantBiasScale? I'm on 5.2. Thank you
It's left at its default values; at that point I'm reverting the initial transformations (the 0/1 to -1/1 remap and the mult(imgRatio) are reversed by a div(imgRatio) and a -1/1 to 0/1 remap).
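For anyone following along, the two ConstantBiasScale nodes are exact inverses of each other. A quick Python sketch of what the node computes (UE's ConstantBiasScale outputs (input + Bias) * Scale, and its defaults should be Bias = 1.0, Scale = 0.5, but verify in your engine version):

```python
def constant_bias_scale(x, bias, scale):
    # UE's ConstantBiasScale node computes (input + Bias) * Scale
    return (x + bias) * scale

# first node: remap [0, 1] screen UVs to [-1, 1]  (Bias = -0.5, Scale = 2.0)
signed = constant_bias_scale(0.25, bias=-0.5, scale=2.0)      # -0.5

# second node: revert with the defaults (Bias = 1.0, Scale = 0.5) -> [0, 1]
unsigned = constant_bias_scale(signed, bias=1.0, scale=0.5)   # 0.25
```

If your image comes out tiny or offset, one of the four Bias/Scale values is likely off.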
Followed your video to the letter, ended up with UV stretching around the fisheye effect and the entire scene rendered upside down. Any ideas of what went wrong?
Can't really give you an exact answer, but sounds like a wrong sign somewhere?
Both the Tangent and Sine nodes need to be set to Pi * 2 (about 6.28); he does it really quickly near the end but doesn't say it, so just watch the values as he places the nodes.
any solution?
Just as @Nemecys said, I had a flipped image when my tangent was at 6.28 and my sine at 1; I set the sine to 6.28 and it went back to normal.
Elegant !👏👏👏
this is a great tutorial and I appreciate you uploading it, it's great to see a fisheye lens effect done with postprocessing materials. I have to say, it would be much, much easier to follow this tutorial if you slowed down the video recording a lot and talked through what you're doing a little more. Even just saying the names of nodes you're creating and values you're changing would help a lot. Like at 1:47, saying "Let's change the range from 0 to 1 to -1 to 1," doesn't directly explain what's happening on the screen. I think adding something like "I do this by creating a ConstantBiasScale node and changing the bias to -0.5 and the scale to 2.0" would make it much easier to follow. There were a lot of times during the video where I had to rewind and try to pause at specific times to see the names of nodes you're making and values you're editing, which is a frustratingly slow way to follow a tutorial!
I think adding some cursor highlighting and avoiding overlapping wires would also make it much easier to tell what's happening on the screen: when wires are all bunched up or perfectly overlap each other, it's harder to see the actual logic flow.
I don't mean to be rude, there really is a lot of great info here, I just wish it were a little easier to follow!
Feedback is always welcome here, don't worry! Especially now that I'm trying to figure out the best way to present information without turning the tutorials into boring lectures!
It's a fine balance I have to strike with entertainment too :) (what's the point of making good informative videos if people can't stand me talking for more than 30 seconds straight?)
I'll make sure to communicate visually in a much clearer way while hopefully keeping a good pace, thanks!
Yeah, like, what are the values supposed to be on the second ConstantBiasScale? Mine looks nothing like his.
@@VisualTechArt I wouldn't worry about it, honestly. I don't find your videos to be for absolute beginners who can't follow a remap from normalized values to some other range and the effects this might have - they could play with the values and see for themselves if they really wanted. And since there's a real scarcity of resources for intermediate-level tech art out there, your channel is an absolute gem! There are plenty for beginners and even more for advanced/professional tech artists, but none for those in between. I would just stay focused on this population, as they appreciate your work a lot! Thank you, it's perfect!
AWESOME! Thanks for the video. Learned a lot from this.
How to make a polynomial distortion?
In a similar way, I guess :)
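To unpack that terse reply a bit: a polynomial (Brown-Conrady style) radial distortion scales each UV by a polynomial in the squared radius. The k1/k2 names below are the standard lens-model coefficients, not anything from the video:

```python
def radial_poly_distort(u, v, k1=0.1, k2=0.05):
    """Radial polynomial distortion: r' = r * (1 + k1*r^2 + k2*r^4).
    (u, v) is a UV centered on (0, 0); the sign of the coefficients
    decides barrel vs pincushion (conventions vary between distorting
    and undistorting)."""
    r2 = u * u + v * v
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * f, v * f
```

In the material this would replace the spherical remap step: compute dot(uv, uv), build the polynomial from scalar parameters, and multiply the centered UVs by it.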
A great tutorial! very informative!
Nice! That's a great addition to my camera fx post process material^^
I would love to see an implementation of rain on the lens like on the tp camera in Driveclub (properly calculating the refraction, reacting to bloom and blurring the end result etc.)...
Most games that have water on lenses do it in a pretty unrealistic way. I've tried my best to achieve something that somehow resembles Driveclub's implementation visually, but it always felt a bit off.
Your channel is awesome btw :D
That's something I could give a go when I'm looking for a real challenge! I actually kinda know how this type of stuff is done because I saw how a GFX programmer implemented it some years ago. I'm not sure it is possible to reach that level of interaction with lighting and other postprocesses just from the material editor though. I'm almost certain that to get it really good it would require some work on the shading model itself, we'll see... :D
@@VisualTechArt My approach to the light interaction was to draw the droplets to a render target. That render target was then sampled inside the post process material and used as the dirt mask inside the post process volume. It felt a bit like cheating and is not exactly what I want, but it works for now.
Unfortunately there is no other way to access the necessary buffers for that without modifying the engine (as far as I know). I once saw a blog post by somebody who did a custom lens flare and bloom implementation; that stuff is pretty interesting, but I don't think my body is ready for that kind of knowledge xD
@@QuakeProBro Nice! Just thinking about that a bit more, I think I would probably use Niagara to get a nice simulation of the droplets (and maybe directly render the particles to a texture, I don't know). Considering them as metaballs I think it would be cool so they would merge in a nice way. Then yes, in a Postprocess I would try to do some sort of raymarching through the screen (like screen space reflections do, for example) with an index of refraction to try and get the correct water feel.
Or maybe you can get away just by sampling the scene in a smart way... ;)
@@VisualTechArt I use gaussian blur on the particles to kind of merge them together if they get close. It’s maybe not as good as metaballs, but since I use the blur anyway, it is a nice side effect^^ The result is then drawn to the render target. I then sample the rt inside my pp material and generate the normals. In combination with the refraction node, I use them to distort and flip a blurred scene texture, masked by the original render target black and white values. In the end everything is combined with the dirt mask inside the pp volume as described in my comment above.
Oh before I forget, are there some news about a discord community server? I would love to share my thoughts and progress, so maybe other people can also benefit from it.
@@QuakeProBro I'll be announcing that as soon I have something, I'm not sure about using Discord though, I'm not familiar with the platform and not used to it, will see
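As a reference for the "generate the normals" step described in the thread above, a common approach is central differences over the grayscale render target, treating it as a height map (a numpy sketch; `strength` is an illustrative tuning parameter, not a name from the engine):

```python
import numpy as np

def normals_from_height(height, strength=1.0):
    """Build tangent-space normals from a 2D grayscale height array
    using central differences (edges wrap here; clamp instead if needed)."""
    dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    n = np.stack([-dx * strength, -dy * strength, np.ones_like(height)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

A flat height field yields straight-up (0, 0, 1) normals, so only droplet edges end up refracting the scene sample.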
Would it be possible to create a lens distortion with this for a sniper scope for example?
I suppose so, with some tinkering! That's actually a good application idea :D
I'm new to UE4; how do I open the post-process material editor?
It's the same editor as for other materials; you just have to change the material domain to Post Process.
Indeed, a brilliant job configuring this BP and explaining the logic. Alas, I have to agree with esopustorian below: for someone unfamiliar with the node names, how to call them up, and the bunched-up pinning, it seems difficult to implement. Any chance of offering this over the Marketplace as a turnkey plugin?
Sorry about that! I may do an example on Gumroad in the future :)
@@VisualTechArt Busy guy, I'm sure. Do let me (us) know if I could simply purchase your Blueprint. I'm developing content intended for Cosm, a giant 8K LED wall that's basically hemispherical. Much appreciated. Your math brain, not fair.
My bad, I'm new to this, but I got it all working. When I try larger FOVs, though, the corners break badly; is there a way to get the effect at larger FOVs?
What do you mean by broken? It should work fine at all FOVs.
@@VisualTechArt Can I contact you through Discord for a little help? I'll send a pic of what I mean. I'm on your Discord server; should I send it there or through your DMs? Let me know 👍
@@jones90667 You can freely write in the Discord Server!
@@VisualTechArt I did just now, hope it won't annoy anyone 😅
Thanks again. When I try to link the tangent to the divide node, it says "missing tangent input". Also, are there any easier ways to adjust the effect? Sorry, I'm not a programmer but a designer.
Can you point me at the minute of the video you're talking about? :)
Thanks so much for the video. Is there an easy way to make this same distortion concave instead of convex?
That's actually what I kept getting when I was trying to figure out the correct math! xD
So yes, it is possible, but I don't remember how right now, I'll take a look at that when I have some time :) Or let me know how you did it if you figure it out by yourself!
Thanks for the tutorial! Is there a way to quickly copy your node setup? I have a hard time following your pace, mate!
I don't usually make my nodes copy/paste -able because I'd like to push people to follow the video and learn, instead of assembling shaders with random stuff taken on the internet :)
You can still pause the video to have the time to copy what I'm doing! If there are parts where you can't understand what I'm doing let me know though, I'll help you :D
epic !
Hey Visual Tech Art! Great video. Is there any way, using this setup, to change the intensity of the effect (have an increased curvature of objects for a given FOV)?
I think you can get away with "tricking" the shader into thinking the camera has a wider FOV by multiplying its value by a scale factor, let me know if it works :)
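A minimal sketch of that trick, assuming you pass the FOV into the material yourself (the `intensity` parameter name and the clamp are my own additions, not from the video):

```python
def exaggerated_fov(camera_fov_deg, intensity=1.5):
    """Feed the material a scaled FOV so the spherical mapping distorts
    more than the real camera FOV would; intensity = 1.0 leaves it unchanged."""
    return min(camera_fov_deg * intensity, 179.0)  # stay below 180 so tan(fov/2) stays finite
```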
Sorry if I'm too late, but at 5:23 he adds a normalize node. Instead, plug the mask output into both inputs of a dot product node, then the result of that into a power node with the exponent set to 0.5. Then plug the original mask output into the first input of a divide node and the result of the power node into the second input, and use the result of the divide node moving forward. Everything else in the tutorial remains unchanged.
This does the exact same thing as the normalize node, except that now, via the exponent of the power node, you have control over the intensity of the effect. Be careful, because small changes create large differences. Also, smaller = concave, larger = convex.
Hope this helps someone!
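In other words, the chain works because normalize(v) equals v / dot(v, v)^0.5, so exposing that 0.5 exponent turns the length correction into an intensity knob. A quick Python check of the node chain described above:

```python
def adjustable_normalize(x, y, exponent=0.5):
    """dot -> power -> divide, as in the comment above. exponent = 0.5
    reproduces a plain normalize; other values scale the effect."""
    d = x * x + y * y          # dot product of the mask with itself
    length = d ** exponent     # power node; 0.5 gives the vector length
    return x / length, y / length
```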
@@bigmartin343 This worked like a charm. Thank you!
can someone put the nodes in a pastebin pls
Hello, nice tutorial video, but could you explain a bit how to use distortion coefficients in the process if I want to simulate a real fisheye model? Thanks!
This doesn't simulate the actual distortion profile of a real lens, if that's what you mean! It is just a spherical mapping, in the video I describe how the amount of distortion can be controlled :)
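For anyone wanting to see the general idea in code, here's a generic equidistant-fisheye remap over a perspective render; it's a sketch of the technique, not the exact node graph from the video:

```python
import math

def fisheye_sample_uv(u, v, fov_deg=90.0):
    """Map a centered output UV (-1..1) to the UV to sample in the original
    perspective render (equidistant model: view angle grows linearly with radius)."""
    half_fov = math.radians(fov_deg) * 0.5
    r = math.hypot(u, v)
    if r < 1e-6:
        return u, v
    theta = r * half_fov                          # view angle for this pixel
    r_src = math.tan(theta) / math.tan(half_fov)  # radius in the perspective image
    return u / r * r_src, v / r * r_src
```

The edges stay put while interior points are pulled toward the center, which is what produces the barrel look.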
12:12 I'm glad you explained "What the fuck allength is".
Will you be releasing this blueprint? (the 360cam one in the marketplace is like 200$, and not as good as yours...)
I may do it in the future! Thanks!
Very interesting! I tried to understand every node, but I'm not familiar enough with trig anymore ^^
I was searching for a way to intensify or reduce the effect; is it possible with your current system? Where could I add a divider?
Thank you! I mention that around 10:00, you essentially need to hardcode the FOV value the shader is using :)
@@VisualTechArt hey, I've tried to understand how but I can't do it ... :( Can you help me ?
Hello! I'm also interested in how to increase the distortion; I need the effect for a 360 panorama, to create a more immersive effect. I've tried to hardcode the FOV but with no results.
Mate, why do we Italians have this way of talking? xD
Anyway...
Great video
I want to make a game similar to Sonic Xtreme and launch it for the new ATARI VCS
I know someone already gave this feedback, but boy is it hard to keep up. I set the speed to 0.5 and still feel like the information is coming in too quickly.
Dude, my god... Talk faster, explain slower. Your explanation goes 300 miles an hour; I can't follow a damn thing. But when you start talking, it's like you forgot to start the engine.
Sorry for not being able to speak the way you'd like me to :( I tried to improve my video rhythm in my most recent uploads; maybe you can check them and tell me if they suit your tastes more.
@@VisualTechArt I think the speed of your talking is fine; YouTube has 2x speed as well. If anything, it's more that the basic math concepts you talk about aren't second nature to artists. Maybe some basic math videos?
or just lower the playback speed and let homie cook
Be less triggered and more grateful