Essentially, the authors managed to make displacement mapping work without needing to pre-tessellate the mesh. A tessellated mesh is huge, so it puts a massive memory burden on the GPU. The downside is, their method is way slower (around 100 times slower) at rendering, although that's somewhat balanced by not needing a tessellation step, which typically takes a long time. That's huge for artists, since there'll be no need to re-tessellate every time you want to change the displacement map. Feedback will be much faster.
@@ikemsmith The comparisons that showed 50x less memory were not comparing against displacement mapping, which is a technique that already existed for dealing with the massive sizes of pure geometry. The innovation in this paper is not mentioned anywhere in this video.
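To put the memory side of this thread in perspective, here's a back-of-the-envelope sketch. The resolution, vertex layout, and byte counts are my own illustrative assumptions, not numbers from the paper:

```python
# Illustrative comparison: pre-tessellating a quad at the resolution of a
# 4K displacement map versus just storing the map itself.

MAP_RES = 4096                             # 4K x 4K displacement map

# Pre-tessellated mesh: roughly one vertex per texel.
vertices = MAP_RES * MAP_RES
bytes_per_vertex = 3 * 4 + 3 * 4 + 2 * 4   # position + normal + UV as 32-bit floats
mesh_bytes = vertices * bytes_per_vertex   # acceleration structure not even counted

# Tessellation-free: keep the base quad plus the map, displace during traversal.
map_bytes = MAP_RES * MAP_RES * 2          # e.g. 16-bit height values
base_mesh_bytes = 4 * bytes_per_vertex     # just the 4 corner vertices

print(f"pre-tessellated mesh: {mesh_bytes / 2**20:.0f} MiB")
print(f"map + base mesh:      {(map_bytes + base_mesh_bytes) / 2**20:.0f} MiB")
print(f"ratio: ~{mesh_bytes / (map_bytes + base_mesh_bytes):.0f}x")
```

With these made-up numbers the ratio lands around 16x; the ~50x reported in the video presumably also accounts for things like the acceleration structure the tessellated mesh would need.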
Simply sublime! In the comparison, I thought it would take up 100 MB of memory, but 34 is even better than I expected! Really looking forward to seeing this adopted in programs like Blender, which a lot of artists (including me) will benefit from! The authors are magicians.
Ok, this might be the most impressive paper I've seen in years when it comes to geometry. Very impressive, and since it's Adobe, it has a good chance of getting a foothold in commercial software too.
I think it should've been made clearer that this technique now works for (realtime?) ray tracers on the GPU instead of requiring a pre-tessellation pass, which needs a lot of memory. From the Adobe page: "While GPU rasterization supports it through the hardware tessellation unit, ray tracing surface meshes textured with high quality displacement requires a significant amount of memory. More precisely, the input surface needs to be pre-tessellated at the displacement map resolution before being enriched with its mandatory acceleration data structure."
I can't wait to see this implemented! This work is amazing and people have to know about it! Thank you for sharing this with us, and thank the creators of this amazing project for the amazing work they are doing. I really hope this becomes publicly available for everyone to enjoy and use!
It's worth checking out the guys who wrote the paper. They are part of the Adobe Substance team, and also of a little-known open source renderer called appleseed. It seems quite likely this technique will be integrated into both tools.
Holy shit, more detailed models and more expansive open worlds both at the same time with such low memory usage, damn that's literally magic. Thank you for this awesome video sir ❤️
My gut said 8 GB, but this is a two minute papers video so I went with something insane. I guessed 50 MB, so I wasn't too far off. Wonderful video, thank you ❤️
Remarkable applications for the VR realm, where high texture and model resolution is more important than ever due to the proximity of the assets! I anticipate the day when this is a standard plug-in for game engines.
I hope someday someone makes a new skinning technique where transforming a limb doesn't stretch armor/gear unnaturally. I want to see this happen for video games
In the new Unreal Engine version they added a machine learning deformer. Basically, you teach the algorithm and it can make a normally skinned character deform pretty realistically. One of the examples used Marvelous Designer (the prime program for creating clothes and simulating them): the input was a default cylinder, and the output was the same cylinder but deforming as if it were simulated in Marvelous Designer.
Now this is a very interesting use case for ray tracing. I've never understood why Nvidia keeps advertising ray tracing to mean better lighting or better reflections, yet never advertises the actual key advantage over rasterization: the fact that ray tracing can render models not made of triangles. It's cool to see this method play into that key advantage to make displacement mapping that's both higher quality and more optimized than tessellation.
"can render models not made of triangles" Quads are still triangles, bro. And pixel-level displacement/tesselation is still triangles, just really tiny.
@@choo_choo_ when I was referring to models not made of triangles, I meant like using NURBs, not using quads. I’ve looked into the videos included with the paper, and it seems to me that the method used for displacement is very similar to how NURBs work. Also, the title of the paper is “Tessellation-Free Displacement Mapping for Ray Tracing”, so I assumed the “Tessellation-Free” part of the title meant they weren’t adding a bunch of triangles, as is the definition of tessellation. I don’t think it would make sense for this paper to optimize displacement mapping by just adding more microscopic triangles.
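For anyone wondering what "rendering displaced geometry without tessellating it" can even mean, here's a toy sketch. This is NOT the paper's algorithm (the paper builds a proper acceleration structure over the displacement map); it just shows the general idea that a ray tracer can intersect a ray with a displaced surface procedurally, never storing any micro-triangles. The `height` function here is a stand-in for sampling a displacement map:

```python
import math

def height(x, z):
    # stand-in for sampling a greyscale displacement map over a flat base plane
    return 0.2 * math.sin(4 * x) * math.cos(4 * z)

def march(origin, direction, t_max=10.0, steps=2000):
    """Step along the ray until it dips below the displaced surface."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    dt = t_max / steps
    t = 0.0
    for _ in range(steps):
        x, y, z = ox + t * dx, oy + t * dy, oz + t * dz
        if y <= height(x, z):   # crossed the displaced surface: report a hit
            return t
        t += dt
    return None                 # ray left the scene without hitting anything

# a ray starting above the surface, angled down at roughly 45 degrees
hit = march(origin=(0.0, 1.0, 0.0), direction=(0.707, -0.707, 0.0))
```

A real implementation would use some hierarchy over the map (e.g. min/max bounds per region) to skip empty space instead of fixed-size steps, but the memory story is the same: only the base surface and the map are stored.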
As somebody wise said: "real programming starts after reaching the end of memory". We will not have optimized graphics engines while we have a huge amount of video memory and powerful GPUs.
This would be for path-tracing/ray-tracing renderers, no stand alone VR chipsets are capable of that ATM, and even if they were, they wouldn't be able to process this level of detail at VR framerates. Most standalone, hell even dedicated VR games look like PS2/Wii games for a reason - they need high framerates.
I guess the issue is that UE just renders the object and isn't much involved in the creation process. So unless we have tools like Blender that support creating meshes this way, support in UE may mean nothing.
It already does. Baking normals and material properties has been a part of the PBR workflow for a while now. Real-time displacement is newer, but it has still been around for a few years through Allegorithmic's Substance tools.
The big part of this video is comparing tessellation vs this "new" method. What's not clear is whether this "new" way uses the same displacement maps or something else.
I had to read comments to find out too. With vector displacement the mesh needs to be high-poly enough to support every nook and cranny of the texture; this new method somehow skips that and gets a low-poly mesh to deform perfectly to the texture.
If this doesn't require significant processing to offset its significantly lower memory usage then this would be amazing in game engines. The most interesting aspect is that they could create extremely detailed damage modeling by procedurally modifying the texture based on impacts.
This is why we need your channel. 20 views is literally nothing. We need you (and I assume you have a team, so this applies to them also) scouring the Internet for new breakthroughs in simulation tech. If it looks like no one cares, no one will invest in these fields.
There's a lot of talk about this vs Nanite from Unreal. Would be great if you could have Brian Karis as a guest on a talk. I know it would go beyond the format but that would be insane.
Thank you so much for spreading the knowledge of these amazing papers! I can't wait to see this used in Virtual Reality RPG games, but I'd still be worried about open world becoming Terabytes worth of information to download XD
This thing will be a monster for big tech companies like Nvidia and AMD: they won't be able to sell 16+ GB memory cards anymore. But I don't know if it's YT or the models, but that flickering is the plague of this visual industry. Love to see these things getting more attention.
Your videos are awesome and very informative... you bring knowledge to a wide range of viewers. Keep up the good work and have a good one yourself.. love your videos.
This is not merely a bump map affecting surface geometry. I notice that when the geometry is extruded, there are details on the sides -- areas that would stretch out the pixels between light and dark in a non-smooth manner -- and these details are not in the photos. Did I see geometry that tucked under at some points? Either there is a 2nd image we aren't seeing, or there is some AI (well, really a neural net) going on that needs to understand the material. And I imagine they do well with metal and rope, and per texture type you have to teach it. Which is to be expected -- everything in 3D, imaging and video is about to take the next step in evolution as this image recognition technology removes cables from actors, takes away the headaches of rotoscoping and masking, and adds details and lighting that we KNOW would be there, without spending too much time computing every pixel.
After adopting extortion... I'm sorry, "subscription" licensing, companies like Adobe and Autodesk seem to have abandoned most of their R&D efforts to focus on the needs of the shareholders. It's great to see some real innovation happening here!
Adding this to Unity, Unreal, and Blender would change the game. Literally.
i smell patent.
don't forget godot!
seems to be only for fully raytraced scenes sadly
@@The12MT yeah but nanite isnt a space saver. it renders effortlessly but will accept 20gb 3d model if you throw at it. if you got that much vram lying around, i envy you mahn. cheers!
@@The12MT the question is: can this be combined with it, and would it be useful? having the original as a nanite mesh and then tesselating it with this method, might save memory i guess
I always thought that we were in a fine spot for great graphics and achieving "realism" in 3D films, creations, or games. So I always thought optimization of data and memory would be the next evolution for 3D creations, and it seems like it's on its way!
For complete realism, I'd say there's still tons to do in the realm of animation and simulation, which is super exciting. The renders today are great, but the movement (of living or inanimate objects!) could still see large improvements...
@@rubenpartono And we're a long way away from NPCs being able to create their own reactions to new situations and to simulate a real human voice without actors. If we can't get this kind of thing right, the clunkiness of humans really stands out in contrast to the progress on graphics.
This is somewhat true, but our simulations of various processes could get a lot more realistic; a lot of what makes movies look so good is that each part is made by hand.
So true, people can already make DAMN good CGI (see the Diablo 4 trailer, it's perfection).
The problem being, it must take ages to render.
We're just now getting into real time ray tracing, which offers some insanely detailed lighting. There's good odds that this could be the new way we render games, and in a decade we could potentially have all sorts of crazy, realtime-rendered ray traced lighting engines that can handle things like transparency, subsurface scattering, and bounce lighting to make virtually anything photo realistic
But it might also prove to be a dead end and flop, who knows?
This is one of the most impressive papers I've seen; it could have a massive impact on games and CGI.
It's apparently Adobe that made this tech, so if they can't make it exclusive to Creative Cloud, expect it to be vaulted. Corporations suck. There are already tons of amazing things Adobe has made that never saw the light of day because they couldn't figure out how to make them exclusive to Creative Cloud. Nvidia is just as bad with new tech like this. Corporations are cancer to innovation at times.
I miss the time when you explained in a little more depth what was going on. As a game developer, just from the video I can't really get even a glimpse of what this paper actually does. And I'm pretty sure this will not change the video game scene, since it seems to be a technique that replaces tessellation with some sort of parallax mapping in path tracing, nothing to do with rasterization (aka what we use for realtime game graphics). We already have POM (parallax occlusion mapping) techniques for "fake" added displacement in realtime rendering, but it has its issues and performance impact.
First law of papers: don't look at where we are but where we will be two more papers down the line! But yes, I too think it is going to be hard for this to find a place on the market when we have POM, normal maps and now even Nanite. Yet I don't see this paper as useless. It is good to have more options.
I didn't say it's useless. What I tried to debate is the fact that this channel has slowly degraded from a more scientific, technical-artist-friendly language to just showing off a nice video edit of a paper's content. It doesn't go in depth anymore on the details that explain what exactly the technological advancement on show is, or give more in-depth explanations of what it actually does. And I understand that it does this to try to reach a bigger non-technical audience in pursuit of being a more popular science channel, but in doing so it ends up being less about the science and more about showing flashy eye candy without leaving the viewers with greater knowledge on the subject than before. It talks to the audience as if they were children, but at the same time we are "fellow scholars". And again, I get why, it makes for more popular content; I just miss actually learning new stuff or understanding what I'm watching when I see these videos.
I think he's more focused on trying to get these papers as much exposure as possible. When you add technical explanations you start to lose viewers. At least once you know this paper exists you can go look into it. I do agree tho, I do miss the older videos with a bit more depth to them. I just know I'd be sad if I worked really hard on a paper and video that did something this amazing and it only got 20 views.
@@stevenk1442 You don't lose viewers. You just have to spend more effort on the content. He can shovel out these 100% analysis-free, virtually content-free episodes with very little effort. He's decided that the flashy, zero effort stuff draws in enough viewers without bothering with the things that attracted his original audience.
I wanted to say the same thing. I've always regretted that the channel wasn't called 10 minutes papers and could go more in depth but at least he would say a word about the content of the paper, maybe even enough to decide if it's worth spending time reading it. But now this is getting ridiculous: he didn't say anything about what the technique is, not a hint.
I have been waiting for something like this to happen for years. I remember reading into direct NURBS rendering and all sorts of niche raymarching applications hoping for something like this to already exist. Just incredible.
As a guy doing texturing with Substance Designer and Painter, I can definitely say, this is world changing.
Filling small scenes with tessellated textures was fairly easy, but this technique was wildly underutilized because, at least in games, you need more than just a handful of objects.
Seeing the same textures used previously give so much quicker and better results makes my heart skip a beat.
This could make it so much easier for indie games to compete with AAA.
The only question is: will this stay with Adobe and partners, or could open source projects like Blender and Godot profit from this as well?
Great feedback, thank you so much for weighing in!
k sry i got hyped up
@@TwoMinutePapers What a time to be alive.
Is this something like bumpmapping, but instead of it affecting the highlights and shadows of a 2D texture to give it a 3D textured look, it instead changes the geometry of the actual 3D surface that the texture is wrapped around?
@@MindResonator …without the need of additional mesh density??
i've been here since you had 200k subscribers, you have come a long way man congratulations!
Amazing, one of our OG Fellow Scholars! Honored to have you here - thank you so much for your support! 🙏
It would have been interesting to hear some details about the difference between this method and traditional bump mapping.
But there's already comparisons?
Traditional bump mapping doesn't "move" (displace) geometry
@@VcSaJen I don't think the comparison he made was for bump mapping. Bump mapping is a relatively efficient process that's used everywhere.
@@adeo Bump mapping certainly makes it look like some geometry has been moved in the direction normal to the face. That's the whole point of it. The video shows what I can only assume is a bump map image (or something akin to it), but it also showed examples of geometry being moved NOT normal to the face using what I can only assume is a similar technique. I would be interested in learning how this is done, but the video ignored it.
i think the summary is it's basically a more optimized displacement method, cuz bump maps basically fake geometry on the surface of the object to fool the eye
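The bump-vs-displacement distinction in this thread can be sketched in a few lines (my own simplification, not code from any renderer): bump/normal mapping only swaps the normal used for shading, while displacement actually moves the point.

```python
def bump_shade(vertex, normal, perturbed_normal):
    # bump/normal mapping: geometry untouched; only the shading normal is faked
    return vertex, perturbed_normal

def displace(vertex, normal, h, scale=1.0):
    # displacement mapping: the vertex really moves along its normal,
    # so silhouettes, shadows and self-occlusion change too
    return tuple(v + scale * h * n for v, n in zip(vertex, normal)), normal

v, n = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)

same_v, fake_n = bump_shade(v, n, perturbed_normal=(0.0, 0.0, 1.0))
moved_v, _ = displace(v, n, h=0.5)

print(same_v)   # (0.0, 0.0, 0.0) - bump mapping never moved the point
print(moved_v)  # (0.0, 0.5, 0.0) - displacement did
```

That's why bump maps fall apart at silhouettes and grazing angles, while displaced geometry holds up.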
oh man, being able to use unreal's pixel based rendering has spoiled efficient memory management for me and so many other designers i know. throwing huge ass files at the renderer and leaving it to deal with them is cool and all, but a low memory footprint for the object in the first place saves so much space for further development without having to worry about the package size!
i wouldn't be surprised if adobe patents it and makes it exclusive to their software ecosystem, but if only!
Thanks for such great videos doc, love the channel!
Adobe will likely patent it and make it exclusive to their products only. Such greed and business practices prevent the world of technology from moving forward, all in favor of rich people who don't need any more money than they already have. Essentially throwing away resources for no reason.
Not all graphic designers are pigs. In the game industry, optimization is a rule; more than a rule, it's a necessity.
What would Adobe gain from patenting this and closing it off to their ecosystem? All of their 3D-centric tools are used to export the asset into game engines or DCC tools where it gets rendered.
Since this new tech seems to involve rendering, it doesn't really make sense to limit it to Adobe tools. Unless they plan to release a real rendering tool besides Stager.
This was one thing I had worried about with Unreal Engine 5. It handles high polygon counts like a champ, making it so you don't need mapping tricks such as bump maps and normal maps. The issue with that is the file size for games using high poly assets everywhere. And you can only do so much with instancing, mostly if you don't want repeating objects all over the place.
Yes, having to make such maps is a process in itself for the artists, and having to make good topology for those maps to lay on can be a pain when making them for hundreds of assets. However, those maps take up less space than a high poly object. And that is good when it comes to making users download a full game, and for any updates that may require a full download of said game.
It would not be good if almost every high end game made for Unreal Engine 5 was over 20 GB in size.
This new tech sticks with the old way many know for making game models, while making them look like high poly models, even far better than before.
Unreal Engine 5 is a beast. It is fantastic for what it can do, so far. It is still not perfect, and may not be the game changer it is meant to be. This new tech could be what Unreal Engine 5 could have been, and still could be, if we can use it in Unreal Engine 5. I hope we can someday, and soon.
It may help Unreal Engine 5 run at well over 140 fps! And may even be good for VR.
@@freddyw.1171 Exactly this. I wouldn't be surprised if they were hoping for it to work with Octane and Arnold eventually. It would be nice if it worked in Blender's Eevee and Cycles, but I don't think those would be on Adobe's radar.
Holy moly… I was thinking that the "new method" is insanely powerful and cost efficient, so I guessed the memory usage to be around 200 MB. But it blows my mind to know that it's 50 times cheaper in memory usage. Can you elaborate some more on this topic @twominutepapers?
I need this as a 3d artist, it's like normal maps on steroids!
more like displacement map
@@ONDANOTA the displacement maps shown here are only greyscale. I wonder how they got those concave displacements with just a greyscale map? Vector displacement maps are RGB, so they are capable of producing crazy DETAILED models.
@@CharlesLijt Don't displacement maps add to the geometry, hence using memory just as if the object were modeled with the extra detail?
@@CharlesLijt I don't think I've seen anything here that isn't possible with a simple grayscale displacement map, but thank you for letting me know that vector displacement maps are a thing
@@CharlesLijt Yeah man. That is what I don't understand and makes me think that we might be seeing something weird in the presentation. Displacement maps work just on the Z axis! How is 2:22 possible?!
The level of detail added to objects is simply astounding. I wonder how well it can deal with excessive overhangs.
Greyscale displacement maps can only move geometry perpendicular to the original plane. You need vector displacement mapping (RGB) for overhangs, which may or may not be supported by this technique.
@@vxm At 2:40 we can see what looks like an overhang.
@@mixer0014 That's what I couldn't understand!
@@mixer0014 I think they cleverly snuck some AO into the diffuse map. If this could actually do overhangs in greyscale, I think they would have bragged about it more :)
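(For anyone following this thread, here's a minimal sketch of the greyscale vs. vector distinction being discussed, in Python; all function names are made up for illustration. A greyscale map only pushes a point along its normal, while an RGB vector map encodes a full tangent-space offset, which is what real overhangs require.)

```python
import numpy as np

def displace_scalar(position, normal, height):
    """Greyscale displacement: move the vertex along its surface normal
    by a single height value. Every point moves perpendicular to the
    base surface, so overhangs are impossible."""
    return position + normal * height

def displace_vector(position, tangent_frame, offset_rgb):
    """Vector (RGB) displacement: the three channels encode a full
    offset in tangent space, so the surface can fold back over itself
    and form overhangs."""
    # tangent_frame columns: tangent, bitangent, normal
    return position + tangent_frame @ offset_rgb

p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
frame = np.eye(3)  # tangent = x, bitangent = y, normal = z

print(displace_scalar(p, n, 0.5))   # moves along the normal only
print(displace_vector(p, frame, np.array([0.3, -0.2, 0.5])))  # moves in all three axes
```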
This is an insane step forward
This is a total game-changer for creating realistic virtual spaces. I'm so glad you talked about it here! More people NEED to see this in action!
very impressive! given the continuing problems with GPU availability, this is all the more welcome. meanwhile, i wonder if it will ever become easy to add video RAM to GPUs?
Wait.. . The video was just posted 1 minute ago but your comment says you posted 18 hours ago?
@@gamergrids 🤨
wtf
@@gamergrids early access through patreon
The answer is probably no. Latency is a big problem between GPU and RAM, so slotting one into the other would make it worse. And since you can stream data in and out of GPUs, this will likely never happen in that manner.
I really hope this gets the attention it needs
OK, so it's a new displacement map technique. You forgot to tell us what's different to make it so much better.
You mean other than using 50x less memory while giving as good or more detailed results?
@@ikemsmith Yeah. Maybe it uses less of resource X but at the price of using more of resource Y. So generally I would like to know whether a PC uses less power/computation overall, or whether it's just switching which part works harder (using less memory at the price of increasing something else).
Essentially, the authors managed to make displacement mapping work without needing to pre-tessellate the mesh. A tessellated mesh is huge, so it puts a massive memory burden on the GPU.
The downside is that their method is way slower (around 100 times slower) at rendering, although that's somewhat balanced by not needing a tessellation step, which typically takes a long time. That's huge for artists, since there'll be no need to re-tessellate every time you want to change the displacement map. Feedback will be much faster.
@@ikemsmith No, I'm asking how they're doing that. What did they change to make it so good?
@@ikemsmith The comparisons that showed 50x less memory were not comparing against displacement mapping, which is a technique that already existed for dealing with the massive sizes of pure geometry. The innovation in this paper is not mentioned anywhere in this video.
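(To illustrate the thread above: the idea is that a ray tracer can intersect the displaced surface directly, instead of storing millions of pre-tessellated triangles. The sketch below is NOT the authors' algorithm, which uses an implicit bounding hierarchy over the base mesh; it's just a naive Python ray-march against a procedural heightfield to show the "no stored triangles" concept, with all names made up.)

```python
import math

def height(u, v):
    # Toy displacement "map" sampled procedurally; a real renderer
    # would fetch this from a texture instead.
    return 0.1 * math.sin(10 * u) * math.cos(10 * v)

def intersect_displaced_plane(origin, direction, steps=256, t_max=10.0):
    """Naive ray-march: step along the ray and report the first t where
    it dips below the displaced surface z = height(x, y). No tessellated
    triangles are ever stored; the surface is evaluated on the fly."""
    for i in range(1, steps + 1):
        t = t_max * i / steps
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        if z <= height(x, y):
            return t  # hit (a real tracer would refine this, e.g. by bisection)
    return None  # ray missed the surface

# Ray starting above the plane, pointing straight down.
hit = intersect_displaced_plane((0.2, 0.3, 1.0), (0.0, 0.0, -1.0))
```

The memory savings come from never materializing the micro-triangles; the extra per-ray work is why such methods render slower than tracing a pre-built mesh.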
Simply sublime! In the comparison, I thought it would take up 100 MB of memory, but 34 is even better than I expected! Really looking forward to seeing this adopted in programs like Blender, which a lot of artists (including me) will benefit from! The authors are magicians.
I use Octane Render and it does something similar - you can have full displacement detail even on a single polygon plane. It's so bloody useful!
These videos help me feel a little more hopeful about the future
Ok, this might be the most impressive paper I've seen in years when it comes to geometry. Very impressive, and being it's Adobe it has a good chance to get a foothold in commercial software too.
I think it should've been made clearer that this technique now works for (realtime?) ray tracers on the GPU instead of doing a pre-tessellation pass, which requires a lot of memory. From the Adobe page: "While GPU rasterization supports it through the hardware tessellation unit, ray tracing surface meshes textured with high quality displacement requires a significant amount of memory. More precisely, the input surface needs to be pre-tessellated at the displacement map resolution before being enriched with its mandatory acceleration data structure."
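(Some back-of-the-envelope arithmetic on why pre-tessellating at map resolution is so memory-hungry; the numbers below are my own illustrative assumptions, not figures from the paper or the Adobe page.)

```python
# 4K displacement map, pre-tessellated so every texel becomes geometry.
map_res = 4096
verts = (map_res + 1) ** 2   # one vertex per texel corner
tris = 2 * map_res ** 2      # two triangles per texel

vertex_bytes = verts * 3 * 4  # xyz as 32-bit floats
index_bytes = tris * 3 * 4    # three 32-bit indices per triangle
total_mb = (vertex_bytes + index_bytes) / 2**20

print(f"{tris / 1e6:.1f}M triangles, ~{total_mb:.0f} MB before any acceleration structure")
```

And that's for a single asset, before the "mandatory acceleration data structure" (BVH) the quote mentions, which typically adds a comparable amount again.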
This is amazing! I've had to use tessellation-based materials/objects in a project, and I can just imagine the improvements this technique will bring.
This improvement is INSANE and would revolutionize graphics. My papers are all over the room now.
Amazing! Displacement maps have needed some love for a little too long! What a huge leap in performance, my god, I love this channel.
I just love two minute papers for this reason. These advancements just brighten up my day and give me so much hope for the future of virtual reality
I can't wait to see this implemented! This work is amazing and people have to know! Thank you for sharing this with us, and thanks to the creators of this amazing project for the amazing work they are doing. I really hope this becomes publicly available for everyone to enjoy and use!
Having a look at the short source video, you can also see the ‘implicit bound’ process visualized, which is just mind blowing
It's worth checking out the guys who wrote the paper. They are part of the Adobe Substance team, and also of a little-known open source renderer called appleseed. It seems quite likely this technique will be integrated into both tools.
Omg, this one is an absolutely huge game changer for 3D graphics.
Holy shit, more detailed models and more expansive open worlds, both at the same time, with such low memory usage. Damn, that's literally magic. Thank you for this awesome video sir ❤️
@Maksymilian Świerad I kinda pity Károly reading through the comments
This is one of the best YT channels I've come across since 2005.
My gut said 8 GB, but this is a Two Minute Papers video so I went with something insane. I guessed 50 MB, so I wasn't too far off. Wonderful video, thank you ❤️
That's an insane improvement to tessellation. Can't wait to see this show up in games.
This is pretty awesome, not gonna lie.
Looking back five years from now, 2018-2022 will feel like a watershed moment for graphics.
Thank you for showing it. Looks amazing. It's great that Two Minute Papers promotes such things to the public.
people who write these papers are the real heroes of our society.
Genius. Every video takes me one step closer to changing my major.
This reminds me of the old bump mapping techniques
Remarkable applications for the VR realm, where high texture and model resolution is more important than ever due to the proximity of the assets! I anticipate the day when this is a standard plug-in for game engines.
So it's like a more advanced displacement or bump mapping technique. Nice.
The speed of a Normal map with the detail of a bump + displacement and you're saying this is without adding more geometry to the base mesh? MENTAL!
I've been waiting for something like this since I first started working with 3D. I'm so excited about this!
Some people seem to be confused.
The most notable difference from conventional techniques: Displacement maps can be applied without pre-tessellation.
So UE5 can accept this method
I love these videos. Thanks for bringing this information to the masses. Makes me excited for the future of these industries.
I'm so excited to see these sorts of techniques in video games in a few years!
Thanks for your work ❤️🌸 I love watching your videos even though I'm not from a computer science background!!
I hope someday someone makes a new skinning technique where transforming a limb doesn't stretch armor/gear unnaturally. I want to see this happen for video games
In the new Unreal Engine version they added a machine learning deformer. Basically, you teach the algorithm and it can make a normally skinned character deform pretty realistically. One of the examples used Marvelous Designer (the prime program for creating clothes and having them simulated): the input was a default cylinder and the output was the same cylinder deforming as if it were simulated in Marvelous Designer.
Now this is a very interesting use case for ray tracing. I’ve never understood why Nvidia keeps advertising ray tracing as meaning better lighting or better reflections, yet never advertises the actual key advantage over rasterization: the fact that ray tracing can render models not made of triangles. It’s cool to see this method play into that key advantage to make displacement mapping that’s both higher quality and more optimized than tessellation.
I'm glad people like you explain the paper more in depth than the video because I got really confused.
"can render models not made of triangles"
Quads are still triangles, bro. And pixel-level displacement/tesselation is still triangles, just really tiny.
@@choo_choo_ when I was referring to models not made of triangles, I meant like using NURBs, not using quads. I’ve looked into the videos included with the paper, and it seems to me that the method used for displacement is very similar to how NURBs work. Also, the title of the paper is “Tessellation-Free Displacement Mapping for Ray Tracing”, so I assumed the “Tessellation-Free” part of the title meant they weren’t adding a bunch of triangles, as is the definition of tessellation. I don’t think it would make sense for this paper to optimize displacement mapping by just adding more microscopic triangles.
This is incredibly exciting!
Wow! Total gamechanger for GPU rendering and realtime
As somebody wise said: "real programming starts after reaching the end of memory". We will not have optimized graphics engines while we have a huge amount of video memory and powerful GPUs.
My guess was 1/4… I was somewhat off, I'd say.. 😅 Just insane how well they can store data!
The idea of this being used for stand alone VR, where detail is in high demand but RAM is more limited, is very exciting!
This would be for path-tracing/ray-tracing renderers, no stand alone VR chipsets are capable of that ATM, and even if they were, they wouldn't be able to process this level of detail at VR framerates. Most standalone, hell even dedicated VR games look like PS2/Wii games for a reason - they need high framerates.
This is amazing, more people need to know
I hope this comes to blender soon. It's gonna be amazing!
I wonder if this could be implemented in a game engine.
Maybe Unreal Engine 5 uses similar technology.
It's a different approach.
UE5 does kinda the opposite- it takes stupidly high resolution meshes and makes them work as-is
I guess the issue is that UE just renders the object and isn't much involved in the creation process. So unless we have tools like Blender that support creating meshes this way, support in UE may mean nothing.
It already does. Baking normals and material properties has been part of the PBR workflow for a while now. Real-time displacement is new-ish, but has still been around for a few years through the Allegorithmic Substance tools.
The big part of this video is comparing tessellation vs this "new" method. What's not clear is whether this "new" way uses the same displacement maps or something else.
Gorgeous stuff. I look forward to seeing it reach the open market!
Looks awesome, but what’s the difference between this and vector displacement + adaptive subdivision?
Glad I'm not the only one that's confused about this
Or LuxRender's microdisplacements from over 10 years ago. This paper doesn't seem to bring anything new to the table.
I had to read comments to find out too. With vector displacement the mesh needs to be high-poly enough to support every nook and cranny of the texture; this new method somehow skips that and gets a low-poly mesh to deform perfectly to the texture.
@@wkmr it must use screen space adaptive subdivision though right?
this is just what Unreal needs. It would suit perfectly with Nanite meshes.
Amazing. Looking forward to using this
If this doesn't require significant processing to offset its significantly lower memory usage then this would be amazing in game engines.
The most interesting aspect is that they could create extremely detailed damage modeling by procedurally modifying the texture based on impacts.
This has incredible potential!
Views is not everything, Károly Zsolnai-Fehér's attention IS everything
I hope this will be added to Arnold/3Delight/Corona for Cinema 4D!
This is why we need your channel. 20 views is literally nothing. We need you (and I assume you have a team, so this applies to them also) scouring the Internet for new breakthroughs in simulation tech. If it looks like no one cares, no one will invest in these fields.
I can't believe that this is under 50 MB. Amazing job!
Damn those details are unbelievably crisp.. but what's the actual size of the texture that's being used or is it like a heavy procedural node setup?
Alright! We need this in our games..
I will spread this around, no worries 😁.
Thanks for sharing this! Hopefully game developers will pick up on this quickly :)
Massive game-changer!
There's a lot of talk about this vs Nanite from Unreal. Would be great if you could have Brian Karis as a guest on a talk. I know it would go beyond the format but that would be insane.
What a time to be alive!
"Dear fellow scholars" gets me every time. Keep up the awesome videos! :)
My guess was 60MB. A lot of people will be very happy once this method is adopted widely.
Imagine what could be done if this is used in future games, or to uplift old ones!
Thank you so much for spreading the knowledge of these amazing papers! I can't wait to see this used in Virtual Reality RPG games, but I'd still be worried about open world becoming Terabytes worth of information to download XD
Oh man you do a GRRRREAT impression of Ren from Ren & Stimpy! 😂👍🏽
Very exciting breakthrough
Let's flippin' go! My body is ready for the next gen games xD
@Maksymilian Świerad I'm just excited for the next 10 years ahead xd.
Superb. Unreal Engine 5 would love this!
This thing will be a monster for big tech companies like Nvidia and AMD; they won't be able to sell 16+ GB memory cards anymore. But idk if it's YT compression or the models, the flickering is the plague of this visual industry. Love to see these things getting more attention.
3:29 holy shit this really impressed me
This would be awesome for remastering old games!
Your videos are awesome and very informative... you bring knowledge to a wide range of viewers. Keep up the good work and have a good one yourself.. love your videos.
One of the best Papers yet!
I smell a Substance Painter update in the future
@3:15 at the bottom of the menu is "Substance Material"
THAT'S AMAZING.... AI's creating the metaverse daily
Wow this is game changing
So far you have 65,863 views - that's 3000x the original paper! What a time to be a YouTube creator!! :o)
I would really like to see this technique in Unity/Unreal in a year or so. It would be phenomenal and game-changing.
It's not for rasterization.
This is not merely a bump map affecting surface geometry. I notice that when the geometry is extruded, there are details on the sides -- areas that would stretch the pixels between light and dark in a non-smooth manner -- and these details are not in the photos. Did I see geometry that tucked under at some points? Either there is a second image we aren't seeing, or there is some AI (well, really a neural net) going on that needs to understand the material. And I imagine they do well with metal and rope; per texture type, you have to teach it. Which is to be expected -- everything in 3D, imaging and video is about to take the next step in evolution as this image recognition technology removes cables from actors, takes away the headaches of rotoscoping and masking, and adds details and lighting that we KNOW would be there, without spending too much time computing every pixel.
Very nice application ideas.....Cheers!
mind blown. PERIOD
Károly is tesselating the view count of this paper!
After adopting extortion... I'm sorry, "subscription" licensing, companies like Adobe and Autodesk seem to have abandoned most of their R&D efforts to focus on the needs of the shareholders. It's great to see some real innovation happening here!
the Substance team is always here and innovating ;)
50x storage improvement!?!? it will help the gaming world soo much
I really hope this technology will be able to be applied even to older games!
It's not for games. He kinda failed to mention that this technique only works with ray tracing
Hi, you point out the low view count of the paper's video, but could you then include its link in the description?
My paper exploded in my hand! I'm suing! 😂
This blows conventional tessellation out of the water, hoping it eventually helps leaner computers run incredible programs