i am simple man, i see 2 minute paper, i click
men of culture we meet again
Indeed
It's good practice!
such a unique comment
No. Sir you are a sophisticated man.
I'm a 3D artist who started on the Amiga with LightWave, and I currently (continue to) work with Maya and V-Ray. Nodes, lights, shaders, render, change things, render again until it looks good.
But seeing these new technologies, which seem to perform one miracle bigger than the last every few days, makes me feel like a caveman trying to understand our world.
I have no idea how to use these things. I feel like I became outdated overnight.
Nobody knows how to use any of these things.
What you do have is experience knowing when it looks right.
Learning the workflows only takes a few days to a few months, depending on how deep you want to get.
These things only get useful when implemented in popular software like Blender, Unity, or Unreal Engine. We just see these demos for years without them being available in any software.
I tried to recommend that you try Postshot yourself, but YouTube censored me...
If the YouTube GOD allows me to explain: there are plugins out there already for Blender/Unreal/Unity. Don't hesitate to ask me more, if I'm allowed to answer...
As a fellow 3D artist of about 10 years, I feel it's a futile battle to learn more 3D stuff, because in a few more years CGI will likely be replaced by AI. It can already generate photorealistic renderings and animations, and it's just getting started.
@@SirusStarTV If you're a game engine developer they're sometimes useful because they show what's possible and have a "recipe" on how to do it. But for someone who's in modelling, or even game dev (using a preexisting engine), not as much. Even if you had the source (UE4/Blender) and could implement the technique yourself, it's really not worth the effort.
finally the mortgage for that 8090 will be justified
Wait you don’t game on a GB200?
@@theuserofdoom Anything less than an NVIDIA Tesla H100 is for poor people. I game on a DGX SuperPod. Each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with NVIDIA NVLink.
@@theuserofdoom You wouldn't. A consumer graphics card would smash it at gaming.
Considering how quickly this has come along, it'll prolly be a 6090, bro.
@@honestgoat nice
bro has a comma every 3 words
🤣🤣🤣
OMG! I will never not notice that
That's way better than no punctuation at all.
The power of commas should not be underestimated, so there.
But it gives this man his charm… I enjoy hearing him talk about all these things.
What a time to be two papers down the line!
😂😂😂
But we said that two papers earlier and we are still not there
What a time to lie down on two papers!
Papers and a scissor, right 🤣
This paper is actually a complete departure from Gaussian Splatting, but both of these methods create a Radiance Field. Also, the vast majority of research will be transferable between the two methods. I interviewed the first author of this paper if you want to learn more about what this method can do! ua-cam.com/video/1vxn4M1fO6c/v-deo.html
Do they still do the ML fitting to generate the particles from the source data?
can you tell me what the implications of this are for a not-so-smart person like me? what i want to know is: can i run path-traced games at 60 fps on a mid-range gpu like an rtx 3060/4060?
@@bmqww223 i don't study this stuff, but it doesn't exist in gaming at all. the answer to your question is pretty much no
radiance fields are kind of magical
@@xXJeReMiAhXx99 Unreal Engine 5 and PlayCanvas both support Gaussian Splatting.
"Two Minute Papers released a video 2 minutes ago"
You're two late.
What a time two be alive!
I keep telling both friends and family: now is the perfect time to own a tech stock. With everything going on, and seeing how the world is being run by AI, tech is here to stay and you don't want to miss it
With everything going on in the market, my advice to anyone starting out is to seek guidance, as it's the best way to build long-term wealth while managing your risk and emotions with a passive investing strategy.
I took charge of my portfolio but faced losses in 2022. Realizing the need for a change, I sought advice from a fiduciary advisor. Through restructuring and diversification with dividend stocks, ETFs, mutual funds, and REITs, my $1.2M portfolio surged, yielding an annualized gain of 28%.
Do you mind sharing info on the adviser who assisted you?
Annette Christine Conte. One of the finest portfolio managers in the field, and widely recognized. Just research the name; you'd find the necessary details to work with and set up an appointment.
Thank you for sharing. It was easy to find her, and I then scheduled a phone call with her. She seems proficient, considering her résumé.
This is unbelievable. If we could get Gaussian splatting a bit more developed, to the point where it can be rigged and animated, that would go so well with this new light simulation support, and could make stuff like Unreal's Nanite level of detail actually available on more hardware
Can this technology resolve object boundaries? Can you move objects around in the scene and know when they collide with each other?
@@mattmexor2882 I don't know, but I'd imagine it should be closely related to animation since it's about grouping and defining relationships between points
@@mattmexor2882 I don't think model-model collision has much to do with lighting; it'd probably just clip through, and whatever's intersecting won't influence lighting.
@@mattmexor2882 games generally don't use the visual mesh for collisions anyway; they add their own collision boxes and capsules, with simpler geometry depending on the need
@@mattmexor2882 you could just place collider objects into the Gaussian splats that move with them, I think
Great techniques soon to be used for videogames and movies with awful plots.
you forgot to add: awful triple-A games
Good thing I care more about atmosphere and vibe; I would hate to dislike Dishonored just because the plot wasn't outstanding
Sweet Baby has blacklisted you
Interactivity of radiance fields is still somewhere in the blue.
Right now, depending on the use case, 3DGS is by far the best way to represent a single object. It's fast and highly detailed if captured right and trained properly.
LMFAO
I love Gaussian splatting technology. I just started creating some of my own to record memories of interesting places or things, instead of taking photos. That way I'll be able to revisit and share them later with a VR headset.
How is that done? Is there a tutorial somewhere?
@@ImpostorModanica I wanna know too
@@ImpostorModanica I'm using Scaniverse on an iPhone
@@NicoAssaf I'm using Scaniverse on an iPhone
bro living in 2077
So step 1, fly a drone through an environment to get photos from a bunch of angles; step 2, process those images into Gaussian splat data; step 3, render a fully raytraced clone of the environment in 3D in realtime, complete with any additional 3D objects you want to add to the scene?
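Pretty much. For the curious, here's a minimal Python sketch of that three-step pipeline. Every function below is a hypothetical stand-in (a real pipeline would use something like COLMAP for camera poses, a 3DGS trainer for the particles, and a splat ray tracer for the final render), so treat it as the shape of the workflow, not an actual API:

```python
from dataclasses import dataclass

@dataclass
class GaussianScene:
    num_particles: int  # each particle carries position, shape, opacity, color

def estimate_camera_poses(photo_dir: str) -> list[str]:
    """Steps 1-2a: recover where each drone photo was taken from (SfM)."""
    return [f"{photo_dir}/pose_{i:04d}.json" for i in range(200)]  # placeholder

def train_gaussians(poses: list[str]) -> GaussianScene:
    """Step 2b: optimize Gaussian particles until renders match the photos."""
    return GaussianScene(num_particles=3_000_000)  # placeholder

def render_ray_traced(scene: GaussianScene, extra_objects: list[str]) -> None:
    """Step 3: ray trace the reconstructed scene plus any inserted 3D objects."""
    print(f"rendering {scene.num_particles:,} particles with {extra_objects}")

poses = estimate_camera_poses("./drone_photos")
scene = train_gaussians(poses)
render_ray_traced(scene, extra_objects=["new_statue.obj"])
```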
Research Papers: RTX ON
Károly, is there an AI where I can give it a Two Minute Papers video, and the output is the same video but the narration doesn't pause after every word? Thanks
I work on atmospheric machine learning research; visualization of advanced particle simulations like atmospheric particle simulation sounds like a perfect application for this technology!
The holy grail of graphics is always just two papers down the line!
Some 3D glasses and a bathtub away from the matrix
I'm confused. By particles, do you mean a point cloud system? Similar to Euclideon's Unlimited Detail?
Amazing tech. Can't wait for it to become available.
Great video. And the icing on the cake is the narration by Ren from Ren & Stimpy.
Looks good, but everything is static. How difficult is it to animate these points compared to polygons?
Look at the part where they introduce a glass object into the image and change its properties. That is, for all intents and purposes, what the animation process would be like. I get why it doesn't register to you as animation, since it's happening in real time, but that's where we're at. Computer graphics now work like claymation.
@@michaelleue7594 Looked to me like changing the refraction value and watching the result in real-time. I was more hinting at character animation, foliage affected by wind (...), those sorts of things. Thinking back, I believe it was similar with voxels; also difficult to animate.
@@Charles_Bro-son you are right, it's good with static objects, but we already have photogrammetry for that. And to calculate the X Y Z of a single particle on a moving object/character, or foliage composed of billions of them, every frame as fast as possible... I can't imagine the raw power you'd need. I think for a 3D engine aimed at gaming, they will use a mix of this new technique and rasterization for moving objects!
the funny thing is, when I tell my highly educated family members about AI and so forth, they don't believe what I tell them and have never heard about AI at all.
voxels to triangles: 'you could not live with your own failure, where did that bring you? back to me' XD
what a time to be alive indeed
every day we get another step closer to the simulation. never mind 2 papers down the line, where is this going in the next several decades? it's mindblowing
I think what these papers really need to improve the world we live in is attention. You're doing god's work
Károly!
But are they actually performing light calculations on the Gaussian splatting particles, or just using them essentially as a sort of volumetric "skybox", with a one-way interaction between the splats and the raytraced objects, leaving the baked-in angle-dependent coloring the splats already had unchanged?
3:33 Where can I download this?
What a time to be alive !
I hope that you will cover what will be announced at Humanoids 2024 in november !
Awesome if this could be used in Blender for fast/lightweight ArchVis backgrounds.
This is one of the first times I've ever gotten goosebumps from reading a paper.
Great video! Out of curiosity, have you ever covered Fourier Neural Operators for solving PDEs on your channel, or plan to?
Gaussian raysplatting? Splattracing?
Gauslighting
@ make it gauss… then it's perfect
Finally a non-AI-narrated video on this channel!
How well does this technique work with moving lights and moving objects for the light to bounce off of?
Hey Károly, is it possible, since they're doing ray tracing here and resolving points... couldn't they turn this into triangulated meshes too?
I really want this for VR, and a shader that applies this technique to a normal texture material from the original reference photogrammetry could make for very, very fast and highly triangulated 3D scanning without lidar.
Edit: I should qualify this. The points created from particle ray tracing could be turned into a PLY point cloud and then meshed later via marching-cubes/Poisson-style reconstruction, or similar loop-closure photogrammetry mesh-resolving techniques, except they would be a million times cleaner.
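A minimal sketch of that last step, assuming the points have already been exported to a PLY file (the filenames here are made up), using Open3D's Poisson surface reconstruction:

```python
import open3d as o3d

# Load the exported point cloud and estimate normals (Poisson needs them).
pcd = o3d.io.read_point_cloud("splat_points.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Poisson reconstruction: fits a smooth implicit surface and extracts a mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9  # higher depth = finer detail, more memory and time
)
o3d.io.write_triangle_mesh("splat_mesh.ply", mesh)
```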
Will this make the current RT cores obsolete, which largely (if I understand correctly) handle ray-triangle intersection?
Quick question.
Is it possible to combine this technique with a wave-based ray tracer?
That is so crazy it's almost unbelievable. It is very exciting to see progress like this.
On what GPU is it running in realtime?
A 3090, a 4090, or an 8090?
Very cool; ultra photo-realistic video games and 3D rendered movies/elements are very close.
Convincing simulation, training and game 3d displays look to be even better now. I didn't think I'd see the day when realtime raytracing would start to become this fast and convincing.
This is very cool! Heaps more interesting than the generative AI papers.
Great to see some light transport simulation content again!
Hi. Is it right that this goes in the direction of Unlimited Detail by Euclideon? They had an incredibly fast point-cloud renderer with a lot of performance, but then it went very quiet around that technique. By the way, thanks for the interesting videos. I think I can only understand a little bit of this stuff, but you got me, and keep me, interested in 3D computer graphics. Greetings from Germany.
there must be an infinite number of periods in this guy's script
The bike scene really looks like real life! What a time to be alive indeed.
I like this rethinking of rendering techniques.
how does it work with deforming meshes?
1:1. Excellent performance is a must. Love it.
Is there a way to download the scenes to explore them on my own PC?
This reminds me of the stuff a company called Euclideon was touting around 10 years ago. They had a tech demo that showed photoreal environments and thousands of objects in a single scene running in real time, due to everything being based on... I forget what, voxels maybe? In any case, they had that demo but then disappeared completely.
how do we make the papers faster?
Corridor Crew will not be happy with those shadows and extra dark shadows.
I feel like we are seeing the same papers over and over again. I keep seeing the same clips every video and never know whether it's new stuff or not
I think there just isn't enough footage to fill the video with all-new clips. It also helps to be able to compare against previous papers. If you really aren't paying attention, the publication year is a big hint. :)
I tried a demo of Gaussian splatting and it was very fluid on my RTX 2080S + Intel 9700K. However, since those are point clouds, you must be far enough from the "object", or you'll see all the points, which breaks immersion.
Nvidia sharing their research for free? Now I’m impressed.
As long as you buy their hardware to run their software, they do not mind.
@@iloveblender8999 it's a wise move, since it's clearly something ONLY their AI-based chips can run!
@@iloveblender8999 Sharing technical research is not the same as giving free software that runs only on their hardware, that’s why I was impressed.
That's going to be great for museum displays, architects, etc.
I do wonder how it will handle non-static objects though.
Given that 8 GB graphics cards are not suitable for Gaussian splatting, and this technique uses around half the usual RAM, there is still a lot of work needed to reduce memory demands.
Or, what's more likely to happen is that this technology won't become mainstream in videogames until the average budget graphics card has 12 GB of VRAM and PCs with 32 GB of RAM are the norm. If this takes 4 to 5 more years to happen, then so be it. We have to move on from 8 GB graphics cards; we can't keep catering to such old and low-end hardware.
@@03chrisv Increasing VRAM takes way too long in recent years.
@03chrisv Given that Nvidia is driving towards a 100% AI rendering pipeline that does away with polygons entirely, there is merit in switching over to a particle-based rendering solution for lighting, in preparation for geometry becoming particle-based too eventually.
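For context on the memory numbers above, some back-of-the-envelope arithmetic, assuming the vanilla 3DGS parameterization (59 floats per Gaussian: 3 position, 3 scale, 4 rotation quaternion, 1 opacity, 48 degree-3 spherical-harmonic color coefficients) stored in fp32. Note this counts parameters alone; rendering and training buffers push actual VRAM use well beyond it:

```python
floats_per_gaussian = 3 + 3 + 4 + 1 + 48  # position, scale, rotation, opacity, SH color
bytes_per_gaussian = floats_per_gaussian * 4  # fp32

for num_gaussians in (1_000_000, 3_000_000, 6_000_000):
    gib = num_gaussians * bytes_per_gaussian / 2**30
    print(f"{num_gaussians:>9,} Gaussians ≈ {gib:.2f} GiB (parameters alone)")
```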
Is this better than Unreal 5 with all those polygons they showed?
Reminds me of a company called Euclideon. No idea if it still exists, whether it's been sold, or if it just fizzled out.
I can see a plausible future where this degree of realistic fidelity has become so efficient that it smoothly renders at 90 fps in our full-face VR/AR thin mask, which reproduces smell and taste ^^
I was just looking at relightable gaussian avatars yesterday. This tech is truly incredible, I can't wait to see this in games especially VR.
Could you make a video summarizing all of the videos that you made in the last month, please?
Sometimes I see advances in light simulation and I wonder whether I've already seen them in 10 other videos, and I don't remember the nuances between all of these breakthroughs
This means gaming with splats will be possible one day? What a time to be alive!
how's gaussian splatting different from voxels?
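A toy contrast in Python for the question above (illustrative only, not any particular implementation): a voxel grid stores one value per fixed cell over the whole volume, while a Gaussian splat scene is an unstructured list of soft, oriented, view-dependently colored blobs placed only where there is detail:

```python
import numpy as np

# Voxels: dense, axis-aligned, resolution fixed up front.
voxels = np.zeros((256, 256, 256), dtype=np.float32)  # 64 MiB of mostly empty space
voxels[100:110, 120:130, 50:60] = 1.0  # a small solid block

# Gaussian splats: sparse particles, each with its own position, anisotropic
# shape, opacity, and view-dependent color (spherical harmonics).
rng = np.random.default_rng(0)
n = 10_000
splats = {
    "position": rng.uniform(0, 1, (n, 3)).astype(np.float32),
    "scale":    rng.uniform(0.001, 0.05, (n, 3)).astype(np.float32),
    "rotation": rng.normal(size=(n, 4)).astype(np.float32),  # quaternion (unnormalized here)
    "opacity":  rng.uniform(0, 1, (n, 1)).astype(np.float32),
    "sh_color": rng.normal(size=(n, 48)).astype(np.float32),  # degree-3 SH
}

print(f"voxel grid: {voxels.nbytes / 2**20:.0f} MiB at fixed resolution")
print(f"{n} splats: {sum(a.nbytes for a in splats.values()) / 2**20:.2f} MiB, detail only where needed")
```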
I'm not hating at all; it's amazing that we can compute these things nowadays. But I just cannot help imagining what possibilities we could achieve if we put all that effort and processing power into physics effects, interactive environments, reactive damage effects, particle effects, and artistic aesthetics, instead of mostly focusing on realism. We definitely need stealth games to come back and blend in the new advancements we have when it comes to lighting and all that.
The issue lies in the fact that the cost of buying processing power is shifted to the consumer, while the cost of developing the program falls on the developer.
This is why we often see less optimized, visually underwhelming games that still require the best GPUs available on the market; the savings on optimization are passed onto the consumer, who compensates with better hardware.
This also explains why we rarely see truly interactive games with unique systems outside of the indie scene.
Waait a minute. Gaussian splatting already did a pretty good job of capturing the specular reflections from the scanned environment.
ok these shots actually look like real life now :O
What about in VR? Possible? Less stress on the hardware? 🙏👍
So... how long until this makes it into some easy-to-use software package? Postshot seems like a good candidate...
I mean, in principle this is essentially just caching the traced paths, which is clearly no small feat, but it does compromise somewhat on the flexibility afforded by truly realtime RT: it will work great as you move around, but performance could hiccup when the lighting conditions change substantially. Nothing insurmountable, but it might need to be kept in mind by devs.
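A toy illustration of that trade-off (not the paper's actual method): cached shading results keyed on a hash of the light setup survive camera motion for free, but the whole cache gets flushed the moment the lights change, which is exactly where the hiccup would come from:

```python
import hashlib

def expensive_path_trace(point, lights):
    # Stand-in for real path tracing: inverse-square falloff from each light.
    return sum(intensity / (1e-3 + sum((p - q) ** 2 for p, q in zip(point, pos)))
               for pos, intensity in lights)

class RadianceCache:
    def __init__(self):
        self._lights_key = None
        self._cache: dict[tuple, float] = {}

    def _key_for(self, lights):
        return hashlib.sha256(repr(sorted(lights)).encode()).hexdigest()

    def shade(self, point, lights):
        key = self._key_for(lights)
        if key != self._lights_key:   # lighting changed:
            self._cache.clear()       # invalidate everything -> the hiccup
            self._lights_key = key
        if point not in self._cache:
            self._cache[point] = expensive_path_trace(point, lights)
        return self._cache[point]     # camera motion alone stays cheap

cache = RadianceCache()
lights = [((0.0, 2.0, 0.0), 1.0)]
cache.shade((1.0, 0.0, 0.0), lights)                     # miss: traced
cache.shade((1.0, 0.0, 0.0), lights)                     # hit: free
cache.shade((1.0, 0.0, 0.0), [((0.0, 3.0, 0.0), 1.0)])   # lights moved: cache flushed
```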
As a day-to-day CGI artist, I can't wait for them to bring this over to Blender. I'm working with ray tracing all day, so this will greatly improve my output each day! Can't wait for what this will bring.
I think AI filters will smash everything in the next few years
They're too intensive to be done in real time and too unstable to actually work.
This made me think of the video, which explained how water was generated in the movie ANTZ, back in 1998.
I would like to use this technique locally on my PC, but I can't find the code. Can anyone help me?
I think we might be close to revamping a lot of old pc titles. Like a turbo shader / wrapper on existing games without needing development
This tech is going to be incredible in VR!!!
Model basic scene and interactables in polygon data, then do all the details and simulation work in point cloud based computation. Best of both worlds?
How do I make Gaussian splats on my own hardware… I have a 4090
all the scenes seem stationary; can it handle dynamic scenes?
It looks like the blurry patches are in the periphery, so if you were playing a fast-paced game in realtime it might look like motion blur. In other words, depending on the application, they may not even need to fix it.
Would I put up with a bit of blur in order to have a game that looks like realistic 3D video? hell yes.
Where is your research merch?
"What a time to be alive"
"Hold on to your papers"
Lol I appreciate this type of commentary and coverage.
So what is this called?
I really hope we can one day see this technology implemented for videogames and I hope it gets done PROPERLY. Whether ultrarealistic graphics benefit a game obviously depends on the style of the game. But let's take a game like Dead Space or Dying Light. Games like this would greatly benefit from hyperrealistic graphics. Characters could become even more fleshed out by having much more organic movements, much more detailed faces and the environment would be a lot more immersive through ultra-realism. Here it would be beneficial. Additionally, if games eventually incorporate generative AI to dynamically generate voicelines and maybe even side quests (maybe with predetermined guidelines set by the devs for the AI) they could potentially achieve a completely new level of immersion and realism by being dynamic to a level that cannot be achieved through pre-made objectives, dialogues etc.
However, this will require a lot of work. 3DGS, and technologies based on it, are still in very early stages, and so is generative AI. If this tech wants to find its way into the game development world, it needs to come in the form of an engine like Unreal Engine with a similar or better featureset. Otherwise, if it's too different while not offering as much, it won't be picked up. It needs to be acceptable to make the switch while also gaining something from the switch. Same goes for potential generative AI that might get used one day in games: it needs to work well. That means it has to be trained on a lot of controlled, high-quality data in order to produce high-quality outputs. A lot will change here in the next, say, 15 years, and who knows, maybe in 15 years games will finally incorporate all these technologies. IF. THEY. ARE. DONE. PROPERLY. I'd love to see it.
Can you get an accent coach please
they are getting closer to raymarching, where geometry is represented purely by math. no particles or vertices needed, and it's super fast!
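For anyone curious, this is what a minimal sphere-tracing (SDF raymarching) loop looks like; the "geometry" here is literally just one math function:

```python
import math

def scene_sdf(p):
    """Signed distance to a unit sphere centered at the origin."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def sphere_trace(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene_sdf(p)
        if d < eps:
            return t      # hit: distance along the ray
        t += d            # safe step: nothing can be closer than d
        if t > max_dist:
            break
    return None           # miss

print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # ≈ 2.0, the front of the sphere
```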
I dream of the day Street View will use this kind of technology.
If it's particles, could they simulate a black hole inside a room so we can see it get torn apart??
I suspect this will be the method for converting realtime generative ai images into 3d rather than trying to create traditional game geometry.
As someone who knows about Gaussian splatting: it's an incredible tech. But I can't believe they are trying to merge 3DGS with RTX algorithms... It's the Holy Grail.
2:56 The metal bowl has Gaussian splatting artifacts :(
Job well done!
Every time I start my Windows computer and see a new landscape photo, I realise today's GPUs are still at a toddler stage.
The dude sounds like Text To Speech
Probably worse than TTS
World light updates handled?
bro i love your videos but can you please not pause talking every 2-3 words, it doesn't make it more interesting.
it's my one and only negative about this channel, for the rest i love your enthusiasm and work on bringing us graphical tech news.
I think it gives the speech some texture to grab onto.
how can this be implemented in games though?
Things are getting crazy in real time graphics
I'm starting to think that within 10-15 years we'll reach a point where changes in graphics quality between generations of GPUs are pretty much impossible to spot.
What is up with TMP and Nvidia anyway?
3:00 Funnily enough, I've eaten that exact same pasta, same brand, same shape 😂