NVIDIA’s New Tech: Next Level Ray Tracing!
- Published Oct 15, 2024
- ❤️ Check out Microsoft Azure AI and try it out for free:
azure.microsof...
📝 The "Amortizing Samples in Physics-Based Inverse Rendering using ReSTIR" is available here:
shuangz.com/pr...
Erratum: at 5:12, I should have said "has 100x lower relative error". Apologies! Removed that part of the video so you won't hear it anymore.
Andrew Price's Blender tutorials:
• Blender Tutorial for C...
📝 My paper on simulations that look almost like reality is available for free here:
rdcu.be/cWPfD
Or this is the orig. Nature Physics link with clickable citations:
www.nature.com...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: / twominutepapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Károly Zsolnai-Fehér's research works: cg.tuwien.ac.a...
Twitter: / twominutepapers
#nvidia
Erratum: at 5:12, I should have said "has 100x lower relative error". Apologies and thanks for the catch @thomasgoodwin2648! 🙏 Update: Removed that part of the video so you won't hear it anymore.
I was just wondering about that. Thanks for the correction! Amazing topic too!
100 x lower error is not what the text said, it was 20 x lower error. The 100 x was the speed. 🙂
@@etmax1 The only time-related words I can see are "At equal frame time," which means the same time in my book. So there are no statements about being slower or faster, as far as I could read.
@@321mumm Two Minute Papers acknowledged what I was saying and attributed it to thomasgoodwin2648 (presumably because they posted it first), so I suggest you look more closely at what I wrote, or look at thomasgoodwin2648's post.
Take a shot every time Nvidia releases "Next Level Ray Tracing"
Alcohol poisoning imminent
Consistent buzz
It came out in 2019-ish. I have been building gaming PCs since 1999; ray tracing came to gaming for everyone with the 2050 and 2060 series. I got a 2060 Super as soon as I saw it, before the crash for parts.
Take a shot every time Apple says they "innovated" something that already exists.
So how did you end up in anonymous alcoholic club? Well... here's my story...
@5:07 Actually, it reads "Up to 100x lower RELATIVE ERROR than baseline methods," not 100x faster. Still awesome, though.
🖖🙂👍
Right before that, it also says "in the same timeframe." So Two Minute Papers assumed a linear relationship between error and time and guessed that if you wanted the same quality, it would be 100x faster. Although I think that's unlikely.
I should have been a little more accurate there, good catch, thank you! Upvoted for visibility and added a note about it in the video description. Update: Removed that part of the video so you won't hear it anymore.
What a ray time to be traced alive!
⚡😂
Life has always been ray traced throughout history, yet it is only now we must come to terms with the fact the ray is inescapable.
Two more papers down the line would be next week, right?
You really think it'll take that long?
My mouth opened at the dragon modelling
That was "two papers up the line" (previous paper)
What a time to be alive!
What a time to be a ray!
Love getting traced.
This is a dream to me. I'm creating a world with more than 150 characters and more than 1000 buildings, all drawn by hand, and this... this is what I wanted. I can model; I'm an architecture student using Rhino, Grasshopper, etc., but it's just absolutely crazy what NVIDIA is doing. I hope this comes soon.
It was awesome when it came out for us on the 2060 series
Try to reduce the number of objects rendered to save on energy requirements if you can.
Rhino? Now that's a software name I've not heard in a long time.
And please, stop with the UA-cam-algorithm-pleasing jump cuts to videos that have nothing to do with what you're talking about. That was kind of disappointing.
ooh, what is your game? mine is called #NotSSgame, and I have a similar number of characters and buildings. I have some update vids about it. I hope to see more about your project soon!
You know what would make all these scenes even more realistic? Adding a bit of dirt everywhere 😂
IM REVERSE HOLDING ONTO MY PAPERS REALLY HARD SIR✋
Where is the previous video about how to control ChatGPT ?
the video was removed before I had a chance to watch it 😞
me too!
I apologize; the quality of the video wasn't really what you would expect from us, and thus we removed it.
@@TwoMinutePapers aaww okay 😞thank you for letting us know
I feel like thinking about this as a way to take existing 3D renders back to 3D meshes is impressive but an odd and narrow use case.
Seems to me that this is heading toward the ability to reconstruct scenes based off photographs - even things off camera based off shadows and reflections.
It’s heading toward a tool for blade runner type detective work.
Oh man, I love doing computer graphics. That's what I've dreamed of since I was a kid. It's really a bummer to see the artistic process being erased like this.
This voice, the video editing, and the script are all synthetic; that's magic! 😮
What a time to be alive! I can't wait to see if this will be used for forensic science where shadows of objects are reverse engineered to expand a video or image in greater detail and help solve cases!
6:13 What a revelation! I would have never thought you like papers.
At 5:09 the marked text says 100x less error rate, not 100x faster. Am I missing something?
Inverse Rendering? Screw video games. Do you know what that would do for SLAM and robotic navigation?
Is prism tracing possible? Follow not a single "line" but a genuine triangle (3 rays) mapped to the screen, find what areas it intersects, with each area contributing that percentage of the color. It would split into many sub-prisms instead of running a ray all the way from the start each time, so here you know whether a contribution is large or tiny.
- Then "reflections" are of the whole "triangle surface intersection," creating a new, wider prism that cares less about detail; possibly multiple.
- One would batch not the whole "path" of a ray but each "straight prism segment," then SORT all remaining segments by contribution (area of intersection with the screen), and repeat until time runs out, then somehow cheaply "guess" the rest.
It might need a different way to represent objects in the scene, but if it's possible, I really like this conceptually.
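The sorting-by-contribution idea in this comment can be sketched in a few lines. This is purely my own toy illustration (the `PrismSegment` type, the split ratios, and the depth cap are all made up): a max-heap keyed by estimated screen contribution processes the most important segments first, and when the budget runs out, the remainder is cheaply "guessed" by summing what is left on the heap.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PrismSegment:
    # heapq is a min-heap, so we store the NEGATIVE contribution
    # to always pop the largest-contribution segment first.
    neg_contribution: float
    depth: int = field(compare=False)
    color_weight: float = field(compare=False)

def trace_prisms(initial, budget):
    """Process straight prism segments in order of screen contribution.
    Each processed segment may split into child segments (a toy 2-way
    'reflection' split carrying half the parent's contribution).
    When the step budget runs out, remaining segments are approximated
    by a flat sum instead of being traced."""
    heap = list(initial)
    heapq.heapify(heap)
    accumulated = 0.0
    steps = 0
    while heap and steps < budget:
        seg = heapq.heappop(heap)
        contribution = -seg.neg_contribution
        accumulated += contribution * seg.color_weight
        # Split into sub-prisms only while the contribution matters.
        if seg.depth < 4 and contribution > 1e-3:
            for frac in (0.6, 0.4):
                heapq.heappush(heap, PrismSegment(
                    neg_contribution=-contribution * frac * 0.5,
                    depth=seg.depth + 1,
                    color_weight=seg.color_weight))
        steps += 1
    # Cheap "guess" for whatever we had no time to trace.
    accumulated += sum(-s.neg_contribution * s.color_weight for s in heap)
    return accumulated
```

Starting from one full-contribution segment, e.g. `trace_prisms([PrismSegment(-1.0, 0, 1.0)], budget=10)`, the total stays bounded because every split hands the children only half the parent's contribution.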
This is a really good thing. I always wanted to be a fiction writer, but I suffer from dyslexia. With the help of AI, I'm now well on my way to completing my first novel. It's important to note that AI is a tool that allows me to bring my thoughts and ideas into the world. But it doesn't simply spit out the work. I still spend many hours planning, developing, guiding, tweaking, and editing the entire process. These tools give me the ability to create in ways I could never dream of before. So, I think it's possible that AI will be used to empower new artists who previously faced some physical or mental disability that prevented them from creating in that space before. I believe this technology will create numerous new artists.
Thanks for including the legendary Andrew Price.
Seeing real-world simulations become more accurate as well as getting faster makes me wonder if P really does equal NP.
How does it compare to Mitsuba?
The Gigachad, Way2 Dank and Copege drinks in the first few scenes are hilarious
I'm picturing our grandkids using this thing to casually create games as easily as we doodle, and them being in awe that we were ever smart enough to write the code for games ourselves from scratch.
This sounds like an absolute winner for computed tomography.
What's the ray tracing in this?
At 04:05, an example is given where the shadow is the input and the method reconstructs the object from it. The shadow even moves, making the object move in accordance with it; hence it's some sort of "reverse ray tracing" effect.
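The "reverse ray tracing" effect described here is inverse rendering: optimize scene parameters until a forward render reproduces the observation. A minimal sketch of that loop (my own toy example, not the paper's method): recover an object's height from the length of its shadow by gradient descent on the squared error between the predicted and observed shadow.

```python
import math

def shadow_length(height, sun_elevation_rad):
    """Forward model: a vertical object of the given height casts a
    shadow of this length on flat ground."""
    return height / math.tan(sun_elevation_rad)

def recover_height(observed_shadow, sun_elevation_rad,
                   lr=0.05, steps=500):
    """Inverse rendering in miniature: start from a guess and use
    gradient descent on the squared error between the 'rendered'
    (predicted) shadow and the observed one."""
    height = 1.0  # initial guess
    inv_tan = 1.0 / math.tan(sun_elevation_rad)
    for _ in range(steps):
        predicted = height * inv_tan
        residual = predicted - observed_shadow
        grad = 2.0 * residual * inv_tan  # d(residual^2)/d(height)
        height -= lr * grad
    return height
```

With the sun at 45 degrees, tan is 1, so a shadow of length 3 implies a height of 3; `recover_height(3.0, math.pi / 4)` converges to roughly 3.0. The actual paper optimizes far more parameters (geometry, materials) through a differentiable path tracer, but the optimization loop has this same shape.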
Whoa! this is game changer!
"Enhance 15 to 23. Give me a hard copy right there"
Wow just incredible.
I like this idea. I have been collecting images from beautiful or interesting places with the goal of someday using technology like this on them.
What a time to be alive!
It is much closer to it than the research paper from two years ago that went through how to generate a 3D object from a 2D image. It kind of got it, kind of didn't: the part it couldn't get correct was a dip in the top of the 3D object that nothing passed through. Based on this, I think that example will still be impossible.
Is your voice generated by AI? Because it sure sounds like it.
0:11 Blender donut man sighted, community engagement in progress....
(note: making fun of chat not accusing 2minpapers)
What a wonderful time to be alive.
This is similar to what the Quest 3 VR headset does when scanning your room and creating a 3D mesh from it.
That would be amazing. It's a dream to make something with my hands and have that modelled into a game... with just a picture, I can do that. Wow.
This is unrelated but- I was thinking for text to video being easy and intuitive to pose and animate characters.
You could use something like controlnet with key frames, so you can use it for videos posing and movements of a character in the text to video generation.
I don't think anyone has done this, I would love to see it done.
Imagine posing your character just by dragging a stickman around then hitting the generate image button.
Easy posing if someone pulls this off.
what happened to your 'can AI be controlled' video?
I'm so curious now. What if you put in a non-linear geometry as the photo? Like an illusion.
2:26 what paper is that?
I'd really like to see a machine learning model completely replace the rendering process. Imagine if you gave the model the textures, materials, and geometry information, and it could generate an image that appears as if it was rendered with a slow path tracer. That could make ALL path tracers obsolete, and also make gaming far better if it could run in real time.
The future is diffusion ? Once consistency is achieved you just diffuse the frames
@@raymond_luxury_yacht No, not really. I think that diffusion would be too slow to run in real time. I thought of something like Neural Control Variates, which was also covered on this channel.
ua-cam.com/video/yl1jkmF7Xug/v-deo.html
The only problem is that this AI model is not available to the public.
0:53 Jam a man of fortune and J must seek my fortune - xQc
Wouldn't it be better to have a video of the place, and it renders a 3D scene?
How do magnets work?
Picture to 3d modeling is huge for 3d printing.
Soon I will not be texturing my models. Nvidia will do it for me. What a time to be alive!
Donut 5.0’s gonna be a real short video
Kinda like a morphing geometric version of Gaussian Splatting?
4:18 Human beings specialized in X-ray crystallography (which is basically reconstructing molecules from their shadows) would doubt that. For example, Dorothy Hodgkin established the structure of vitamin B12 solely by hand, without using computers (and got a Nobel Prize for that).
Cool, but it is hard (especially in the context of the game making OP mentions at the beginning) to come up with creative uses for this. I get the situation where you've had a hardware disaster and lost a lot of data (3D models and materials included), but some backup with screenshots of the models survived, so you automatically re-make the models. But how would you use it in a constructive rather than a reconstructive way?
It would accelerate the process of going from concept art -> usable game asset by potentially providing a good starting point.
@@somdudewillson Yeah, I thought about it a couple of seconds after posting. Designers make character/object designs, give them to 3D artists, who use something like this to make 3D models fast and then just touch them up here and there.
@@korinogaro yes, but IN THE FUTURE there will be no concept artists, because we don't need them anymore! 🤓More than that, we will not need any human anymore! An AI for generating prompt, the next AI is generating art from prompt, the next one making models from this art and so on! Wow, what a time to be alive! 😎
It’s time we start asking the real questions, what will ray tracing look like 3 papers down the line… exactly the same? Probably
What's the 'Revolt' looking game? Would love to play that!
Yes! Amazing!
I can imagine a ton of use-cases of this
Imagine doing text to image to 3d scene
coming soon: create a movie/video game from text 😂
It already existed in 2023, for commercial use. It was shown on video a year or so ago... iykyk
@@KP-bi6px That exists too, like I said above: 2023. It does exist, in its first stages, and was shown on video if you dig through this kind of nonsense pushed to the top.
All of the above is on video and exists... this guy shows the mainstream.
@@dertythegrower ah
I really thought you were going to do reverse ray tracing: trace all the rays from a photograph by looking at the materials, the glass, and the reflections, and reverse the ray tracing. It looked like you were going to show that, but then... Oh well, next paper.
I'll eat my shoe if this guy can finish a sentence without awkwardly pausing every 3 words.
It’s the spice of life
Im going outside with my robot.
😮😮😮😮😮Best show ever 😁
I'm getting raybumps...
It is theoretically impossible to model the invisible side of an object from a single image.
What makes you say that? A human can do it by understanding the context behind the image.
(If you've seen what a desk lamp looks like, then you can figure out what the back side of it likely looks like, with a high degree of accuracy) AI works in a similar fashion.
@@OGPatriot03 You will never be sure about an invisible side; you can only guess. How do you know that, this time, the desk lamp looks different? There is not even an argument about this; it is just impossible. If a guess is sufficient for you, then that is fine. Or you can use more images.
Thank you so much for the donut reference
In my ideal future, we only need a 360° video of an object that we want to "scan" to make a 3D model with all of its material properties, and after that, we need just one image to get the same results. That would be cool and EZ Clap.
If you have a 360° video of an object, you can already reconstruct geometry and materials. Photogrammetry can do that, and it's been around for a while.
@@somdudewillson Not that thing. xd
Something like simple turnaround videos without a bunch of cams. I should choose my words more clearly next time. Noted.
This is amazing 😮
THATS FKING CRAZY
THATS CRAZY
Ok, I'm ending my blender subscription now
Blender has subscriptions??? I haven't kept up with it. Last I knew, it was free.
Edit: part of me wonders if that was a joke to make fun of other apps.
@@goldenheart Oh, I'm surprised as well. Perhaps for fast rendering on distant servers?
lumen needs that
Do stimpy next pls!
Nvidia got some next level stuff every other day it feels like
so this is what they are going to make exclusive to rtx 5000 series cards...
All this great AI and you can't get a smooth talking bot for this video?
16 minutes to recreate a bush from a shadow that you could create in Houdini in 30 seconds. Great: four bushes per hour; I am sure someone needs that somewhere. It's not the miracle cure it's being hyped as. Also, we didn't see the back side of the dragon. Is it amazing that you could reconstruct a scene from a photo? Sure, that sounds amazing. OK, fine: provide many photo references of all sides and, bam, model, textures, etc. Still, it just changes the creative craft of sculpting and modeling into photo taking and finding source material. Then what? You are going to find the same images most people do on Google and end up with the same models everyone uses? I suppose the saving grace is that concept artists will become more in demand, since they can truly create original ideas from many angles from their minds, which could be fed to a machine to reconstruct in 3D. I am betting that aspect will actually be beneficial to the job market. Otherwise, you'll give up the craft of modeling in favor of picture-taking or image-searching time. BUT... yes, it is amazing that reconstruction is possible this way, but it isn't an end-all cure or a replacement for design or creative direction.
More realistic, more performance-hungry. More sales of 4090-class cards. I hope they make use of these techniques to help lower the performance requirements for RT workloads.
The 5080 will probably beat the performance of the 4090, and the 6070 will probably beat the 5080. So in 10 years a 6070 will be more than capable of full ray tracing
So if you take an animated hypercube, which is the 3D projection of a 4D hypercube, would this be able to reconstruct the 4D cube in some way? Getting us closer to visualizing higher dimensions is a worthwhile effort.
Nope. We know exactly what 4D hypercubes look like mathematically. It's just that it can't be projected in reality because it requires information that doesn't exist in our universe, i.e. a fourth spatial dimension. No AI will ever fix that.
Hearing this narrator was worse than nails scraping a chalkboard. 20 seconds and I'm gone.
The point in time when humans reach 100% obsolescence draws nigh 😢
I'm calling bullshit at 4:37. The information about the height of the octagonal prism is not contained in that shadow. It's not possible to match the height like that. There's some fuckery going on here.
Just imagine: with enough compute, this being run on Google Maps and all the photos people have shared... and all the CCTV cameras and all the autonomous vehicle cameras... a live 3D model of the world... coming 'soon'.
2:25 Holy balls how can I access this?
RIP 3D modelers
Nice to see you m8. 🙂
Glad I left the industry a couple of years ago; there's just nothing challenging/rewarding left anymore.
My friends, the Man who speaks on this site, saying good things of hope and love, is not a bad man. Listen to which is good, and forget the bad. We only call that which we do not understand, Evil. But the myriad ways and methods of the Creator, are full of mystery. So go forth, Love, Empathise, and Do good. Do only Good, not Evil. Do not hate yourselves, or the others around you, even if they seem weird, or strange. These are the Words of the Creator, and if you abide by them, you shall be saved. I am only blessed enough, and lucky enough, to be the Creator's messenger.
My wish comes true
Let me get ray tracing I can run please 🙏!!! That would be next level.
Awesome...
why do you sound like you're trying to hype up every single sentence
What a time to be alive.
Or
What a time to be AI.
Oh no, I've seen your face! 😮😅 Nothing scary, just weird to see a face you've heard so many times but have no idea what it looks like. It never matches what you expect. Nice to meet you! lol 😊
@@priyeshpv Thanks! Watching again, I can see different words being said, though some basic movement is kind of in line with his speaking.
Is his voice ai generated or what? Why does it sound so weird, why does he stop all the time for half a second??
Watch his video from five years ago and tell me if it's AI... idiot.
I hope youtube allows for voice filters.... I'm tired of your voice
As a 3d generalist I'm not loving any of this.
8 GB VRAM
Let's goooo
my goodness
And here we go again, yet another video about Nvidia Ray Tracing. No thank you.
Guys, video game studios and their greedy practices are about to become obsolete. Indie devs will take over.
Oh, how I hope this comes true!
*laughs in parallels with film history*