How the Kinect Depth Sensor Works in 2 Minutes
- Published 14 Oct 2024
- The Kinect uses a clever combination of a cheap infrared projector and camera to sense depth.
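The projector-plus-camera setup works by triangulation: the two sit a fixed baseline apart, so the horizontal shift (disparity) of each projected dot encodes its depth. Here is a minimal sketch of that relationship; the focal length and baseline values are illustrative placeholders, not the Kinect's real calibration.

```python
def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Depth in metres from disparity in pixels (pinhole camera model).

    depth = focal_length * baseline / disparity
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these assumed numbers, a dot shifted by 30 px sits about 1.45 m away.
print(depth_from_disparity(30.0))
```

Note how depth is inversely proportional to disparity: nearby dots shift a lot, far dots barely move, which is why depth resolution degrades with distance.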
References:
• Video
www.google.com/...
campar.in.tum.d... (p. 33)
en.wikipedia.or... (Stereo triangulation)
I can hardly find a word to express how much you have helped in my research study using kinect.
Thank you!
Glad it helped! The best thanks is a link to this video from a website or forum.
I believe the important thing about the pattern is that it's random, so that the camera can differentiate between groups of speckles. The broader term for this is "structured light". Google Structured-light_3D_scanner
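This comment's point can be sketched in code: the camera matches a small window of observed dots against the known reference pattern along a scanline, and an irregular pattern makes the best match unambiguous. A toy pure-Python version, using a made-up 1-D "dot pattern" and a simple correlation score:

```python
def best_match_offset(reference, window):
    """Slide `window` along `reference`; return the offset with the
    highest correlation (dot-product) score. With an irregular
    reference pattern, the true offset scores uniquely highest."""
    n = len(window)
    best_off, best_score = 0, float("-inf")
    for off in range(len(reference) - n + 1):
        segment = reference[off:off + n]
        score = sum(a * b for a, b in zip(segment, window))
        if score > best_score:
            best_off, best_score = off, score
    return best_off

reference = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]  # irregular "speckle" row
window = reference[5:9]                            # patch the camera sees
print(best_match_offset(reference, window))        # recovers offset 5
```

If the reference pattern repeated itself, several offsets would score equally well and the match (hence the disparity, hence the depth) would be ambiguous.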
I've checked your channel and can confirm, you're a genius.
Good clarification. I should have said irregular pattern instead of random. I wonder if the dot pattern is the same on each unit, or if they're each calibrated to their own pattern.
incredibly well explained and simple video.
SketchUp, Bamboo Tablet, Serif DrawPlus, CamStudio. All the drawing scenes are usually sped up during the editing process.
Thanks, you're a good presenter. Simple & Concise :)
Totally guessing here, but I suspect there's some variance in the manufacturing, and that each unit gets calibrated at the factory.
Please let me know how the Kinect comes to know the angle of the speckle pattern?
Good one. Can you tell me what software you are using for the drawings?
Good! Easy to understand the theory. Can you tell me how I can use it with MATLAB?
One way to use 2 Kinects is to have one Kinect shaking side to side while the other one is still. The dots of the moving camera will look stationary to that camera while the other camera's dots are blurred and vice versa. V Motion Project did this. They also used one computer for each camera.
Very clear and concise. Great ! Thanks !
How does an IR sensor help to calculate depth better than a secondary camera would? Can you please explain that part again?
So you're saying the lights are somehow randomized (the LEDs are moved somehow), and then it's recalibrated? No. The pattern is predetermined. It might have been random at some point, but I doubt it.
Thanks for your knowledge
Do all units have the same fixed speckle pattern, or is it learned after it's created?
Do you know if the Asus Xtion series of depth sensors work the same way? Would they have the same limitation of a single sensor in a room?
So does the camera recognise each part, i.e. the sectors in the red grid example, via unique speckle clusters?
Fantastic! I knew there was a reason I subscribed!
You can use multiple Kinects; the main problem is that the USB bandwidth is too high for two Kinects on one computer.
But let's say I were to take the cameras out of the Kinect and change the distance between them. That would affect the angle, right? So it wouldn't be able to recreate the image?
Well, this is basically for a 3D scanner.
Good! Easy to understand the theory!!
Thank you very much! This was a very helpful video!!! ;-)
It's not random. It appears random but the device has to be aware of the pattern it is casting.
Wonderful, thank you for the explanation.
Thank you!
Thank you
thank you! that was great!
There is no such limitation with either...
Cool
subbed!
I thought it was a time-of-flight lidar.
+Qinggeng Zhuang The new one is ToF; the old one uses triangulation.
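The time-of-flight principle mentioned here is simple to state: light travels out to the scene and back, so depth is half the round-trip time multiplied by the speed of light. A minimal sketch (real ToF sensors like the newer Kinect's measure the phase shift of modulated light rather than timing raw pulses):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth_m(round_trip_s):
    """Depth in metres from a round-trip light travel time in seconds."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(tof_depth_m(10e-9))
```

The tiny times involved are why ToF needs phase-based measurement: a 1 cm depth change is only about 67 picoseconds of round-trip difference.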
***** That doesn't work; just like before, the two Kinects would confuse each other and wouldn't be able to triangulate points.
What if I told you you don't need a depth sensor or any software? Well, I just did. But will I tell you how? That's a billion-dollar answer, but I'll take a couple hundred million. My name is not 4D for no reason.