MIT-AVT: Data Collection Device (for Large-Scale Semi-Autonomous Driving)
- Published Sep 12, 2024
- MIT-AVT is a large-scale semi-autonomous driving study aimed at understanding how human-AI interaction in driving can be safe and enjoyable. The emphasis is on objective, data-driven analysis through large-scale real-world driving data collection and deep learning based parsing of that data.
Link: hcai.mit.edu/avt
Paper: arxiv.org/abs/...
CONNECT:
- AI Podcast: lexfridman.com...
- Subscribe to this YouTube channel
- LinkedIn: / lexfridman
- Twitter: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Slack: deep-mit-slack...
Amazing. I've been waiting for a detailed hardware description video for a long time.
Yeah, but I don't think they said anything about the kind of sensors they used or how they developed them.
scientific toaster is my new favorite phrase
Excellent video. I was waiting to hear about calibration topics as well including calibration validation, expiration, etc.
And you mentioned the Jetson TX2. Why don't you use open-source contributors to expedite these kinds of next steps?
People can bring a lot of expertise in this area.
I like how Lex talks lazily.
One still has the option of using a mini-ITX board with a 12-volt power converter, allowing the use of a more powerful Nvidia card like a Titan Xp with a higher CUDA core count for deep learning apps, along with an Intel i9 processor and Optane memory. Texas Instruments has its multi-camera RT-RK Automotive Machine Vision Alpha Reference Board, which supports all relevant operating systems and industry standards, e.g. Linux, ISO 26262, Ethernet, FlexRay, MOST, HTML5, GENIVI. Unfortunately, this is a costly solution, but still cheaper than comparable systems. After the recent Uber driverless-car death, system cost may not be the main issue. I do think the presenter's system would work well with my delivery robot drone plans, and it appears to be a cost-effective test solution somewhat different from Comma AI's or Mobileye's approach.
If the car could open the door by itself when the thermal camera sees that a dog has been left inside alone on a hot day, that would be very useful.
LOVE IT!
Hi guys, nice video, thanks! Regarding edge cases, I have something in mind: why don't you also gather data during more extreme situations like Formula 1, DTM, or other high-pace driving events where a lot of steering, accelerating, and braking happens? Maybe with these kinds of data the learning algorithm becomes more robust against dangerous situations. Greetings from Berlin, Germany.
Awesome, and very detailed data collection explanation. Sharing this on LinkedIn and Twitter. :) Bye 🐟😀
I feel like, with the Huawei internet connection, the Chinese have your data as well.
Great implementation!
I thought Huawei was banned in the US.
How can I make such a box myself? Can I find the data-cleaning algorithm on GitHub? Where can I find any examples of the ML code showing how you process the data afterwards and fit it into models?
Was an at-home upload considered, e.g. a USB cable to read the SSD (or does that already happen / is hosting bandwidth the constraint)?
Are the videos stored as raw, lossless H.264?
Why is the remote server needed?
To process and analyze the data. You can't do that locally; there isn't enough GPU power to run those machine learning algorithms on such massive video-frame datasets.
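The point about GPU power can be made with a rough back-of-the-envelope sketch. Every number below (fleet hours, camera count, per-frame inference time) is my own assumption for illustration, not a figure from the video:

```python
# Back-of-the-envelope: can one embedded board keep up with deep-learning
# analysis of a large driving-video dataset? All numbers are assumptions.

hours_collected = 10_000   # assumed total hours of video across the fleet
fps = 30                   # frames per second per camera
cameras = 3                # assumed cameras per vehicle

frames = hours_collected * 3600 * fps * cameras

per_frame_ms = 50          # assumed CNN inference time on an embedded GPU
gpu_days = frames * per_frame_ms / 1000 / 86400

print(f"{frames:,} frames -> ~{gpu_days:,.0f} GPU-days on one embedded board")
# With these assumptions: 3,240,000,000 frames -> ~1,875 GPU-days
```

Even with generous assumptions, the total lands in the thousands of GPU-days, which is why parsing happens on remote servers with many GPUs rather than on the in-car box.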
What kind of sensors were used?
Jetsons are very inexpensive and a smart decision; they open up many possibilities: hardware encoder, reliability, future expansion, etc. Maybe get Nvidia sponsorship?
SIGMA Intégrale I was thinking the same