Use Case: Mobile ALOHA, based on the AgileX Tracer mobile base
- Published October 6, 2024
- Introducing Mobile ALOHA 🏄 -- Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation!
With 50 demos, our robot can autonomously complete complex mobile manipulation tasks:
- cook and serve shrimp 🦐
- call and take an elevator 🛗
- store a 3 lb pot in a two-door cabinet
- push 5 consecutive chairs
- rinse a pan using a water faucet
- high-five people
Co-led by Tony Z. Zhao, Chelsea Finn
Our robot handles these tasks consistently:
- 9 successes in a row on Wipe Wine
- 5 successes on Call Elevator
- robust to distractors on Use Cabinet
- generalizes to chairs unseen during training
How do we achieve this with only 50 demos? The key is to co-train the imitation learning policy on static ALOHA data alongside the mobile demonstrations. We found this consistently improves performance, especially on tasks that require precise manipulation.
Co-training (1) improves the performance across all tasks, (2) is compatible with ACT, Diffusion Policy and VINN, (3) is robust to different data mixtures.
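To make the co-training recipe concrete, here is a minimal PyTorch sketch of one way to mix static ALOHA and Mobile ALOHA demonstrations in a single dataloader. The function name `make_cotraining_loader`, the 50/50 sampling split, and the dataset objects are illustrative assumptions, not the released Mobile ALOHA code (see the imitation-learning repository linked below for the actual implementation).

```python
# Minimal co-training sketch (PyTorch). The dataset objects, field
# layout, and the 50/50 mixture ratio are illustrative assumptions,
# not the released Mobile ALOHA code.
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def make_cotraining_loader(mobile_dataset, static_dataset, batch_size=8):
    """Draw each batch from both mobile and static ALOHA demos.

    Per-sample weights give each dataset equal total probability mass,
    so roughly half of every batch comes from the (smaller) mobile set
    and half from the static ALOHA data.
    """
    combined = ConcatDataset([mobile_dataset, static_dataset])
    weights = torch.cat([
        torch.full((len(mobile_dataset),), 0.5 / len(mobile_dataset)),
        torch.full((len(static_dataset),), 0.5 / len(static_dataset)),
    ])
    sampler = WeightedRandomSampler(weights, num_samples=len(combined))
    return DataLoader(combined, batch_size=batch_size, sampler=sampler)

# Usage: train any imitation policy (e.g. ACT) on the mixed batches.
# loader = make_cotraining_loader(mobile_demos, static_demos)
# for batch in loader:
#     loss = policy.compute_loss(batch)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Changing the two `0.5` masses is one way to probe the paper's claim that co-training is robust to different data mixtures.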
We open-source all the software and data of Mobile ALOHA!
Project Website 🛜: lnkd.in/gE6A43fR
Code for Imitation Learning 🖥️: lnkd.in/gDCmgy_E
Data 📊: lnkd.in/gCJJtmvT
#AgileXRobotics #AI #UGV #AGV #Tracer #MobileALOHA
Why can't we see the whole robot? Is it being controlled by someone? The time lapses and quick zooming in and out induce motion sickness, too.
How much does it cost?
How do you make one?