Deepak Pathak
  • 42 videos
  • 173,515 subscribers
Neural MP: A Generalist Neural Motion Planner
Webpage: mihdalal.github.io/neuralmotionplanner/
Authors: Murtaza Dalal*, Jiahui Yang*, Russell Mendonca, Youssef Khaky, Ruslan Salakhutdinov, Deepak Pathak
Abstract: The current paradigm for motion planning generates solutions from scratch for every new problem, which consumes significant time and computational resources. For complex, cluttered scenes, motion planning approaches can often take minutes to produce a solution, while humans are able to accurately and safely reach any goal in seconds by leveraging their prior experience. We seek to do the same by applying data-driven learning at scale to the problem of motion planning. Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, then distills it into a reactive generalist policy. We then combine this with lightweight optimization to obtain a safe path for real-world deployment. We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse real-world environments with randomized poses, scenes, and obstacles, demonstrating improvements of 23%, 17%, and 79% in motion planning success rate over state-of-the-art sampling-based, optimization-based, and learning-based planning methods.
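The collect-then-distill recipe in the abstract can be illustrated with a deliberately tiny sketch. Everything here is a hypothetical stand-in: the "expert" is a toy proportional controller and the distilled policy a linear least-squares fit, whereas the actual system uses a sampling-based motion planner and a deep network trained on large-scale simulation data.

```python
import numpy as np

# Toy illustration of distilling an expert into a reactive policy via
# behavior cloning. The "expert" simply moves half the remaining
# distance to the goal each step, so its action is an exactly linear
# function of (state, goal) and least squares can recover it.

def expert_step(state, goal):
    """Toy expert: act toward the goal (stand-in for a motion planner)."""
    return 0.5 * (goal - state)

def collect_dataset(n_problems=200, dim=2, horizon=8, seed=0):
    """Roll out the expert on random problems, logging (state, goal) -> action."""
    rng = np.random.default_rng(seed)
    X, Y = [], []
    for _ in range(n_problems):
        state, goal = rng.uniform(-1, 1, dim), rng.uniform(-1, 1, dim)
        for _ in range(horizon):
            action = expert_step(state, goal)
            X.append(np.concatenate([state, goal]))
            Y.append(action)
            state = state + action
    return np.array(X), np.array(Y)

# "Distill": fit a single linear policy to the expert data.
X, Y = collect_dataset()
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def policy(state, goal):
    """Reactive policy: one matrix multiply per control step."""
    return np.concatenate([state, goal]) @ W

# Closed-loop rollout of the distilled policy on a new problem.
state, goal = np.zeros(2), np.array([0.9, -0.4])
for _ in range(20):
    state = state + policy(state, goal)
```

In the paper the expert is a classical planner run in many cluttered simulated scenes, the policy is a neural network, and a lightweight optimization step filters the output for safety at deployment; none of that scale is reflected in this sketch.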
Music by Vincent Rubinetti (courtesy @3Blue1Brown)
Download the music on Bandcamp:
vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown
Stream the music on Spotify:
open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u
Views: 563

Videos

Legged Robot performing Extreme Parkour [More Results]
5K views • 1 year ago
Check out the Music Video Version: ua-cam.com/video/cqvAgcQl6s4/v-deo.html Short summary on Twitter: pathak2206/status/1706696237703901439 Paper and Code: extreme-parkour.github.io/ Extreme Parkour with Legged Robots Authors: Xuxin Cheng*, Kexin Shi*, Ananye Agarwal, Deepak Pathak TLDR: A low-cost robot does extreme parkour including high jumps on obstacles 2x its height, long jumps...
Extreme Parkour with Legged Robots via a Single Neural Net
35K views • 1 year ago
Short summary on Twitter: pathak2206/status/1706696237703901439 UA-cam video with even more results: ua-cam.com/video/cuboZYHGiMc/v-deo.html Paper, code, and analysis: extreme-parkour.github.io/ TLDR: A low-cost robot does extreme parkour including high jumps on obstacles 2x its height, long jumps across gaps 2x its length, handstand on stairs, and running across tilted ramps. Autho...
LEAP Hand: Low-Cost, Anthropomorphic, Multi-fingered Hand [RSS 2023]
4.2K views • 1 year ago
Project Page: leap-hand.github.io/ TL;DR: Low-cost, human-like motorized hand for robot learning. Costs under 2K USD and can be assembled by hand. Authors: Kenneth Shaw, Ananye Agarwal, Deepak Pathak. Affiliation: Carnegie Mellon University Abstract: Dexterous manipulation has been a long-standing challenge in robotics. While machine learning techniques have shown some promise, results have larg...
Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion
17K views • 1 year ago
Webpage: robot-skills.github.io Authors: Xuxin Cheng, Ashish Kumar, Deepak Pathak Published at ICRA 2023 Locomotion has seen dramatic progress for walking or running across challenging terrains. However, robotic quadrupeds are still far behind their biological counterparts, such as dogs, which display a variety of agile skills and can use the legs beyond locomotion to perform several basic mani...
Internet Explorer: Targeted Representation Learning on the Open Web
1.9K views • 1 year ago
Website: internet-explorer-ssl.github.io/ Authors: Alexander C. Li*, Ellis Brown*, Alexei A. Efros, Deepak Pathak Carnegie Mellon University, UC Berkeley Abstract: Modern vision models typically rely on fine-tuning general-purpose models pre-trained on large, static datasets. These general-purpose models only capture the knowledge within their pre-training datasets, which are tiny, out-of-date ...
Curiosity-driven Robots in the Real World
5K views • 1 year ago
Project Website: robo-explorer.github.io/ Title: ALAN: Autonomously Exploring Robotic Agents in the Real World Authors: Russell Mendonca, Shikhar Bahl, Deepak Pathak Published at ICRA 2023. Affiliation: Carnegie Mellon University Project Abstract: Robotic agents that operate autonomously in the real world need to continuously explore their environment and learn from the data collected, with min...
Legged Locomotion in Challenging Terrains using Egocentric Vision
7K views • 1 year ago
Legged Locomotion in Challenging Terrains using Egocentric Vision Ananye Agarwal*, Ashish Kumar*, Jitendra Malik^, Deepak Pathak^ Carnegie Mellon University UC Berkeley Published at CoRL 2022 as Oral Presentation. Webpage: vision-locomotion.github.io/ Abstract: Animals are capable of precise and agile locomotion using vision. Replicating this ability has been a long-standing goal in robotics. T...
Deep Whole-Body Control: End-to-End Learning in Legged Manipulator Robots
6K views • 1 year ago
Website: manipulation-locomotion.github.io/ Authors: Zipeng Fu*, Xuxin Cheng*, Deepak Pathak (Carnegie Mellon University) Published at CoRL 2022 (Oral) Abstract: An attached arm can significantly increase the applicability of legged robots to several mobile manipulation tasks that are not possible for the wheeled or tracked counterparts. The standard control pipeline for such legged manipulator...
WHIRL: Human-to-Robot Imitation in the Wild. Published at RSS 2022.
12K views • 2 years ago
Project Website: human2robot.github.io Authors: Shikhar Bahl, Abhinav Gupta, Deepak Pathak Affiliation: Carnegie Mellon University Project Abstract: We approach the problem of learning by watching humans in the wild. While traditional approaches in Imitation and Reinforcement Learning are promising for learning in the real world, they are either sample inefficient or are constrained to lab sett...
RSS 2022 talk by Shikhar Bahl. WHIRL: Human-to-Robot Imitation in the Wild.
1.6K views • 2 years ago
Project Website: human2robot.github.io Authors: Shikhar Bahl, Abhinav Gupta, Deepak Pathak Affiliation: Carnegie Mellon University Project Abstract: We approach the problem of learning by watching humans in the wild. While traditional approaches in Imitation and Reinforcement Learning are promising for learning in the real world, they are either sample inefficient or are constrained to lab sett...
CVPR 2022: Predicting 3D shape and correspondence from Single 2D Image
2K views • 2 years ago
CVPR 2022 paper on "Topologically-Aware Deformation Fields for Single-View 3D Reconstruction" Webpage: shivamduggal4.github.io/tars-3D/ Authors: Shivam Duggal and Deepak Pathak Abstract: We present a framework for learning 3D object shapes and dense cross-object 3D correspondences from just an unaligned category-specific image collection. The 3D shapes are generated implicitly as deformations t...
Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube
3.2K views • 2 years ago
Project Page: robotic-telekinesis.github.io TL;DR: Leveraging passive YouTube data has lots of potential for robotics! We present Robotic Telekinesis: trained only on passive videos, it enables tele-operating a dexterous hand-arm robot from a single camera without any calibration/gloves/markers/VR. Authors: Aravind Sivakumar, Kenneth Shaw, Deepak Pathak Abstract: We build a system that enables a...
RSS 2021: Hierarchical Neural Dynamic Policies
755 views • 3 years ago
Accepted at RSS 2021 Website: shikharbahl.github.io/hierarchical-ndps/ Abstract: We tackle the problem of generalization to unseen configurations for dynamic tasks in the real world while learning from high-dimensional image input. The family of nonlinear dynamical system-based methods have successfully demonstrated dynamic robot behaviors but have difficulty in generalizing to unseen configura...
RMA: Rapid Motor Adaptation for Legged Robot
3.8K views • 3 years ago
Research paper accepted at RSS 2021. Project Website: ashish-kmr.github.io/rma-legged-robots/ Abstract: Successful real-world deployment of legged robots would require them to adapt in real-time to unseen scenarios like changing terrains, changing payloads, wear and tear. This paper presents Rapid Motor Adaptation (RMA) algorithm to solve this problem of real-time online adaptation in quadruped...
ICML 2021: Unsupervised Learning of Visual 3D Keypoints for Control
680 views • 3 years ago
ICML 2021: Unsupervised Learning of Visual 3D Keypoints for Control
ICRA 2021: Auto-Tuned Sim-to-Real Transfer
2K views • 3 years ago
ICRA 2021: Auto-Tuned Sim-to-Real Transfer
Worldsheet: View Synthesis from a Single Image
6K views • 3 years ago
Worldsheet: View Synthesis from a Single Image
NeurIPS 2020: Neural Dynamic Policies (NDPs)
1.7K views • 3 years ago
NeurIPS 2020: Neural Dynamic Policies (NDPs)
ICML 2020 Oral Talk: Planning to Explore via Self-Supervised World Models
2.2K views • 4 years ago
ICML 2020 Oral Talk: Planning to Explore via Self-Supervised World Models
ICML 2020 Oral Talk: One Policy to Control Them All
2.3K views • 4 years ago
ICML 2020 Oral Talk: One Policy to Control Them All
One Policy to Control Them All
2.6K views • 4 years ago
One Policy to Control Them All
Planning to Explore via Self-Supervised World Models
3.4K views • 4 years ago
Planning to Explore via Self-Supervised World Models
Learning to Generalize Self-Assembling Agents [with audio narration]
4.5K views • 5 years ago
Learning to Generalize Self-Assembling Agents [with audio narration]
ICML 2019: Self-Supervised Exploration via Disagreement
2.9K views • 5 years ago
ICML 2019: Self-Supervised Exploration via Disagreement
Learning to Control Self-Assembling Agents
1.8K views • 5 years ago
Learning to Control Self-Assembling Agents
Curiosity-Driven Learning: AI agents exploring without looking at any scores
29K views • 6 years ago
Curiosity-Driven Learning: AI agents exploring without looking at any scores
Learning Instance Segmentation by Interaction
1.2K views • 6 years ago
Learning Instance Segmentation by Interaction
Loop Demonstration: Success (ICLR 2018: Zero-Shot Visual Imitation)
328 views • 6 years ago
Loop Demonstration: Success (ICLR 2018: Zero-Shot Visual Imitation)
Maze Demonstration: Success (ICLR 2018: Zero-Shot Visual Imitation)
501 views • 6 years ago
Maze Demonstration: Success (ICLR 2018: Zero-Shot Visual Imitation)

COMMENTS

  • @Tiara48z
    @Tiara48z 1 month ago

    Does anyone else shed a tear from watching this? These videos used to be on like 5x the speed. Incredible progress.

  • @revimfadli4666
    @revimfadli4666 3 months ago

    I wonder how well this compares against liquid neural network which can also model system dynamics

  • @revimfadli4666
    @revimfadli4666 3 months ago

    I'd like to see this but with fast weights for realtime adaptation implemented in "standard" robots

  • @revimfadli4666
    @revimfadli4666 3 months ago

    2:18 the front limbs "picking up" the rear ones strongly resemble trapeze, very mesmerizing

  • @PadakuBanker9
    @PadakuBanker9 7 months ago

    Brother, I'm from India. Hats off, man, I'm so happy for you! I'm preparing for bank exams in India, and while studying today I read about Mobile ALOHA. I immediately searched for it and was surprised that an Indian is part of this project; feeling very proud. ❤❤❤❤ Do give a podcast sometime in the future; for now I'll get back to my preparation 😅

  • @alengm
    @alengm 10 months ago

    Will you upload timelapse videos of the training? With the simulated robot doing random movements for an hour until it learns to sort of walk and stuff?

  • @snakehaihai
    @snakehaihai 11 months ago

    Haha, that's one of the most freestyle D455 mountings I've ever seen

  • @soulstudiosmusic
    @soulstudiosmusic 11 months ago

    It's all fun and games until they obtain self-awareness

  • @hubertyou0
    @hubertyou0 11 months ago

    aye arj yo xi ye ye

  • @3alabo
    @3alabo 11 months ago

    0:55 , this is so cool

  • @3alabo
    @3alabo 11 months ago

    holy shit

  • @雪鷹魚英語培訓的領航

    Gonna be so weird when our robot dogs are better at talking than we are.

  • @GuilhermeGomes2
    @GuilhermeGomes2 1 year ago

    that is impressive. I have a go1 Edu. let me know if I can contribute somehow

  • @yunlongsong7618
    @yunlongsong7618 1 year ago

    Impressive real-world results. Good job, guys.

  • @Rocklipinhow
    @Rocklipinhow 1 year ago

    Nothing "extreme" about this but okay.

    • @gusferde
      @gusferde 1 year ago

      it is extreme for a robot in this age don't you think?

    • @pathak2206
      @pathak2206 1 year ago

      This is a software project (we didn't design the hardware, just the robot "brain" which is a deep learning policy), and thus, "extreme" is from a relative perspective -- relative to robot hardware. Three key points: (1) The robot has low-cost and noisy actuators, so performing precise parkour actions is very hard for this hardware. A newer version of this robot costs less than 1.3K, compared to 100K+ from BD, etc. (2) Robot dimensions are fairly small in terms of height and width and it can leap/jump across much bigger obstacles. (3) There is no additional sensor on the robot, everything is being performed just from the front camera.

    • @Carolina-mw4po
      @Carolina-mw4po 1 year ago

      @pathak2206 truly impressive 😮

  • @abibfaye3026
    @abibfaye3026 1 year ago

    Magnificent

  • @arturmaklyarevsky8808
    @arturmaklyarevsky8808 1 year ago

    Folks, this is the beginning of the end. It was fun while it lasted 😢

    • @Snowstar837
      @Snowstar837 1 year ago

      I for one welcome our new overlords 🤗

  • @RichardFrancoeur-w8l
    @RichardFrancoeur-w8l 1 year ago

    How much for a regular consumer?

  • @gsmendoza
    @gsmendoza 1 year ago

    Extreme parkour robot: does stair handstands Background music: Wi wi wi wink hu uh wink wi wi wink (yeh)

  • @Abdullu
    @Abdullu 1 year ago

    perfect dog for people with allergies who'd love to join the local park dog agility group

  • @eMPee584
    @eMPee584 1 year ago

    .. mount a taser on it and suddenly it's not so cute anymore xD

    • @Snowstar837
      @Snowstar837 1 year ago

      Ha, unless they're your little bodyguard, then it's still cute!

    • @gusferde
      @gusferde 1 year ago

      or a flamethrower 🔥 🤠

  • @soundnfury
    @soundnfury 1 year ago

    A weapon to surpass Metal Gear!

  • @Xeoncross
    @Xeoncross 1 year ago

    I'm pretty sure I've played the game set 20 years from now when these things are chasing me with ill intent.

  • @theShadeslayer
    @theShadeslayer 1 year ago

    Amazing accomplishment!

  • @Snowstar837
    @Snowstar837 1 year ago

    😂 does it do the handstand in the grass because it's easier to walk in it like that?

  • @ulforcemegamon3094
    @ulforcemegamon3094 1 year ago

    I feel like if ALL commercially available quadruped robots had the same program as the one in the video, people would understand what we mean by "legs allow agile movement". *This* is way more animal-like

  • @Athens1992
    @Athens1992 1 year ago

    Incredible, truly fascinating!! Will your robotics lectures be uploaded for people to watch? (I'm an electrical engineer in analog IC design, but in my spare time I've been playing with image recognition and the Jetson Nano. I would love to buy a robot dog like the Mini Pupper and apply some of this to it.)

    • @pathak2206
      @pathak2206 1 year ago

      Yes, they will be posted on this channel. If you go to the playlist "Pathak Talks and Papers", you can find quite a few talks already there: ua-cam.com/play/PLG7T4DHrMiRXIdoiNXDW8yt3nvDeO5xRX.html&feature=shared

    • @Athens1992
      @Athens1992 1 year ago

      thank you very much!! Checking it right now @pathak2206

  • @7gurubhai
    @7gurubhai 1 year ago

    Awesome Deepak :)

  • @Snowstar837
    @Snowstar837 1 year ago

    These little scramblers are so cute, I love them 😂💙 the shot of you holding the poor thing by the lead as they scrabble at the ground makes me giggle

  • @bharadwajreddy7840
    @bharadwajreddy7840 1 year ago

    This is awesome, time to fire up the 3D printer

  • @Snowstar837
    @Snowstar837 1 year ago

    Wow! You guys must have your hands full coming out with all these cutting edge methods and designs!

  • @AiExplicado0001
    @AiExplicado0001 1 year ago

    thanks for sharing!!

  • @ib0823
    @ib0823 1 year ago

    You guys did a great job. Could you tell us in detail how you created these movements with the help of ROS?

  • @sidneyboddie9062
    @sidneyboddie9062 1 year ago

    One day they will destroy us

  • @takoda598
    @takoda598 1 year ago

    The time is coming...

  • @landeus
    @landeus 1 year ago

    Is it me or is that thing kinda cute? Very cool.

  • @davidpesci6605
    @davidpesci6605 1 year ago

    The terminators can now open doors. We're all dead.

  • @155927
    @155927 1 year ago

    Very nice work. Are you focusing entirely on an RL-based control policy without knowing the dynamics model?

  • @illygah
    @illygah 1 year ago

    impressive, the kinematics are extremely natural-looking, like it has the zoomies. Puppy, puppy, puppy!

  • @Snowstar837
    @Snowstar837 1 year ago

    Wow, each video is more impressive than the last! And they look so cute, they're trying so hard 😂💙

  • @Snowstar837
    @Snowstar837 1 year ago

    That leg wiggle at 0:57 is so cute 😂

  • @Chris-ls4nm
    @Chris-ls4nm 1 year ago

    You're giving it a knife in the first 30 seconds? 😑😂

    • @Stevefhu
      @Stevefhu 1 year ago

      Remember, they learn from us. 😬

  • @prof_shixo
    @prof_shixo 1 year ago

    Interesting work, thanks for sharing.

  • @Kram1032
    @Kram1032 1 year ago

    I like most of this but it feels like it could be even more open-ended. Perhaps this could be further improved by randomizing the target dataset? Instead of taking a fixed dataset as target, take a random subset from a massive combined one. Obviously, by default, this will be very noisy. However, perhaps the way images are drawn from the dataset could *also* be learned: If two images tend to do well on the same search queries and poorly on the same search queries, make them more likely to be drawn for a dataset together, and if they tend to oppose each other, make them less likely to be drawn together. Basically, draw entire distributions of images, which you can then refine over time. A simpler method might be to just rely on CLIP, take a random direction in CLIP's embedding space, and simply sample based on *that.* - Then it tries to learn how to map CLIP vectors to search queries.
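The CLIP-direction sampling idea in the comment above could look roughly like the following sketch. Everything here is hypothetical: random unit vectors stand in for real CLIP image embeddings, and `sample_subset` is an illustrative helper, not part of any released codebase.

```python
import numpy as np

# Stand-in "CLIP" image embeddings: random unit vectors. A real system
# would embed actual images with a CLIP image encoder.
rng = np.random.default_rng(0)
n_images, dim = 1000, 64
embeddings = rng.normal(size=(n_images, dim))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def sample_subset(embeddings, k, rng):
    """Pick a random direction in embedding space and return the k
    embeddings most aligned with it (by cosine similarity)."""
    direction = rng.normal(size=embeddings.shape[1])
    direction /= np.linalg.norm(direction)
    scores = embeddings @ direction          # cosine similarity (unit vectors)
    top = np.argsort(scores)[-k:][::-1]      # indices of top-k, best first
    return top, direction

subset, direction = sample_subset(embeddings, k=32, rng=rng)
```

Refining which images co-occur in a subset based on shared query performance, as the comment suggests, would then amount to learning the distribution over directions rather than drawing them uniformly.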

  • @ぺさぺさ
    @ぺさぺさ 1 year ago

    When will the African Grey Parrot's "Alex Study Moment" in Intelligent Robotics come? Do you think the Model/Rival method used as a training method for African Grey Parrot's Alex can be used for intelligent robots?

  • @vinitsarode908
    @vinitsarode908 1 year ago

    Awesome research! Thrilled to see real world applications of curiosity driven learning.

  • @Kram1032
    @Kram1032 1 year ago

    So is the measurement of the change in the environment just a simple pixel-level binary classifier going "this pixel has changed (more than mere noise would have)"? Very cool work! Almost cute to watch! (I bet the music choice is part of that though lol)