General Robotics Lab
CREW: Facilitating Human-AI Teaming Research
Project website (paper, code, video): generalroboticslab.com/CREW
Abstract: With the increasing deployment of artificial intelligence (AI) technologies, the potential for humans to work alongside AI agents has grown rapidly. Human-AI teaming is an important paradigm for studying how humans and AI agents work together. The unique aspect of Human-AI teaming research is the need to study humans and AI agents jointly, demanding multidisciplinary efforts spanning machine learning, human-computer interaction, robotics, cognitive science, neuroscience, psychology, social science, and complex systems. However, existing platforms for Human-AI teaming research are limited: they often support only oversimplified scenarios or a single task, or focus exclusively on either human-teaming research or multi-agent AI algorithms. We introduce CREW, a platform that facilitates Human-AI teaming research and engages collaborations across scientific disciplines, with a strong emphasis on human involvement. It includes pre-built tasks for cognitive studies and Human-AI teaming, and its modular design makes it readily extensible. Following conventions in cognitive neuroscience research, CREW also supports multimodal recording of human physiological signals for behavior analysis. Moreover, CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines. With CREW, we were able to conduct 50 human subject studies within a week to verify the effectiveness of our benchmark.
Views: 248

Videos

[VCC-ALIFE 2024] Text2Robot: Evolutionary Robot Design from Text Descriptions
Views: 766 · 1 month ago
Virtual Creature Competition Submission: Text2Robot: Evolutionary Robot Design from Text Descriptions. Duke General Robotics Lab. Authors: Ryan P. Ringel∗, Zachary S. Charlick∗, Jiaxun Liu∗, Boxi Xia, Boyuan Chen. (* denotes equal contribution) Full Project website (paper, code, hardware manual, video): generalroboticslab.com/Text2Robot/ Abstract: Robot design has traditionally been costly and ...
ClutterGen: A Cluttered Scene Generator for Robot Learning
Views: 196 · 2 months ago
Project website (paper, code, video): generalroboticslab.com/ClutterGen Abstract: We introduce ClutterGen, a physically compliant simulation scene generator capable of producing highly diverse, cluttered, and stable scenes for robot learning. Generating such scenes is challenging as each object must adhere to physical laws like gravity and collision. As the number of objects increases, finding ...
Text2Robot: Evolutionary Robot Design from Text Descriptions
Views: 1.2K · 2 months ago
Project website (paper, code, hardware manual, video): generalroboticslab.com/Text2Robot/ Abstract: Robot design has traditionally been costly and labor-intensive. Despite advancements in automated processes, it remains challenging to navigate a vast design space while producing physically manufacturable robots. We introduce Text2Robot, a framework that converts user text specifications and per...
Perception Stitching: Zero-Shot Perception Encoder Transfer for Visuomotor Robot Policies
Views: 227 · 2 months ago
Project website (paper, code, video): generalroboticslab.com/PerceptionStitching Abstract: Vision-based imitation learning has shown promising capabilities of endowing robots with various motion skills given visual observation. However, current visuomotor policies fail to adapt to drastic changes in their visual observations. We present Perception Stitching that enables strong zero-shot adaptat...
SonicSense: Object Perception from In-Hand Acoustic Vibration
Views: 344 · 2 months ago
Project website (paper, code, video): generalroboticslab.com/SonicSense Abstract: We introduce SonicSense, a holistic design of hardware and software to enable rich robot object perception through in-hand acoustic vibration sensing. While previous studies have shown promising results with acoustic sensing for object perception, current solutions are constrained to a handful of objects with simp...
Robot Studio Class - Tutorial Video on Fusion 360 Export
Views: 188 · 7 months ago
Tutorial video on Fusion 360 design history export from Robot Studio class at Duke University. Course website: generalroboticslab.com/RobotStudioSpring2024/index.html Code: github.com/general-robotics-duke/FusionHistoryScript Credit: Teaching Assistant: Zach Charlick
Policy Stitching: Learning Transferable Robot Policies
Views: 660 · 11 months ago
Conference on Robot Learning 2023 (CoRL 2023). Project Website: generalroboticslab.com/PolicyStitching/ Abstract: Training robots with reinforcement learning (RL) typically involves heavy interactions with the environment, and the acquired skills are often sensitive to changes in task environments and robot kinematics. Transfer RL aims to leverage previous knowledge to accelerate learning of ne...
Discovering State Variables Hidden in Experimental Data
Views: 4K · 2 years ago
Project website: www.cs.columbia.edu/~bchen/neural-state-variables/ Abstract: All physical laws are described as relationships between state variables that give a complete and non-redundant description of the relevant system dynamics. However, despite the prevalence of computing power and AI, the process of identifying the hidden state variables themselves has resisted automation. Most data-dri...
Full-Body Visual Self-Modeling of Robot Morphologies
Views: 3.1K · 2 years ago
The project website is at: robot-morphology.cs.columbia.edu/ Authors: Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, Hod Lipson. Abstract: Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions. These "self-models" allow robots to consider outcomes of multiple possible future actions, without trying them out ...
(Data Collection) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
Views: 15K · 3 years ago
Data collection video. To appear at ICRA 2021. The project website is at: www.cs.columbia.edu/~bchen/aiface/ Full overview video: ua-cam.com/video/fYURp2OaGn0/v-deo.html Hardware description video: ua-cam.com/video/STx2HMHJFY8/v-deo.html Demo video: ua-cam.com/video/L5ZJ8zKJXlk/v-deo.html Abstract: Ability to generate intelligent and generalizable facial expressions is essential for building hu...
(Demos) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models (ICRA 2021)
Views: 8K · 3 years ago
Demo video. To appear at ICRA 2021. The project website is at: www.cs.columbia.edu/~bchen/aiface/. Full overview video: ua-cam.com/video/fYURp2OaGn0/v-deo.html Hardware description video: ua-cam.com/video/STx2HMHJFY8/v-deo.html Data collection video: ua-cam.com/video/Ws-me3gYZ74/v-deo.html Abstract: Ability to generate intelligent and generalizable facial expressions is essential for building h...
(Overview) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models (ICRA 2021)
Views: 1.4K · 3 years ago
Overview video. To appear at ICRA 2021. The project website is at: www.cs.columbia.edu/~bchen/aiface/ Hardware description video: ua-cam.com/video/STx2HMHJFY8/v-deo.html Demo video: ua-cam.com/video/L5ZJ8zKJXlk/v-deo.html Data collection video: ua-cam.com/video/Ws-me3gYZ74/v-deo.html Abstract: Ability to generate intelligent and generalizable facial expressions is essential for building human-l...
(Hardware Animation) Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models
Views: 14K · 3 years ago
Hardware animation of the Eva robot. To appear at ICRA 2021. Project website is at: www.cs.columbia.edu/~bchen/aiface/ Full overview video: ua-cam.com/video/fYURp2OaGn0/v-deo.html Demo video: ua-cam.com/video/L5ZJ8zKJXlk/v-deo.html Data collection video: ua-cam.com/video/Ws-me3gYZ74/v-deo.html Abstract: Ability to generate intelligent and generalizable facial expressions is essential for buildi...
The Boombox: Visual Reconstruction from Acoustic Vibrations
Views: 1.4K · 3 years ago
Website: boombox.cs.columbia.edu/ Abstract: We introduce The Boombox, a container that uses acoustic vibrations to reconstruct an image of its inside contents. When an object interacts with the container, they produce small acoustic vibrations. The exact vibration characteristics depend on the physical properties of the box and the object. We demonstrate how to use this incidental signal in ord...
Visual Perspective Taking for Opponent Behavior Modeling
Views: 679 · 3 years ago