
Dexterous Manipulation
Dexterous Manipulation is the precise and flexible control of robotic hands, enabling human-like object handling. It requires coordinated movements, force control, and adaptive grasping, supported by sensors, control algorithms, and AI for efficient manipulation in robotics and automation.
Learning Unified Visuo-Tactile Policy from Human Video Data for Dexterous Manipulation
(UniDexRefine)

Simulation Deployment Results (YCB Objects)

Real-World Deployment Results (Sim-to-Real)

Human video data is highly scalable but lacks physical interaction context. While RL can refine this kinematic data into physically valid motion, scaling such refinement across many tasks remains challenging. To overcome this, we introduce UniDexRefine: we learn per-task residual motions via RL in simulation, extract the resulting physically feasible trajectories, and use them to train a robust Unified Visuo-Tactile Policy capable of handling diverse object-grasping tasks.
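A rough illustration of the residual-learning step (the class name, interfaces, and residual scale below are assumptions for illustration, not the actual UniDexRefine code):

import numpy as np

class ResidualRefineEnv:
    """Sketch: RL refines a retargeted human reference with small residuals."""

    def __init__(self, reference_traj, residual_scale=0.05):
        self.reference = np.asarray(reference_traj)  # (T, dof) retargeted human poses
        self.residual_scale = residual_scale         # bound on the per-step correction
        self.t = 0

    def step(self, residual):
        # The policy outputs only a small correction; clipping keeps the
        # refined motion close to the human reference.
        residual = np.clip(residual, -1.0, 1.0) * self.residual_scale
        target = self.reference[self.t] + residual
        self.t += 1
        # In the full pipeline this target drives a simulated hand and the
        # reward measures grasp success; here we simply return the refined pose.
        return target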
Enhancing Fine Manipulation Skills through Foundation Models and Modular Coupling
(Isaac GR00T N1.5 + HandGPT)

VLM-based foundation models like Isaac GR00T can plan for unseen objects but struggle with the intricate control of dexterous hands with 16+ DoF. We bridge this gap by coupling Isaac GR00T, which generates macroscopic upper-limb movements, with HandGPT, which provides context-aware finger control, enabling fine-grained manipulation that neither module achieves alone.
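A minimal sketch of the coupling, where plan_arm and plan_fingers are hypothetical stand-ins for the Isaac GR00T and HandGPT inference calls (their real interfaces are not shown here):

import numpy as np

def control_step(plan_arm, plan_fingers, rgb, instruction):
    # Foundation model plans the macroscopic upper-limb (arm + wrist) motion.
    arm_cmd = np.asarray(plan_arm(rgb, instruction))
    # Hand module resolves the 16+ DoF finger posture for the same context.
    finger_cmd = np.asarray(plan_fingers(rgb, instruction))
    # The robot executes a single fused joint command.
    return np.concatenate([arm_cmd, finger_cmd])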
Dexterous Manipulation Leveraging a Vision-Language Model
(HandGPT)

Planning dexterous hand manipulation action using Vision-Language Model (VLM). Based on user’s request, VLM infers the most suitable object within the scene, and plans the corresponding hand posture. By leveraging VLM’s pre-trained knowledge, we can conduct various hand postures including tool manipulation action.
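HandGPT's actual model and prompt are not reproduced here; as a hedged sketch, the same idea can be expressed with an OpenAI-style chat API, where the posture vocabulary, prompt, and model name are all illustrative assumptions:

import json
from openai import OpenAI

POSTURES = ["power_grasp", "precision_pinch", "tripod", "lateral_pinch", "tool_grip"]

def plan_hand_posture(image_b64: str, request: str) -> dict:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"User request: {request}\n"
                         f"Pick the most suitable object in the scene and one hand "
                         f"posture from {POSTURES}. Answer as JSON with keys "
                         f"'object' and 'posture'."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    # The VLM grounds the request in the scene and returns a discrete posture.
    return json.loads(resp.choices[0].message.content)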
Self-driving Laboratory using Multi-fingered Robot Hand
Automating chemistry laboratory tasks is difficult because of the diversity of equipment and the level of precision required to complete these tasks repeatably. We use a multi-fingered robot hand capable of generating the diverse postures needed to manipulate this equipment, together with a control strategy that combines a VLM agent for robust reasoning with dual-camera visual servoing for fine-grained control. The system successfully performs high-precision pipetting tasks, such as dispensing liquid into extremely small vials or attaching tips to the pipette body.
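As an illustration of the dual-camera visual-servoing idea (the axis alignment and gain below are assumptions, not the lab's actual controller):

import numpy as np

def servo_step(tip_cam1, goal_cam1, tip_cam2, goal_cam2, gain=0.002):
    # Each camera tracks the pipette tip in pixels; the two views jointly
    # constrain 3-D translation (cam1 assumed to see x-y, cam2 to see x-z).
    e1 = np.asarray(goal_cam1) - np.asarray(tip_cam1)
    e2 = np.asarray(goal_cam2) - np.asarray(tip_cam2)
    dx = gain * (e1[0] + e2[0]) / 2.0   # shared horizontal axis
    dy = gain * e1[1]
    dz = gain * e2[1]
    # Proportional Cartesian correction applied until the pixel error vanishes.
    return np.array([dx, dy, dz])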
Refining Human Motion into Physically Feasible Robot Actions
(DexRefine)
DexRefine Simulation Results

DexRefine Sim2Real Results


We refine human-object interaction data into physically feasible robot actions using reinforcement learning. Our method effectively reduces the embodiment gap between the human hand and the robot hand.
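DexRefine's actual reward terms are not listed here; a hedged sketch of how reference tracking and physical feasibility might be balanced (names and weights are assumptions):

import numpy as np

def refinement_reward(q_robot, q_ref, contact_force, joint_torque):
    track = np.exp(-5.0 * np.linalg.norm(q_robot - q_ref))  # stay close to the human motion
    contact = float(contact_force > 1.0)                    # reward real, simulated contact
    effort = -1e-3 * np.sum(np.square(joint_torque))        # penalize infeasible effort
    return track + 0.5 * contact + effort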
Whole-Body Teleoperation & Loco Manipulation
G1 Whole-Body Teleoperation

G1 Whole-Body Loco Manipulation


Whole-body teleoperation allows humanoids to navigate and interact simultaneously, enabling human-like operations in diverse spaces. We implemented this by seamlessly integrating two core components: an RL-based lower-body balancing module and an upper-body teleoperation system.
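A minimal sketch of how the two modules can be integrated per control step (the observation layout and policy interface are assumptions):

import numpy as np

def whole_body_step(balance_policy, proprio, vel_cmd, teleop_arm_targets):
    # The lower-body RL policy sees proprioception, the velocity command, and
    # the current arm targets so it can compensate for upper-body motion.
    obs = np.concatenate([proprio, vel_cmd, teleop_arm_targets])
    leg_targets = balance_policy(obs)             # e.g. 12 leg joint targets
    # Teleoperated arm targets pass through unchanged.
    return np.concatenate([leg_targets, teleop_arm_targets])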
Imitation Learning & Teleoperation System for Humanoid Bimanual Manipulation
Imitation Learning (ACT)

Teleoperation G1 (Simulation)

Teleoperation (Real World)

Teleoperation KIST Humanoid Kapex (Simulation)

We teleoperate a humanoid and train it through imitation learning. Using VR, we teleoperate the humanoid's upper body to collect demonstration data; from these demonstrations, the humanoid learns to perform various tasks.
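A compact sketch of ACT-style inference with the ACT paper's temporal ensembling, where policy stands in for the trained transformer (all interfaces are assumptions):

import numpy as np

def rollout(policy, get_obs, send_action, steps=200, chunk=50, m=0.1):
    preds = [[] for _ in range(steps + chunk)]   # predictions collected per timestep
    for t in range(steps):
        chunk_actions = policy(get_obs())        # (chunk, dof) predicted future actions
        for k in range(chunk):
            preds[t + k].append(chunk_actions[k])
        # Temporal ensemble: exponentially weight predictions (oldest first),
        # then average them into one smooth action for the current step.
        w = np.exp(-m * np.arange(len(preds[t])))
        send_action(np.average(np.stack(preds[t]), axis=0, weights=w))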
Robot Hand & Arm for Grasping Unseen Objects
Vision-Based Manipulation + Brain-in-Hand System Integration
Traditional vision-based manipulator systems have limitations such as a narrow field of view and the occlusion of objects caused by a fixed camera position. Our laboratory is researching a vision-based intelligent robot hand that not only recognizes objects but also carries its own perception and control, achieved by building an interface on the robot hand that incorporates sensors and controllers.
Robotic Palm

Soft Robotic Palm with Tunable Stiffness Using Dual-Layered Particle Jamming Mechanism
This project presents a novel robotic palm with a dual-layered structure designed to yield high surface conformity and controllable rigidity for enhanced grasping performance. It comprises a vacuum chamber for adjusting the stiffness of the palm via particle jamming and an air chamber for actively controlling the palm's deformation. We also propose an auto-jamming control scheme that automatically solidifies the palm by sensing its internal pressure, without any tactile sensors or visual feedback.
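A hedged sketch of the auto-jamming logic (the pressure thresholds and valve interface are assumptions): contact deforms the air chamber and raises its internal pressure, which triggers the vacuum valve and jams the particle layer.

import time

def auto_jamming_loop(read_air_pressure_kpa, set_vacuum_valve,
                      baseline_kpa=101.3, trigger_delta_kpa=2.0, hz=100):
    jammed = False
    while not jammed:                          # runs until the palm solidifies
        p = read_air_pressure_kpa()            # internal air-chamber pressure
        if p - baseline_kpa > trigger_delta_kpa:
            set_vacuum_valve(True)             # evacuate -> particle jamming
            jammed = True
        time.sleep(1.0 / hz)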

Alternative Grasping Strategy
Pick-and-Place Scenario
