Portfolio item number 1
This is an item in your portfolio. It can have images or nice text. If you name the file .md, it will be parsed as markdown. If you name the file .html, it will be parsed as HTML.
Short description of portfolio item number 2
ICRA 2024 Workshop [Project]
The ManiSkill-ViTac Challenge aims to provide a standardized benchmarking platform for evaluating vision-based tactile manipulation skill learning in real-world robot applications. The challenge is supported by the GPU-based IPC simulator I developed.
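For context on the contact model: IPC (Incremental Potential Contact) replaces hard non-penetration constraints with a smoothly clamped log-barrier on unsigned distances, which is what makes the simulation robust enough to benchmark against. Below is a minimal sketch of the standard IPC barrier energy (following Li et al. 2020); the distance values in the usage example are purely illustrative.

```python
import math

def ipc_barrier(d: float, d_hat: float) -> float:
    """Standard IPC log-barrier b(d) = -(d - d_hat)^2 * ln(d / d_hat).

    Zero for d >= d_hat (contact inactive), smooth, and diverging as d -> 0,
    which keeps the nonlinear solver from ever producing interpenetration.
    """
    if d <= 0.0:
        raise ValueError("distance must be positive; d <= 0 means penetration")
    if d >= d_hat:
        return 0.0
    return -((d - d_hat) ** 2) * math.log(d / d_hat)

# Illustrative values: the barrier grows sharply as the gap closes.
for d in (1e-1, 1e-2, 1e-4):
    print(f"d = {d:.0e}: b = {ipc_barrier(d, d_hat=1e-1):.4g}")
```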
Homework project for the course “Physical Simulation” at UCSD instructed by Prof. Albert Chern.
TRO 2024 [PDF]
We build a general-purpose Sim2Real protocol for manipulation policy learning with marker-based visuotactile sensors. To improve simulation fidelity, we employ an FEM-based physics simulator that can simulate the sensor deformation accurately and stably for arbitrary geometries. We further propose a novel tactile feature extraction network that directly processes the set of pixel coordinates of the tactile sensor markers, together with a self-supervised pre-training strategy that improves the efficiency and generalizability of RL policies.
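The paper's exact architecture isn't reproduced here, but the core idea of a feature extractor that consumes an unordered set of marker pixel coordinates can be illustrated with a PointNet-style encoder: a shared per-point MLP followed by permutation-invariant pooling. A minimal PyTorch sketch; all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MarkerSetEncoder(nn.Module):
    """Permutation-invariant encoder for a set of tactile marker pixels.

    Input:  (batch, n_markers, 2) pixel coordinates, treated as an order-free set.
    Output: (batch, feat_dim) feature usable as an RL policy observation.
    Illustrative PointNet-style sketch, not the paper's exact network.
    """
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared per-marker MLP: each (u, v) coordinate is embedded independently.
        self.point_mlp = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, markers: torch.Tensor) -> torch.Tensor:
        per_point = self.point_mlp(markers)   # (B, N, feat_dim)
        return per_point.max(dim=1).values    # max-pool over the set

encoder = MarkerSetEncoder()
coords = torch.rand(4, 63, 2)   # e.g. a 9x7 marker grid flattened to a set
print(encoder(coords).shape)    # torch.Size([4, 128])
```

Because the max-pool is symmetric in its arguments, the output is invariant to marker ordering, which matches the set-of-coordinates input described above.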
We propose a learning framework and system that automatically decomposes task demonstrations into semantically meaningful skills using off-the-shelf foundation models, and generates diverse synthetic demonstration datasets from a few human demos through reinforcement learning. These sim-augmented datasets enable robust skill training, with a Skill Routing Transformer (SRT) policy effectively chaining the learned skills together to execute complex long-horizon manipulation tasks.
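As a rough illustration of the chaining idea (not the actual SRT architecture), a routing module can score the learned skills from the current observation and delegate control to the winner. Everything in this sketch, including the router being a single linear layer and the stand-in skill policies, is a hypothetical simplification.

```python
import torch
import torch.nn as nn

class SkillRouter(nn.Module):
    """Illustrative sketch of routing between pre-trained skill policies.

    A small network scores the learned skills from the current observation;
    the winning skill's policy produces the action. Hypothetical
    simplification of skill chaining, not the paper's SRT.
    """
    def __init__(self, obs_dim: int, skills: list[nn.Module]):
        super().__init__()
        self.skills = nn.ModuleList(skills)
        self.router = nn.Linear(obs_dim, len(skills))  # per-skill logits

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        skill_id = int(self.router(obs).argmax(dim=-1))
        return self.skills[skill_id](obs)  # delegate to the chosen skill

obs_dim, act_dim = 32, 7
skills = [nn.Linear(obs_dim, act_dim) for _ in range(3)]  # stand-in policies
policy = SkillRouter(obs_dim, skills)
print(policy(torch.randn(obs_dim)).shape)  # torch.Size([7])
```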
Manipulating deformable objects like cloth is difficult due to their complex dynamics and tricky state estimation. We propose a generative, transformer-based diffusion model that handles both perception and dynamics. Our method reconstructs the full cloth state from sparse observations and predicts future movement, reducing long-horizon prediction errors by an order of magnitude compared to previous methods. This framework successfully enabled a real robot to perform complex cloth folding tasks.
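The transformer backbone is beyond a short sketch, but the diffusion side follows the standard noise-prediction recipe: corrupt the clean cloth state with scheduled Gaussian noise and train the model to recover that noise. Below is a generic DDPM-style training step under that assumption; the `model(x_t, t)` signature and the noise schedule are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def diffusion_training_step(model, x0: torch.Tensor,
                            alphas_bar: torch.Tensor) -> torch.Tensor:
    """One generic DDPM-style training step (not the paper's exact objective).

    x0:         clean cloth state, e.g. (batch, n_points, 3) mesh vertices.
    alphas_bar: cumulative noise schedule, shape (T,).
    The model learns to predict the injected noise from the noised state.
    """
    B, T = x0.shape[0], alphas_bar.shape[0]
    t = torch.randint(0, T, (B,))                      # random timestep per sample
    a = alphas_bar[t].view(B, 1, 1)
    eps = torch.randn_like(x0)                         # injected Gaussian noise
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps       # forward (noising) process
    return nn.functional.mse_loss(model(x_t, t), eps)  # noise-prediction loss

# Shape check with a stand-in model that ignores the timestep.
toy_model = lambda x, t: torch.zeros_like(x)
loss = diffusion_training_step(toy_model, torch.randn(8, 100, 3),
                               alphas_bar=torch.linspace(0.99, 0.01, 50))
print(loss.item())
```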
Continuous Collision Detection (CCD) in IPC-based simulations is slow and often requires powerful GPUs. We introduce a sequential CCD algorithm for convex shapes with affine trajectories (as in ABD) that achieves a 10x speed-up over traditional primitive-level CCD. Our method uses cone casting, a generalization of the ray-casting CCD used in traditional physics engines for rigid bodies.
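Cone casting itself is not spelled out in this summary; for reference, the primitive-level baseline it accelerates is typically a conservative-advancement query, which repeatedly advances time by the current gap divided by a bound on the closing speed. A minimal sketch with hypothetical `distance` and `max_rel_speed` callables:

```python
def conservative_advancement_ccd(distance, max_rel_speed, t_end=1.0,
                                 tol=1e-6, eta=0.9):
    """Minimal conservative-advancement CCD sketch (a standard primitive-level
    baseline, not the cone-casting algorithm described above).

    distance(t):      hypothetical closest-distance query at time t.
    max_rel_speed():  hypothetical bound on how fast that distance can shrink.
    Returns the earliest time of impact in [0, t_end], or None if no contact.
    """
    t = 0.0
    while t < t_end:
        d = distance(t)
        if d <= tol:                        # gap closed: report time of impact
            return t
        # The gap cannot close faster than max_rel_speed, so this step is safe;
        # eta < 1 keeps the query strictly on the collision-free side.
        t += eta * d / max_rel_speed()
    return None

# Toy example: two points approaching head-on at unit speed from distance 1.
toi = conservative_advancement_ccd(lambda t: max(1.0 - t, 0.0), lambda: 1.0)
print(toi)  # converges toward t = 1.0
```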