I build learning-based systems for real-world robot manipulation
from visual observations and minimal demonstrations.
I am an M.S. student in Robotics at the University of Minnesota, advised by Prof. Karthik Desingh at the Robotics Perception and Manipulation Lab.
My long-term research goal is to develop generalizable robot manipulation systems that perform everyday tasks under severe data constraints. I am particularly interested in how unifying vision, representation, and action can push manipulation beyond highly structured and stationary settings.
Research Directions
Goal-conditioned positioning for manipulation
Enabling robot manipulation beyond stationary constraints. My work focuses on learning goal-conditioned policies that bring robots into precise, manipulation-ready configurations using visual observations.

Data-efficient imitation learning
I investigate how manipulation policies can be trained from minimal demonstrations while still generalizing robustly to unseen objects and environments. I focus on learning visual policies that operate under limited supervision and real-world variability.
What I Build
Real-world data collection pipelines
Hardware-integrated systems for capturing synchronized multi-view RGB demonstrations on physical robots (Boston Dynamics Spot).

Goal-conditioned learning policies
Imitation learning systems that map visual observations and task specifications to precise, manipulation-ready positioning actions.

Perception-to-action integration
Pipelines that combine modern vision models (e.g., DINOv2) with closed-loop control for robust interaction in unstructured environments.

Research infrastructure for robot learning
Tools and deployment pipelines (e.g., SpotStack) that accelerate experimentation and bridge training with real-world deployment.
Background and Experience
I completed my undergraduate studies in Mechanical Engineering, which provided a strong foundation in the physical principles underlying robotic systems, including dynamics, control, and state estimation. During this time, I served as the software lead of an Autonomous Underwater Vehicle (AUV) student team, where I was responsible for the design and integration of the software stack spanning perception, estimation, control, and navigation. This role involved extensive work with embedded systems and low-level programming, and gave me hands-on experience deploying autonomous robotic systems in real-world operating conditions rather than purely simulated settings.
In my master’s studies in Robotics, I shifted my focus toward learning-based robotic systems. I initially concentrated on robot perception, particularly vision, through advanced coursework in Robot Vision and Computer Vision, where I implemented projects on visual servoing and monocular visual SLAM. Building on this perception foundation, my current research explores learning-based manipulation and robot positioning, including mobility policies that enable manipulation beyond stationary settings and one-shot imitation learning policies for eye-in-hand robot configurations. Through this work, I aim to tightly integrate perception, representation, and action so that robots can perform complex tasks in real-world environments from limited demonstrations and vision-only observations.
Contact Me
I am graduating in Spring 2026 and am open to PhD opportunities and research roles in robot learning, manipulation, and real-world robotic systems.
Please feel free to reach out if you are interested in my work or would like to discuss potential research directions. I am always happy to chat about research ideas and collaborations that push the boundaries of robotics!
