I am a PhD student at Carnegie Mellon University, advised by Prof. Jun-Yan Zhu and Prof. Wenzhen Yuan. My research interests lie at the intersection of machine learning, tactile sensing, and haptic rendering.
I received my B.Eng. (Hons) in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, in 2020. After graduation, I joined the A*STAR Institute for Infocomm Research (I2R) as a Research Engineer. I am grateful to Prof. Zhiping Lin, who led me into academic research, and I had a great time working with Dr. Yan Wu and colleagues at A*STAR on robotic tactile sensing projects.
I previously interned with the Codec Avatars Lab team at Meta and with the 3dfm team at Roblox.
I envision robots interacting with humans naturally and aspire to realize this vision through multi-modal perception and rendering powered by machine learning.
We present the first physics-augmented text-to-3D scene generation framework that integrates vision-language models with physics-based simulation to produce physically plausible, simulation-ready, and intersection-free 3D scenes.
We synthesize the visual appearance and tactile geometry of different materials, given a handcrafted or DALL·E 2 sketch, and render the multi-modal output on a haptic screen called TanvasTouch.