Yuhang Zhao

Cornell University

PhD Student

Yuhang Zhao is a sixth-year PhD candidate in Information Science at Cornell Tech, Cornell University, advised by Prof. Shiri Azenkot. Her research interests lie in human-computer interaction (HCI), accessibility, and augmented and virtual reality. She designs and builds intelligent interactive systems to enhance human abilities. She has published at many top-tier conferences and in journals in the field of HCI (e.g., CHI, UIST, ASSETS) and has received three U.S. and international patents. She has interned at Facebook, Microsoft Research, and Microsoft Research Asia. Her work has received two best paper honorable mentions at the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) and has been covered by various media outlets (e.g., TNW, New Scientist). She received her B.A. and M.S. degrees in Computer Science from Tsinghua University, with distinction on her thesis.

Research Abstract:

People have diverse abilities, but all have an equal right to access the world. Yet people with disabilities are often marginalized by inaccessible technology and social infrastructure, facing severe challenges in many aspects of their lives. As an HCI researcher, I strive to explore how computing technology can empower people with disabilities and promote equity, just as eyeglasses empower people who are nearsighted. My research enhances human abilities by leveraging emerging artificial intelligence (AI) and augmented reality (AR) technologies. I propose Enhanced Perception (EP) systems, which use AI to recognize information in the user's surroundings, then communicate and enhance this information through AR interfaces tailored to the user's perceptual abilities. Following a user-centered design approach, I study people's perceptual abilities and needs, and create EP systems that enhance their abilities in a range of daily activities.

While the EP concept applies to many disabilities, my dissertation focuses on low vision, a pervasive but often overlooked disability. Specifically, I conduct exploratory studies to understand the visual perceptions, behaviors, and experiences of people with low vision across different tasks, and derive design guidelines from these findings. Based on these guidelines, I design context-based AR visual augmentations and build EP systems that facilitate daily tasks such as reading, shopping, and navigating. For example, to support product search while shopping, I designed an AR application for smartglasses that locates a specified product with computer vision and presents visual cues to direct the user's attention to it. A low vision user can thus glance at the products and follow the visual cues to reach directly for the target, just as a sighted person would.