Research Projects

Updated: February 9, 2023


Mind Wandering

Mind wandering is the well-known phenomenon of intrusive thoughts during an ongoing task due to failures of attention and executive control. However, its subcomponents are not well understood. We conducted a series of experiments to examine how people's working memory capacity affects both the initiation and the termination of mind wandering during meditation. Building on traditional self-caught and probe-caught measures, we developed a novel method to estimate the unconscious initiation time of a mind wandering event. Using this new analysis, we found that people with higher working memory capacity are able to stay focused longer; however, once mind wandering begins, it lasts a comparable duration regardless of working memory capacity (Voss, Zukosky & Wang, 2018).

A second study examined attentional fluctuations during a focus/mind-wandering episode using a combination of psychophysical methods and computational modeling. Our findings showed that during a meditation task, people spontaneously alternate between focusing on the task and mind wandering without awareness (Zukosky & Wang, 2021). However, an RSVP scene categorization task and a metacontrast-masking target detection task did not show such spontaneous alternations (Weber et al., under review).
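
For intuition, alternation dynamics of this general kind can be captured by a two-state process. The sketch below simulates spontaneous switching between focus and mind wandering with exponentially distributed dwell times and shows how thought probes would sample it; the dwell-time parameters are arbitrary assumptions, not the fitted model from Zukosky & Wang (2021).

```python
# Illustrative simulation of spontaneous alternation between task focus
# and mind wandering as a two-state process with exponential dwell times.
# A sketch only; the parameters below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
MEAN_DWELL = {"focus": 40.0, "wander": 25.0}  # assumed mean dwell times (s)

def simulate_episodes(total_time=600.0):
    """Alternate focus/wander states, each lasting an exponential dwell time."""
    t, state, episodes = 0.0, "focus", []
    while t < total_time:
        dwell = rng.exponential(MEAN_DWELL[state])
        episodes.append((state, t, min(t + dwell, total_time)))
        t += dwell
        state = "wander" if state == "focus" else "focus"
    return episodes

def probe_state(episodes, probe_time):
    """What a thought probe would catch at a given moment."""
    for state, start, end in episodes:
        if start <= probe_time < end:
            return state
    return episodes[-1][0]

eps = simulate_episodes()
probes = rng.uniform(0, 600, size=20)             # 20 random thought probes
rate = np.mean([probe_state(eps, p) == "wander" for p in probes])
print(f"probe-caught mind-wandering rate: {rate:.2f}")
```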

We are currently examining the frequency of mind wandering under negative emotions such as fear (Yuji Yao), as well as the neural signatures of these spontaneous cognitive processes (Emily Cunningham).


Human-computer interaction

Human obstacle avoidance and robot control models

           It has been a great challenge to design a robot that can drive by itself to a goal without bumping into obstacles along the way.  This control problem has attracted much research; however, consistently reliable performance remains out of reach, especially in cluttered environments and with concave obstacles.  In contrast, humans and many animals maneuver with much greater dexterity, but how they achieve this performance is poorly understood at the mathematical level.

           Research comparing human and robotic navigation control mechanisms can significantly advance our understanding of both human motor systems and robotic control theory.  In collaboration with Dusan Stipanovic at IESE, we conducted a project examining how humans avoid obstacles, with the aim of improving the performance of automated vehicles.  Humans and leading control algorithms (e.g., a receding horizon controller) were tested in a remote navigation task in which they drove a vehicle around obstacles toward a goal. Parameters such as the number and type of obstacles as well as the feedback delay were varied. As expected, humans showed significantly more robust performance than the receding horizon controller. Using the human data, we then trained a new human-like receding horizon controller, which achieved better performance both in the percentage of successful runs to the goal without collisions and in the time required to reach the goal. The human-like automatic controller in turn provides a tool to model human navigation and steering strategies (Burns et al., 2010, 2011, 2012).
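
For readers unfamiliar with receding horizon control: at every step the controller optimizes a short simulated trajectory into the future, executes only the first action, and re-plans. The sketch below is a minimal random-sampling version for a point vehicle among circular obstacles; the dynamics, cost terms, and parameter values are illustrative assumptions, not the controllers used in the study.

```python
# Minimal receding horizon controller: sample candidate control sequences,
# roll each one forward, score against goal distance plus obstacle penalty,
# execute only the first action of the best sequence, then re-plan.
import numpy as np

rng = np.random.default_rng(1)
GOAL = np.array([10.0, 10.0])
OBSTACLES = [(np.array([5.0, 5.0]), 1.5)]          # (center, radius), assumed
DT, HORIZON, N_SAMPLES = 0.2, 10, 200

def cost(traj):
    c = np.linalg.norm(traj[-1] - GOAL)            # distance to goal at horizon
    for center, radius in OBSTACLES:               # heavy penalty inside obstacles
        d = np.linalg.norm(traj - center, axis=1)
        c += 100.0 * np.sum(np.maximum(0.0, radius - d))
    return c

def receding_horizon_step(pos):
    """Return the first action of the best sampled control sequence."""
    best_u, best_c = None, np.inf
    for _ in range(N_SAMPLES):
        u = rng.uniform(-1, 1, size=(HORIZON, 2))  # candidate velocity commands
        traj = pos + DT * np.cumsum(u, axis=0)     # simple integrator dynamics
        c = cost(traj)
        if c < best_c:
            best_c, best_u = c, u[0]
    return best_u

pos = np.array([0.0, 0.0])
for step in range(300):
    pos = pos + DT * receding_horizon_step(pos)
    if np.linalg.norm(pos - GOAL) < 0.5:           # close enough to the goal
        break
print(f"stopped after {step + 1} steps at {pos.round(2)}")
```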


Human perceived safety

A central issue in integrating flying robotic systems into human-populated environments is how to improve the comfort and perceived safety of the people around them. In collaboration with Naira Hovakimyan and Hyung-Jin Yoon, we conducted a series of experiments examining how people respond to the presence of a flying robot under various operating conditions, using both traditional physiological measures and a novel defensive head-movement measure. Across three experiments, participants passively observed quadrotor trajectories in a simulated virtual reality environment. The results showed that defensive head acceleration can serve as a new index specific to perceived safety, suggesting that applications intended for human comfort need to consider constraints from specific measures of perceived safety in addition to traditional measures of general physiological arousal (Widdowson et al., under review).
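
A defensive head-movement index of this general kind could be computed from head-tracking data roughly as follows: differentiate head position twice and count samples where the acceleration component directed away from the drone exceeds a threshold. This is a hypothetical sketch; the sampling rate, threshold, and lack of smoothing are assumptions, not the procedure in Widdowson et al.

```python
# Hypothetical defensive head-movement index from VR head-tracking data:
# numerically differentiate head position twice, then flag samples where
# acceleration directed away from the drone exceeds a threshold.
import numpy as np

def defensive_head_index(head_pos, drone_pos, dt, threshold=2.0):
    """head_pos, drone_pos: (T, 3) position arrays in meters; dt in seconds."""
    vel = np.gradient(head_pos, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    away = head_pos - drone_pos                     # unit vectors away from drone
    away /= np.linalg.norm(away, axis=1, keepdims=True)
    acc_away = np.sum(acc * away, axis=1)           # acceleration component away
    return np.mean(acc_away > threshold)            # fraction of defensive samples

# Toy example: the head recoils as the drone approaches along the x-axis.
dt = 1 / 90                                         # assumed 90 Hz tracking rate
t = np.arange(0, 2, dt)
drone = np.stack([2 - t, np.zeros_like(t), np.full_like(t, 1.7)], axis=1)
head = np.stack([-0.1 / (1 + np.exp(-(t - 1) * 20)), np.zeros_like(t),
                 np.full_like(t, 1.7)], axis=1)     # sigmoid recoil of ~10 cm
print(f"defensive index: {defensive_head_index(head, drone, dt):.2f}")
```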

To further examine the factors affecting perceived safety, we are currently examining people's behavioral responses to approaching drones with various characteristics, such as speed, altitude, size, engine noise, and greeting voice (Jazmyne Ross, Chris Widdowson, Kirk Ballew), and how these behavioral responses relate to the defensive head-movement and physiological measures (Widdowson et al., in preparation).


Biologically-Informed Artificial Intelligence

One of the main challenges in machine learning is that a network readily forgets previously learned information as it encounters new examples. For example, a deep network pre-trained on ImageNet acquires features useful for tasks such as classification. However, when it is further trained on images from other sources to improve its performance or to perform new tasks, it shows catastrophic forgetting of the previously learned tasks. In collaboration with Derek Hoiem and Zhen Zhu, Yinuo Peng and I are examining human category learning under the same scenario to see whether this type of catastrophic forgetting is universal or specific to deep neural networks.
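
The phenomenon itself is easy to reproduce at toy scale. The sketch below trains a small network on one task, then on a second task, and shows that accuracy on the first task collapses toward chance; the two synthetic tasks and the architecture are illustrative assumptions, far removed from the ImageNet setting.

```python
# Toy demonstration of catastrophic forgetting: a small network is trained
# on task A, then on task B, with accuracy on task A measured after each
# phase. Sequential training with no replay erases the task-A solution.
import numpy as np

rng = np.random.default_rng(0)

def make_task(label_fn, n=500):
    X = rng.uniform(-1, 1, size=(n, 2))
    return X, label_fn(X).astype(float)

task_a = make_task(lambda X: X[:, 0] > 0)          # task A: sign of x
task_b = make_task(lambda X: X[:, 1] > 0)          # task B: sign of y

W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output

def train(X, y, epochs=300, lr=0.5):
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, p = forward(X)
        d_out = (p - y[:, None]) / len(X)          # cross-entropy gradient
        d_h = (d_out @ W2.T) * (1 - h ** 2)        # backprop through tanh
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)

def accuracy(X, y):
    return np.mean((forward(X)[1][:, 0] > 0.5) == y)

train(*task_a)
print(f"task A after training A: {accuracy(*task_a):.2f}")
train(*task_b)                                      # sequential training, no replay
print(f"task A after training B: {accuracy(*task_a):.2f}  (forgetting)")
print(f"task B after training B: {accuracy(*task_b):.2f}")
```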


Spatial cognition

Egocentric updating model

           In a series of studies I conducted in collaboration with Elizabeth Spelke (Wang, 1999; Wang & Spelke, 2000, 2002, 2003; Wang, 2012), we found a surprising effect of disorientation on one's ability to point to a set of objects with internal consistency; that is, people's spatial knowledge is impaired when they are disoriented.  These results contradict the traditional mental map model, which claims that object locations are represented in external coordinates and are thus independent of self-motion.  Based on these findings, we proposed a new egocentric updating model in which each object is represented relative to the observer.  Moreover, these representations are updated independently based on the observer's estimation of self-motion, and are thus vulnerable to disorientation.

           The egocentric updating model makes several somewhat counterintuitive predictions about our sense of direction.  Because the egocentric coordinates of each target need to be “calculated” individually, the number of target locations one can update should be limited by the processing capacity.  That is, the efficiency of spatial updating should depend on the number of targets being updated, while traditional and intuitive models of spatial updating (e.g., mentally “plotting” one’s position on a “map” as one moves around) predict that the number of targets in the environment should not matter.  Using the Virtual Reality Cube, we showed that people’s ability to locate a target object after they move to a different viewpoint depends on the number of targets in the environment, supporting the egocentric updating model (Wang et al., 2006).  This set-size effect of spatial updating has also been extended to a more general path integration task (Wan et al., 2012).
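
The capacity argument follows directly from the arithmetic of the model: each stored egocentric vector must be transformed separately for every self-motion step, so the work grows with the number of targets. A minimal formalization, purely for illustration (the frame conventions and example values are assumptions, not the model's actual implementation):

```python
# Sketch of egocentric updating: every target is stored as a vector in the
# observer's reference frame, and each vector must be recomputed after every
# translation and rotation of the observer.
import numpy as np

def update_egocentric(targets, translation, rotation_deg):
    """targets: (N, 2) egocentric vectors (x right, y forward). The observer
    translates, then turns; every stored vector is transformed accordingly."""
    shifted = targets - translation                 # observer moves forward
    theta = np.radians(-rotation_deg)               # world turns opposite way
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return shifted @ R.T                            # one transform per target

# Three targets ahead of the observer; walk 1 m forward, turn 90 deg left.
targets = np.array([[0.0, 2.0], [1.0, 2.0], [-1.0, 2.0]])
updated = update_egocentric(targets, np.array([0.0, 1.0]), 90.0)
print(updated.round(2))  # each target recomputed independently
```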

           A second prediction from the egocentric updating model is that people may only keep track of their position and orientation relative to part of the world.  A series of studies showed that people indeed tend to lose track of their orientation relative to one environment when they walk into another environment, such as between campus and the lab room (Wang & Brockmole, 2003a, b;  Wang, 2006), between superimposed real and virtual environments (Wan et al., 2009), and between different floors of the same building (Street & Wang, in revision).

Spatial memory distortion

           It is well known that memories can become distorted by people's concepts and schemas.  Cristina Sampaio and I examined one of the long-debated questions about memory distortion: whether the memory representations themselves are distorted, or whether the memories are intact and errors arise when multiple representations are combined to make a response.  In the spatial domain, memory of a location is typically biased toward the center of the region the object belongs to.  In one approach, we used a recognition task to test whether an unbiased memory representation can be accessed.  When people were asked to recall where an object had been presented 1.5 s earlier, they made systematic errors.  However, when asked to choose between the original location and the location they had just recalled, people correctly picked the original location, even though they had just explicitly reported the other one, suggesting that the original memory survives the delay period and can be accessed through a memory task that poses fewer response demands (Sampaio & Wang, 2009).  These findings support the idea that memories themselves are intact, and errors occur in some types of response processes but not others.

           In a second approach, we introduced new category regions during the response period to test the hypothesis that spatial memory distortions are due not to distorted memory but to information integration processes at response.  We found that the memory bias reflected the new regions but not the original ones, suggesting that unbiased memories do survive the delay despite the persisting influence of the original categorical representation (Sampaio & Wang, 2010).  Moreover, as the delay increased, the influence of the alternative category increased while that of the default encoded categories did not, again contradicting the hypothesis that memory distortion occurs during the delay and supporting the unbiased memory hypothesis (Sampaio & Wang, 2012).  A new Response-based Category Adjustment (RCA) model was proposed to account for these findings.
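
In category-adjustment terms, a reported location can be modeled as a reliability-weighted blend of the fine-grained memory trace and the prototype of whichever category region is active at response; on the RCA account, the blending happens at response, so introducing a new region at test shifts the bias toward the new prototype. A minimal sketch, with illustrative weights and locations (not the fitted RCA model):

```python
# Sketch of a category-adjustment estimate: the report blends an unbiased
# fine-grained memory with the center of the category region active at
# response time. Swapping in a new region shifts the bias accordingly.
import numpy as np

def reported_location(memory_trace, category_center, w_memory=0.7):
    """Reliability-weighted blend of fine-grained memory and category."""
    return w_memory * memory_trace + (1 - w_memory) * category_center

true_loc = np.array([3.0, 7.0])                    # unbiased memory trace
old_center = np.array([5.0, 5.0])                  # region active at encoding
new_center = np.array([1.0, 9.0])                  # region introduced at test

print(reported_location(true_loc, old_center))     # biased toward old center
print(reported_location(true_loc, new_center))     # bias follows the new region
```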



Visual threshold and quantum theory

Despite its enormous success in predicting experimental results involving microscopic entities, the interpretation of quantum theory remains a much-debated topic to date. One of the most famous paradoxes is the quantum measurement problem, usually illustrated as the "Schrödinger's cat" problem: how do superposition states of microscopic entities (e.g., a photon at left and right at the same time) "collapse" during measurement to yield a single, definite state of the measurement device (e.g., a cat either dead or alive, but not both)? In collaboration with Tony Leggett and Paul Kwiat in the Physics Department, we started a project to test quantum effects, for the first time, directly via the human visual system, using photon sources that can generate precisely one photon at a time.

The major challenge in testing quantum mechanics on humans concerns the limits of our ability to detect single photons. Psychophysical studies of the human visual threshold and physiological studies of photon absorption in rods suggest that a single photon may be detectable. However, due to limitations in experimental techniques, human single-photon detection has never been demonstrated experimentally. With the new single-photon generation technique developed in Kwiat's lab, it is possible for the first time to directly measure human perception of single photons and the visual sensitivity function (Holmes et al., 2012). An experiment on human single-photon detection is ongoing.
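
The classic analysis behind the claim that single photons may be detectable is the frequency-of-seeing calculation of Hecht, Shlaer and Pirenne (1942): if photon absorptions are Poisson-distributed and detection requires at least k absorptions, the probability of seeing a flash is an upper-tail Poisson sum. A short illustration (the parameter values are textbook examples, not data from the project above):

```python
# Frequency-of-seeing curve under the classic Poisson-absorption model:
# P(see) = P(at least k absorptions) for a flash with mean absorbed
# intensity m, i.e., 1 minus the lower Poisson tail.
import math

def prob_seeing(mean_absorbed, k):
    """Probability of at least k Poisson events with mean mean_absorbed."""
    below = sum(math.exp(-mean_absorbed) * mean_absorbed ** i / math.factorial(i)
                for i in range(k))
    return 1 - below

# k = 6 is a classic estimate of the absorption threshold for detection.
for m in (1, 2, 5, 10):
    print(f"mean absorbed = {m:2d}: P(see) = {prob_seeing(m, k=6):.3f}")
```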


Four-dimensional spatial intuition and non-Euclidean spatial learning

4-D space

           Space and time are two of the most fundamental concepts in human intelligence, and much of human thinking is grounded in spatial metaphors and imagery.  Research has shown that people can mentally create novel objects by freely combining and changing features such as color, shape, size, and orientation, and by re-arranging object parts.  However, this powerful spatial imagery faculty seems to fail at the most primitive feature of space, namely its dimensionality.  Can people represent and reason about higher-dimensional objects other than through mathematical equations?  Some researchers believe that perception of objects and events is impossible without a priori representations of space and time (e.g., Kant, 1965), and the a priori representation of space may be strictly confined by our perceptual experience of the physical world and our innate disposition toward 3 dimensions; thus higher-dimensional spatial reasoning may only be accomplished by symbolic manipulation.

           In collaboration with George Francis of the Department of Mathematics, we developed several objective tasks to measure people's ability to learn simple four-dimensional geometric objects.  In one study, participants studied 3D slices of a random 4D object in the Virtual Reality Cube by moving a 3D observation window along the fourth dimension.  They then made spatial judgments (the distance between two of the object's vertices and the angular relationship among three of its vertices).  Our data showed that participants with basic knowledge of geometry could make both distance and orientation judgments about 4D objects, providing the first objective evidence that the human perceptual system is capable of learning and representing the spatial structure of a 4D object from visual experience, despite the fact that we evolved in a world of only 3 spatial dimensions (Ambinder et al., 2009).

           To further examine whether human 4D judgments extend to properties unique to higher-dimensional space, we used a hyper-volume judgment task.  Observers studied a 3D orthogonal projection of a random wireframe 4D object rotating around the yz-plane and adjusted the size of a hyper-block to match the hyper-volume of the target.  The judgments correlated significantly with the hyper-volume but not with lower-dimensional variables such as the mean 3D volume, suggesting that at least some people are able to judge 4D hyper-volume and providing strong evidence of true 4D spatial representations (Wang, 2014a, b, c; Wang & Street, 2013).
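
Two of the quantities involved are simple to compute: the hyper-volume of a 4D hyper-block is just the product of its four side lengths, and an orthogonal projection to 3D drops one coordinate. The sketch below also includes a rotation in the x-w plane, i.e., "around the yz-plane"; the dimensions and angle are arbitrary examples, not the study's stimuli.

```python
# Basic 4D stimulus computations: hyper-volume of a hyper-block, orthogonal
# projection of 4D vertices to 3D, and a rotation in the x-w plane (one of
# the rotations that exists only in 4 or more dimensions).
import numpy as np

def hypervolume(sides):
    """Hyper-volume of a 4D hyper-block with side lengths (a, b, c, d)."""
    return np.prod(sides)

def project_to_3d(vertices_4d):
    """Orthogonal projection of (N, 4) vertices onto 3D: drop w."""
    return vertices_4d[:, :3]

def rotate_xw(vertices_4d, angle):
    """Rotate 4D points in the x-w plane, leaving the yz-plane fixed."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(4)
    R[0, 0], R[0, 3], R[3, 0], R[3, 3] = c, -s, s, c
    return vertices_4d @ R.T

print(hypervolume([2.0, 3.0, 1.0, 4.0]))           # 24.0 hyper-units
corners = np.array(np.meshgrid(*[[0, 1]] * 4)).reshape(4, -1).T  # unit tesseract
print(project_to_3d(rotate_xw(corners.astype(float), np.pi / 6)).shape)
```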


Non-Euclidean space

One of the basic properties of space is its curvature, i.e., whether it is Euclidean (flat) or non-Euclidean (curved). Chris Widdowson and I conducted a series of studies examining the types of Euclidean representations people form while learning virtual wormhole mazes. Participants explored Euclidean or non-Euclidean tunnel mazes and drew maps of the landmark layout on a 2D canvas. The results showed that people adopt different but internally consistent strategies, some mainly preserving distance information and others mainly preserving turning angles; the straightness of the segments was mostly preserved. These results suggest that representations of non-Euclidean space may be highly variable across individuals.

We also constructed true non-Euclidean environments, such as spherical and hyperbolic spaces, in virtual reality. The resulting simulation depicts a non-Euclidean 3D universe with earth-like planets of different colors floating sparsely in space. Users turn their body, aim a virtual beam pointer with a hand-held controller in the direction they want to go, and press a button to travel forward along the geodesic of the curved space. Participants traveled along the two legs of an outbound journey and then pointed to the direction of the starting point (home), in a Euclidean, a hyperbolic, or a spherical space. The results showed that people's responses matched the direction of the Euclidean origin regardless of the curvature of the space itself. These data suggest that the path integration / spatial updating system operates on Euclidean geometry, even when curvature violations are clearly present (Widdowson & Wang, in press).
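
The geometry of the task can be made concrete with a small computation: walk the two legs of an outbound path as geodesics on a unit sphere, compute the true bearing back to the start, and compare it with the Euclidean prediction for the same legs and turn. The leg lengths and turn angle below are arbitrary example values, not the experiment's parameters.

```python
# Two-leg homing on a unit sphere versus the Euclidean prediction.
# Positions are unit vectors; headings are unit tangent vectors.
import numpy as np

def walk_sphere(p, h, dist):
    """Advance dist (radians of arc) along the great circle through p, h."""
    return (p * np.cos(dist) + h * np.sin(dist),
            -p * np.sin(dist) + h * np.cos(dist))

def turn_sphere(p, h, angle):
    """Rotate the heading within the tangent plane at p (positive = left)."""
    return h * np.cos(angle) + np.cross(p, h) * np.sin(angle)

def home_bearing_sphere(p, h, start):
    """Signed angle from the current heading to the geodesic toward start."""
    v = start - np.dot(start, p) * p           # project start into tangent plane
    v /= np.linalg.norm(v)
    return np.degrees(np.arctan2(np.dot(np.cross(h, v), p), np.dot(h, v)))

leg, turn = 0.8, np.radians(90)                # legs in radians of arc
start = np.array([0.0, 0.0, 1.0])              # begin at the "north pole"
p, h = start, np.array([1.0, 0.0, 0.0])
p, h = walk_sphere(p, h, leg)                  # leg 1
h = turn_sphere(p, h, turn)                    # 90-degree left turn
p, h = walk_sphere(p, h, leg)                  # leg 2

# Euclidean prediction for equal legs and a 90-degree left turn:
# home lies 90 + atan(leg2 / leg1) = 135 degrees to the left.
print(f"true home bearing on sphere: {home_bearing_sphere(p, h, start):+.1f} deg")
print("Euclidean prediction:        +135.0 deg")
```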