Research

Updated: August 25, 2018

 


Mind wandering

           Mind wandering is the well-known phenomenon of intrusive thoughts arising during an ongoing task when attention and executive control fail. However, its subcomponents are not well understood. We conducted a series of studies examining how people’s working memory capacity affects both the initiation and the termination of mind wandering during meditation. Participants completed an operation span (OSPAN) task and then practiced mindfulness meditation while pressing a key whenever they realized they had an intrusive thought (self-reported mind wandering) or responding to probes presented at random intervals ranging from 5 to 40 seconds (probe-caught mind wandering). Building on these traditional measures, we developed a novel method to estimate the unconscious initiation time of a mind-wandering event. Using this new analysis, we found that people with higher working memory capacity are able to stay focused longer. However, once mind wandering begins, it lasts a comparable duration regardless of working memory capacity (Voss, Zukosky & Wang, 2018).
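
           As a minimal illustration of the logic behind such an estimate (not the published method), suppose probes arrive uniformly in time: a probe that catches an episode lands, on average, about halfway through it, so combining the mean probe-caught and self-caught report times can recover the unobservable mean onset. A hypothetical Python sketch:

    # Simulated onset estimation from self-caught and probe-caught
    # mind-wandering reports; all values are hypothetical.
    import random

    random.seed(1)
    self_caught, probe_caught = [], []
    for _ in range(100_000):
        onset = random.uniform(20, 60)     # focused period before MW begins (s)
        duration = random.uniform(5, 15)   # length of the MW episode (s)
        self_caught.append(onset + duration)       # key press at termination
        probe = random.uniform(0, 120)             # one probe at a random time
        if onset <= probe <= onset + duration:     # probe lands inside episode
            probe_caught.append(probe)

    mean_self = sum(self_caught) / len(self_caught)
    mean_probe = sum(probe_caught) / len(probe_caught)
    # E[probe catch] ~ E[onset] + E[duration] / 2, so the mean onset is
    # roughly (slightly biased, since probes favor longer episodes):
    print(2 * mean_probe - mean_self)      # ~40.8 s vs. a true mean of 40 s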

          We are currently further developing novel methods to measure the time course of mind wandering, as well as the neural signature of these spontaneous cognitive processes.

 


Human-computer interaction

Human obstacle avoidance and robot control models

           Designing a robot that can drive itself to a goal without bumping into obstacles along the way has been a great challenge. This control problem has attracted much research; however, consistently reliable performance remains out of reach, especially in cluttered environments and with concave obstacles. In contrast, humans and many animals maneuver with far more dexterity, yet how they achieve this performance is poorly understood at the mathematical level.

           Research comparing human and robotic navigation control mechanisms can significantly advance our understanding of both human motor systems and robotic control theories. In collaboration with Dusan Stipanovic at IESE, we conducted a project examining how humans avoid obstacles, with the aim of improving the performance of automatic vehicles. Humans and leading control algorithms (e.g., a receding horizon controller) were placed in a remote navigation task in which they drove a vehicle around obstacles toward a goal. Parameters such as the number and type of obstacles, as well as the feedback delay, were varied. As expected, humans showed significantly more robust performance than the receding horizon controller. Using the human data, we then trained a new human-like receding horizon controller that achieved better performance both in the percentage of runs reaching the goal without collision and in the time required to reach the goal. The human-like automatic controller in turn provides a tool for modeling human navigation and steering strategies (Burns et al., 2010, 2011, 2012).
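
           For readers unfamiliar with the approach, a receding horizon controller repeatedly simulates candidate control sequences over a short look-ahead window, scores them for goal progress and obstacle clearance, executes only the first action of the best sequence, and then re-plans. Below is a minimal random-sampling sketch of this idea in Python (not the project’s actual controller; all parameters are illustrative):

    # Minimal receding-horizon control loop for a 2-D point vehicle.
    import math, random

    GOAL = (10.0, 10.0)
    OBSTACLES = [(5.0, 5.0, 1.5), (7.0, 9.0, 1.0)]  # (x, y, radius)
    HORIZON, CANDIDATES, DT = 10, 200, 0.2

    def cost(traj):
        """Penalize distance to goal at the horizon plus obstacle intrusion."""
        c = math.hypot(traj[-1][0] - GOAL[0], traj[-1][1] - GOAL[1])
        for x, y in traj:
            for ox, oy, r in OBSTACLES:
                d = math.hypot(x - ox, y - oy)
                if d < r + 0.5:                    # soft clearance margin
                    c += 100.0 * (r + 0.5 - d)
        return c

    def step(state):
        """Execute the first action of the best sampled heading sequence."""
        best_heading, best_cost = 0.0, float("inf")
        for _ in range(CANDIDATES):
            headings = [random.uniform(-math.pi, math.pi) for _ in range(HORIZON)]
            x, y = state
            traj = []
            for h in headings:
                x, y = x + DT * math.cos(h), y + DT * math.sin(h)
                traj.append((x, y))
            c = cost(traj)
            if c < best_cost:
                best_heading, best_cost = headings[0], c
        return (state[0] + DT * math.cos(best_heading),
                state[1] + DT * math.sin(best_heading))

    state = (0.0, 0.0)
    for _ in range(500):                           # re-plan at every step
        state = step(state)
        if math.hypot(state[0] - GOAL[0], state[1] - GOAL[1]) < 0.3:
            break
    print(state)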

         

Human perceived safety

           A central issue in integrating flying robotic systems into human-populated environments is how to improve the comfort and perceived safety of the people around them. We conducted a series of studies examining how people respond to the presence of a flying robot under various operating conditions. Across three experiments, participants passively observed quadrotor trajectories in a simulated virtual reality environment. Quadrotor flight paths were manipulated in terms of velocity, altitude, and acoustic profile to examine their effects on physiological arousal and head motion kinematics. In all three experiments, arousal was greater when the quadrotor flew at higher speed, with the audio on, and at eye height rather than overhead, but decreased across subsequent trials. In addition, head motion accelerated away from the quadrotor on its approach, indicating avoidance behavior. In general, the human discomfort function increases with drone speed and decreases with exposure. To minimize anxiety for nearby humans, quadrotor flight should maintain a path characterized by lower velocity, higher altitude, and a quieter acoustic profile.
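
           The qualitative pattern can be summarized by a simple additive discomfort model. The sketch below uses hypothetical coefficients chosen only to illustrate the direction of each effect; it is not a model fitted to the data.

    # Illustrative additive model of perceived discomfort around a drone.
    def discomfort(speed, altitude, audio_on, trial,
                   b0=0.2, b_speed=0.5, b_alt=-0.3, b_audio=0.4, b_trial=-0.05):
        """All coefficients are hypothetical placeholders."""
        return max(0.0, b0 + b_speed * speed + b_alt * altitude
                   + b_audio * (1 if audio_on else 0) + b_trial * trial)

    # Fast, noisy, at eye height, first trial -> highest predicted discomfort.
    print(discomfort(speed=2.0, altitude=0.0, audio_on=True, trial=1))
    # Slow, quiet, overhead, after repeated exposure -> lowest.
    print(discomfort(speed=0.5, altitude=2.0, audio_on=False, trial=10))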

 


Spatial cognition

Egocentric updating model

           In a series of studies I conducted in collaboration with Elizabeth Spelke (Wang, 1999; Wang & Spelke, 2000, 2002, 2003; Wang, 2012), we found a surprising effect of disorientation on one’s ability to point to a set of objects with internal consistency; that is, people’s spatial knowledge is impaired when they are disoriented. These results contradict the traditional mental map model, which claims that object locations are represented in external coordinates and are thus independent of self-motion. Based on these findings, we proposed a new egocentric updating model in which each object is represented relative to the observer. Moreover, these representations are updated independently based on the observer’s estimate of self-motion, and are thus vulnerable to disorientation.

           The egocentric updating model makes several somewhat counterintuitive predictions about our sense of direction.  Because the egocentric coordinates of each target need to be “calculated” individually, the number of target locations one can update should be limited by the processing capacity.  That is, the efficiency of spatial updating should depend on the number of targets being updated, while traditional and intuitive models of spatial updating (e.g., mentally “plotting” one’s position on a “map” as one moves around) predict that the number of targets in the environment should not matter.  Using the Virtual Reality Cube, we showed that people’s ability to locate a target object after they move to a different viewpoint depends on the number of targets in the environment, supporting the egocentric updating model (Wang et al., 2006).  This set-size effect of spatial updating has also been extended to a more general path integration task (Wan et al., 2012).
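
           A minimal sketch may make the computation concrete: each target is stored as a vector in the observer’s reference frame and is transformed separately for every self-motion estimate, so both the processing cost and the accumulated error grow with the number of targets. (The Python below is our illustration of the model’s logic; the noise model is hypothetical.)

    # Egocentric updating: each target vector is rotated and translated
    # independently with its own noisy estimate of the observer's motion.
    import math, random

    def update_targets(targets, turn, advance, noise=0.05):
        """Apply the inverse of a turn (rad) and a forward step to every
        egocentric target vector, with independent per-target noise."""
        updated = []
        for x, y in targets:
            t = turn + random.gauss(0.0, noise)
            a = advance + random.gauss(0.0, noise)
            xr = x * math.cos(-t) - y * math.sin(-t)   # observer turned by t
            yr = x * math.sin(-t) + y * math.cos(-t)
            updated.append((xr, yr - a))               # observer advanced by a
        return updated

    targets = [(1.0, 2.0), (-2.0, 1.0), (0.5, -3.0)]   # egocentric (x, y), meters
    for _ in range(8):                                 # a short walk
        targets = update_targets(targets, turn=math.pi / 8, advance=0.5)
    print(targets)
    # Updating is per target, so errors accumulate independently and the
    # workload grows with set size -- the predicted set-size effect.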

           A second prediction of the egocentric updating model is that people may only keep track of their position and orientation relative to part of the world. A series of studies showed that people indeed tend to lose track of their orientation relative to one environment when they walk into another, such as between campus and the lab room (Wang & Brockmole, 2003a, 2003b; Wang, 2006), between superimposed real and virtual environments (Wan et al., 2009), and between different floors of the same building (Street & Wang, in revision).

 

Spatial memory distortion

           It is well known that memories can become distorted by people’s concepts and schemas. Cristina Sampaio and I examined one of the long-debated questions about memory distortion: whether the memory representations themselves are distorted, or the memories are intact and errors arise when multiple representations are combined to make a response. In the spatial domain, memory of a location is typically biased toward the center of the region the object belongs to. In one approach, we used a recognition task to test whether an unbiased memory representation can be accessed. When people were asked to recall where an object had been presented 1.5 s earlier, they made systematic errors. However, when asked to choose between the original location and the location they had just recalled, people correctly picked the original location, even though they had just explicitly reported the other one. This suggests that the original memory survives the delay period and can be accessed through a memory task that poses fewer response demands (Sampaio & Wang, 2009). These findings support the idea that memories themselves are intact and that errors occur in some types of responding processes but not others.

           In a second approach, we introduced new category regions during the responding period to test the hypothesis that spatial memory distortions arise not from distorted memory but from the information-integration processes at response. We found that the memory bias reflected the new regions rather than the original ones, suggesting that unbiased memories do survive the delay despite the persisting influence of the original categorical representation (Sampaio & Wang, 2010). Moreover, as the delay increased, the influence of the alternative category increased while that of the default encoded categories did not, again contradicting the hypothesis that memory distortion occurs during the delay and supporting the unbiased-memory hypothesis (Sampaio & Wang, 2012). We proposed a new Response-based Category Adjustment (RCA) model to account for these findings.
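
           The contrast can be made concrete with the standard weighted-average form of category adjustment, in which the reported location blends the fine-grained trace with the prototype of whichever category is active at the time of response. The toy computation below uses illustrative values; it is not the fitted RCA model.

    # Toy category-adjustment response: a weighted average of the
    # fine-grained memory and the currently active category prototype.
    def respond(fine_memory, prototype, w=0.7):
        """w weighs the fine-grained trace; 1 - w weighs the category."""
        return w * fine_memory + (1 - w) * prototype

    true_location = 30.0        # e.g., degrees within a circular display
    encoded_prototype = 45.0    # center of the region used at encoding
    new_prototype = 10.0        # center of a region introduced at response

    # RCA-style prediction: the unbiased trace survives the delay, so the
    # bias follows whichever category is available when responding.
    print(respond(true_location, encoded_prototype))  # 34.5, pulled toward 45
    print(respond(true_location, new_prototype))      # 24.0, pulled toward 10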

 


Visual threshold and quantum theory

           Despite its enormous success in predicting experimental results involving microscopic entities, the interpretation of quantum theory remains a much-debated topic to date. One of the most famous puzzles is the quantum measurement problem, usually illustrated by the “Schrödinger’s cat” thought experiment: how do superposition states of microscopic entities (e.g., a photon at Left and Right at the same time) “collapse” during measurement into a single, definite state of the measurement device (e.g., a cat either dead or alive but not both)? In collaboration with Tony Leggett and Paul Kwiat in the Physics Department, we started a project to test quantum effects, for the first time, directly via the human visual system, using photon sources that can generate precisely one photon at a time.

           The major challenge in testing quantum mechanics with human observers is the limit of our ability to detect single photons. Psychophysical studies of the human visual threshold and physiological studies of photon absorption in rods suggest that a single photon may be detectable. However, due to limitations in experimental techniques, human single-photon detection has never been demonstrated experimentally. With the new single-photon generation technique developed in Kwiat’s lab, it is possible for the first time to directly measure human perception of single photons and the visual sensitivity function.
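
           The detection question is classically framed with a Poisson model of threshold vision (in the tradition of Hecht, Shlaer, and Pirenne): a flash is “seen” when at least K photoisomerizations occur, so the probability of seeing is a cumulative Poisson in the mean number of absorbed photons. A sketch with illustrative parameter values:

    # Poisson model of threshold vision (parameter values illustrative).
    import math

    def p_see(mean_photons, efficiency, k_threshold):
        """Probability of at least k_threshold photoisomerizations."""
        lam = efficiency * mean_photons        # mean absorbed photons
        p_below = sum(math.exp(-lam) * lam**k / math.factorial(k)
                      for k in range(k_threshold))
        return 1.0 - p_below

    def p_see_single_photon(efficiency, guess_rate=0.0):
        """With exactly one photon, detection collapses to a Bernoulli event."""
        return efficiency + (1.0 - efficiency) * guess_rate

    print(p_see(mean_photons=100, efficiency=0.06, k_threshold=6))  # ~0.55
    print(p_see_single_photon(efficiency=0.06))                     # 0.06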

           We are currently running an experiment to measure human single-photon detection, which could finally settle the long-standing question of the human visual threshold. The results will then be used to examine two classic quantum effects. The first is superposition: human observers will look for differences between superposition and mixed quantum states. The second is entanglement: we will run an EPR experiment testing quantum nonlocality in which one of the photon detectors is replaced by a human observer. Positive results in any of these experiments would be very significant; even “non-surprising” outcomes would represent a major step forward in our understanding of the applicability of quantum theory beyond the atomic realm. The theoretical analysis of this research was presented at a conference (Holmes et al., 2012).

 


Four-dimensional spatial intuition and non-Euclidean spatial learning

 

4-D space

           Space and time are two of the most fundamental concepts in human intelligence, and much of human thinking is grounded in spatial metaphors and imagery. Research has shown that people can mentally create novel objects by freely combining and changing features such as color, shape, size, and orientation, and by rearranging object parts. However, this powerful spatial imagery faculty seems to fail at the most primitive feature of space, namely its dimensionality. Can people represent and reason about higher-dimensional objects by means other than mathematical equations? Some researchers believe that perception of objects and events is impossible without a priori representations of space and time (e.g., Kant, 1965), and that the a priori representation of space may be strictly confined by our perceptual experience of the physical world and our innate disposition toward three dimensions; thus higher-dimensional spatial reasoning may only be accomplished through symbolic manipulation.

           In collaboration with George Francis at the Department of Mathematics, we developed several objective tasks to measure people’s ability to learn simple four-dimensional geometric objects. In one study, participants studied 3D slices of a random 4D object in the Virtual Reality Cube by moving a 3D observation window along the fourth dimension. They then made spatial judgments (the distance between two of the object’s vertices and the angular relationship among three of its vertices). Our data showed that participants with basic knowledge of geometry could make both distance and orientation judgments about 4D objects, providing the first objective evidence that the human perceptual system is capable of learning and representing the spatial structure of a 4D object from visual experience, despite the fact that we evolved in a world of only three spatial dimensions (Ambinder et al., 2009).
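
           The slicing operation itself is easy to state: the 3D slice of a 4D wireframe at position w0 along the fourth dimension consists of the points where the object’s edges cross the hyperplane w = w0. A minimal sketch (vertex data hypothetical):

    # 3-D slice of a 4-D wireframe: every edge whose endpoints straddle
    # the hyperplane w = w0 contributes one interpolated point.
    def slice_w(edges, w0):
        points = []
        for p, q in edges:                      # each vertex is (x, y, z, w)
            if (p[3] - w0) * (q[3] - w0) < 0:   # endpoints on opposite sides
                t = (w0 - p[3]) / (q[3] - p[3])
                points.append(tuple(p[i] + t * (q[i] - p[i]) for i in range(3)))
        return points

    # One edge of a hypothetical 4-D object, crossing w = 0.5 at its midpoint:
    print(slice_w([((0, 0, 0, 0), (1, 1, 1, 1))], 0.5))  # [(0.5, 0.5, 0.5)]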

           To further examine whether human 4D judgments can extend to completely novel properties unique to higher-dimensional space, we used a hyper-volume judgment task. Observers studied a 3D orthogonal projection of a random wireframe 4D object rotating around the yz-plane and adjusted the size of a hyper-block to match the hyper-volume of the target. The judgments correlated significantly with the hyper-volume but not with lower-dimensional variables such as the mean 3D volume, suggesting that at least some people are able to judge 4D hyper-volume and providing strong evidence of true 4D spatial representations (Wang, 2014a, b, c; Wang & Street, 2013).
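
           The display geometry is straightforward to reproduce: a rotation “around the yz-plane” leaves y and z fixed while mixing the x and w axes, the orthogonal projection simply drops the fourth coordinate, and the hyper-volume of a hyper-block is the product of its four side lengths. A minimal sketch (values hypothetical):

    # Rotation about the yz-plane (mixing x and w), orthogonal projection
    # to 3-D, and hyper-block hyper-volume. Values are hypothetical.
    import math

    def rotate_xw(point, angle):
        x, y, z, w = point
        c, s = math.cos(angle), math.sin(angle)
        return (c * x - s * w, y, z, s * x + c * w)

    def project_3d(point):
        return point[:3]                       # discard the w coordinate

    def hyper_volume(sides):
        a, b, c, d = sides
        return a * b * c * d

    vertex = (1.0, 0.5, -0.3, 2.0)
    for step in range(4):                      # successive rotation frames
        print(project_3d(rotate_xw(vertex, step * math.pi / 8)))
    print(hyper_volume((1.0, 2.0, 0.5, 1.5)))  # 1.5 units^4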

         

Non-Euclidean space

          One of the basic properties of space is its curvature, i.e., whether it is Euclidean (flat) or not (curved). We conducted a series of studies examining how people represent non-Euclidean space using two virtual tunnel mazes. One maze formed a square (Euclidean space), while the other contained a path segment shortened by a portal, so that the overall maze violated the principles of Euclidean geometry. Each segment contained two landmarks.

          Participants learned the mazes by freely traversing the paths using a virtual reality HMD, then completed a pointing task and a map-drawing task. Items in the pointing task were separated into local and global landmark pairs and tested independently: local landmark pairs were adjacent but in different segments, while global landmark pairs were in opposite segments. The pointing responses for each landmark pair were compared to the corresponding directions indicated in each participant’s drawn map and in three hypothetical Euclidean maps that preserve the maximum number of spatial relations by lengthening the shortened segment. The three maps differ in how the two landmarks are placed in the lengthened segment: one keeps each landmark at the same distance from its nearest corner (corner map), one places them proportionally within the lengthened segment (scale map), and one places them at the mean position relative to the two corners (mean position map), as sketched below. The mean errors of the pointing directions relative to each of these four maps were calculated to assess the underlying representation guiding the pointing responses.
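
          Concretely, for a landmark at distance d from its nearest corner in a segment of original length short_len that is lengthened to long_len, the three placement rules can be written out directly (variable names and example values are ours):

    # The three hypothetical landmark placements in the lengthened segment.
    def corner_map(d, short_len, long_len):
        return d                               # keep distance to nearest corner

    def scale_map(d, short_len, long_len):
        return d * long_len / short_len        # keep proportional position

    def mean_position_map(d, short_len, long_len):
        # Average of preserving the distance to the near corner and
        # preserving the distance to the far corner.
        from_far = long_len - (short_len - d)
        return (d + from_far) / 2

    for rule in (corner_map, scale_map, mean_position_map):
        print(rule.__name__, rule(d=2.0, short_len=6.0, long_len=12.0))
    # corner_map 2.0, scale_map 4.0, mean_position_map 5.0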

          The data showed that the egocentric pointing judgments were most similar to the corner map, with errors comparable to those in the regular square maze, and significantly different from the participants’ own drawn maps, especially for the local landmark pairs. This suggests that people’s pointing judgments in a globally non-Euclidean maze reflect an underlying Euclidean map that preserves each landmark’s distance from its nearest corner.

          We are currently constructing continuous non-Euclidean environments in virtual reality. The resulting simulation depicts a non-Euclidean 3D universe with earth-like planets of different colors floating sparsely in space. Users turn their bodies, aim a virtual beam pointer with a hand-held controller in the direction they want to go, and press a button to travel forward along the geodesic of the curved space. An application has been developed for a navigation experiment examining people’s ability to complete a point-to-origin task in two non-Euclidean environments (3D spherical space and 3D hyperbolic space) and a Euclidean control.
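
          For the spherical case, geodesic travel has a compact closed form: with the user’s position p a unit vector in R^4 and the heading d a unit tangent vector orthogonal to p, moving an arc length s along the great circle gives p' = cos(s)p + sin(s)d and d' = -sin(s)p + cos(s)d. A minimal sketch (not the application’s actual code):

    # Geodesic travel on the unit 3-sphere: position p and heading d are
    # unit vectors in R^4 with p . d = 0. Not the application's code.
    import math

    def geodesic_step(p, d, s):
        """Move arc length s along the great circle through p toward d."""
        c, sn = math.cos(s), math.sin(s)
        p_new = tuple(c * pi + sn * di for pi, di in zip(p, d))
        d_new = tuple(-sn * pi + c * di for pi, di in zip(p, d))
        return p_new, d_new

    p = (1.0, 0.0, 0.0, 0.0)       # start position on the 3-sphere
    d = (0.0, 1.0, 0.0, 0.0)       # initial heading, tangent at p
    for _ in range(4):
        p, d = geodesic_step(p, d, math.pi / 8)
    print(p)                       # after arc length pi/2: ~(0, 1, 0, 0)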