Updated:  January 7, 2015


Spatial cognition

Egocentric updating model

         In a series of studies conducted in collaboration with Elizabeth Spelke (Wang, 1999; Wang & Spelke, 2000, 2002, 2003; Wang, 2012), we found a surprising effect of disorientation on people’s ability to point to a set of objects with internal consistency: spatial knowledge is impaired when people are disoriented.  These results contradict the traditional mental-map model, which holds that object locations are represented in external coordinates and are therefore independent of self-motion.  Based on these findings, we proposed a new egocentric updating model in which each object is represented relative to the observer.  These representations are updated independently, based on the observer’s estimate of self-motion, and are thus vulnerable to disorientation.

         The egocentric updating model makes several somewhat counterintuitive predictions about our sense of direction.  Because the egocentric coordinates of each target must be “calculated” individually, the number of target locations one can update should be limited by processing capacity.  That is, the efficiency of spatial updating should depend on the number of targets being updated, whereas traditional, intuitive models of spatial updating (e.g., mentally “plotting” one’s position on a “map” as one moves around) predict that the number of targets in the environment should not matter.  Using the Virtual Reality Cube, we showed that people’s ability to locate a target object after moving to a different viewpoint depends on the number of targets in the environment, supporting the egocentric updating model (Wang et al., 2006).  This set-size effect of spatial updating has since been extended to a more general path-integration task (Wan et al., 2012).
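The egocentric account above can be sketched as a small simulation (our illustration, not the published model's code; the noise levels, set size, and rotation below are all hypothetical).  Each target's egocentric bearing is updated with its own noisy estimate of the observer's rotation, so errors across targets are uncorrelated and the between-target spread of pointing errors grows when self-motion estimates degrade, as they do under disorientation:

```python
import math
import random

def update_targets(targets, true_rotation, noise_sd, rng):
    """Update each target's egocentric bearing independently.

    Each update draws its own noisy estimate of the observer's
    rotation, so errors across targets are uncorrelated."""
    updated = []
    for bearing in targets:
        estimated = true_rotation + rng.gauss(0.0, noise_sd)
        updated.append(bearing - estimated)  # new egocentric bearing
    return updated

def pointing_inconsistency(true_bearings, updated, true_rotation):
    """Spread (SD) of the per-target errors: an index of the internal
    consistency of the pointed-to configuration."""
    errors = [u - (b - true_rotation)
              for b, u in zip(true_bearings, updated)]
    mean = sum(errors) / len(errors)
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))

rng = random.Random(0)
targets = [rng.uniform(0, 2 * math.pi) for _ in range(6)]
oriented = update_targets(targets, math.pi / 2, noise_sd=0.05, rng=rng)
disoriented = update_targets(targets, math.pi / 2, noise_sd=0.6, rng=rng)
print(pointing_inconsistency(targets, oriented, math.pi / 2))
print(pointing_inconsistency(targets, disoriented, math.pi / 2))
```

Because each bearing requires its own update, a capacity limit on how many such updates can be carried out would produce the set-size effect described above; a mental-map scheme, by contrast, would update a single observer position regardless of the number of targets.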

         A second prediction of the egocentric updating model is that people may keep track of their position and orientation relative to only part of the world.  A series of studies showed that people indeed tend to lose track of their orientation relative to one environment when they walk into another, such as between campus and the lab room (Wang & Brockmole, 2003a, b; Wang, 2006), between superimposed real and virtual environments (Wan et al., 2009), and between different floors of the same building (Street & Wang, in revision).


Spatial memory distortion

         It is well known that memories can be distorted by people’s concepts and schemas.  Cristina Sampaio and I examined a long-debated question about memory distortion: are memory representations themselves distorted, or are memories intact, with errors arising when multiple representations are combined to make a response?  In the spatial domain, memory of a location is typically biased toward the center of the region the object belongs to.  In one approach, we used a recognition task to test whether an unbiased memory representation can be accessed.  When people were asked to recall where an object had been presented 1.5 s earlier, they made systematic errors.  However, when asked to choose between the original location and the location they had just recalled, people correctly picked the original location, even though they had just explicitly reported the other one.  This suggests that the original memory survives the delay period and can be accessed through a memory task that poses fewer response demands (Sampaio & Wang, 2009).  These findings support the idea that memories themselves are intact and that errors occur in some types of response processes but not others.

         In a second approach, we introduced new category regions during the response period to test the hypothesis that spatial memory distortions stem not from distorted memories but from information-integration processes at response.  We found that the memory bias reflected the new regions rather than the original ones, suggesting that unbiased memories survive the delay despite the persisting influence of their original categorical representation (Sampaio & Wang, 2010).  Moreover, as the delay increased, the influence of the alternative category increased while that of the default encoded categories did not, again contradicting the hypothesis that memory distortion occurs during the delay and supporting the unbiased-memory hypothesis (Sampaio & Wang, 2012).  We proposed a new Response-based Category Adjustment (RCA) model to account for these findings.
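For intuition, the core combination step in category-adjustment accounts (the Huttenlocher-style weighted average on which the response-based view builds) can be sketched as below.  Under a response-based account, the weighting is applied at response time using whichever category is active then; the weight and locations here are illustrative values, not parameters from the published RCA model:

```python
def category_adjusted_response(memory_location, category_center, weight_memory):
    """Weighted combination of the fine-grained memory and the prototype
    (center) of the currently active category; weight_memory is in [0, 1]."""
    return weight_memory * memory_location + (1 - weight_memory) * category_center

true_location = 30.0    # position within a region (hypothetical units)
encoded_center = 45.0   # center of the region active at encoding
new_center = 10.0       # center of a region introduced only at response

# If adjustment happens at response, the bias follows the category
# that is active when the response is made:
resp_old = category_adjusted_response(true_location, encoded_center, 0.7)
resp_new = category_adjusted_response(true_location, new_center, 0.7)
print(resp_old)  # pulled toward the encoded region's center
print(resp_new)  # pulled toward the newly introduced center
```

The contrast between the two calls mirrors the experimental logic: if the stored memory itself were distorted toward the encoded center, introducing a new region at response should not flip the direction of the bias.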


Human computer interaction

Human obstacle avoidance and robot control models

           Designing a robot that can drive itself to a goal without colliding with obstacles along the way has proven a great challenge.  This control problem has attracted much research; however, consistently reliable performance remains out of reach, especially in cluttered environments and around concave obstacles.  In contrast, humans and many animals maneuver with much greater dexterity, yet how they achieve this performance is poorly understood at the mathematical level.

           Research comparing human and robotic navigation control mechanisms can significantly advance our understanding of both human motor systems and robotic control theories.  In collaboration with Dusan Stipanovic at IESE, we conducted a project examining how humans avoid obstacles, with the aim of improving the performance of automatic vehicles.  Humans and leading control algorithms (e.g., a receding horizon controller) performed a remote navigation task, driving a vehicle around obstacles toward a goal.  Parameters such as the number and type of obstacles as well as the feedback delay were varied.  As expected, humans showed significantly more robust performance than the receding horizon controller.  Using the human data, we then trained a new human-like receding horizon controller that achieved better performance both in the percentage of runs that reached the goal without colliding with obstacles and in the time required to reach the goal.  The human-like automatic controller in turn provides a tool for modeling human navigation and steering strategies (Burns et al., 2010, 2011, 2012).
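As a toy sketch of the receding-horizon idea (not the controllers actually benchmarked in Burns et al.): at each time step the controller rolls out a short horizon for several candidate headings, scores each rollout, and commits only to the first move of the best candidate before re-planning.  The obstacle layout, cost function, and horizon length below are hypothetical:

```python
import math

def step_cost(pos, goal, obstacles, radius):
    """Distance-to-goal cost, with a large penalty inside any obstacle."""
    d_goal = math.dist(pos, goal)
    for obs in obstacles:
        if math.dist(pos, obs) < radius:
            return d_goal + 1e6  # collision penalty
    return d_goal

def receding_horizon_step(pos, goal, obstacles, radius=1.0,
                          speed=0.5, horizon=5, n_headings=16):
    """Pick the best heading by rolling out `horizon` straight-line steps,
    then execute only the first step (the 'receding horizon' part)."""
    best_heading, best_cost = None, float("inf")
    for k in range(n_headings):
        theta = 2 * math.pi * k / n_headings
        x, y = pos
        cost = 0.0
        for _ in range(horizon):
            x += speed * math.cos(theta)
            y += speed * math.sin(theta)
            cost += step_cost((x, y), goal, obstacles, radius)
        if cost < best_cost:
            best_cost, best_heading = cost, theta
    return (pos[0] + speed * math.cos(best_heading),
            pos[1] + speed * math.sin(best_heading))

# Drive from the origin to a goal behind a single circular obstacle.
pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.0)]
for _ in range(40):
    pos = receding_horizon_step(pos, goal, obstacles)
    if math.dist(pos, goal) < 0.5:
        break
print(pos)
```

A human-like variant would replace the hand-tuned cost and rollout with terms fit to recorded human trajectories; the re-planning loop itself stays the same.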


Four-dimensional spatial intuition

           Space and time are two of the most fundamental concepts in human intelligence, and much of human thinking is grounded in spatial metaphors and imagery.  Research has shown that people can mentally create novel objects by freely combining and changing features such as color, shape, size, and orientation, and by rearranging object parts.  However, this powerful spatial imagery faculty seems to fail at the most primitive feature of space, namely its dimensionality.  Can people represent and reason about higher-dimensional objects other than through mathematical equations?  Some researchers believe that perception of objects and events is impossible without a priori representations of space and time (e.g., Kant, 1965), and that the a priori representation of space may be strictly confined by our perceptual experience of the physical world and our innate disposition toward three dimensions; if so, higher-dimensional spatial reasoning may be accomplished only by symbolic manipulation.

         In collaboration with George Francis at the Department of Mathematics, we developed several objective tasks to measure people’s ability to learn simple four-dimensional geometric objects.  In one study, participants studied 3D slices of a random 4D object in the Virtual Reality Cube by moving a 3D observation window along the fourth dimension.  They then made spatial judgments: the distance between two of the object’s vertices and the angular relationship among three of its vertices.  Our data showed that participants with basic knowledge of geometry could make both distance and orientation judgments about 4D objects, providing the first objective evidence that the human perceptual system can learn and represent the spatial structure of a 4D object from visual experience, despite our having evolved in a world of only three spatial dimensions (Ambinder et al., 2009).

            To further examine whether human 4D judgments extend to properties unique to higher-dimensional space, we used a hyper-volume judgment task.  Observers studied a 3D orthogonal projection of a random wireframe 4D object rotating around the yz-plane and adjusted the size of a hyper-block to match the hyper-volume of the target.  The judgments correlated significantly with the hyper-volume but not with lower-dimensional variables such as the mean 3D volume, suggesting that at least some people can judge 4D hyper-volume and providing strong evidence of true 4D spatial representations (Wang, 2014a, b, c; Wang & Street, 2013).
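To make the geometry concrete (a sketch of the standard definitions, not the lab's stimulus code): a rotation "around the yz-plane" leaves y and z fixed and mixes the x and w axes like an ordinary 2D rotation; the 3D orthogonal projection simply drops the w coordinate; and the hyper-volume of a hyper-block is the product of its four edge lengths:

```python
import math

def rotate_xw(point, angle):
    """Rotate a 4D point in the x-w plane (i.e., 'around' the y-z plane):
    y and z are unchanged; x and w mix as in a 2D rotation."""
    x, y, z, w = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(point):
    """Orthogonal projection into 3D: drop the fourth coordinate."""
    return point[:3]

def hyper_volume(edges):
    """4D 'volume' of a hyper-block: the product of its four edge lengths."""
    assert len(edges) == 4
    v = 1.0
    for e in edges:
        v *= e
    return v

p = (1.0, 2.0, 3.0, 0.0)
q = rotate_xw(p, math.pi / 2)  # the x extent rotates into w
print(project_to_3d(q))        # so it vanishes from the 3D projection
print(hyper_volume((2.0, 3.0, 1.0, 4.0)))
```

This is why the task isolates a genuinely 4D property: as the object rotates, structure continually passes into and out of the visible 3D projection, and no single projection carries the hyper-volume.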

Visual threshold and quantum theory

           Despite its enormous success in predicting experimental results involving microscopic entities, the interpretation of quantum theory remains a much-debated topic to date.  One of the most famous paradoxes is the quantum measurement problem, usually illustrated as “Schrödinger’s cat”: how do superposition states of microscopic entities (e.g., a photon at Left and Right at the same time) “collapse” during measurement into a single, definite state of the measurement device (e.g., a cat that is either dead or alive but not both)?  In collaboration with Tony Leggett and Paul Kwiat in the Physics Department, we started a project to test quantum effects, for the first time, directly via the human visual system, using photon sources that can generate precisely one photon at a time.

           The major challenge to testing quantum mechanics in humans concerns the limits of our ability to detect single photons.  Psychophysical studies of the human visual threshold and physiological studies of photon absorption in rods suggest that a single photon may be detectable.  However, due to limitations in experimental techniques, human single-photon detection has never been demonstrated experimentally.  With the new single-photon generation technique developed in Kwiat’s lab, it is possible for the first time to directly measure human perception of single photons and the visual sensitivity function.

           We are currently running an experiment to measure human single-photon detection, which could finally settle the long-standing question of the human visual threshold.  The results will then be used to examine two classic quantum effects.  The first is superposition: human observers will look for differences between superposition and mixed quantum states.  The second is entanglement: we will run an EPR experiment to test quantum nonlocality in which one of the photon detectors is replaced by a human observer.  Positive results from any of these experiments would be very significant; even “non-surprising” outcomes would represent a major step forward in our understanding of the applicability of quantum theory beyond the atomic realm.  The theoretical analysis of this research was presented at a conference (Holms et al., 2012).
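The superposition-versus-mixture contrast can be illustrated with a textbook single-photon interferometer calculation (standard physics, not a description of the lab's apparatus).  For a photon in an equal superposition of two paths recombined with relative phase φ, the detection probability at one output is cos²(φ/2), so it oscillates with φ; for an incoherent 50/50 mixture of the two paths, the cross term is absent and the probability is a constant 1/2:

```python
import math

def detection_prob_superposition(phase):
    """P(detector 1) for the pure state (|L> + e^{i*phase}|R>)/sqrt(2)
    after the paths recombine on a 50/50 beam splitter."""
    # Amplitude at detector 1 is (1 + e^{i*phase}) / 2:
    re = (1 + math.cos(phase)) / 2
    im = math.sin(phase) / 2
    return re * re + im * im   # equals cos^2(phase / 2)

def detection_prob_mixture(phase):
    """Same geometry, but an incoherent 50/50 mixture of L and R:
    no interference cross term, so the phase never shows up."""
    return 0.5

for phi in (0.0, math.pi / 2, math.pi):
    print(detection_prob_superposition(phi), detection_prob_mixture(phi))
```

A human observer who could reliably report detections at the single-photon level would thus, in principle, see phase-dependent fringes from the superposition but a flat rate from the mixture.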


Mind Wandering

           Mind wandering is the well-known phenomenon of intrusive thoughts during an ongoing task, attributed to failures of attention and executive control.  However, its subcomponents are not well understood.  We conducted a study examining how people’s working memory capacity affects both the initiation and the termination of mind wandering during meditation.  Participants completed an operation span (Ospan) task and then practiced mindfulness meditation, pressing a key whenever they realized they had an intrusive thought (self-caught mind wandering) or responding when probed at random intervals ranging from 5 to 40 seconds (probe-caught mind wandering).  There was a positive correlation between working memory capacity and the probe-caught mind-wandering rate but not the self-caught rate.  These data suggest that people with high working memory capacity can both stay in meditation longer and recover from mind wandering faster (Voss & Wang, 2014).
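The dissociation reported above amounts to a difference between two correlations.  As a minimal sketch of that analysis (the per-participant numbers below are invented for illustration and are not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant scores, NOT the study's data:
ospan = [20, 35, 42, 55, 61, 70]                      # working memory capacity
probe_caught = [0.10, 0.18, 0.22, 0.30, 0.33, 0.41]   # probe-caught MW rate
self_caught = [0.25, 0.22, 0.27, 0.24, 0.26, 0.23]    # self-caught MW rate

print(pearson_r(ospan, probe_caught))  # clearly positive in this toy set
print(pearson_r(ospan, self_caught))   # near zero in this toy set
```

The pattern to look for is exactly this asymmetry: a reliable positive r for the probe-caught rate alongside a null r for the self-caught rate.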