Article | Social Cognitive and Affective Neuroscience | October 2013

Multivoxel Patterns in Face-sensitive Temporal Regions Reveal an Encoding Schema Based on Detecting Life in a Face

by Christine E. Looser, J. Swaroop Guntupalli and Thalia Wheatley

Abstract

More than a decade of research has demonstrated that faces evoke prioritized processing in a 'core face network' of three brain regions. However, whether these regions prioritize the detection of global facial form (shared by humans and mannequins) or the detection of life in a face has remained unclear. Here, we dissociate form-based and animacy-based encoding of faces by using animate and inanimate faces with human form (humans, mannequins) and dog form (real dogs, toy dogs). We used multivariate pattern analysis of BOLD responses to uncover the representational similarity space for each area in the core face network. We show that only responses in the inferior occipital gyrus are organized by global facial form alone (human vs. dog), while animacy becomes an additional organizational priority in later face-processing regions: the lateral fusiform gyri (latFG) and the right superior temporal sulcus. Additionally, patterns evoked by human faces were maximally distinct from all other face categories in the latFG and in parts of the extended face perception system. These results suggest that once a face configuration is perceived, the face is further scrutinized for whether it is alive and worthy of social cognitive resources.

Keywords: brain imaging; social psychology; mind perception; identity; science; cognition and thinking

Citation:

Looser, Christine E., J. Swaroop Guntupalli, and Thalia Wheatley. "Multivoxel Patterns in Face-sensitive Temporal Regions Reveal an Encoding Schema Based on Detecting Life in a Face." Social Cognitive and Affective Neuroscience 8, no. 7 (October 2013): 799–805.