<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.1d1 20130915//EN" "JATS-archivearticle1.dtd"><article article-type="research-article" dtd-version="1.1d1" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="nlm-ta">elife</journal-id><journal-id journal-id-type="hwp">eLife</journal-id><journal-id journal-id-type="publisher-id">eLife</journal-id><journal-title-group><journal-title>eLife</journal-title></journal-title-group><issn publication-format="electronic">2050-084X</issn><publisher><publisher-name>eLife Sciences Publications, Ltd</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">00932</article-id><article-id pub-id-type="doi">10.7554/eLife.00932</article-id><article-categories><subj-group subj-group-type="display-channel"><subject>Research article</subject></subj-group><subj-group subj-group-type="heading"><subject>Neuroscience</subject></subj-group></article-categories><title-group><article-title>Conceptual metaphorical mapping in chimpanzees (<italic>Pan troglodytes</italic>)</article-title></title-group><contrib-group><contrib contrib-type="author" corresp="yes" id="author-5261"><name><surname>Dahl</surname><given-names>Christoph D</given-names></name><xref ref-type="aff" rid="aff1"/><xref ref-type="aff" rid="aff2"/><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="other" rid="par-5"/><xref ref-type="fn" rid="con1"/><xref ref-type="fn" rid="conf1"/></contrib><contrib contrib-type="author" corresp="yes" id="author-5446"><name><surname>Adachi</surname><given-names>Ikuma</given-names></name><xref ref-type="aff" rid="aff3"/><xref ref-type="corresp" rid="cor2">*</xref><xref ref-type="other" rid="par-1"/><xref ref-type="other" rid="par-2"/><xref ref-type="other" rid="par-3"/><xref ref-type="other" rid="par-4"/><xref ref-type="fn" 
rid="con2"/><xref ref-type="fn" rid="conf1"/></contrib><aff id="aff1"><institution content-type="dept">Section of Language and Intelligence</institution>, <institution>Primate Research Institute, Kyoto University</institution>, <addr-line><named-content content-type="city">Inuyama</named-content></addr-line>, <country>Japan</country></aff><aff id="aff2"><institution content-type="dept">Department of Psychology</institution>, <institution>National Taiwan University</institution>, <addr-line><named-content content-type="city">Taipei</named-content></addr-line>, <country>Taiwan</country></aff><aff id="aff3"><institution content-type="dept">Center for International Collaboration and Advanced Studies in Primatology</institution>, <institution>Primate Research Institute, Kyoto University</institution>, <addr-line><named-content content-type="city">Inuyama</named-content></addr-line>, <country>Japan</country></aff></contrib-group><contrib-group content-type="section"><contrib contrib-type="editor"><name><surname>Behrens</surname><given-names>Timothy</given-names></name><role>Reviewing editor</role><aff><institution>Oxford University</institution>, <country>United Kingdom</country></aff></contrib></contrib-group><author-notes><corresp id="cor1"><label>*</label>For correspondence: <email>christoph.d.dahl@gmail.com</email> (CDD);</corresp><corresp id="cor2"><label>*</label>For correspondence: <email>adachi@pri.kyoto-u.ac.jp</email> (IA)</corresp></author-notes><pub-date date-type="pub" publication-format="electronic"><day>22</day><month>10</month><year>2013</year></pub-date><pub-date pub-type="collection"><year>2013</year></pub-date><volume>2</volume><elocation-id>e00932</elocation-id><history><date date-type="received"><day>13</day><month>05</month><year>2013</year></date><date date-type="accepted"><day>17</day><month>09</month><year>2013</year></date></history><permissions><copyright-statement>© 2013, Dahl and 
Adachi</copyright-statement><copyright-year>2013</copyright-year><copyright-holder>Dahl and Adachi</copyright-holder><license xlink:href="http://creativecommons.org/licenses/by/3.0/"><license-p>This article is distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</ext-link>, which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p></license></permissions><self-uri content-type="pdf" xlink:href="elife00932.pdf"/><abstract><object-id pub-id-type="doi">10.7554/eLife.00932.001</object-id><p>Conceptual metaphors are linguistic constructions. One such metaphor is humans’ mental representation of social rank as a pyramid-like structure: high-ranked individuals are represented in higher positions than low-ranked individuals. We show that conceptual metaphorical mapping between social rank and the representational domain exists in our closest evolutionary relatives, the chimpanzees. Chimpanzee participants were required to discriminate face identities in a vertical arrangement. We found a modulation of response latencies by the rank of the presented individual and the position on the display: a high-ranked individual presented in the higher and a low-ranked individual in the lower position led to quicker identity discrimination than a high-ranked individual in the lower and a low-ranked individual in the higher position. 
Such a spatial representation of dominance hierarchy in chimpanzees suggests that a natural tendency to systematically map an abstract dimension exists in the common ancestor of humans and chimpanzees.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.00932.001">http://dx.doi.org/10.7554/eLife.00932.001</ext-link></p></abstract><abstract abstract-type="executive-summary"><object-id pub-id-type="doi">10.7554/eLife.00932.002</object-id><title>eLife digest</title><p>It is thought that the ability to connect an abstract concept to something physical helps us to understand abstract ideas more easily. Examples include the use of conceptual metaphors that draw parallels between something abstract, such as social status, and physical position, even though there is no connection between them: familiar examples include phrases such as ‘top dog’ or ‘upper class’. It has long been assumed that the use of such conceptual metaphors is uniquely human.</p><p>Many social animals have hierarchies of dominance within groups, with particular individuals being ranked above or below other individuals. Chimpanzees—our closest relatives in the animal kingdom—are a good example of this, and although their cognitive processes are known to be similar to those of humans in many ways, we do not know if they make use of conceptual metaphors. Moreover, we don’t even know if conceptual metaphors can exist in the absence of language.</p><p>When researchers want to investigate how concepts are cognitively linked in the brain, they often use ‘coherent’ or ‘incoherent’ stimuli. A good example of an incoherent stimulus would be the word ‘red’ printed in blue ink. 
Because our neural representations of the colour blue and the word blue are linked, it is harder for a person to read the word red when it is printed in blue than when it is printed in red (which would be a coherent stimulus).</p><p>To test whether chimpanzees use a conceptual metaphor in which social status corresponds to height, Dahl and Adachi showed six chimpanzees photographs of four other chimpanzees who were known to them, and tested whether the relative positions of the photographs affected the ability of the chimpanzees to identify which of the two photographs they had been shown earlier. For example, a photograph of a high-ranked, dominant chimpanzee could be shown above a photograph of a lower-ranked chimpanzee (a coherent stimulus) or below a photograph of a lower-ranked chimpanzee (an incoherent stimulus). The chimpanzees doing the tests had to identify which of the photographs they had been shown earlier by touching the correct photograph on a screen.</p><p>Dahl and Adachi found that it took longer for chimpanzees to complete the task when the photograph was in the ‘wrong’ position. This suggests that the neural representations of social status and physical position might be linked in chimpanzees. If the social status and the physical position of the photograph match, the chimpanzee doing the test can quickly identify the photograph that it has been shown earlier. However, if they do not match, the conflict between the neural representations of social status and physical position slows down the response. 
These findings suggest that conceptual metaphors are not uniquely human and, moreover, that they could have emerged before the development of language.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.00932.002">http://dx.doi.org/10.7554/eLife.00932.002</ext-link></p></abstract><kwd-group kwd-group-type="author-keywords"><title>Author keywords</title><kwd>chimpanzee</kwd><kwd>conceptual metaphorical mapping</kwd><kwd>cross-modal mapping</kwd><kwd>language</kwd><kwd>linguistic</kwd><kwd>hierarchy</kwd></kwd-group><kwd-group kwd-group-type="research-organism"><title>Research organism</title><kwd>Other</kwd></kwd-group><funding-group><award-group id="par-1"><funding-source><institution-wrap><institution>Grant-in-Aid for Scientific Research on Innovative Areas by the Ministry of Education, Culture, Sports, Science and Technology, Japan</institution></institution-wrap></funding-source><award-id>23119713</award-id><principal-award-recipient><name><surname>Adachi</surname><given-names>Ikuma</given-names></name></principal-award-recipient></award-group><award-group id="par-2"><funding-source><institution-wrap><institution>Grant-in-Aid for Specially Promoted Research by Japan Society for the Promotion of Science</institution></institution-wrap></funding-source><award-id>24000001 (PI: Tetsuro Matsuzawa)</award-id><principal-award-recipient><name><surname>Adachi</surname><given-names>Ikuma</given-names></name></principal-award-recipient></award-group><award-group id="par-3"><funding-source><institution-wrap><institution>Grant-in-Aid for Scientific Research (S) by Japan Society for the Promotion of Science</institution></institution-wrap></funding-source><award-id>23220006 (PI: Masaki Tomonaga)</award-id><principal-award-recipient><name><surname>Adachi</surname><given-names>Ikuma</given-names></name></principal-award-recipient></award-group><award-group id="par-4"><funding-source><institution-wrap><institution>Grant-in-Aid for Young Scientists 
(B) by Japan Society for the Promotion of Science</institution></institution-wrap></funding-source><award-id>22700270</award-id><principal-award-recipient><name><surname>Adachi</surname><given-names>Ikuma</given-names></name></principal-award-recipient></award-group><award-group id="par-5"><funding-source><institution-wrap><institution>JSPS fellow by the Japan Society for the Promotion of Science</institution></institution-wrap></funding-source><award-id>22-00312</award-id><principal-award-recipient><name><surname>Dahl</surname><given-names>Christoph D</given-names></name></principal-award-recipient></award-group><funding-statement>The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.</funding-statement></funding-group><custom-meta-group><custom-meta><meta-name>elife-xml-version</meta-name><meta-value>2</meta-value></custom-meta><custom-meta specific-use="meta-only"><meta-name>Author impact statement</meta-name><meta-value>The use of metaphorical concepts is not unique to humans.</meta-value></custom-meta></custom-meta-group></article-meta></front><body><sec id="s1" sec-type="intro"><title>Introduction</title><p>“high” vs “low status”, “top of the heap”, “bottom of the barrel”: These or similar expressions are widely observed across cultures and languages (<xref ref-type="bibr" rid="bib21">Pinker, 1997</xref>). The cross-modal correspondence between the visuospatial domain (high or low) and an abstract domain (rank) has been described as a conceptual metaphor (<xref ref-type="bibr" rid="bib11">Lakoff and Johnson, 1980a</xref>, <xref ref-type="bibr" rid="bib12">b</xref>) and thought to be uniquely human (<xref ref-type="bibr" rid="bib6">Feldman and Narayanan, 2004</xref>). A conceptual metaphor connects one concept to another in order to make the latter easier to understand. 
The way we think and act is largely influenced by conceptual metaphors, even when we are not fully aware of them (<xref ref-type="bibr" rid="bib11">Lakoff and Johnson, 1980a</xref>). The question remains whether conceptual metaphorical mapping is indeed uniquely human or whether it appears in other primates and thus predates language. To answer this question, we examined whether our closest evolutionary relatives, the chimpanzees, have conceptual metaphors as we humans do.</p><p>Decades of research have shown that neural representations of objects and entities exist in monkeys (<xref ref-type="bibr" rid="bib26">Sigala et al., 2002</xref>), apes (<xref ref-type="bibr" rid="bib9">Fukushima et al., 2010</xref>) and humans (<xref ref-type="bibr" rid="bib10">Kourtzi and Kanwisher, 2001</xref>). The abilities of object representation and recognition further apply to social domains such as recognition of faces (<xref ref-type="bibr" rid="bib3">Dahl et al., 2007</xref>; <xref ref-type="bibr" rid="bib4">Dahl et al., 2013</xref>), conspecifics (<xref ref-type="bibr" rid="bib3">Dahl et al., 2007</xref>), and ingroup-outgroup members (<xref ref-type="bibr" rid="bib22">Pokorny and de Waal, 2009</xref>). To avoid or reduce costly social conflicts among individuals, a crucial skill for living in social groups is to express one’s status and to recognize the status of others via visual or vocal cues (<xref ref-type="bibr" rid="bib20">Paxton et al., 2010</xref>). Recognition of status or rank allows inferences about expected roles of oneself and of others during group situations (<xref ref-type="bibr" rid="bib24">Ridgeway and Diekema, 1989</xref>). Access to food resources, information and social respect is facilitated by high rank in the hierarchy, while some degree of protection and care is granted to lower-ranked individuals (<xref ref-type="bibr" rid="bib8">Fiske, 1992</xref>). 
In many non-human species, spatial information, such as perceived physical body size and facial and body postures, serves as an indicator of social status (<xref ref-type="bibr" rid="bib14">Maestripieri, 1996</xref>; <xref ref-type="bibr" rid="bib27">Tiedens and Fragale, 2003</xref>). Besides these non-verbal cues, we humans developed a conceptual metaphor, which connects social rank to the spatial domain (e.g., “high” vs “low status” [<xref ref-type="bibr" rid="bib21">Pinker, 1997</xref>]). Accordingly, humans represent social status in a pyramid-like structure (<xref ref-type="bibr" rid="bib1">Bosserman, 1983</xref>): high-ranked individuals are represented in spatially higher positions than low-ranked individuals in diverse contexts, for example, stratification (<xref ref-type="bibr" rid="bib25">Saunders, 1989</xref>), organizations (<xref ref-type="bibr" rid="bib28">Weber, 1991</xref>), religion, family, human needs (<xref ref-type="bibr" rid="bib15">Maslow, 1943</xref>), and others. In this study, we examined whether such conceptual metaphorical mapping between social dominance and the spatial domain is uniquely human or whether it appears in our closest evolutionary relatives, the chimpanzees. We addressed this question by comparing response latencies on discriminating photographs of familiar chimpanzee faces of high- and low-ranked individuals in a vertically aligned delayed matching-to-sample task. 
We hypothesize that <italic>coherent</italic> arrangements, such as a high-ranked individual presented in the higher and a low-ranked individual presented in the lower position, lead to quicker identity discrimination than <italic>incoherent</italic> arrangements, such as a high-ranked individual in the lower and a low-ranked individual in the higher position.</p></sec><sec id="s2" sec-type="results"><title>Results</title><p>We sequentially presented a cue (individual 1, 750 ms) followed by an inter-stimulus interval (500 ms) and two vertically arranged stimuli (match: individual 1; distractor: individual 2) (<xref ref-type="fig" rid="fig1">Figure 1A,B</xref>). The chimpanzees were required to indicate which of the two simultaneously presented faces (match, distractor) corresponds to the initially presented face (cue) in identity by touching one of the faces. Note that the participants did <italic>not</italic> classify the stimuli based on social rank. In addition to <italic>coherent</italic> and <italic>incoherent</italic> combinations of <italic>stimulus</italic> and <italic>position</italic>, we included a neutral condition combining two pictures of closely ranked individuals in a trial (<italic>close</italic> condition). In the first step of analyses, we pooled response latencies for <italic>coherence</italic> (<italic>coherent</italic> vs <italic>incoherent</italic> vs <italic>close</italic>) and <italic>position</italic> (<italic>high</italic> vs <italic>low</italic>). Using a mixed model ANOVA with <italic>coherence</italic> and <italic>position</italic> as fixed factors, we found a main effect for <italic>coherence</italic> (<italic>F</italic>(2, 30) = 5, m.s.e. = 1.10e+004, p<0.001), but not for <italic>position</italic> (p>0.34) or the interaction between the two factors (p>0.83) (<xref ref-type="fig" rid="fig1">Figure 1C</xref>). 
Post-hoc <italic>t</italic>-tests (Bonferroni-corrected for multiple comparisons) revealed a significant response facilitation for <italic>coherent</italic> as opposed to <italic>incoherent</italic> trials (<italic>t</italic>(10) = −3.20, m.s.e. = 8.60e+003, p<0.01 [one-tailed]) and <italic>close</italic> trials (<italic>t</italic>(10) = −1.92, m.s.e. = 1.14e+004, p<0.05 [one-tailed]). However, there was no significant deterioration for <italic>incoherent</italic> compared to <italic>close</italic> trials (p>0.11) (<xref ref-type="fig" rid="fig1">Figure 1C</xref>). In the second step of analyses, we pooled response latencies for <italic>position</italic> (<italic>high</italic> vs <italic>low</italic>) and compared the differences of <italic>coherent</italic> and <italic>incoherent</italic> trials in one-sample <italic>t</italic>-tests. We found significant deviations from zero for the <italic>high</italic> position (<italic>t</italic>(5) = 16.64, m.s.e. = 1.11e+004, p<0.001 [one-tailed]) and for the <italic>low</italic> position (<italic>t</italic>(5) = 20.13, m.s.e. = 6.88e+003, p<0.001 [one-tailed]) (<xref ref-type="fig" rid="fig1">Figure 1F</xref>). Further, there was no significant difference between the positions <italic>high</italic> and <italic>low</italic> (p>0.55).<fig id="fig1" position="float"><object-id pub-id-type="doi">10.7554/eLife.00932.003</object-id><label>Figure 1.</label><caption><title>Task sequence, example stimuli and response latency analyses.</title><p>(<bold>A</bold>) Typical trial sequence. (<bold>B</bold>) Stimulus exemplars. (<bold>C</bold>) Average response latencies for <italic>coherent</italic>, <italic>incoherent</italic> and <italic>close</italic> trials (mean ± SEM) for all participants, (<bold>D</bold>) for high-ranked participants and (<bold>E</bold>) for low-ranked participants. (<bold>F</bold>) Average response latency differences (<italic>coherent</italic>–<italic>incoherent</italic>) for <italic>high</italic> and <italic>low</italic> positions. 
(<bold>C</bold>, <bold>F</bold>) The number of independent data points (N) is six for each condition. (<bold>G</bold>) Normalized frequency distribution of response latencies of high- and low-ranked participants for <italic>coherent</italic>, <italic>incoherent</italic> and <italic>close</italic> trials. (<bold>H</bold>) Sensitivity index for both stimulus sets and stimuli. Positive values indicate facilitation for <italic>coherent</italic> relative to <italic>incoherent</italic> trials.</p><p><bold>DOI:</bold> <ext-link ext-link-type="doi" xlink:href="10.7554/eLife.00932.003">http://dx.doi.org/10.7554/eLife.00932.003</ext-link></p></caption><graphic xlink:href="elife00932f001"/></fig></p><p>We further split the participants into two groups according to their own ranks. This is equivalent to a separation by the stimulus sets (set 1 for the high-ranked and set 2 for the low-ranked participants, see ‘Materials and methods’). High-ranked participants showed the same response patterns as low-ranked participants (<xref ref-type="fig" rid="fig1">Figure 1D,E</xref>). Due to the low sample size, we were unable to run statistical tests across participants; however, we collapsed all data samples of high-ranked participants for <italic>coherent</italic>, <italic>incoherent</italic> and <italic>close</italic> conditions and compared them to the corresponding conditions in low-ranked participants (<xref ref-type="fig" rid="fig1">Figure 1G</xref>). Two-sample Kolmogorov-Smirnov tests revealed that none of the frequency distributions of response latencies were statistically different across the two participant groups (all p>0.54). In addition, we calculated a sensitivity index reflecting the extent to which two sample distributions are separable from each other (also referred to as d-prime). We again split the data according to the rank of the individuals (high- vs low-ranked) (equivalent to splitting according to the stimulus sets). 
We further binned data samples (response latencies) across participants for <italic>coherent</italic> and <italic>incoherent</italic> trials. We then calculated the sensitivity index for each stimulus in combination with each other stimulus with which it occurred in the task (<xref ref-type="fig" rid="fig1">Figure 1H</xref>, x-axis = <italic>distractor</italic>; y-axis = <italic>cue</italic> and <italic>match</italic>) by considering the means and standard deviations of the corresponding data bins of <italic>coherent</italic> and <italic>incoherent</italic> trials (<xref ref-type="disp-formula" rid="equ1">Equation 1</xref>, ‘Materials and methods’). Positive values illustrate response facilitation for coherent trials relative to incoherent trials. As indicated in <xref ref-type="fig" rid="fig1">Figure 1D,E</xref>, there is more variance in the responses of the low-ranked (<xref ref-type="fig" rid="fig1">Figure 1E</xref>, four participants) than the high-ranked participants (<xref ref-type="fig" rid="fig1">Figure 1D</xref>, two participants), leading to a lower sensitivity in low-ranked participants (<italic>t</italic>(14) = 2.53, m.s.e. = 8.36e+003, p<0.05, <xref ref-type="fig" rid="fig1">Figure 1H</xref>). Importantly, all stimuli elicited sensitivity scores in accordance with the hypothesis, that is, coherent stimulus arrangements led to response facilitation and incoherent stimulus arrangements led to response deterioration, indicated by positive values.</p></sec><sec id="s3" sec-type="discussion"><title>Discussion</title><p>We showed that, in chimpanzees, discrimination performance between familiar conspecific faces is systematically modulated by the location and the social status of the presented individuals, leading to discrimination facilitation or deterioration. <italic>Coherent</italic> arrangements as opposed to <italic>incoherent</italic> arrangements led to a facilitation of recognition. 
Further, a high-ranked individual at the higher position and a low-ranked individual at the lower position both caused recognition facilitation equivalently. Importantly, the participants were not trained on discriminating the ranks of the presented individuals. Instead, they were substantially affected by the rank while discriminating the identity of those individuals. The modulations are in accordance with a spatial arrangement representing high-ranked individuals at the top and low-ranked individuals at the bottom, hence reflecting an inverse relationship between response latencies and the spatial distance of two individuals on the display and in the mental hierarchy space of the participants.</p><p>It has to be noted that a confound between perceptual and conceptual determinants might exist. We accounted for potential confounding perceptual cues, such as physical height, gestures and postures (<xref ref-type="bibr" rid="bib24">Ridgeway and Diekema, 1989</xref>; <xref ref-type="bibr" rid="bib20">Paxton et al., 2010</xref>). However, teasing apart perceptual and conceptual determinants entirely is almost impossible. There might be a co-dependence of the two domains, with the conceptual domain having been established on the basis of the perceptual domain. We applied the following control for a perceptual confound: if perceptual cues caused the effect, for example because high-ranked individuals naturally appear in higher positions than low-ranked individuals, the response characteristics of high-ranked and low-ranked participants would differ due to the rank difference between the participant and the presented individuals: a high-ranked participant would be more likely to show response facilitation for low-ranked individuals, while a low-ranked participant would be more likely to show response facilitation for high-ranked individuals. This, however, is not the case. Response characteristics are similar in high-ranked and low-ranked participants. 
Thus, even though perceptual determinants cannot be fully excluded with this control, it is still suggestive that the effect is not merely based on perceptual cues but, to some extent, on conceptual components.</p><p>In addition, we controlled for a differential effect due to one of the two stimulus sets used in the experiment. We showed that individual stimuli elicited a comparable effect within and across stimulus sets by estimating sensitivity indices. In other words, the two stimulus sets contributed equally to the effect.</p><p>A spatial component of representation has been shown in other domains: for example, in humans, responses to small numbers are faster with a left-side button-press, whereas larger numbers are categorized faster when right-side button-presses are required (<xref ref-type="bibr" rid="bib5">Dehaene et al., 1993</xref>). Moreover, merely looking at a number causes a shift of attention to the left or the right side (<xref ref-type="bibr" rid="bib7">Fischer et al., 2003</xref>). In other words, the mental number line reflects a cross-modal mapping of visual cues (numbers) and cognitive labels (values). Interestingly, social status and number comparisons recruit to some extent overlapping neural substrates in the intraparietal sulci of the human brain (<xref ref-type="bibr" rid="bib2">Chiao et al., 2009</xref>). There is evidence for a non-verbal, supramodal neural representation of numerosity in the macaque ventral intraparietal sulcus and lateral prefrontal cortex (<xref ref-type="bibr" rid="bib18">Nieder, 2012</xref>); however, neural evidence for cross-modal correspondences is missing. Rare evidence for non-human cross-modal correspondences comes from visuo-auditory mappings between high luminance and high pitch in chimpanzees (<xref ref-type="bibr" rid="bib13">Ludwig et al., 2011</xref>). 
This relationship between luminance and pitch illustrates a form of sound-symbolism, which refers to the concept that in human language words and referents are not arbitrary (<xref ref-type="bibr" rid="bib19">Nuckolls, 1999</xref>). Hence, while the existence of such a systematic mapping between luminance and pitch in chimpanzees suggests the emergence of an early vocabulary of human language (<xref ref-type="bibr" rid="bib23">Ramachandran and Hubbard, 2001</xref>), we here extend this finding to a cross-modal correspondence between vision and an abstract domain: the social status. A natural tendency to systematically map an abstract dimension, such as social status, in our closest and nonlinguistic relatives, the chimpanzees, suggests that this tendency had already evolved in the common ancestors of humans and chimpanzees. This tendency might have influenced the emergence of metaphorical language and of thinking via image schemas, embodied pre-linguistic structures of experience that motivate conceptual metaphor mappings (<xref ref-type="bibr" rid="bib11">Lakoff and Johnson, 1980a</xref>). According to Lakoff (<xref ref-type="bibr" rid="bib12">Lakoff and Johnson, 1980b</xref>), orientational metaphors, such as ‘more is up’, ‘good is up’ and ‘dominant is up’, are based on an observed correlation between increasing a substance and seeing the level of the substance rise, like adding an element to a pile. Given this strong physical basis, these metaphors are good candidates for universal concepts. Until now, conceptual metaphors have been considered exclusively human experiences. Our findings point in a different direction.</p></sec><sec id="s4" sec-type="materials|methods"><title>Materials and methods</title><p>Six chimpanzees (<italic>Pan troglodytes</italic>; 1 male juvenile, 2 female juveniles [both around 11 years] and 3 female adults [all around 31 years]) participated in this study. 
The chimpanzees live in groups of 14 individuals with access to environmentally enriched outdoor (770 m<sup>2</sup>) as well as indoor compounds. The chimpanzees participated in a variety of computer-controlled tasks in the past (<xref ref-type="bibr" rid="bib16">Matsuzawa, 2003</xref>; <xref ref-type="bibr" rid="bib17">Matsuzawa et al., 2006</xref>). They are experienced in horizontally aligned delayed matching-to-sample (DMS) tasks; however, they are inexperienced in a vertical version of a DMS task. Effects of training in the vertically aligned DMS task can thus be ruled out.</p><p>The vertical spacing of the match and distractor stimuli was about 70 mm. Stimuli were presented on a 17-inch LCD touch panel display (1280 × 1024 pixels) controlled by custom-written software under Visual Basic 2010 (Microsoft Corporation, Redmond, Washington, USA). The stimulus size was approximately 4.5 by 6° of gaze angle. One degree of gaze angle corresponded to approximately 0.86 cm on the screen at 50 cm viewing distance. Below the display a food tray was installed in which pieces of food reward were delivered by a custom-designed feeder after completion of a correct trial. Chimpanzee participants sat in an experimental booth (2.5 m wide, 2.5 m deep, 2.1 m high), with the experimenter and the participants separated by transparent acrylic panels.</p><p>We used photographs of faces of chimpanzee individuals with obvious dominant or submissive social ranks. The face pictures were taken from individuals familiar to the participants. All faces were normalized for luminance and contrast. The agreement of twenty independent raters (researchers and caretakers familiar with the chimpanzees at the Primate Research Institute) on the social ranks of the chimpanzees was found to be Kappa = 1.00 (p<0.001), 95% CI (1.00, 1.00). 
For the six participants (<italic>Pan troglodytes</italic>; 1 male adolescent, 2 female adolescents and 3 female adults), the face stimuli varied according to the group to which each participant belonged. We only presented face stimuli from the same group as the participant. In total, we used two sets of four face stimuli each: stimulus set 1 for two participants and stimulus set 2 for four participants. Under the assumption that a high-ranked individual presented in the spatially higher position and a low-ranked individual presented in the spatially lower position lead to <italic>faster</italic> identification, while a high-ranked individual in the lower position and a low-ranked individual in the higher position lead to <italic>slower</italic> identification, we designed the experiment according to the conditions <italic>coherent</italic> and <italic>incoherent</italic>, referring to the coherence between social rank and spatial position of presentation on the screen. In addition, we compared stimuli of close distance in social rank, here referred to as the <italic>close</italic> condition, serving as the baseline condition. For this condition, there is no prediction for a crossmodal association of social rank and spatial position. The order of experimental conditions and the spatial position of match and distractor stimuli were counterbalanced. Each participant did six blocks with 48 trials each. Only correct trials, 69% (±4.5% SEM) of all trials, went into the analyses.</p><p>The dependent variable was response latencies. We conducted an analysis of variance among the participants using a mixed model ANOVA, with <italic>coherence</italic> (coherent, incoherent and close conditions) and <italic>position</italic> (high, low) as fixed factors, as well as two-sample <italic>t</italic>-tests (Bonferroni-corrected for multiple comparisons) to compare individual experimental conditions. 
For the post-hoc analysis of <italic>coherence</italic>, we collapsed the data samples from high and low positions for all three conditions (<italic>coherent</italic>, <italic>incoherent</italic> and <italic>close</italic>) of each participant (N = 6). For the post-hoc analysis of <italic>position</italic>, we subtracted the incoherent condition from the coherent condition for <italic>high</italic> and <italic>low</italic> positions of each participant (N = 6). Further, the distribution of response latencies for each condition was binned into 12 equally sized bins and normalized by dividing the absolute frequency (i.e., the number of events in each bin) by the total number of occurrences, resulting in the relative frequency with values ranging from 0 to 1. These normalized distributions were compared using two-sample Kolmogorov-Smirnov tests. To compare the outcomes for the two stimulus sets, we performed the following procedure: we split the data samples according to the stimulus set used for the participants and according to the conditions <italic>coherent</italic> and <italic>incoherent</italic>. These four data sets were then binned according to all combinations of stimuli as they occurred in the experiment. In other words, we binned the response latencies for each stimulus showing a high-ranked individual in combination with both stimuli showing low-ranked individuals and, vice versa, for each stimulus showing a low-ranked individual in combination with both stimuli showing high-ranked individuals (<xref ref-type="fig" rid="fig1">Figure 1H</xref>). 
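The binning, normalization, and distribution comparison described above can be sketched as follows. The latency arrays are simulated placeholders; note also that scipy's two-sample Kolmogorov-Smirnov test operates on the raw samples, so the binned relative frequencies here only reproduce the normalization step:

```python
import numpy as np
from scipy import stats

def relative_frequency(latencies, n_bins=12):
    """Bin latencies into n_bins equally sized bins and normalize the
    absolute frequencies to relative frequencies (values in [0, 1])."""
    counts, edges = np.histogram(latencies, bins=n_bins)
    return counts / counts.sum(), edges

rng = np.random.default_rng(1)
coherent = rng.normal(900, 120, size=200)    # simulated latencies (ms)
incoherent = rng.normal(960, 120, size=200)

rel_coh, _ = relative_frequency(coherent)
rel_inc, _ = relative_frequency(incoherent)

# Two-sample Kolmogorov-Smirnov test on the latency distributions.
ks_stat, p_value = stats.ks_2samp(coherent, incoherent)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.4g}")
```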
For both stimulus sets, we took the distributions of response latencies for each stimulus combination of the <italic>coherent</italic> and <italic>incoherent</italic> conditions and determined a sensitivity index describing the separation of these distributions, taking their standard deviations into account, using the following equation:<disp-formula id="equ1"><label>(1)</label><mml:math id="m1"><mml:mrow><mml:mtext>idx</mml:mtext><mml:mo> </mml:mo><mml:mo>=</mml:mo><mml:mo> </mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>−</mml:mo><mml:mo> </mml:mo><mml:msub><mml:mi>μ</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msqrt><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mi>σ</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo> </mml:mo><mml:mo>+</mml:mo><mml:mo> </mml:mo><mml:msubsup><mml:mi>σ</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo> </mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>with <italic>c1</italic> and <italic>c2</italic> being the two experimental conditions, <italic>µ</italic> the mean and <italic>σ</italic> the standard deviation. Positive values indicate facilitation for <italic>coherent</italic> over <italic>incoherent</italic> trials.</p></sec></body><back><ack id="ack"><title>Acknowledgements</title><p>We thank James R Anderson, Malte J Rasch, Christopher F Martin and the staff of the Language and Intelligence Section for their help and useful comments. 
We also thank the Center for Human Evolution Modeling Research at the Primate Research Institute for daily care of the chimpanzees.</p></ack><sec sec-type="additional-information"><title>Additional information</title><fn-group content-type="competing-interest"><title>Competing interests</title><fn fn-type="conflict" id="conf1"><p>The authors declare that no competing interests exist.</p></fn></fn-group><fn-group content-type="author-contribution"><title>Author contributions</title><fn fn-type="con" id="con1"><p>CDD, Conception and design, Acquisition of data, Analysis and interpretation of data, Drafting or revising the article, Contributed unpublished essential data or reagents</p></fn><fn fn-type="con" id="con2"><p>IA, Conception and design, Acquisition of data, Analysis and interpretation of data, Drafting or revising the article, Contributed unpublished essential data or reagents</p></fn></fn-group><fn-group content-type="ethics-information"><title>Ethics</title><fn fn-type="other"><p>Animal experimentation: The experiment was carried out in accordance with the 2002 version of the Guidelines for the Care and Use of Laboratory Primates by the Primate Research Institute, Kyoto University. The experimental protocol was approved by the Animal Welfare and Care Committee of the same institute (#2011-085, #2012-090).</p></fn></fn-group></sec><ref-list><title>References</title><ref id="bib1"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bosserman</surname><given-names>RW</given-names></name></person-group><year>1983</year><article-title>Allen, T. F. H. and T. B. Starr: hierarchy: perspectives for ecological complexity. 
Chicago: University of Chicago Press, 1982, 310 pp</article-title><source>Behav Sci</source><volume>28</volume><fpage>305</fpage><lpage>6</lpage><pub-id pub-id-type="doi">10.1002/bs.3830280407</pub-id></element-citation></ref><ref id="bib2"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chiao</surname><given-names>JY</given-names></name><name><surname>Harada</surname><given-names>T</given-names></name><name><surname>Oby</surname><given-names>ER</given-names></name><name><surname>Li</surname><given-names>Z</given-names></name><name><surname>Parrish</surname><given-names>T</given-names></name><name><surname>Bridge</surname><given-names>DJ</given-names></name></person-group><year>2009</year><article-title>Neural representations of social status hierarchy in human inferior parietal cortex</article-title><source>Neuropsychologia</source><volume>47</volume><fpage>354</fpage><lpage>63</lpage><pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2008.09.023</pub-id></element-citation></ref><ref id="bib3"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dahl</surname><given-names>CD</given-names></name><name><surname>Logothetis</surname><given-names>NK</given-names></name><name><surname>Hoffman</surname><given-names>KL</given-names></name></person-group><year>2007</year><article-title>Individuation and holistic processing of faces in rhesus monkeys</article-title><source>Proc Biol Sci</source><volume>274</volume><fpage>2069</fpage><lpage>76</lpage><pub-id pub-id-type="doi">10.1098/rspb.2007.0477</pub-id></element-citation></ref><ref id="bib4"><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Dahl</surname><given-names>CD</given-names></name><name><surname>Rasch</surname><given-names>MJ</given-names></name><name><surname>Tomonaga</surname><given-names>M</given-names></name><name><surname>Adachi</surname><given-names>I</given-names></name></person-group><year>2013</year><article-title>Developmental processes in face perception</article-title><source>Nat Sci Rep</source><volume>3</volume><pub-id pub-id-type="doi">10.1038/srep01044</pub-id></element-citation></ref><ref id="bib5"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Dehaene</surname><given-names>S</given-names></name><name><surname>Bossini</surname><given-names>S</given-names></name><name><surname>Giraux</surname><given-names>P</given-names></name></person-group><year>1993</year><article-title>The mental representation of parity and number magnitude</article-title><source>J Exp Psychol</source><volume>122</volume><fpage>371</fpage><lpage>96</lpage><pub-id pub-id-type="doi">10.1037/0096-3445.122.3.371</pub-id></element-citation></ref><ref id="bib6"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Feldman</surname><given-names>J</given-names></name><name><surname>Narayanan</surname><given-names>S</given-names></name></person-group><year>2004</year><article-title>Embodied meaning in a neural theory of language</article-title><source>Brain Lang</source><volume>89</volume><fpage>385</fpage><lpage>92</lpage><pub-id pub-id-type="doi">10.1016/S0093-934X(03)00355-9</pub-id></element-citation></ref><ref id="bib7"><element-citation publication-type="journal"><person-group 
person-group-type="author"><name><surname>Fischer</surname><given-names>MH</given-names></name><name><surname>Castel</surname><given-names>AD</given-names></name><name><surname>Dodd</surname><given-names>MD</given-names></name><name><surname>Pratt</surname><given-names>J</given-names></name></person-group><year>2003</year><article-title>Perceiving numbers causes spatial shifts of attention</article-title><source>Nat Neurosci</source><volume>6</volume><fpage>555</fpage><lpage>6</lpage><pub-id pub-id-type="doi">10.1038/nn1066</pub-id></element-citation></ref><ref id="bib8"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fiske</surname><given-names>AP</given-names></name></person-group><year>1992</year><article-title>The four elementary forms of sociality: framework for a unified theory of social relations</article-title><source>Psychol Rev</source><volume>99</volume><fpage>689</fpage><lpage>723</lpage><pub-id pub-id-type="doi">10.1037/0033-295X.99.4.689</pub-id></element-citation></ref><ref id="bib9"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fukushima</surname><given-names>H</given-names></name><name><surname>Hirata</surname><given-names>S</given-names></name><name><surname>Ueno</surname><given-names>A</given-names></name><name><surname>Matsuda</surname><given-names>G</given-names></name><name><surname>Fuwa</surname><given-names>K</given-names></name><name><surname>Sugama</surname><given-names>K</given-names></name><etal/></person-group><year>2010</year><article-title>Neural correlates of face and object perception in an awake chimpanzee (<italic>Pan troglodytes</italic>) examined by scalp-surface event-related potentials</article-title><source>PLOS ONE</source><volume>5</volume><fpage>e13366</fpage><pub-id pub-id-type="doi">10.1371/journal.pone.0013366</pub-id></element-citation></ref><ref id="bib10"><element-citation 
publication-type="journal"><person-group person-group-type="author"><name><surname>Kourtzi</surname><given-names>Z</given-names></name><name><surname>Kanwisher</surname><given-names>N</given-names></name></person-group><year>2001</year><article-title>Representation of perceived object shape by the human lateral occipital complex</article-title><source>Science</source><volume>293</volume><fpage>1506</fpage><lpage>9</lpage><pub-id pub-id-type="doi">10.1126/science.1061133</pub-id></element-citation></ref><ref id="bib11"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Lakoff</surname><given-names>G</given-names></name><name><surname>Johnson</surname><given-names>M</given-names></name></person-group><year>1980a</year><source>Metaphors we live by</source><publisher-loc>Chicago</publisher-loc><publisher-name>University of Chicago Press</publisher-name></element-citation></ref><ref id="bib12"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lakoff</surname><given-names>G</given-names></name><name><surname>Johnson</surname><given-names>M</given-names></name></person-group><year>1980b</year><article-title>The metaphorical structure of the human conceptual system</article-title><source>Cognitive Sci</source><volume>4</volume><fpage>195</fpage><lpage>208</lpage><pub-id pub-id-type="doi">10.1207/s15516709cog0402_4</pub-id></element-citation></ref><ref id="bib13"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ludwig</surname><given-names>VU</given-names></name><name><surname>Adachi</surname><given-names>I</given-names></name><name><surname>Matsuzawa</surname><given-names>T</given-names></name></person-group><year>2011</year><article-title>Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees (<italic>Pan troglodytes</italic>) and humans</article-title><source>Proc Natl Acad Sci 
USA</source><volume>108</volume><fpage>20661</fpage><lpage>5</lpage><pub-id pub-id-type="doi">10.1073/pnas.1112605108</pub-id></element-citation></ref><ref id="bib14"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Maestripieri</surname><given-names>D</given-names></name></person-group><year>1996</year><article-title>Primate cognition and the bared-teeth display: a reevaluation of the concept of formal dominance</article-title><source>J Comp Psychol</source><volume>110</volume><fpage>402</fpage><lpage>5</lpage><pub-id pub-id-type="doi">10.1037/0735-7036.110.4.402</pub-id></element-citation></ref><ref id="bib15"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Maslow</surname><given-names>AH</given-names></name></person-group><year>1943</year><article-title>A theory of human motivation</article-title><source>Psychol Rev</source><volume>50</volume><fpage>370</fpage><lpage>96</lpage><pub-id pub-id-type="doi">10.1037/h0054346</pub-id></element-citation></ref><ref id="bib16"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Matsuzawa</surname><given-names>T</given-names></name></person-group><year>2003</year><article-title>The Ai project: historical and ecological contexts</article-title><source>Anim Cogn</source><volume>6</volume><fpage>199</fpage><lpage>211</lpage><pub-id pub-id-type="doi">10.1007/s10071-003-0199-2</pub-id></element-citation></ref><ref id="bib17"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Matsuzawa</surname><given-names>T</given-names></name><name><surname>Tomonaga</surname><given-names>M</given-names></name><name><surname>Tanaka</surname><given-names>M</given-names></name></person-group><year>2006</year><source>Cognitive Development in 
Chimpanzees</source><publisher-loc>Tokyo</publisher-loc><publisher-name>Springer</publisher-name></element-citation></ref><ref id="bib18"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nieder</surname><given-names>A</given-names></name></person-group><year>2012</year><article-title>Supramodal numerosity selectivity of neurons in primate prefrontal and posterior parietal cortices</article-title><source>Proc Natl Acad Sci USA</source><volume>109</volume><fpage>11860</fpage><lpage>5</lpage><pub-id pub-id-type="doi">10.1073/pnas.1204580109</pub-id></element-citation></ref><ref id="bib19"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nuckolls</surname><given-names>JB</given-names></name></person-group><year>1999</year><article-title>The case for sound symbolism</article-title><source>Annu Rev Anthropol</source><volume>28</volume><fpage>225</fpage><lpage>52</lpage><pub-id pub-id-type="doi">10.1146/annurev.anthro.28.1.225</pub-id></element-citation></ref><ref id="bib20"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Paxton</surname><given-names>R</given-names></name><name><surname>Basile</surname><given-names>BM</given-names></name><name><surname>Adachi</surname><given-names>I</given-names></name><name><surname>Suzuki</surname><given-names>WA</given-names></name><name><surname>Wilson</surname><given-names>ME</given-names></name><name><surname>Hampton</surname><given-names>RR</given-names></name></person-group><year>2010</year><article-title>Rhesus monkeys (<italic>Macaca mulatta</italic>) rapidly learn to select dominant individuals in videos of artificial social interactions between unfamiliar conspecifics</article-title><source>J Comp Psychol</source><volume>124</volume><fpage>395</fpage><lpage>401</lpage><pub-id pub-id-type="doi">10.1037/a0019751</pub-id></element-citation></ref><ref 
id="bib21"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Pinker</surname><given-names>S</given-names></name></person-group><year>1997</year><source>How the mind works</source><publisher-loc>New York</publisher-loc><publisher-name>W.W. Norton & Company</publisher-name></element-citation></ref><ref id="bib22"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pokorny</surname><given-names>JJ</given-names></name><name><surname>de Waal</surname><given-names>FB</given-names></name></person-group><year>2009</year><article-title>Monkeys recognize the faces of group mates in photographs</article-title><source>Proc Natl Acad Sci USA</source><volume>106</volume><fpage>21539</fpage><lpage>43</lpage><pub-id pub-id-type="doi">10.1073/pnas.0912174106</pub-id></element-citation></ref><ref id="bib23"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ramachandran</surname><given-names>V</given-names></name><name><surname>Hubbard</surname><given-names>EM</given-names></name></person-group><year>2001</year><article-title>Synaesthesia—a window into perception, thought and language</article-title><source>J Consciousness Stud</source><volume>8</volume><fpage>3</fpage><lpage>34</lpage></element-citation></ref><ref id="bib24"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ridgeway</surname><given-names>C</given-names></name><name><surname>Diekema</surname><given-names>D</given-names></name></person-group><year>1989</year><article-title>Dominance and collective hierarchy formation in male and female task groups</article-title><source>Am Sociol Rev</source><volume>54</volume><fpage>79</fpage><lpage>93</lpage><pub-id pub-id-type="doi">10.2307/2095663</pub-id></element-citation></ref><ref id="bib25"><element-citation publication-type="book"><person-group 
person-group-type="author"><name><surname>Saunders</surname><given-names>P</given-names></name></person-group><year>1989</year><source>Social class and stratification</source><publisher-loc>London</publisher-loc><publisher-name>Routledge</publisher-name></element-citation></ref><ref id="bib26"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sigala</surname><given-names>N</given-names></name><name><surname>Gabbiani</surname><given-names>F</given-names></name><name><surname>Logothetis</surname><given-names>NK</given-names></name></person-group><year>2002</year><article-title>Visual categorization and object representation in monkeys and humans</article-title><source>J Cogn Neurosci</source><volume>14</volume><fpage>187</fpage><lpage>98</lpage><pub-id pub-id-type="doi">10.1162/089892902317236830</pub-id></element-citation></ref><ref id="bib27"><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tiedens</surname><given-names>LZ</given-names></name><name><surname>Fragale</surname><given-names>AR</given-names></name></person-group><year>2003</year><article-title>Power moves: complementarity in dominant and submissive nonverbal behavior</article-title><source>J Pers Soc Psychol</source><volume>84</volume><fpage>558</fpage><lpage>68</lpage><pub-id pub-id-type="doi">10.1037/0022-3514.84.3.558</pub-id></element-citation></ref><ref id="bib28"><element-citation publication-type="book"><person-group person-group-type="author"><name><surname>Weber</surname><given-names>M</given-names></name></person-group><year>1991</year><source>From max weber: essays in sociology</source><publisher-loc>London</publisher-loc><publisher-name>Routledge</publisher-name></element-citation></ref></ref-list></back></article> |