Self-report captures 27 distinct categories of emotion bridged by continuous gradients
Clinical Trial
Proc Natl Acad Sci U S A. 2017 Sep 19;114(38):E7900-E7909.
doi: 10.1073/pnas.1702247114. Epub 2017 Sep 5.
- PMID: 28874542
- PMCID: PMC5617253
- DOI: 10.1073/pnas.1702247114
Self-report captures 27 distinct categories of emotion bridged by continuous gradients
Alan S Cowen et al. Proc Natl Acad Sci U S A. 2017.
Abstract
Emotions are centered in subjective experiences that people represent, in part, with hundreds, if not thousands, of semantic terms. Claims about the distribution of reported emotional states and the boundaries between emotion categories (that is, the geometric organization of the semantic space of emotion) have sparked intense debate. Here we introduce a conceptual framework to analyze reported emotional states elicited by 2,185 short videos, examining the richest array of reported emotional experiences studied to date and the extent to which reported experiences of emotion are structured by discrete and dimensional geometries. Across self-report methods, we find that the videos reliably elicit 27 distinct varieties of reported emotional experience. Further analyses revealed that categorical labels such as amusement better capture reports of subjective experience than commonly measured affective dimensions (e.g., valence and arousal). Although reported emotional experiences are represented within a semantic space best captured by categorical labels, the boundaries between categories of emotion are fuzzy rather than discrete. By analyzing the distribution of reported emotional states, we uncover gradients of emotion (from anxiety to fear to horror to disgust, from calmness to aesthetic appreciation to awe, and others) that correspond to smooth variation in affective dimensions such as valence and dominance. Reported emotional states occupy a complex, high-dimensional categorical space. In addition, our library of videos and an interactive map of the emotional states they elicit (https://s3-us-west-1.amazonaws.com/emogifs/map.html) are made available to advance the science of emotion.
Keywords: dimensions; discrete emotion; emotional experience; semantic space.
Conflict of interest statement
The authors declare no conflict of interest.
Figures
Fig. S1.
Category judgment concordance levels, dimensional judgment frequencies, and free response term use frequencies. (A) Interrater concordance levels for each video, for each category of elicited emotion. Dots represent the proportions of times the category was chosen for each video. Only videos for which the category was chosen at least once are shown. Seventy-five percent of the videos elicited significant concordance in emotional response (FDR <0.05), with every category being elicited with significant concordance by at least one video. (B) Judgment frequency for each affective dimension across all videos. Shaded plots for each affective dimension are kernel histograms of the distribution of average ratings for that affective dimension across videos. Histograms are normalized to an arbitrary height. Elicited emotional states were more widely distributed in terms of some affective dimensions, such as approach, than others, such as dominance. (C) Frequency of term use in free response judgments. The area occupied by each word is proportionate to the number of times the word was chosen. Participants used a wide variety of nuanced terms to describe the emotions elicited by each video.
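As a rough illustration of the quantities in panels A and B, the sketch below computes per-video category choice proportions and a kernel histogram of per-video mean ratings. The array names, shapes, rater counts, and rating scale are illustrative assumptions, not the study's actual data structures.

```python
# Minimal sketch of panel A (concordance proportions) and panel B (kernel
# histograms of average ratings). All data here are random placeholders.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical binary category judgments: (videos, raters, categories).
judgments = rng.random((2185, 9, 34)) < 0.1
# Panel A: proportion of raters choosing each category for each video.
concordance = judgments.mean(axis=1)              # shape (2185, 34)

# Hypothetical 1-to-9 affective dimension ratings: (videos, raters, dimensions).
dim_ratings = rng.integers(1, 10, size=(2185, 9, 14))
video_means = dim_ratings.mean(axis=1)            # average rating per video
# Panel B: a kernel histogram (KDE) of per-video means for one dimension;
# heights are arbitrary, as in the figure.
kde = gaussian_kde(video_means[:, 0])
grid = np.linspace(1, 9, 100)
density = kde(grid)
```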
Fig. S2.
Simulation studies verifying the use of split-half CCA for categorical ratings. Simulations (n = 2,312) were conducted, generating datasets of known underlying dimensionality that could be compared with the SH-CCA estimate. General specifications: For each simulation, underlying multinomial probabilities of each category for each of 2,185 hypothetical stimuli were generated by randomly sampling stimuluswise loadings on “explainable” and individual ratingwise loadings on “unexplainable” dimensions from a uniform distribution, with each stimulus loading exclusively on one to two random explainable and two random unexplainable dimensions. Each explainable dimension in turn loaded on one to two categories at random. All but one unexplainable dimension always loaded on one category each, with the final unexplainable dimension always loading on all remaining categories. The total number of explainable and unexplainable dimensions was systematically varied from 1 to 34 for each simulation study, resulting in 34*34*2 = 2,312 simulations. Each 34-category rating of each stimulus was drawn from a multinomial distribution with probability equal to the stimulus-specific explainable dimension loadings times the explainable dimension coefficients, plus the rating-specific unexplainable loadings times the unexplainable dimension coefficients, normalized to sum to 1. Twelve ratings were sampled for each stimulus. Each rating could comprise multiple selections, with the probability of each number of selections equal to what we observe in our actual categorical judgment data (56% one category, 27% two categories, 11% three categories, 4% five categories, and 1% or less for the remainder). Low vs. high signal-to-noise (SNR) simulations: For low-SNR simulations (plotted in blue), both unexplainable dimension coefficients and explainable dimension coefficients were always 1. For high-SNR simulations (plotted in black), unexplainable dimension coefficients were always 0.05 and explainable dimension coefficients were always 1. (A) The dimensionality estimated by SH-CCA is plotted against the known dimensionality of the data. The estimates are highly accurate, typically underestimating the true dimensionality by at most 1. (B) Median SNR per category is plotted for each simulation. The median SNR we observe is consistent with SNRs observed in our low-SNR simulations with around 25 systematic dimensions. (C) Controlled vs. actual FWER across low- and high-SNR simulations. Estimates were, in general, overly conservative. Further research might examine whether the incorporation of nonlinear CCA methods more targeted to multinomial data could increase power to estimate dimensionality relative to the SH-CCA method developed here.
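As a compact sketch of this generative scheme, the code below simulates multinomial category ratings for one fixed setting of the dimension counts (the study varied them from 1 to 34); the noise structure, uniform weights, and variable names are simplifying assumptions, and the SH-CCA estimation step itself is omitted.

```python
# Minimal sketch of the simulated categorical ratings described above.
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_cat, n_ratings = 2185, 34, 12
n_exp = 20                         # explainable dimensions (varied 1-34 in the study)
exp_coef, unexp_coef = 1.0, 0.05   # high-SNR setting; low-SNR runs used 1.0 and 1.0

# Each explainable dimension loads on one or two categories at random.
exp_to_cat = np.zeros((n_exp, n_cat))
for d in range(n_exp):
    cats = rng.choice(n_cat, size=rng.integers(1, 3), replace=False)
    exp_to_cat[d, cats] = 1.0

# Each stimulus loads on one or two explainable dimensions (uniform weights).
stim_load = np.zeros((n_stim, n_exp))
for s in range(n_stim):
    dims = rng.choice(n_exp, size=rng.integers(1, 3), replace=False)
    stim_load[s, dims] = rng.random(dims.size)

# Selections per rating, following the empirical distribution in the caption
# (probabilities renormalized; the rare remainder is folded in).
n_sel = np.array([1, 2, 3, 5])
p_sel = np.array([0.56, 0.27, 0.11, 0.04])
p_sel = p_sel / p_sel.sum()

ratings = np.zeros((n_stim, n_ratings, n_cat), dtype=int)
for s in range(n_stim):
    for r in range(n_ratings):
        # Rating-specific unexplainable noise, simplified here to uniform jitter.
        p = exp_coef * (stim_load[s] @ exp_to_cat) + unexp_coef * rng.random(n_cat)
        p = p / p.sum()
        ratings[s, r] = rng.multinomial(rng.choice(n_sel, p=p_sel), p)
```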
Fig. 1.
Factor analysis loadings on 27 dimensions of variance within the categorical responses. Statistical analyses revealed that categorical judgments reliably captured up to 27 separable dimensions of variance, each corresponding to a semantically distinct variety of reported emotional experience. Here, the first 27 principal components of variance within the categorical judgments, extracted using principal components analysis (PCA), have been rotated into more interpretable dimensions using varimax rotation, which finds dimensions that load on relatively few categories. Categories without maximal loadings on any dimensions (contempt, disappointment, envy, guilt, relief, sympathy, and triumph) were either not judged reliably or were taken as roughly linearly dependent with other more frequently used categories during dimensionality reduction. Categories loading on separate dimensions were reliably separable in meaning with respect to the emotional states elicited by the videos. The dimensions we derive from emotion self-report in response to short videos demonstrate a complexity of emotion structure beyond what has been proposed in most emotion theories to date, reliably differentiating emotional states as nuanced as aesthetic appreciation (i.e., feelings of beauty and awe).
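As a minimal sketch of the extraction-plus-rotation step, the code below uses scikit-learn's FactorAnalysis with varimax rotation as a stand-in for the paper's PCA-followed-by-varimax procedure; the input matrix is a random placeholder for the video-by-category judgment data.

```python
# Minimal sketch: 27 varimax-rotated dimensions from 34 category judgments.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
X = rng.random((2185, 34))            # videos x 34 emotion categories (placeholder)

fa = FactorAnalysis(n_components=27, rotation="varimax")
fa.fit(X)
loadings = fa.components_.T           # categories x 27 rotated dimensions
# The dimension on which each category loads maximally, as read off in Fig. 1.
top_dim = loadings.argmax(axis=1)
```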
Fig. S3.
Rotated factor weights when including 24, 25, or 26 dimensions of variance in the categorical judgments. Dropping from 27 components (Fig. 1) to 26 (Right) eliminates the separate categorical judgment dimension corresponding to the category "boredom," which instead loads negatively on most of the 26 other categorical judgment dimensions. Dropping to 25 components (Middle) eliminates the categorical judgment dimension corresponding to "anger," "contempt," and "disappointment," which instead load positively on the "sadness" dimension. Dropping to 24 components (Left) eliminates the categorical judgment dimension corresponding to "satisfaction," which instead loads on the "admiration" dimension along with "pride" and "triumph." These analyses indicate that reports of experiences of boredom, anger, and satisfaction were less reliably differentiated in responses to particular videos than experiences of other categorical judgment dimensions, such as aesthetic appreciation, awe, fear, and horror.
Fig. 2.
The structure of reported emotional experience: Smooth gradients among 27 semantically distinct categorical judgment dimensions. (A) A chromatic map of average emotional responses to 2,185 videos within a 27-dimensional categorical space of reported emotional experience. t-distributed stochastic neighbor embedding (t-SNE), a data visualization method that accurately preserves local distances between data points while separating more distinct data points by longer, more approximate distances, was applied to the loadings of the 2,185 videos on the 27 categorical judgment dimensions, generating loadings of each video on two axes. The individual videos are plotted along these axes as letters that correspond to their highest-loading categorical judgment dimension (with ties broken alphabetically) and are colored using a weighted interpolation of the unique colors corresponding to each of the categorical judgment dimensions on which they loaded positively. The resulting map reveals gradients among distinct varieties of reported emotional experience, such as the gradient from anxiety to fear to horror to disgust (also see the interactive map at https://s3-us-west-1.amazonaws.com/emogifs/map.html). (B) Number of significant coloadings of each video on each categorical judgment dimension. The significance of individual loadings of each video on each categorical judgment dimension was determined via simulation of a null distribution (Supporting Information). We then counted the number of instances in which videos loaded significantly (FDR <0.05) on pairs of two categorical judgment dimensions. These results validate the emotion gradients observed in A. For example, anxiety and fear (F and Q) were elicited by many of the same videos (75 times in total), as were fear and horror (Q and R; 55 times), yet anxiety and horror were seldom elicited by the same videos (just eight times). (C) Top free response terms associated with each categorical judgment dimension. The free response judgments were regressed onto the categorical judgment dimensions, across videos. For 22/27 dimensions, the highest-loading category is among the three (out of 600) top-weighted free response terms, strongly validating the categorical ratings as measures of subjective experience.
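As a minimal sketch of how such a map could be constructed, the code below applies t-SNE to video-by-dimension loadings and blends point colors from per-dimension base colors in proportion to positive loadings; the loadings, perplexity, and colors are placeholders, not the authors' settings.

```python
# Minimal sketch of the t-SNE map with loading-weighted colors (Fig. 2A).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
loadings = rng.standard_normal((2185, 27))   # videos x 27 dimensions (placeholder)

# Two map axes per video, preserving local structure in the 27-D space.
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(loadings)

# Blend one base color per dimension, weighted by each video's positive loadings.
base_colors = rng.random((27, 3))            # arbitrary RGB per dimension
w = np.clip(loadings, 0, None)
w = w / (w.sum(axis=1, keepdims=True) + 1e-12)
colors = w @ base_colors                     # videos x RGB
letters = loadings.argmax(axis=1)            # highest-loading dimension per video
```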
Fig. 3.
Variance explained by the categorical judgments in the affective dimension judgments, and vice versa. The categorical judgment dimensions explain significantly more variance in the affective dimension judgments than vice versa (P < 10⁻⁶, bootstrap test). These findings hold when using both linear regression with ordinary least squares (Left) and nonlinear regression with k-nearest neighbors (Middle). This suggests that the categories have the most value in explicating reported emotional experiences elicited by short videos. (Explained variance was calculated using leave-one-out cross-validation and then divided by the estimated explainable variance. For this analysis, we used nine ratings per video. For k-nearest neighbors, we tested k from 1 to 50 and show the results from choosing the optimal k, i.e., the one that resulted in the greatest average explained variance for each prediction. See also Supporting Information.)
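The sketch below illustrates this comparison in one direction (categories predicting affective dimensions) under leave-one-out cross-validation with both OLS and k-nearest neighbors; the matrices are random stand-ins, the sample size is shrunk for speed, and a single k is used rather than scanning 1 to 50.

```python
# Minimal sketch of cross-validated explained variance (cf. Fig. 3).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
cats = rng.random((300, 27))   # per-video categorical loadings (placeholder)
dims = rng.random((300, 14))   # per-video affective dimension ratings (placeholder)

def explained_variance(X, Y, model):
    """Leave-one-out R^2 per target column."""
    pred = cross_val_predict(model, X, Y, cv=LeaveOneOut())
    resid = ((Y - pred) ** 2).sum(axis=0)
    total = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    return 1 - resid / total   # the paper further divides by explainable variance

r2_linear = explained_variance(cats, dims, LinearRegression())
# Fixed k here; the paper tested k = 1..50 and kept the best-performing k.
r2_knn = explained_variance(cats, dims, KNeighborsRegressor(n_neighbors=20))
```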
Fig. S4.
Gradients in the categorical judgment space correspond to smooth differences in affective meaning. We further analyzed gradients among all 11 pairs of categorical judgment dimensions sharing significant coloadings on 20 or more videos (Fig. 2B). For each categorical judgment dimension pair, the affective dimension ratings were regressed onto the difference in loading between the two categorical judgment dimensions using principal components regression with four components (we opted to avoid using linear regression with all 14 dimensions on only 20–75 observations). To avoid overfitting, we first trained the regression on all videos for which only one of the two categorical judgment dimensions had a significant loading and then tested it on the videos for which both categorical judgment dimensions had significant loadings. The latter videos are plotted as data points above, with the actual differences in loading for each pair of categorical judgment dimensions on the x axis and the predicted differences based on a linear combination of affective dimensions on the y axis. y-axis labels indicate the most strongly weighted affective dimension. Dashed white lines are regression lines. Data points are colored using a weighted average of the colors corresponding to all categorical judgment dimensions on which they load positively, as in Fig. 2A. Seven out of 11 categorical gradients have significant correlations with affective dimensions (FDR control at the 0.05 level is achieved when P < 0.033). The error bars represent SE in the space of affective dimensions. The homogeneity of the error bars indicates that the affective dimensions are homoscedastic, demonstrating that their smooth relationships with the categorical gradients cannot be explained by disagreement across raters at the boundary between categories.
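A minimal sketch of the principal components regression for one categorical dimension pair appears below; the data are random stand-ins, and the train/test partition by significant-loading status is simplified to a fixed split.

```python
# Minimal sketch of the 4-component PCR used per category pair (cf. Fig. S4).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
affect = rng.random((75, 14))     # 14 affective dimension ratings (placeholder)
diff = rng.random(75)             # loading difference between the two dimensions

# Predict the loading difference from the 14 affective dimensions, compressed
# to 4 principal components to avoid overfitting on 20-75 observations.
pcr = make_pipeline(PCA(n_components=4), LinearRegression())
pcr.fit(affect[:50], diff[:50])   # train: videos loading on only one dimension
pred = pcr.predict(affect[50:])   # test: videos loading on both dimensions
```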
Fig. 4.
Canonical correlation analysis between the categorical judgment dimensions and the affective dimensions. (A) The first 13 canonical correlations between the categorical judgment dimensions and the affective dimensions were found to be significant (P < 0.01). We assigned labels to each canonical variate by interpreting its coefficients on the affective dimensions (see Fig. S5 for the coefficients). (B and C) Categorical variates (B) and dimensional variates (C) for the first three canonical correlations, projected as red, green, and blue color channels onto the t-SNE map from Fig. 2A. Color legends are given in the titles for each map. Similarity in colors between B and C illustrates the degree of shared information between the categorical and dimensional judgments for these three dimensions of emotional experience. Labels on each map reflect the combination of loadings on the three dimensions that give rise to each color. (D and E) Like B and C, but for the fourth through sixth canonical correlations. Altogether, B–E illustrate that the categorical gradients correspond to smooth differences in affective dimensions (see also Fig. S4 for analysis of gradient smoothness).
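Below is a minimal sketch of the CCA step together with a generic permutation test for the first canonical correlation; the inputs are random stand-ins, and the significance procedure is an assumption rather than the authors' exact test.

```python
# Minimal sketch of CCA between categorical and dimensional judgments (Fig. 4).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(6)
cats = rng.random((2185, 27))   # categorical judgment dimensions (placeholder)
dims = rng.random((2185, 14))   # affective dimension judgments (placeholder)

n_comp = 14
U, V = CCA(n_components=n_comp).fit_transform(cats, dims)
canon_r = np.array([np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(n_comp)])

# Permutation null: shuffle video order on one side and refit.
null_r = np.empty(100)
for b in range(100):
    perm = rng.permutation(len(dims))
    Up, Vp = CCA(n_components=1).fit_transform(cats, dims[perm])
    null_r[b] = np.corrcoef(Up[:, 0], Vp[:, 0])[0, 1]
p_first = (null_r >= canon_r[0]).mean()   # one-sided p for the first correlation
```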
Fig. S5.
Coefficients of the 13 significant category and affective dimension variates from canonical correlation analysis. See Fig. 4 for corresponding canonical correlations and loadings of the first four canonical variates on the videos.
Fig. S6.
Individual differences in demographics and personality explain a relatively small proportion of the variance in reported categories of emotional experience. Demographic and personality information was collected from each rater in a separate survey submitted before each rating survey. The demographics and personality survey included self-reported years of age, gender, marital status, 11 levels of education ranging from "None" to "Doctorate," fiscal and social conservatism (1-to-7 scales), religiousness (1-to-7 scale), the MacArthur Scale of Subjective Social Status, the Short Dark Triad of Personality, the Ten Item Personality Measure (TIPI), two questions on trait anxiety from the State-Trait Anxiety Inventory, and two questions on subjective wellbeing from the Satisfaction with Life Scale. Using each of 19 composite items from this survey (e.g., the final measures of each of the Big Five personality traits from the TIPI), we performed a median split, assigning each individual response to the categorical judgment survey to one of two separate datasets. We then correlated the mean responses to each video across datasets, resulting in a "median-split correlation." If the median-split correlation is low, then we can infer that people low vs. high in the splitting variable, for example extroversion, respond differently to the videos; otherwise, we can infer that they responded similarly. To test whether the median-split correlation for each variable was significantly lower than would be expected by chance, we performed a separate permutation test for each splitting variable, randomly assigning equivalent numbers of raters of each video to one of two datasets. We repeated this permutation test 1,000 times for each variable; the resulting chance-level correlations from these random splits are displayed as black dots on the bar graph. For the most part, the median-split correlations are close to these chance levels, indicating that people of different genders, education levels, political views, and personalities responded very similarly to the videos. Interestingly, the only variable for which a median split explained a significant amount of variance in the categorical judgments was self-reported religiousness (FDR <0.01). However, even religiousness explained a relatively small proportion of the variance in the categorical judgments, with a median-split correlation only 5.5% lower than would be expected by chance (i.e., 5.5% of the variance explained by the videos presented). (The lower chance level of variance explained when religiousness was used as a splitting variable reflects the imbalance between the two resulting samples: a relatively small proportion of ratings, 31% overall, were submitted by raters who chose anything greater than the median of 1, or "Not at all religious.")
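A minimal sketch of the median-split procedure and its permutation null appears below; in the actual study each rater judged only a subset of the videos, so the dense rater-by-video response matrix here is a simplifying assumption.

```python
# Minimal sketch of the median-split correlation and permutation test (Fig. S6).
import numpy as np

rng = np.random.default_rng(7)
n_raters, n_videos = 800, 2185
trait = rng.random(n_raters)                    # e.g., extroversion (placeholder)
responses = rng.random((n_raters, n_videos))    # per-rater response per video

def split_corr(mask):
    """Correlate mean per-video responses between two rater groups."""
    return np.corrcoef(responses[mask].mean(axis=0),
                       responses[~mask].mean(axis=0))[0, 1]

observed = split_corr(trait > np.median(trait))

# Chance level: 1,000 random splits of the raters into two equal groups.
null = np.array([split_corr(rng.permutation(n_raters) < n_raters // 2)
                 for _ in range(1000)])
p_value = (null <= observed).mean()   # is the median split lower than chance?
```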
Comment in
- Nature of Emotion Categories: Comment on Cowen and Keltner. Barrett LF, Khan Z, Dy J, Brooks D. Trends Cogn Sci. 2018 Feb;22(2):97-99. doi: 10.1016/j.tics.2017.12.004. Epub 2018 Jan 16. PMID: 29373283. Free PMC article.
- Clarifying the Conceptualization, Dimensionality, and Structure of Emotion: Response to Barrett and Colleagues. Cowen AS, Keltner D. Trends Cogn Sci. 2018 Apr;22(4):274-276. doi: 10.1016/j.tics.2018.02.003. Epub 2018 Feb 21. PMID: 29477775. Free PMC article.
Similar articles
- The representation of emotional experience from imagined scenarios. Faul L, Baumann MG, LaBar KS. Emotion. 2023 Sep;23(6):1670-1686. doi: 10.1037/emo0001192. Epub 2022 Nov 17. PMID: 36395023. Free PMC article.
- The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures. Cowen AS, Laukka P, Elfenbein HA, Liu R, Keltner D. Nat Hum Behav. 2019 Apr;3(4):369-382. doi: 10.1038/s41562-019-0533-6. Epub 2019 Mar 11. PMID: 30971794. Free PMC article.
- Mapping 24 emotions conveyed by brief human vocalization. Cowen AS, Elfenbein HA, Laukka P, Keltner D. Am Psychol. 2019 Sep;74(6):698-712. doi: 10.1037/amp0000399. Epub 2018 Dec 20. PMID: 30570267. Free PMC article.
- Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression. Cowen A, Sauter D, Tracy JL, Keltner D. Psychol Sci Public Interest. 2019 Jul;20(1):69-90. doi: 10.1177/1529100619850176. PMID: 31313637. Free PMC article. Review.
- Semantic Space Theory: A Computational Approach to Emotion. Cowen AS, Keltner D. Trends Cogn Sci. 2021 Feb;25(2):124-136. doi: 10.1016/j.tics.2020.11.004. Epub 2020 Dec 18. PMID: 33349547. Review.
Cited by
- Clinical, scientific and stakeholders' caring about identity perturbations. Löffler-Stastka H. World J Psychiatry. 2024 Oct 19;14(10):1422-1428. doi: 10.5498/wjp.v14.i10.1422. eCollection 2024 Oct 19. PMID: 39474383. Free PMC article.
- A generic self-learning emotional framework for machines. Hernández-Marcos A, Ros E. Sci Rep. 2024 Oct 28;14(1):25858. doi: 10.1038/s41598-024-72817-x. PMID: 39468109. Free PMC article.
- Neural Representations of Emotions in Visual, Auditory, and Modality-Independent Regions Reflect Idiosyncratic Conceptual Knowledge. Gao C, Oh S, Yang X, Stanley JM, Shinkareva SV. Hum Brain Mapp. 2024 Oct;45(14):e70040. doi: 10.1002/hbm.70040. PMID: 39394899. Free PMC article.
- Evaluation of emotion classification schemes in social media text: an annotation-based approach. Zhang F, Chen J, Tang Q, Tian Y. BMC Psychol. 2024 Sep 27;12(1):503. doi: 10.1186/s40359-024-02008-w. PMID: 39334344. Free PMC article.
- Commonalities and variations in emotion representation across modalities and brain regions. Kiyokawa H, Hayashi R. Sci Rep. 2024 Sep 9;14(1):20992. doi: 10.1038/s41598-024-71690-y. PMID: 39251743. Free PMC article.