Abstracts of the ICA
Articles | Volume 1
https://doi.org/10.5194/ica-abs-1-83-2019
15 Jul 2019

Emotional responses to climate change map framing using facial emotion recognition technology

Carolyn S. Fish and Amy L. Griffin

Keywords: facial emotion recognition, map design, climate change communication

Abstract. The different ways that maps are framed by the media can lead to different emotional responses to the same information. Framing shapes the way in which information is presented, and frames are used by authors and designers to focus the reader’s attention in order to lead them to a particular interpretation of the information being communicated. This framing thereby shapes attitudes. In maps, frames can be invoked through both the map’s design and the text included within and surrounding the map. Framing is what allows communicators to focus their message and make particular aspects of information salient to their readers. Emotional framing, as a subset of all frames, involves conveying emotions through the message. Emotional frames can have a positive or negative valence and may even convey a particular emotion. In the case of climate change, communication scholars argue that it is impossible to represent climate change information in a neutral manner without some sort of emotional framing. In addition, other risk communication literature shows that the emotional framing of a risk has an impact on the behavioral response of those for whom the message is intended. Climate change communication researchers have noted that this emotional framing is key to prompting behavioral changes such as mitigation and adaptation.

Little research has been done in the field of cartography to evaluate emotional responses to climate change maps and the framing of mapped climate change information. Cartographers and GIScientists have much to offer for communicating climate science to audiences. Cartographers are tasked with designing maps to make climate change understandable to lay audiences. Maps can illustrate the multidimensional causes and impacts of climate change using static and dynamic graphics that can be widely disseminated across the web to diverse audiences. It is unclear, however, how the framing of these maps affects how readers interpret the message of the map or how different frames might lead to different emotional responses and potentially, different behavioral responses. This study empirically evaluated emotional responses to climate change frames and maps to examine how different frames might lead to different emotional responses. Specifically, using emotion recognition software and Amazon Mechanical Turk, we answered three research questions:

  • Is it possible to measure emotional responses to maps and their frames using facial emotion recognition software?
  • What emotions are elicited by different maps and different framings?
  • Do particular emotional responses lead to different behavior changes?

In 2009, the United States Global Change Research Program (USGCRP) submitted its report on climate change impacts in the United States to congressional lawmakers in Washington, D.C. This report contained scientific evidence of climate change impacts on the United States as a whole as well as specific impacts on regions within the United States. The report was compiled by 28 scientists at US universities, national labs, and the National Oceanic and Atmospheric Administration (NOAA) based on peer-reviewed scientific articles. The report was edited by another three scientists and had input from 22 other scientists.

In 2012, the CATO Institute, a conservative think tank, produced what they called an “Addendum” to the original report. This addendum was compiled by only five authors, two of whom were from universities; the rest were from think tanks with known political biases. It was edited by a single individual, an employee of the CATO Institute. The addendum has a clear political leaning and contains contradictory positions, including: 1) denial that climate change is occurring, and 2) an acceptance that climate change is happening but that it can be reframed as a positive change for the American public and businesses. The CATO report uses many of the same figures and maps that appeared in the USGCRP report. These maps are sometimes edited slightly to change their meaning, and the text surrounding the images often uses a different frame.

Figure 1 exemplifies the difference between the two reports in their maps and their frames. The version produced for the USGCRP report illustrates ski areas at risk, while the CATO report swaps colors in the map to show the potential for ski areas to become golf resorts in a warmer climate.

To answer the research questions posed above, we conducted a map-reading study using Amazon Mechanical Turk (MTurk). During the study, participants turned on their webcams and recorded their facial expressions while reading the maps and completing the survey. Participants were divided randomly into one of two groups: the first read maps from the USGCRP report, and the second read maps from the CATO Institute report. Each group viewed three maps and read three news stories that used one of the two emotional frames. The USGCRP report portrays climate change with a negative frame, while the CATO Institute portrays climate change as a potential positive. The news stories were written by the researchers to succinctly summarize the message of each map while invoking the emotional frame of the corresponding report.
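
As an illustration of this between-subjects design, the short Python sketch below shows one way the random assignment of MTurk workers to the two framing conditions could be implemented. It is a hypothetical sketch, not the survey platform's actual logic; the worker ID and condition labels are assumptions.

    # Hypothetical sketch: deterministic assignment of MTurk workers to one of
    # the two framing conditions (USGCRP negative frame vs. CATO positive frame).
    import hashlib

    CONDITIONS = ["usgcrp_negative_frame", "cato_positive_frame"]

    def assign_condition(worker_id: str) -> str:
        """Hash the worker ID so the same worker always receives the same
        condition, while assignments split roughly evenly across groups."""
        digest = hashlib.sha256(worker_id.encode("utf-8")).hexdigest()
        return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

    print(assign_condition("A1EXAMPLEWORKERID"))  # hypothetical worker ID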

After reading each story, participants self-reported their emotional responses to the stories and maps by selecting from a list of emotions. After viewing all of the news stories and maps, participants answered demographic questions and provided information about their political ideology and their views about climate change. Finally, at the end of the survey, participants were asked whether they would like to donate their MTurk earnings to an environmental organization chosen from a short list; this served as our behavioral response measure. Participants then sent their webcam recordings to the researchers for analysis with emotion recognition software.

We analyzed the webcam footage to identify different types of emotional responses to the stories and maps using the Affectiva emotion recognition algorithm implemented in iMotions 7.1. Finally, we compared the emotion recognition results to the emotion self-reports, demographic information, and the behavioral response measure. Preliminary results from an analysis of five participants indicate that emotional responses can be collected remotely by having participants turn on their webcams and upload their videos to be analyzed using facial emotion recognition software.
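
The comparison between the automated recognition output and the self-reports can be illustrated with a brief post-processing sketch. The Python code below is a minimal sketch under stated assumptions, not the actual analysis pipeline: it assumes the per-frame emotion scores exported from the recognition software are available as a CSV with hypothetical columns (participant, stimulus, and one column per emotion channel), alongside a second CSV of self-reported emotions.

    # Hypothetical post-processing sketch: average per-frame emotion scores for
    # each participant and stimulus, take the dominant detected emotion, and
    # compare it with the participant's self-report. Column names are assumptions.
    import pandas as pd

    EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

    def dominant_emotions(frames_csv: str) -> pd.DataFrame:
        """Average each emotion channel over all frames of a stimulus and keep
        the channel with the highest mean score."""
        frames = pd.read_csv(frames_csv)  # columns: participant, stimulus, <emotions>
        means = frames.groupby(["participant", "stimulus"])[EMOTIONS].mean()
        means["detected"] = means.idxmax(axis=1)
        return means.reset_index()[["participant", "stimulus", "detected"]]

    def agreement_with_self_reports(frames_csv: str, reports_csv: str) -> float:
        """Return the proportion of stimuli where the dominant detected emotion
        matches the self-reported emotion."""
        detected = dominant_emotions(frames_csv)
        reports = pd.read_csv(reports_csv)  # columns: participant, stimulus, self_report
        merged = detected.merge(reports, on=["participant", "stimulus"])
        return (merged["detected"] == merged["self_report"]).mean()

    print(agreement_with_self_reports("emotion_frames.csv", "self_reports.csv"))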
