Episode 03: Happiness, Sadness, Fear, Anger, Surprise, Disgust

“Paul Ekman worked with Silvan Tomkins and they worked together to go around various countries, America, Brazil, Chile, Argentina, Japan, and they asked people simple questions like, ‘Which of these faces would you pull if somebody jumped out at you?’ Then they decided from all of this that there were six emotions, and that’s where it began.”

In the 1970s, an American scientist gathered evidence that everyone in the world experiences the same six basic emotions. It’s a simple, compelling theory—and it might be wrong. Yet from emotion-detecting AI to Pixar’s Inside Out, the six basic emotions might be inadvertently making us feel less.

Written and presented by Chiquita Paschal and Ian Steadman
Produced by Sarah Myles


Featured guests

  • Dr Richard Firth-Godbehere, Centre for the History of Emotions, Queen Mary University of London
  • Rana el Kaliouby, co-founder and CEO of Affectiva

Show notes

Some additional information and resources not directly linked to within the transcript (below):

  • The “taxonomy of emotions” map from UC Berkeley’s Greater Good Science Center features 27 emotions.
  • Paul Ekman has his own “Atlas of Emotions” (made in collaboration with the Dalai Lama, no less).
  • Richard Firth-Godbehere has written at length about the past and future of emotion science, including about his worries that new technology could flatten human emotional diversity.
  • Both The Guardian and Quartz have covered the potential issues with emotion recognition.
  • Affectiva’s site has more information about their research and methods.
  • You can also read more about practical applications of emotion recognition technology, including road rage, insurance, passenger screening at airports, and digital assistants like Amazon’s Alexa:
An image from an Amazon patent demonstrating how Alexa may eventually respond to the emotional quality of your voice.

Transcript

IAN STEADMAN: Hey, Chiquita.

CHIQUITA PASCHAL: Hey, Ian.

IAN STEADMAN: I’m going to play you a sound.

[LAUGHING SOUND]

CHIQUITA PASCHAL: Why are you making creepy laughing noises at me?

IAN STEADMAN: I’m going to play it again. I want you to guess what emotion this represents.

CHIQUITA PASCHAL: Oh, that’s not you.

[LAUGHING SOUND AGAIN]

CHIQUITA PASCHAL: I feel creeped out.

IAN STEADMAN: That’s close. That was meant to be halfway between amusement and elation.

CHIQUITA PASCHAL: You’ve just got to get acquainted with this voice but it definitely, that sounded like somebody, like, creeping up behind someone. You know the person in the shadows?

IAN STEADMAN: Yeah. To me, it sounded a little bit nervous, almost, like the kind of laughter that you do when you’ve just put on a surprise party for someone and you’ve seen the reaction on their face and it’s not quite as happy as you were expecting and you’re worried that you might actually have annoyed them.

[LAUGHING SOUND AGAIN]

CHIQUITA PASCHAL: You’re right, it’s really, “You agree with me, right?”

IAN STEADMAN: I’m going to play you some more, now, I’ll explain why in a bit.

[GASP NOISE]

CHIQUITA PASCHAL: That’s Glenn Close’s reaction to Billy Porter’s tuxedo, floor length velvet gown.

IAN STEADMAN: They describe it as a mixture of surprise and awe.

[GASP NOISE]

IAN STEADMAN: Which I guess is kind of what that is? I feel like you won that one.

CHIQUITA PASCHAL: Can there be a ding, ding, ding?

[DING DING DING NOISE]

CHIQUITA PASCHAL: OK, this is a fun game, I like this.

IAN STEADMAN: Let’s go for, now, any children listening, or any parents with children you might want to, I’m scared to go over this one but we’ll find out now how inappropriate it is, I guess, so, sorry in advance.

[AWW NOISE]

IAN STEADMAN: That wasn’t what I expected at all.

[AWW NOISE AGAIN]

CHIQUITA PASCHAL: That doesn’t make sense in the context of the advisory you just gave because that sounds like someone seeing something kind of cute and being like, “Aww.”

IAN STEADMAN: It’s, the label they’ve given it doesn’t match what I thought it was going to be. It was labelled as a form of arousal.

CHIQUITA PASCHAL: Ew, that was not arousal, that was anti-arousal.

IAN STEADMAN: Yeah, I mean, it’s arousal mixed with sympathy.

CHIQUITA PASCHAL: No, no, those things should not be mixed.

IAN STEADMAN: Yeah, that feels kind of like oil and water, doesn’t it? And finally, I don’t know exactly how easy it’s going to be to guess what it is, but…

[UH NOISE]

CHIQUITA PASCHAL: That one sounds like it needs the parental advisory.

IAN STEADMAN: That is pain.

[UH NOISE TWICE AGAIN]

CHIQUITA PASCHAL: Ew, stop. I’m terrible at this game. I think from now on you need to augment your speech with this cast of supporting characters. They can be your own personal Greek chorus, that way you can still be super British and withhold all of your emotions, and then you can just outsource it to these automatons.

[WOO HOO NOISE]

IAN STEADMAN: That’s enough of that. Let me explain this. This is a taxonomy of feelings, it comes from the Greater Good Science Center at the University of California, Berkeley, and it is a way of mapping 24 different emotions, like anger and fear. It takes them and it tries to map them on, it looks like a world map. This is a spectrum, if you will. You can go from pure anger to pure sadness but you can kind of go between the two points as well, and it’s pretty fun to move your mouse around and have a play with them.

I think it’s interesting because this game we’ve been playing is kind of silly, right? It’s clearly a bit ambiguous what these sounds are. We can kind of get the ballpark, but really saying that this emotion that someone is expressing, this noise, this, “Ha, ha, ha,” “hee, hee, hee,” really means this, this, or this, it kind of depends on the person you’re asking, right?

CHIQUITA PASCHAL: All these people sound shady but it’s interesting because I also kind of pick up or realize my own biases.

IAN STEADMAN: So, have you seen the movie, Inside Out?

CHIQUITA PASCHAL: No, I’ve seen no movies.

IAN STEADMAN: In this film, it’s a kids’ film, everyone is driving around with these people living inside their heads.

CHIQUITA PASCHAL: There’s little people in there?

IAN STEADMAN: Yeah, and the people represent single emotions, like fear, or anger, or sadness. That’s not how people literally work but it’s kind of a simplified model of what many emotion scientists do actually think and this isn’t just a boring intellectual exercise. Today, we’re going to be looking at a particular story, a particular scientific theory which has kind of shaped a lot of the second half of the 20th century. It’s a theory about how humans experience and express emotions.

Welcome back to This Will Change Your Mind, the show where we unpack how other people changed your mind.

CHIQUITA PASCHAL: I’m Chiquita Paschal.

IAN STEADMAN: And I’m Ian Steadman.

[INTRO THEME]

RICHARD FIRTH-GODBEHERE: I am a doctor, I guess. Richard Firth-Godbehere and I’m part of the Centre for the History of Emotions at Queen Mary University of London.

IAN STEADMAN: So, I was wondering, could I ask you to start by telling us what an emotion is?

RICHARD FIRTH-GODBEHERE: No.

CHIQUITA PASCHAL: Let’s start all the way at the beginning.

RICHARD FIRTH-GODBEHERE: You can go back to Aristotle. Aristotle characterized emotions in a very different way to how we understand them now. In the early modern period, the beginnings of the enlightenment and the scientific revolution, as some people call it, there were lots of people talking about emotions then, there were books categorizing emotions, categorizing passions as they would have called them.

So, there was that, but in the world of sciences, really, you’re looking at your anthropologists, people like Margaret Mead who, in the 20s, went to American Samoa, to an island I can’t pronounce, I think it’s Ta‘ū, T-A-U, and she was trying to find out about emotions. And she believed that people on that island didn’t really experience emotions in the same way that they did in the West, that they didn’t have these what you might call moral emotions, of disgust, and rage, and anxiety, and that sort of thing. So, she thought, no, they were culturally-based. That, before the 60s, was what emotion research was, it was that sort of thing.

Now, in the 60s, it became a bit more interesting, scientific-wise. Silvan Tomkins first started to think, “Are these evolved?” He got massively into Charles Darwin, read The Expression of the Emotions in Man and Animals, thought, “that’s wonderful,” and he started to come up with ideas about how emotions might be evolved, how we might find out what emotions are evolved and how we might understand them. And then he met the granddaddy of emotion science, Paul Ekman. Paul Ekman worked with Silvan Tomkins and they worked together to go around various countries, America, Brazil, Chile, Argentina, Japan, I think they did France and a few others, and they asked people simple questions like, “Which of these faces would you pull if somebody jumped out at you?” or something like that, and then they decided from all of this data of people pointing at faces that there were six emotions and that’s where it began.

IAN STEADMAN: So, just six emotions?

RICHARD FIRTH-GODBEHERE: Six basic emotions. There were many more but they called them compound emotions. There were six we all had, they thought. The six they came up with were happiness, anger, sadness, disgust, surprise, and fear. Basic emotions. Happiness, anger, sadness, disgust, surprise, and fear.

IAN STEADMAN: It starts out as kind of innate, moves towards a culturally relativistic framework with Margaret Mead, and then it moves back to Paul Ekman going, “No, there’s six, they’re fundamental.”

RICHARD FIRTH-GODBEHERE: And they didn’t think that Mead was right. And so they first of all tested it, and then they thought to themselves, “Hang on, these cultures that we’ve looked at, they’ve all got access to Western media, they all know about each other, there can be cross-pollination here, how do we prove this?”

And so, a quite famous, increasingly infamous experiment was done where they went to Papua New Guinea, to the South East Highlands, into the Okapa Valley, and they met the Fore people. Now, the Fore people had supposedly hardly ever been seen by the West. There had been a couple of missionaries there and that’s about it. The only way to get through to them was a very, very dodgy track and most jeeps could only get halfway down and then you had to walk, you know. So, they were great, they were a blank canvas. Let’s do this experiment with them, let’s see what happens.

And so Ekman and Friesen, who went on their wonderful journey to see the Fore people, took the same six facial expressions and got translators who they trained quite rigorously to try and make sure they were asking exactly the same little short statements. Then, they tried to find people who had definitely never seen the West before and they found about 189 adults and 130, 140 children, something like that, and they tested them in exactly the same way. And, lo and behold, got exactly the same faces. So, that was it, they’d proven it, they thought. They definitely, they’d nailed it, there were six basic emotions, case closed, that was it.

CHIQUITA PASCHAL: This seems kind of tricky just because of all the relativism involved. I guess there are all these sorts of frameworks that the researchers wouldn’t have understood about culture, and how it’s kind of tied to emotions, at the time that they were doing this study. How did that affect our understanding of those results in retrospect?

RICHARD FIRTH-GODBEHERE: There has been a lot of research done recently that has overturned the research. For a start, there are problems with Ekman and Friesen’s study. So, firstly the Fore people did know Westerners, Westerners had been there fairly often before they got there. So, how honest the Fore people, who were all being paid by the way, were when they said they had no idea what was going on in the West, we don’t know. The other problem is that the translators seem to have translated things into things that would cause people to pick those facial expressions, not necessarily the same sentences. It was one of the big questions. How can they have translators if nobody’s been there before?

But, also, another problem is the photos. If you look them up, just put “Ekman emotion faces” and you’ll see them, they will appear in Google, they’re really exaggerated. So, when you run the same experiment, as has been done, with normal faces pulling the faces you’d expect in the West, it doesn’t work out so well. And if you also run this experiment with lots and lots of different facial expressions and ask different cultures to organize them themselves, into groups of emotions, it’s different again. When you think about the context of emotions, we need to think about more than just faces.

CHIQUITA PASCHAL: Have there been any non-Western or non-white-led research studies where a Western idea or conception of the emotional framework wasn’t at the center of it?

RICHARD FIRTH-GODBEHERE: There is the problem that the Ekman paper does still appear in everything you read. There are a couple of quite interesting papers that have been done by some Chinese researchers but, still, you read them and the introduction starts by talking about Ekman and Friesen.

IAN STEADMAN: This was 1971 and, presumably, Ekman didn’t just do this one study?

RICHARD FIRTH-GODBEHERE: Yeah, he did other studies. Mostly, he went into something called micro-expressions and the idea is that, even if you try to suppress an emotion, you will have a tiny little expression, just around the eyes or somewhere and you can’t stop that. That’s completely evolved.

It’s to be noted that Ekman himself doesn’t believe there are six basic emotions anymore. He still believes that there are basic emotions but the number varies over time. Kind of refines it. I think he’s on 12 at the moment. It depends whether you include this one he has called “neutral,” which I still can’t work out what that is.

IAN STEADMAN: Just neutral as an emotion?

RICHARD FIRTH-GODBEHERE: Yeah. You see it in scientific papers all the time. We’re studying emotions, happiness, sadness, and neutral. Is that asleep? I don’t know.

IAN STEADMAN: That’s very strange. So, maybe now is a good time to ask, what have been the wider consequences of this theory being so long-lasting? How has it seeped out into the rest of culture?

RICHARD FIRTH-GODBEHERE: It’s still being used to this day to manipulate and upsell. There are advertising companies and marketing companies that use this kind of science all the time. There are some really sinister, interesting, sinister, however you want to take it, things out there that want to track your face while you’re watching Facebook or looking at adverts and then fine-tune the advertising based on the emotions you had.

One thing Paul Ekman did is he ran a training programme for the TSA over in America, called SPOT, the Screening of Passengers by Observation Techniques programme, and trained human beings to spot micro-expressions in people in the airport so they could pull them over because they thought they were going to do something dodgy.

CHIQUITA PASCHAL: Oh boy.

RICHARD FIRTH-GODBEHERE: And it was a catastrophic failure.

CHIQUITA PASCHAL: What kinds of ways were they failing? Were they just wildly off or misinterpreting things?

RICHARD FIRTH-GODBEHERE: You can’t read people’s micro-expressions. Even if facial expressions gave away emotions the same way for all cultures, you still couldn’t see micro-expressions, because the whole point of a micro-expression is that it lasts a few milliseconds. So, a human can’t see that.

CHIQUITA PASCHAL: Yeah. I mean, I’m from Philly and we have a permanent scowl on our face. You don’t smile. So, what happened after that? Is there a silver lining?

RICHARD FIRTH-GODBEHERE: The silver lining is these things don’t work. They keep trying them and they keep not working. Firstly, how you express emotions is contingent upon your upbringing and your culture. One example from my research is that before about 1755, before Johnson’s Dictionary, really, there wasn’t really such a thing as disgust, there were all these other things, aversion and eschewment and avoidance, and lots of different things that all coalesced into, “Let’s take that thing we do when we eat something that’s horrible and let’s make it this moral emotion.”

Darwin, and then Ekman, say it’s a basic emotion that all creatures have. I think, probably, all mammals don’t eat things that could poison them, you know? That’s not the same as being disgusted by a murderer, and that’s a problem, that difference between an internal state and an external expression. One of my fears is that we’re being squashed into this idea that there are six basic emotions, and all of this technology is pushing everybody into them, which, of course, will reinforce the idea that they exist, because everybody will have had to express themselves in a certain way and pull certain facial expressions so as not to get pulled over in an airport and searched.

IAN STEADMAN: So, it’s kind of like, remember in the late 90s when you had search engines like Ask Jeeves where we would ask them actual questions and there were other search engines that were like Google, which were just like, “What date American revolution?” You just typed the thing, and the latter version won because we just learned to talk to the machines the way machines understood. Is that essentially what you’re saying here, it could happen with emotions as well?

RICHARD FIRTH-GODBEHERE: I think so, yeah. I think we can see the human race becoming homogenized with the same set of emotions and emotional responses as we become more global and more international, you know. It’s just, see, I like different emotions.

CHIQUITA PASCHAL: Yeah, me, too.

RICHARD FIRTH-GODBEHERE: I was in Tunisia once and there were two people playing chess and they actually started shouting at each other and I asked my mum who lived there for years, “What’s that about?” And she was like, “Oh, he was just saying it was a great game.” OK, fine. “Brilliant, what a great move that was, didn’t see it coming. Do you want a drink?” is what they’re actually saying. I thought they were about to hit each other.

IAN STEADMAN: So, what can we do to stop this? You said already, these systems, these technologies that use this six emotion theory don’t work, but people seem to be pretty intent on trying to make them. So, what can we do?

RICHARD FIRTH-GODBEHERE: Well, there is some good news.

CHIQUITA PASCHAL: OK, alright, yeah, I’m ready for that.

RICHARD FIRTH-GODBEHERE: There are starting to be some companies out there that are starting to think a bit more about other theories of emotion.

IAN STEADMAN: That’s interesting and potentially reassuring, but it worries me that you have this framework that seems very problematic, that’s lasted this long and is still being used to design products and services to this day. So, I think we need to speak to someone, at this point, who’s doing that to find out exactly what they’re thinking about all of this.

RANA EL KALIOUBY: I’m Rana el Kaliouby, co-founder and CEO of Affectiva. So, the way we do it is we collect a ton of data, so far we have over four billion facial frames of people emoting as they go about their daily activities, with their permission of course.

We’re an MIT Media Lab spin-out. We build technology that can read and understand human emotions, cognitive states, and then make that available to devices and our technologies and digital experiences in a way that improve our connection with technology, and thereby our connections with each other as humans, as well.

IAN STEADMAN: So, that’s a huge data set. Sorry, what was that number again, four billion, did you say?

RANA EL KALIOUBY: Four billion facial frames, yeah. It’s a combination of people watching content online, driving their cars around during their daily commute, and it’s a very global data set. So, we have data from about 87 countries around the world. So, these are different ways where you can get to this ground truth of, OK, what exactly are you feeling?

We’re very interested in codifying what’s happening on the face because that’s the signal, right? And then, of course, we are interested in mapping what’s happening on the face to your internal state of mind, right? Or state of heart. The very first product we brought to market was in advertising testing. So, right now, we work with 25 percent of the Fortune Global 500 companies to test their ads worldwide, where they can see exactly how people responded to different product categories or around the world, right, so if you test the same shampoo ad in the US versus Brazil, versus China, how do people respond?

IAN STEADMAN: Are you creating like a, not a six basic emotions database for example, but maybe 10, 20, 50, 1000 different emotional categories?

RANA EL KALIOUBY: Yes.

IAN STEADMAN: How finessed is your classification?

RANA EL KALIOUBY: So, right now, our data goes through the annotation team and the annotation team annotate it for these underlying facial expression building blocks, but they also annotate for expressions of the six basic emotions as well as these other states, like cognitive overload. Drowsiness. We have at least three different levels of drowsiness that we codify for and, basically, the way this works is, say you want to train the computer to recognize a smirk, which is an asymmetric mouth movement often indicating a negative emotion, like you’re sceptical or you’re doubtful.

And so, the way we do this is we will find or curate from our data hundreds of thousands of examples of people smirking and then hundreds of thousands of examples of people not smirking and the more diverse these people are, the better the algorithm is going to be. We feed that into our deep learning networks and then we test it. So, we then feed it examples of smirks that it’s never seen, and we quantify the accuracy and, often if it’s at less than 90 percent accurate, we iterate, we keep training and retraining the model until it gets there.
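(For anyone curious what that curate-train-test-iterate loop looks like in practice, here is a rough Python sketch, purely for illustration. It is not Affectiva’s actual pipeline: the load_smirk_examples helper, the made-up feature vectors, and the small scikit-learn network are stand-ins for the real curated data and deep learning models she describes.)

    # Rough sketch of the loop described above: curate labelled examples,
    # train a classifier, test it on examples it has never seen, and keep
    # retraining until held-out accuracy clears a target threshold.
    # NOT Affectiva's pipeline; `load_smirk_examples` and the feature
    # format are made up purely for illustration.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    def load_smirk_examples(n=2000, dim=128, seed=0):
        """Hypothetical loader: face-crop feature vectors plus smirk / no-smirk labels."""
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(n, dim))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in labels
        return X, y

    X, y = load_smirk_examples()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    TARGET = 0.90  # "if it's at less than 90 percent accurate, we iterate"
    accuracy, attempt = 0.0, 0
    while accuracy < TARGET and attempt < 5:
        attempt += 1
        # In practice this is where more, and more diverse, training data would be
        # curated; the sketch just retrains a slightly bigger network as a placeholder.
        model = MLPClassifier(hidden_layer_sizes=(64 * attempt,), max_iter=500,
                              random_state=attempt)
        model.fit(X_train, y_train)
        accuracy = accuracy_score(y_test, model.predict(X_test))  # unseen examples
        print(f"attempt {attempt}: held-out accuracy = {accuracy:.2f}")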

IAN STEADMAN: You start from the premise of these are smirks and these are not smirks with these pictures. What about when the definition of a smirk is different from one part of the world to the other?

RANA EL KALIOUBY: So, our position is that facial expressions are, by and large, universal. However, there are these cultural norms or display rules and, actually, Ekman has published a lot about this. You know, I’m originally from the Middle East, I’m Egyptian, and people are not always forthcoming with how they feel, especially if it’s a negative emotion and we certainly see that around other areas in the world.

The very first type of work we did where people were responding to ads, they were less comfortable sharing or expressing how they truly feel about an ad, especially in the presence of strangers. So, our job was to really make it comfortable for people to authentically share how they’re feeling.

IAN STEADMAN: Have you ever come across emotions that are defined specifically by a certain culture, or country, or nation, or language, but don’t really have analogs outside of that?

RANA EL KALIOUBY: We haven’t yet. I imagine we will as the taxonomy of emotions we’re capturing becomes more and more complex. I will say that I remember a few years ago, we were visiting one of the top automotive manufacturers in Japan and we were posing some of the use cases. So, we talked about distracted driving and drowsy driving, and we talked about road rage, and the Japanese executives in the room basically said, “Oh, no, no, no, no, we don’t ever talk about road rage, that’s just not accepted in our culture.”

So, that really made me think. You’re absolutely right, there are some emotions that are more socially acceptable, or the nuance of the emotion could differ from one culture to the other, but it’s early days for the technology.

IAN STEADMAN: So, this is a, this would be a car that would detect when you had road rage?

RANA EL KALIOUBY: Correct.

IAN STEADMAN: What would happen then?

RANA EL KALIOUBY: The driver monitoring systems, if you can imagine a camera that’s kind of in the steering wheel column or maybe in the rear view mirror of your car, it can flag if you are distracted. We see a lot of examples of people on their phones, texting, or if you’re extremely tired, or you’re angry, you’re on a call and you’re visibly angry or you’re angry at some other driver.

The question is how does the car then respond? So, some vehicles if it’s semi-autonomous, maybe the vehicle says, “I’m going to take control and drive this car.” It can give you an alert. It can engage, for example, if you’re super tired, it can engage with the conversational interface in the vehicle, like a Siri or an Amazon Alexa can start chatting with you to keep you awake. There’s all sorts of possibilities.

IAN STEADMAN: Are there some other practical examples like that, that you could talk about, of where this technology is being applied?

RANA EL KALIOUBY: We have partners in the mental health space that are exploring how to use our technology to, for example, help individuals on the autism spectrum.

IAN STEADMAN: So, it’s kind of translating emotions?

RANA EL KALIOUBY: I actually like that. I like how you said that because I do think we are in the business of translating emotions, that’s what we’re doing. Whether we’re translating it between human and machine or human to human, at the end of the day we’re trying to bridge this gap and translate these emotion signals that right now just disappear in cyberspace.

IAN STEADMAN: But is there anything you’re worried about, that could go wrong with any of this? Algorithms can replicate the biases or assumptions of those who use or create them and the information that’s fed into them, and this stuff can be really unpredictable. When you’re talking about putting this into a mental health context, what kinds of red flags are you looking for?

RANA EL KALIOUBY: So, I’ll start with the ethical development. My biggest concern around this technology is accidentally building bias into these algorithms or into the data. So, you know, there’s been a number of news articles around how facial recognition technology discriminates against people of color, especially women of color, and so what we do to mitigate this kind of bias is, you have to be very thoughtful about the data you’re collecting, how you’re sampling the data. There needs to be an equal representation of gender, ethnic groups, age ranges, people wearing glasses or people with facial hair, or wearing the hijab.

And then, on the deployment side, one of our core values as a company is, we kind of understand that emotions are so personal, right? They’re perhaps the most personal data you have or own. We absolutely have to acquire consent and opt-in, explicit consent, and not even opt-out, which meant that there were some industries we decided to stay away from, like security and surveillance, and lie detection, even though we are routinely asked to apply the technology in these industries and it’s probably very lucrative.

IAN STEADMAN: This kind of goes back to what you were talking about a little bit earlier, after Paul Ekman, because I wanted to ask about stuff like the SPOT program, which is one of these situations where this kind of science seems to be, well, from my perspective, maliciously applied. So, do you think it’s possible ever to apply this science, in a way, you’re always going to have to have safeguards?

RANA EL KALIOUBY: We can’t forget that humans are very biased. So, I’ll give you one use case of our technology. We are partnered with a company called HireVue. They are in the recruiting business. So, instead of sending a Word document or a Word resume, you send a short video of yourself talking about what you’re passionate about, your prior experiences, but of course video is very time-consuming to watch. So, it’s not realistic to have someone whose profession it is to watch all of this video content.

And so they use our technology. The algorithm takes a first look at the candidates, and what is fascinating is, this algorithm is gender and ethnicity blind, it’s looking at your non-verbal communication, whereas humans are very, very biased when it comes to hiring. HireVue partnered with Unilever and they did a global study using this technology and they found that it increased the diversity of the population that got hired by 16 percent.

IAN STEADMAN: This actually segues quite nicely into something that came up in talking to Richard Firth-Godbehere, who’s a historian of emotion science. He was wary of the possibility that, by embedding emotional recognition technology into devices or services, or products in any way, you kind of codify what emotions are at this point in history, and it’s going to be, in a way, teaching people to write for the algorithm, rather than the algorithm reflecting what the people think and feel. Do you think that that’s a legitimate worry?

RANA EL KALIOUBY: I don’t know. I think, well, the way you train these algorithms is a combination of both the machine learning scientist or the people who are codifying these algorithms but, also, the data, right? So, I would think the data would evolve over time and reflect how society communicates. Yeah, it’s a good question, though. I’m not sure.

IAN STEADMAN: It’s very philosophical, I know.

RANA EL KALIOUBY: I’m super excited because I think we’re in the midst of a human-machine interface transformation where machines are becoming more conversational, more perceptive, and will have more IQ and empathy. So, that will mean that, as humans, we’ve always communicated with machines on their terms, using their language and finally, now, machines are going to be able to communicate with us just the way we communicate with one another. Today, the gold standard in how we measure and quantify mental health conditions is using a survey and it’s very, very subjective, unreliable data but, with emotion AI, we have an opportunity to bring a lot of objectivity. We have an opportunity to track that data longitudinally. So, I just think there’s huge opportunity around mental health. It’s a very untapped area with a lot of potential to do good.

IAN STEADMAN: I’ve got to be honest, I’m still wary.

So, how about you, are you reassured?

CHIQUITA PASCHAL: To be honest, not completely, no. I think it’s promising, I mean I see the optimistic side of it and I’m really, I would say, relieved to know that companies like Affectiva that are working within this space are trying to do so from an ethical standpoint, but I think, over time, interests can change. The values of a company can change, leadership of a company can change, so many things.

IAN STEADMAN: There are also going to be companies out there that aren’t as scrupulous. That’s the thing that worries me.

CHIQUITA PASCHAL: Yeah. I’m tentatively on edge but cautiously optimistic.

IAN STEADMAN: Tentatively on edge. Yeah, yeah. It feels difficult to escape, in a way. In the way that surveillance in general, online and in real life, has been harder and harder to escape.

CHIQUITA PASCHAL: Even with social media, too, all the permissions and things that we give out, I think there’s a kind of a mass denial, or just like a mass self-denial? We have to, not necessarily lie to ourselves, but be wilfully ignorant. In making this choice of convenience, we have to accept that we no longer live in an age where privacy exists the way that it has in the past.

IAN STEADMAN: Yeah. And it’s also, I guess we’ve had to adapt to this idea that when it comes to losing privacy, when it comes to surveillance at least, and when it comes to a lot of the stuff that’s being talked about here, when it comes to a car that scans your face and figures out if you’ve got road rage and takes control, for example…

CHIQUITA PASCHAL: I’ve got resting bitch face, how’s that going to work out for me?

IAN STEADMAN: …there’s an implicit demand of trust there. When there’s a CCTV camera that’s pointing down a street, it’s recording stuff, but you understand that there’s going to be someone watching that footage and making a decision, as a human, about the stuff they see on it. This is a scientific theory that is kind of obtuse. If I just read a press release that says a car is going to read my emotions, I assume that the science it’s based on is going to be pretty sound.

CHIQUITA PASCHAL: That’s the thing. All of our technology, the foundations of it, were formed by humans. I guess, what it leaves me wondering is, how much do we actually program our biases into our machines?

IAN STEADMAN: Yeah, because machines are as biased as the people who make them and, also, I just can’t get away from the fact that when we spoke to Richard, he was talking about how emotions didn’t used to exist.

CHIQUITA PASCHAL: I know, that was wild, I was like, “What?”

IAN STEADMAN: So, if it’s just this case of a shifting, mutable definition for something, how can we really claim to be able to detect it and analyze it so scientifically? Emotions might not even exist in a way that makes sense to apply taxonomy to. It might be something else that we’re measuring.

CHIQUITA PASCHAL: It seems to me that a lot of these emotions, or how we codify emotions, is based on our cultural value system. There are probably cultural value systems that we can’t understand that maybe predate language or we don’t have a record of. You know, when I think about the future, maybe Americans will evolve out of road rage somehow.

IAN STEADMAN: So, the take away here seems to be that, if you see someone claiming that they have a machine that can read your emotions, be skeptical and ask them about their taxonomy. It’s the most important thing.

CHIQUITA PASCHAL: And, roll credits. OK.

[END CREDITS]