Artificial Intelligence Is Misreading Human Emotion

By Kate Crawford, The Atlantic

At a remote outpost in the mountainous highlands of Papua New Guinea, a young American psychologist named Paul Ekman arrived with a collection of flash cards and a new theory. It was 1967, and Ekman had heard that the Fore people of Okapa were so isolated from the wider world that they would be his ideal test subjects.

Like Western researchers before him, Ekman had come to Papua New Guinea to extract data from the indigenous community. He was gathering evidence to bolster a controversial hypothesis: that all humans exhibit a small number of universal emotions, or affects, that are innate and the same all over the world. For more than half a century, this claim has remained contentious, disputed among psychologists, anthropologists, and technologists. Nonetheless, it became a seed for a growing market that will be worth an estimated $56 billion by 2024. This is the story of how affect recognition came to be part of the artificial-intelligence industry, and the problems that presents.

When Ekman arrived in the tropics of Okapa, he ran experiments to assess how the Fore recognized emotions. Because the Fore had minimal contact with Westerners and mass media, Ekman theorized that their recognition and display of core expressions would prove that such expressions were universal. His method was simple. He would show them flash cards of facial expressions and see whether they described the emotion as he did. In Ekman’s own words, “All I was doing was showing funny pictures.” But Ekman had no training in Fore history, language, culture, or politics. His attempts to conduct his flash-card experiments through translators foundered; he and his subjects were exhausted by the process, which he described as like pulling teeth. Ekman left Papua New Guinea, frustrated by his first attempt at cross-cultural research on emotional expression. But this would be just the beginning.