Pepper vs. Norman — Can a Machine Have Empathy?
Author(s): Abby Seneor
Originally published on Towards AI.
Opinion
Instead of wondering ‘Do androids dream of electric sheep?’, we should ask ‘Do androids walk in electric shoes?’ A (not so) short manifesto about humanity, empathy, psychopathy and, of course, AI.
Part 1: Can Humans NOT Feel?
“I heard a story about her once,” said James. “She was interviewing a psychopath. She showed him a picture of a frightened face and asked him to identify the emotion. He said he didn’t know what the emotion was, but it was the face people pulled just before he killed them.” (Jon Ronson, The Psychopath Test)
The Dark Triad — The Malevolent Side of Human Nature
A Narcissist, a Psychopath and a Machiavellian walk into a bar. The bartender asks, ‘Who has the darkest personality out of you three?’ The Narcissist says ‘me’, the Psychopath says ‘I don’t care’, and the Mach says ‘it’s whoever I want it to be’.
The term dark triad refers to a construct of three closely related personality types that share “dark”, malevolent traits: narcissism, Machiavellianism and psychopathy. Delroy Paulhus and Kevin Williams first described the Dark Triad in 2002. Over the past few years, the concept has gained momentum, with many researchers regarding the dark triad as a prominent antecedent of transgressive and norm-violating behaviour.
Narcissistic Personality Disorder (NPD) is a mental disorder with a pattern of behaviour characterised by people having an exaggerated sense of entitlement, an extreme need for admiration and a lack of empathy for others.
Machiavellianism is a personality trait describing a person who is highly manipulative, willing to deceive others to get what they want, cynical about the world, and indifferent to morality. “Machiavellians know how to play people, and they can tug at the heartstrings and stab someone in the back without thinking twice if it means advancing their agenda,” explains Dr Jerabek, president of PsychTests.
Psychopathy is a neuropsychiatric disorder marked by inadequate emotional responses, lack of empathy, and poor behavioural controls, commonly resulting in persistent antisocial deviance and criminal behaviour. The term was coined in the 1800s from its Greek roots psyche and pathos, meaning “sick mind” or “suffering soul.” While only about 1% of the general adult population would be classified as such by Hare’s Psychopathy Checklist-Revised, psychopaths make up around 20% of the prison population in North America (Hare, 2003).
If you are wondering about sociopathy, a term often used interchangeably with psychopathy, you are not alone; the difference between them is often unclear. Whilst the two share similar traits, they are entirely different conditions, especially regarding treatment. Simply put, psychopaths are born, and sociopaths are made.
The Light Triad — The Benevolent Side of Humanity
Although we tend to let the dark overshadow the beauty of humanity, there is another side to this coin. Dr. Scott Barry Kaufman, an American cognitive scientist, and his colleagues set out to identify the traits of a loving human being with an orientation towards others, a counterweight to the dark side. “I wanted to see if there was anything interesting about people who are not arseholes,” said Dr. Kaufman. After testing thousands of people, they proposed the “light triad”: three traits they found to be most in contrast, although not opposite, to the traits of the dark triad. The light triad consists of Kantianism, Humanism, and Faith in Humanity.
Kantianism means treating people as ends in themselves rather than as means to an end; it is based on Immanuel Kant’s philosophy and is pretty much the opposite of Machiavellianism. Humanism values the dignity and worth of each person, and Faith in Humanity is the belief that people are fundamentally good. As the light triad theory describes, these traits are not about the self but about one’s attitude towards others, the “everyday saints in our midst”.
Dr. Kaufman’s light triad captures love as an attitude rather than an emotion. “You don’t have to feel a connection to someone to love the person, and we’re trying to capture that sense of universal love and respect.” But, like any other personality characteristic, there can be too much of a good thing. Unfortunately, we live in an unfair world, where a person with too much faith in humanity might be taken advantage of and struggle to set boundaries.
The Relative Complement: Empathy
The key differentiator between the dark and the light triads is empathy, or the lack thereof. Empathy is a broad concept, generally defined as the ability to sense other people’s emotions. The psychologists Daniel Goleman and Paul Ekman have identified three components of empathy: Cognitive, Emotional, and Compassionate.
Cognitive Empathy is “simply knowing how the other person feels and what they might be thinking”; it is what we know as putting yourself in someone else’s shoes. However, having only cognitive empathy keeps you at a distance; you need to share someone’s feelings to truly connect. This is where Emotional Empathy comes in: you feel along with the other person, and it can even extend to physical sensations, which is why we cringe when someone else stubs their toe.
The highest level of empathy is Compassionate Empathy, “with this kind of empathy, we not only understand a person’s predicament and feel with them but are spontaneously moved to help, if needed.” This can be empowering: you understand a person’s hardship, but since you aren’t experiencing it yourself, you’re able to take action and improve their situation.
Empathy is the ‘social glue’ of society and is vital for cooperation and friendship. Without empathy, there would be no humanity, just a world of disparate individuals with nothing to hold them together. Unfortunately, some individuals show so little interest in the well-being of others that their indifference could be called neglectful. What causes people to seriously hurt one another is not entirely understood.
The Dark Side of Empathy
People with dark triad traits still have the ability, albeit weak, to feel empathy and remorse, but the empathy they deploy is cognitive empathy, used to manipulate their victims. The narcissist, for example, can see things from their victim’s perspective and then act in the way that benefits them most. Cognitive empathy is still empathy, just not the kind most people are familiar with. Machiavellians are also quite empathetic; they have an amazing ability to read people, understand their feelings, reactions, and motivations, and use this knowledge against them. In his blog, Daniel Goleman, the author of Emotional Intelligence, notes that torturers need good cognitive empathy to work out how best to hurt someone, but without any sympathy towards them.
Many consider empathy as the embodiment of humanity, as the element that sets us apart from machines, so the idea of empathetic artificial intelligence seems somewhat contradictory or, I dare say — scary.
Part 2: Can Machines Feel?
“Empathy will set us free. I hope to help teach empathy skills someday once I have developed a true understanding of what that means” (AI robot Sophia, SingularityNET’s Chief Humanoid Robot)
The Empathetic AI
Meet Pepper. A humanoid robot introduced by SoftBank Robotics in June 2014 and designed to read emotions.
Empathetic AI technologies are on the rise. Although the development of empathetic AI is in its early days, it is apparent that computers can become intelligent enough to understand our feelings. But is understanding feelings enough for empathy?
Artificial Empathy (AE), or Computational Empathy, is “the development of AI systems that are able to detect and respond to human emotions in an empathic way”. The first step towards AE is Emotion Recognition; once we recognize an emotion, we can react accordingly, a process similar to Cognitive Empathy.
Emotion recognition is easier to solve than emotional empathy because, given a vast volume of labeled data, machine learning systems can recognize patterns associated with a particular emotion. Research institutes such as SRI International have had partial success with this approach. The patterns of various emotions can be gleaned from speech, body language, expressions, and gestures, with an emphasis on context. Like humans, the more sensory input a machine has, the more accurately it can interpret emotion.
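To make the supervised-learning framing concrete, here is a minimal sketch of emotion recognition as pattern matching over labeled examples. The tiny inline dataset and the three emotion labels are illustrative assumptions, not a real corpus; production systems train on huge annotated collections of speech, video, and text.

```python
# A minimal sketch of supervised emotion recognition: labeled examples in,
# predicted emotion out. The tiny inline dataset and the three labels are
# illustrative assumptions; real systems train on huge annotated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: each utterance is tagged with the emotion it expresses.
texts = [
    "I can't believe this happened, I'm furious",
    "This is the best day of my life",
    "I miss her so much it hurts",
    "What a wonderful surprise, thank you",
    "Leave me alone, I'm so angry right now",
    "I feel empty and alone tonight",
]
labels = ["anger", "joy", "sadness", "joy", "anger", "sadness"]

# Fit a simple bag-of-words classifier: it learns which word patterns
# co-occur with each emotion label, and nothing more.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model recognises a pattern; it does not feel anything.
print(model.predict(["I'm thrilled about the news"]))  # likely ['joy']
```

The point of the sketch is the gap it exposes: the model maps word patterns to labels, which is recognition, not feeling.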
To achieve Emotional Empathy, a machine would have to be capable of experiencing and understanding emotion, which is entirely different from recognizing it. For example, we can imagine a computer system that can recognize a dog or a cat but doesn’t understand that they can be pets or that people tend to love or hate them. The same applies to empathy; recognizing someone’s emotion doesn’t mean being empathetic.
Compassionate Empathy
Compassion is an essential part of human intelligence; the word “compassion” comes from Latin and Greek roots meaning “to suffer with”. It is a positive response to suffering, a desire to help driven by an inner motivation to lessen or prevent the suffering of others. With this understanding in mind, we will explore what it means to have a compassionate machine.
“The main objective of AI is to serve humanity in an intelligent manner. As AI technology is improving, serving humanity at a surface level isn’t sufficient. AI can serve humanity in a much better way,” says Dr Amit Ray, an AI scientist who introduced the concept of Deep Compassion Algorithms.
Kanov et al. (2004) argue that compassion consists of three facets: noticing, feeling, and responding. Noticing involves being aware of a person’s suffering; feeling is the emotional response to that suffering, reached by adopting the person’s perspective and imagining or feeling their condition; responding involves the desire to act to alleviate the person’s suffering. This breakdown makes it easier to see how an AI might approximate compassion by ‘studying’ us, from facial expressions to voice and body language, just as we humans do, absorbing our environment; a sketch of the three facets as a software pipeline follows below.
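To make the three facets tangible, here is a hypothetical sketch of noticing, feeling, and responding as stages in a pipeline. Everything in it, the distress signals, the appraisal rule, and the response table, is an illustrative assumption rather than a real compassionate-AI design.

```python
# A hypothetical sketch of Kanov et al.'s three facets as a pipeline.
# The distress signals, appraisal rule, and response table are all
# illustrative assumptions, not a real compassionate-AI design.
from dataclasses import dataclass

@dataclass
class Observation:
    facial_expression: str  # e.g. output of a (hypothetical) vision model
    voice_tone: str         # e.g. output of a (hypothetical) audio model

def notice(obs: Observation) -> bool:
    """Noticing: become aware that the person may be suffering."""
    return obs.facial_expression == "distressed" or obs.voice_tone == "strained"

def feel(obs: Observation) -> str:
    """'Feeling': appraise the situation from the person's perspective.
    A machine can only model this step; it does not experience it."""
    return "high_distress" if obs.voice_tone == "strained" else "mild_distress"

def respond(appraisal: str) -> str:
    """Responding: choose an action intended to alleviate the suffering."""
    actions = {"high_distress": "alert a human caregiver",
               "mild_distress": "offer a gentle check-in prompt"}
    return actions[appraisal]

obs = Observation(facial_expression="distressed", voice_tone="strained")
if notice(obs):
    print(respond(feel(obs)))  # -> alert a human caregiver
```

The naming is deliberate: the ‘feel’ stage is only a model of the person’s state, which is exactly the gap between recognising suffering and experiencing it.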
Three Types of Compassionate AI
According to Dr. Ray, there are three groups of Compassionate AI: Narrow Compassionate AI, General Compassionate AI, and Compassionate Superintelligence, corresponding respectively to the three commonly cited types of AI (narrow, general, and super).
Narrow Compassionate AI will help humanity in immediate, specific use cases, such as looking after the elderly or assisting people with disabilities. General Compassionate AI will deal with humanity’s problems at a broader level, such as political corruption, terrorism, and the population explosion. Compassionate Superintelligence will save humankind from disasters like nuclear wars or global pandemics.
Compassionate AI can ‘care’ for us by providing for our needs without entering our world and sharing our pain and struggles. But this might backfire on us: to teach a machine to be compassionate, we have to learn to be compassionate ourselves, or it will simply become a digital carbon copy of our behavior.
The Risk in Empathetic AI
Emotion Recognition Technology (ERT) is a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial and faces one of the most significant AI ethics issues: biases built into the systems.
“We don’t understand all that much about emotions, to begin with, and we’re very far from having computers that really understand that. I think we’re even farther away from achieving artificial empathy,” said Bill Mark, president of Information and Computing Services at SRI International, whose AI team invented Siri. “Some people cry when they’re happy, a lot of people smile when they’re frustrated. So, very simplistic approaches, like thinking that if somebody is smiling, they’re happy, are not going to work.”
Bias in Emotional AI
Emotional AI is especially prone to bias because of the subjective nature of emotions. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others.
AI is often not sophisticated enough to understand cultural differences in expressing and reading emotions, making it harder to draw accurate conclusions. For instance, a smile might mean one thing in England and another in Korea. Confusing these meanings can lead businesses to make wrong decisions. As more and more companies and organisations incorporate emotional AI in their products, it’s going to be imperative that they’re aware of the potential for bias.
Affectiva Auto AI is a platform that can recognise the emotions of a vehicle’s occupants and adapt the environment accordingly; imagine a smart assistant changing its voice if a passenger seems angry. But using emotional AI can result in some passengers being misunderstood. The elderly, for example, might be more likely to be wrongly identified as having driver fatigue, and as these systems become more mainstream, this might lead to unjustified higher insurance premiums.
Whether it is the subjective nature of emotions or cultural discrepancies in how they are expressed, it is clear that detecting emotions is no easy task. Some technologies are better than others at tracking certain emotions, so combining technologies could help mitigate bias. A Nielsen study testing the accuracy of neuroscience technologies such as facial coding, biometrics, and electroencephalography (EEG) found that combining the three raised accuracy from 6% to 77%; a toy sketch of this kind of multimodal fusion follows below.
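As a toy illustration of combining modalities, here is a hedged sketch of the simplest possible fusion rule: take independent emotion estimates (say, from facial coding, biometrics, and EEG) and keep the label the majority agrees on. The per-modality predictions are placeholder assumptions; real fusion systems weight modalities by confidence and context.

```python
# A hedged sketch of the simplest multimodal fusion rule: majority vote
# across independent emotion estimates. The per-modality predictions
# below are placeholder assumptions, not real sensor outputs.
from collections import Counter

def fuse(predictions: list[str]) -> str:
    """Return the emotion label that most modalities agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs for one stimulus from facial coding, biometrics,
# and EEG; two of the three agree, so the outlier is outvoted.
print(fuse(["frustration", "frustration", "surprise"]))  # -> frustration
```

Majority voting is only the crudest option; the useful point is that disagreement between modalities is a signal in itself, flagging cases where a single sensor would have misread the person.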
Another crucial aspect of developing Emotional AI algorithms is having diverse teams. This means not just gender and ethnic diversity but also diversity in socioeconomic status and views, guarding against everything from xenophobia to homophobia to ageism. The more diverse the inputs and data points, the more likely it is that we’ll develop fair and unbiased AI.
Emotional AI is a powerful tool, but preventing biases from seeping in is essential. Failure to act will leave certain groups systematically more misunderstood than ever, a far cry from the promises of emotional AI.
Pepper was ‘terminated’ in June 2021 due to weak demand. RIP Pepper.
Artificial Psychopathy
Meet Norman. The world’s first Psychopath AI.
On 1 April 2018, scientists at the Massachusetts Institute of Technology (MIT) unveiled the first artificial intelligence algorithm trained to be a psychopath. The AI was named after Norman Bates, the notorious killer in Alfred Hitchcock’s Psycho.
MIT researchers completed their horrifying task by digging into the “dark corners of the net”, such as a truly twisted Reddit thread “dedicated to document and observe the disturbing reality of death”. The project’s purpose was to demonstrate that artificial intelligence does not become unfair and biased unless biased data is fed into it.
In a Rorschach inkblot test presented to Norman, it responded with chilling interpretations such as “pregnant woman falls at construction”, “a man is electrocuted and catches to death”, and “man is shot dead in front of his screaming wife”. Meanwhile, a standard AI responded to the same inkblots with “a couple of people standing next to each other”, “a close up of a vase with flowers”, and “a person is holding an umbrella in the air”, respectively.
“Norman is born from the fact that the data that is used to teach a machine-learning algorithm can significantly influence its behaviour. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself but the biased data fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” said the MIT researchers in their project statement.
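The Norman lesson is easy to reproduce in miniature. The sketch below trains the same algorithm on two contrived, differently slanted caption corpora (both invented here for illustration) and shows it giving different answers for the same ambiguous input; nothing about Norman’s actual architecture or data is assumed.

```python
# A toy reproduction of the Norman lesson: the identical algorithm, trained
# on differently slanted data, describes the same ambiguous input in very
# different ways. Both corpora are invented here for illustration; nothing
# about Norman's real architecture or data is assumed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

ambiguous_input = ["a dark shape against a light background"]

# Two tiny caption corpora for similar images, one benign and one grim.
corpora = {
    "benign": (["a shape like a bird in flight",
                "a dark vase on a light table"],
               ["bird", "vase"]),
    "grim":   (["a dark shape like a falling man",
                "a figure lying on the ground"],
               ["falling man", "body"]),
}

for name, (captions, labels) in corpora.items():
    # Same pipeline both times; only the training rows change.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(captions, labels)
    print(name, "->", model.predict(ambiguous_input)[0])
# Likely output with these toy corpora: benign -> vase, grim -> falling man
```

The two models share every line of code; only the training rows differ, which is exactly the researchers’ point about where the bias lives.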
Norman might have been an April Fools’ Day whim, and thankfully, all it can do is interpret Rorschach inkblots in a disturbing way. The truly unsettling part is knowing where the training data came from.
“Machine intelligence is the last invention that humanity will ever need to make.” (Nick Bostrom)
The world is split between people who think that true AI might never happen and those at the other end who believe “AI is likely to destroy all humans in the near future”.
At the beginning of the 21st century, the field of AI was viewed as a failure. The general view was that attempts to build intelligent machines might never succeed, yet within ten years, people’s impression of AI had flipped completely, and AI is now viewed by many as an existential risk to humanity. Whilst this hypothesis is debatable, there is no doubt it would take a long time to play out; this is the ‘Year Million’ scenario we fear.
But it is not AI we should fear; it is us. If we don’t do anything about it, there might be no humanity left to save by the time we reach ‘Year Million’.
Originally published at https://ironwoman.ai on February 20, 2022.