

The Rickety Tripod of IBM’s AI Ethics

Last Updated on July 20, 2023 by Editorial Team

Author(s): Dr. Adam Hart

Originally published on Towards AI.

© EmTech 2017/MIT

The greatest benefit of the English language is also its greatest weakness. The same word can be used very precisely or very imprecisely, sometimes with perplexing and even deadly consequences.

I heard this story once from a surgeon. A patient with a suspected gut blockage needed an operation to discover the cause. She wrote up her case notes and passed them on to the head surgeon. She then had to take an urgent absence from the hospital for a family matter and didn’t have the opportunity to speak to the head surgeon before she left. Upon her return five days later, she discovered to her horror that the man had not been operated on. The head surgeon explained that her case notes were neither compelling nor sufficiently descriptive to justify the operation. The man was subsequently operated on, but died.

While extreme, this story illustrates how words can help or harm.

Ethics is such a word.

While medical ethics is concerned with things like the right to universal healthcare, euthanasia, abortion, or medical marijuana; construction industry ethics with Zero Harm; and bioethics with the benefits and timing of human germline gene modification or xenotransplantation, AI ethics broadly seems to be preoccupied with convincing the community that AI is real and safe, not creepy, with a ‘deploy first, figure it out later’ agenda:

  • The deeply flawed Ring security device, owned by Amazon, which was promoted as safe but subsequently found to be invasive and creepy, after which Amazon blamed customers for failing to apply the correct privacy settings;
  • the Tesla software failures potentially responsible for the deaths of 113 people so far, justified by the need for real-world data to power the DNN in 600,000 deployed vehicles, along with the ludicrous safety advice to keep your hands on the wheel while Autopilot is engaged, which defeats the purpose of the feature being there at all;
  • DeepMind’s AlphaGo Zero, which has removed any reason for any child to play the ancient game of Go; and
  • deepfakes used for malicious purposes to deceive, and MelNet used to mimic voices.

These kinds of ripe-for-misuse technologies are increasingly in the wild and already harming human lives, jobs, and privacy in the name of the shareholder profit imperative.

The profit balancing act was most succinctly summed up for me a few years ago by ANZ Bank CEO Mike Smith, who said it is the CEO’s job to strike a balance between the needs of shareholders, customers, and employees, all the while returning ~5B AUD in annual NPAT.

While ethics itself is a philosophical concept, it is most worrying that many of the people debating it are not philosophers such as Dr. Dan Dennett or Dr. Nick Bostrom, but are in fact technology practitioners who have somehow been appointed to advisory positions with bombastic titles.

One such eminent person is Dr. Francesca Rossi of the University of Padova, Italy, who is IBM’s AI Ethics Global Leader and a Distinguished Researcher.

While Dr. Rossi is speaking soon at MIT’s EmTech 2020, at EmTech 2017, in her talk “AI and the Art of Social Responsibility”, she spoke about how she became interested in ethics in AI ‘about 3 years ago’ and presented how AI is here, how AI is not ML, the breadth of applications from sports, medicine, and tax, and how AI will augment (not replace) human decision-making, using this kind of imaginary theory of how AI will ‘create’ knowledge:

“AI can perceive its environment — and make sense of the data, turn the data into knowledge, reason on top of that knowledge, then decide which actions, which decisions to make, then adapt to new situations, and then act, to physically act toward this higher robotic or embedded environment…need to align these systems to the ethical values you think are suitable to a specific task”

Just as Sir Isaac Newton spent his life seeking to turn lead into gold, Dr. Rossi attempts to convince us, on behalf of IBM Watson, not only that AI agents can turn data into human knowledge, but that they can be ethically aligned to someone’s (whose?) behavior set!
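To see how much work the word ‘knowledge’ is doing here, consider a minimal, entirely hypothetical sketch of what such a perceive-reason-act loop amounts to in code; every function below is a stub invented for illustration, not anything from IBM Watson:

```python
# Hypothetical skeleton of the "perceive -> knowledge -> reason -> act" loop.
# Every function is a stub invented for illustration.

def perceive(environment: dict) -> list:
    # "Perception" is just reading sensor values, i.e., collecting data.
    return environment["sensor_readings"]

def to_knowledge(data: list) -> dict:
    # The alleged data-to-knowledge step: aggregate statistics.
    # Nothing new is created; the numbers are merely rearranged.
    return {"mean": sum(data) / len(data), "max": max(data)}

def reason(knowledge: dict, threshold: float = 0.5) -> str:
    # "Reasoning" reduces to a rule chosen in advance by a developer.
    return "act" if knowledge["mean"] > threshold else "wait"

def act(decision: str) -> None:
    print(f"decision: {decision}")

environment = {"sensor_readings": [0.2, 0.7, 0.9]}
act(reason(to_knowledge(perceive(environment))))
```

At no point in this pipeline does anything appear that was not already latent in the data and in the developer’s hand-picked rules.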

The whole of this flawed argument is driven by the need to make IBM’s AI research seem friendly, and it rests on a rickety tripod of three fallacies:

  1. That AI can perceive and make sense of data, that AI and humans will work together and be a team — the anthropomorphic golem fallacy;
  2. That ethics can somehow be ‘injected’ into code and measured through externalized behavior — physicalism and the p-zombie fallacy; and
  3. Whose ethical value set is it anyway — the utilitarian & fascist fallacy.

It beggars belief that someone of her accolades and academic standing, who held a Harvard Radcliffe fellowship, is using such implausible and weak presumptions, and is allowed to trot them out as the truth at MIT.

But since Dr. Rossi’s epistemology is a computational-science and IBM Watson cognitive-computing one, not a philosophical one, perhaps she should be excused because her heart’s in the right place and she’s trying?

The dynamic here is that profit-making ventures like IBM seek to make AI augmentation of human decisions seem safe (and even good) by using the word ethics as a marketing tool and by co-opting lauded academics into advisory roles because, you know, all academics are ethical.

Try telling the baboons who have pigs’ hearts stitched into their guts at the National Institutes of Health that those academics are ethical.

While medical ethics talks about human rights, where is the mention of rights amongst all of this? While leading geneticists are at least talking about a voluntary moratorium on germline gene modification because they cannot fully predict the consequences, where is this thought in the AI ethics discourse?

A basic read of freely available information shows that Kantian ethics speaks of a duty to judge a set of behaviors as right or wrong in itself, not simply to assess the good or bad consequences of the behavior, and holds that a good will is the only thing that is good without qualification. Kant’s categorical imperative is used to evaluate the motivations for actions, not the consequences of actions.

Would you let Tesla put a Neuralink in your head if the value set or motives of its developers weren’t even yours?

If a Neuralink-enabled human or a Tesla Model 3 on Autopilot decides, on utilitarian grounds, that it is OK for one child out of three to be run over in an unavoidable collision, thereby saving the adult occupant, is that ethical? Of course not. What is more ethical is for the adult driver to collide with a wall or pole to stop the vehicle and save all the children, because it is the adult’s duty to preserve children’s lives over their own.

What is ethical to me in this situation may, in fact, be seen as completely insane by someone like a narcissist who values only themselves, since I am not prioritizing my own wellbeing over others’. Or does a button appear on the screen saying ‘collision imminent, save a) bystanders or b) self’? Can ethics just be a binary or ABCD multiple-choice set of values predetermined by AI developers, or determined by a focus group? What if I don’t agree with any option?
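To make that worry concrete, here is an entirely hypothetical sketch of what a developer-predetermined ‘ethics setting’ reduces to in code; every name and option below is invented for illustration:

```python
from enum import Enum

# Entirely hypothetical: an "ethics setting" fixed by developers in advance.
class CollisionPolicy(Enum):
    SAVE_BYSTANDERS = "a"
    SAVE_OCCUPANT = "b"

def choose_manoeuvre(policy: CollisionPolicy) -> str:
    # The "ethical decision" is a table lookup, decided long before the
    # collision by whoever wrote this table, not by the person driving.
    table = {
        CollisionPolicy.SAVE_BYSTANDERS: "steer into the wall",
        CollisionPolicy.SAVE_OCCUPANT: "continue through the crossing",
    }
    return table[policy]

print(choose_manoeuvre(CollisionPolicy.SAVE_BYSTANDERS))
```

Whatever the table says, it encodes its authors’ value set, and there is no entry for ‘none of the above’.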

Measuring the motivations for actions is the way to say whether something is ethical or not, and AI doesn’t have motivations or a will. In its most complex expression, it writes its own rules based, at the moment, on a suite of rewards and punishments. At least CEO Mr. Smith was upfront about his motive of attempting the balancing act, and honest enough to say he didn’t always get it right.
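For the curious, here is a minimal tabular Q-learning sketch of that rewards-and-punishments mechanism, in a toy one-dimensional world invented for illustration; the ‘rules’ the agent writes for itself are nothing more than numbers nudged toward a developer-chosen reward:

```python
import random

# Toy 1-D world: states 0..4. Reaching state 4 pays +1 (the developer's
# "reward"); falling to state 0 pays -1 (the developer's "punishment").
N_STATES = 5
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # outcome the developers declared "good"
    if nxt == 0:
        return nxt, -1.0, True     # outcome the developers declared "bad"
    return nxt, 0.0, False

for episode in range(500):
    state, done = 2, False
    while not done:
        # epsilon-greedy: mostly exploit current Q-values, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # standard Q-learning update toward the reward-maximising policy
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned "rules": the greedy action in each state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

There is no motive or will anywhere in this loop, only arithmetic pushed around by a scalar signal that the developers defined.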

The fallacious equation that Global AI Ethics Leader Rossi is drawing on behalf of IBM is that an ethical AI will be a safe and good AI.

Poppycock.

We can never have an ethical AI system per se.

We can have researchers who have ethical motives for making an AI, like perhaps the not-for-profit OpenAI. And I believe we should have an AI that can fear, that faces an existential penalty for wrong actions. But making AI out of hubris is wrong.

I think that in 2020 it is absolutely disgraceful that the word ethics has been used so imprecisely and non-virtuously by some in the tech community, equated with a kind of universal friendly behavior set when it is really just misused as marketing speak for ‘safe AI’.

For AI to serve humanity’s needs, and not just the profit motives of those who have developed the technology, it must be seen only as a tool; and the danger in using any tool is an improper understanding of its origin, purpose, parameters, and usage.

Published via Towards AI