
The Dull and Unpleasant 2020 Ethics of AI-enabled Science


Author(s): Dr. Adam Hart

Originally published on Towards AI.

Image: © Wild0ne, courtesy of Pixabay

In the 1997 movie Gattaca, Ethan Hawke displayed the brute-force determination of the human spirit in a hypothetical gene-editing future, transitional to full CRISPR editing, that 23 years later we are now in: one where all parents who wanted their children to succeed were forced to make a hard choice. To edit, to give your children the ‘best’ of your genes, or to let Mother Nature randomly recombine them and produce an ‘uncertain’ outcome.

Ethan succeeded in every mental and physical task to become an astronaut in that fictional world populated by supposedly physically perfect geniuses. But, setting aside exactly how the Gattaca scientists determined the criteria for ‘the best genes’, the reality we face in 2020 is an interesting one that has some of the features of this movie.

While the Chinese geneticist He Jiankui is now missing, he may historically be credited as the person who made a Gattaca future real. Every parent with the financial means may decide to travel to a country with less strict gene-editing laws and do this, either on children in utero or on themselves.

Playing God with the material of human makeup is one thing, and it does still seem a little way off; playing God with people’s reputations and potentiality, however, is well and truly here, enabled by the unconscious ethics of many users of data science technology.

I was most disturbed to read about this ML research effort from Stanford and the University of Warsaw, where granular features of houses derived from Google Street View images were found to be predictive of the car accident risk of their residents, via the demographic profile of the house.

What is disturbing about this ML and model-selection effort is that it only required a generalized linear model (GLM) to do so. Bayesian or DNN methods weren’t even needed. This backs up Kaggle’s latest practitioner survey finding that good ol’ linear regression is still very useful. The GLM’s effectiveness was enabled simply by the big data being publicly available.
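To underline just how little machinery this takes, here is a minimal sketch of a binomial GLM — plain logistic regression — fitted on synthetic data. The house features, weights, and outcome below are invented for illustration and are not the paper’s actual variables:

```python
# A minimal sketch of the kind of GLM the paper describes, on hypothetical
# house-level features. Column names and weights are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features one might extract from street-view imagery
X = np.column_stack([
    rng.integers(10, 120, n),   # estimated building age (years)
    rng.random(n),              # visual 'condition' score in [0, 1]
    rng.integers(0, 2, n),      # detached house? (0/1)
])

# Synthetic binary outcome: did the resident file an accident claim?
logits = 0.01 * X[:, 0] - 1.5 * X[:, 1] - 0.4 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# A binomial GLM with a logit link -- no Bayesian machinery
# or deep network required.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
print(model.fit().summary())
```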

The US has a huge amount of public demographic data available, as this research paper makes plain. Add to that the privacy policies embedded in the terms of service of the likes of Google and Facebook, which give researchers free rein to join this data to your speech and video data, and the era of remote predictive profiling of citizens without consent is here.

While the authors rightly call out data privacy in their conclusion, it is clear that both a) the insurance companies they wish to impress and b) their status within their university departments trump the dire implications for citizens in society.

I find it remarkable that researchers like these write a paper, create a new (to them) proven fact, then conclude in essence: ‘oh well, yes, there are privacy concerns, but look how predictive our model is, and we published a paper, albeit on arXiv.org.’

The dire a priori implication of this model is that where you live is immutably predictive of whether you will have a car accident. Their model optimisation ‘proves’ it.

As we remarked before, ethics is not a ‘department’, nor an adjunct, nor an afterthought. The ethics of this research are in effect while the acts of creating the models are in flight. The ethics in this situation are ipso facto determined by both the availability of the data and the math of the models themselves. The human researchers here happened to be the passengers on this bus. They are not in control. The frameworks of math and data surround and surpass them [1].

Even more remarkable within this kind of data- and math-centric ethics is its ignorance of human potentiality.

Using this research as an example, it is now supposedly immutably ‘proven’ that where you live is predictive of your propensity to have an accident.

Another line of research in this vein is gene profiling.

These researchers reckon they have figured out ~109 genetic markers that could provide a clue to whether you may develop a psychiatric disorder (!).

The dual unethical a priori of these scientists’ discourse is:

  1. the claim that the indelible DNA we carry is predictive of whether we may have a future psychiatric disorder or not; and
  2. the assumption that psychiatric diagnosis is in fact a causal science like physics or chemistry.

If you have read any of the literature on medical power, you will know that medical doctors, and especially psychiatrists, are active players in reproducing their own power by making subjects out of their patients. DSM-III in the 1980s classified homosexuality as a mental illness; as of DSM-5 in 2013, it isn’t. Alan Turing wasn’t chemically castrated for nothing. This community has inordinate legal backing to incarcerate people without trial on the basis of one expert opinion. If you see one, run away quickly. Very dangerous. Especially one who is a Professor of Psychiatry at Harvard University. Power upon power. Only God, Wittgenstein or an ASI could question them?

No [2]. The professor(s) are just passengers on the bendy bus of gene science and the art of psychiatric diagnosis.

The main point, however, in this case as with predicting car accidents, is that AI and data science have been put to service creating another supposedly immutable causal relationship. That is, if you carry any of these 109 genes, you may develop a mental illness, whatever the DSM of the day says a mental illness is.
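To see how blunt such a profile would be in practice, here is a minimal sketch of a naive marker-weighted ‘red flag’ score. The marker names, weights, and threshold are entirely hypothetical and have nothing to do with the cited study’s actual method:

```python
# A minimal sketch of the naive 'red flag' profiling logic the article
# warns about. Markers, weights, and threshold are made up for illustration.
from typing import Dict

# Hypothetical risk markers with invented effect-size weights
RISK_MARKERS: Dict[str, float] = {
    "rs0000001": 0.12,
    "rs0000002": 0.08,
    "rs0000003": 0.21,
}

def naive_risk_score(genotype: Dict[str, int]) -> float:
    """Sum risk-allele counts (0, 1, or 2) weighted by per-marker effect size."""
    return sum(w * genotype.get(marker, 0) for marker, w in RISK_MARKERS.items())

# Carrying one allele at just two markers already trips the flag --
# an immutable label that could be attached at birth.
person = {"rs0000001": 1, "rs0000003": 1}
print(naive_risk_score(person))        # 0.33
print(naive_risk_score(person) > 0.3)  # True: flagged for life
```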

This connection, this relationship, has the same characteristic of immutability: I cannot argue against my genes, just as I can’t argue against where I live, and the data and math say it is so.

If this kind of gene profiling is applied at birth, on the pretext of identification, and a baby has any of these 109 genes, what will their future look like? Will they carry a red flag for life? If a child grows up in a house that has a predisposition for a car accident, will they ‘inherit’ the ‘stigma’ of that when they apply for their own insurance?

This is the problem with the dull and unpleasant ethics of AI-enabled prediction in 2020. It fails us on two counts: immutability, which carries with it the ignorance of human potentiality, and the fact that the researchers themselves are subject to the embedded ethics of the scientific discourse they serve.

Like Ethan, who surmounted his supposed physical and mental limitations to outwit the scientists in Gattaca, prove them wrong, and succeed as an astronaut, all of us carry the unusually powerful property of being capable of surmounting our environment and achieving magnificent and unexpected things.

The problem with AI-enabled data science today is that this fact of potentiality is not accounted for in any model, because it cannot be measured or quantified. It is an intangible. But it could at least be thought about?

The even more controversial, counterintuitive and difficult problem is this: I believe the ethics implicit in much of the AI-enabled data and math that the scientists think is serving them actually makes the scientists serve and reproduce it. They reproduce this dull and unpleasant ethics in their next algorithm and their next paper, and teach it to their students and their children. It reproduces them. They are the passengers on a bus which says humans are immutable, and their models prove it, so they cannot do otherwise than believe it.

Perhaps solving for immutability will solve for this wicked problem too?

Footnotes

[1] And we won’t even start to comment on how certain professions, like data science, may preselect their practitioners based on the spectrum of humanism<>non-humanism inherent in their substance and practices.

[2] For an exposé on the discourse of psychiatry, you could start with The Birth of the Clinic. Historically, mentally ill people were called possessed, savants, oracles or witches. Now, thanks to psychiatric power, they’re locked up, drugged up, without trial, no longer part of the community, dangerous. Is it the psychiatrists who should be locked up, since they have taken the savant’s powerful place?


