

From Wind Farms to AI Robot Swarms — Problematizing AI Thinking

Last Updated on July 20, 2023 by Editorial Team

Author(s): Dr. Adam Hart

Originally published on Towards AI.

Applying a mental discipline of self-doubt to AI development

Black Hornet Nano UAV © FLIR Systems

In science and engineering, the hypothesis-driven method of research has taken humanity a long way toward technological advancements that only ten years ago would have seemed unreal.

A recent example of this kind of ‘sci-fi-like’ AI research is a plan being pursued by the University at Buffalo’s A/Prof. Chowdhury, whose team recently received a $316,000 grant from DARPA to copy data from gamers’ minds to seed the training of an AI that can control up to 250 automated military robots.

‘The team will use the data to create artificial intelligence algorithms that guide the behavior of autonomous air and ground robots.

“We don’t want the AI system just to mimic human behavior; we want it to form a deeper understanding of what motivates human actions. That’s what will lead to more advanced AI,” Chowdhury says.’

Now, on the face of it, by the ‘man on the Clapham omnibus’ criterion, any non-scientist would probably consider this motive quite evil. Coordinating 250 aerial and ground swarm military robots to do what, exactly? To gain ‘a deeper understanding of what motivates human actions’?

WTF? To program robot swarms to track and trace humans?

However, from an engineering standpoint, it’s not evil at all. It’s just pursuing a research problem.

‘This project is one example of how machine intelligence systems can address complex, large-scale heterogeneous planning tasks,’

It is remarkable how engineering and science thinking can pursue and solve the most objectionable research questions with complete impunity, distancing the ethical self (if there is one) from the scientific self by rationalizing away doubts in the name of technological advancement. While technology is ethically neutral, its inventors and managers are not.

The irony is that A/Prof. Chowdhury’s Google Scholar citation index lists much research on wind farm optimization. Why the move to robot swarms? The economics of research funding and the valorization of AI research?

How do scientists know?

Optimization is not only the way these kinds of scientists and engineers think; it is also the way they know.

The research problem of optimization can also be considered an epistemological question-to-self about knowing through optimization.
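To make ‘knowing through optimization’ concrete, here is a minimal, purely illustrative Python sketch of the pattern: a toy wind-farm layout problem, echoing the wind-turbine research mentioned above, solved by naive random search. The objective, constants, and numbers are all hypothetical and stand in for no real research code.

```python
# A toy illustration of "knowing through optimization": place turbines
# so that a made-up energy score is maximized. All values hypothetical.
import random

N_TURBINES = 5
FIELD = 1000.0        # side length of a square site, in meters
MIN_SPACING = 200.0   # hypothetical spacing below which wakes hurt yield

def wake_penalty(layout):
    """Toy proxy for wake interference: penalize turbine pairs that
    are closer together than MIN_SPACING."""
    penalty = 0.0
    for i in range(len(layout)):
        for j in range(i + 1, len(layout)):
            dx = layout[i][0] - layout[j][0]
            dy = layout[i][1] - layout[j][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < MIN_SPACING:
                penalty += MIN_SPACING - dist
    return penalty

def energy(layout):
    """Hypothetical objective: fixed yield per turbine minus wake losses."""
    return 100.0 * len(layout) - wake_penalty(layout)

random.seed(0)
best_layout, best_score = None, float("-inf")
for _ in range(10_000):  # the optimizer "asks questions" by sampling
    layout = [(random.uniform(0, FIELD), random.uniform(0, FIELD))
              for _ in range(N_TURBINES)]
    score = energy(layout)
    if score > best_score:
        best_layout, best_score = layout, score

print(f"best score found: {best_score:.1f}")
```

The loop ‘answers’ its question simply by keeping whatever scores best; in the epistemology criticized here, the surviving layout is what the optimizer now ‘knows,’ with no step at which the question itself is doubted.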

The scientist’s subconscious inner dialogue, or logic, may run something like this:

‘To know, I ask questions, and through research I will gain knowledge. Therefore, through pursuing the question of optimizing the arrangement of wind turbines, or robot swarms, or anything at all, I will not only answer the question, I will complete an epistemological circle of knowing. And it’s my job to know; otherwise I’m not an engineer or scientist. And I’m not doing human research, so if any of my inventions harm people, it’s got nothing to do with me.’

This kind of logic, a consequence of that epistemology, is an anti-belief thought pattern expressed eloquently by Bertrand Russell [1]:

“What is wanted is not the will to believe, but the wish to find out, which is its exact opposite.”

Dr. Harari, in his history book ‘Sapiens’ (p. 254), describes the last 300 years as the rise of belief in a superhuman order, while later speculating that Homo sapiens will either be wiped out by an artificial superintelligence (ASI) or evolve to H+, as per his current research projects at the Future of Life Institute:

“Ask scientists why they study the genome, or try to connect a brain to a computer. Nine out of ten times you’ll get the same answer: we are doing it to cure diseases and save human lives. Even though the implications of creating a mind [ASI] inside a computer are far more dramatic than curing psychiatric illness, this is the standard justification given, because no one can argue with it.”

Sapiens, p. 464.

If Bertrand Russell were alive today, would he revise his anti-belief epistemology in light of the existential threat AI poses to Homo sapiens?

An alternative to optimization

The kind of scenario Dr. Harari outlines is already happening, fueled by research from the likes of A/Prof. Chowdhury, all because an optimization epistemology has taken hold as the primary way of knowing, thanks to the economic and valorization rewards on offer. On the back of this, we face a real threat from the inventions that emerge from ‘knowing’ exclusively through optimization.

Humans are great at solving problems but not so good at formulating virtuous goals, it seems.

Perhaps a more virtuous way of knowing, one we can use to challenge the optimization epistemology, is the dialectical method of self-doubt, or the Socratic method, whose “…essence…is to convince the interlocutor that whereas he thought he knew something, in fact, he does not.”

Implicit in the discourse of AI development, and indeed in the development of any computational system, is the act of invention: creating a kind of golem that reacts to inputs and produces outputs of many kinds. AI systems are increasingly good at mimicking human behavior, both verbally (like Mitsuku, Siri, or Alexa) and visually (like Soul Machines). The effects of optimizing these inventions are palpable in their outputs.

Instead of an epistemology that optimizes these inventions toward the goal of knowing how to create synthetic representations of humanity, another way to know is to pursue the invention process with self-doubt about whether the research artifact is valid at all, abandoning the hypothesis altogether if the artifact is inelegant and non-virtuous.

In that case, the inner voice of optimization would be replaced with one of self-doubt:

‘Well, I know that this reinforcement learning neural network is effective, but I’m uncertain how it achieved its result, which is inelegant; perhaps another kind of math is more appropriate. Mmmm.’
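Purely as a caricature, that self-doubting inner voice can be sketched as a gate inside a research loop. Everything below (train_step, explain, the idea of an explanation test) is hypothetical, a sketch of this essay’s proposal rather than any real method or library:

```python
# A speculative sketch of the "self-doubt" epistemology: optimize as
# usual, but abandon the artifact if it cannot be explained. The names
# train_step and explain are hypothetical placeholders.

def research_loop(train_step, explain, max_steps=100):
    """Run an ordinary optimization loop, but treat 'effective yet
    unexplained' as grounds to abandon the hypothesis."""
    model = None
    for _ in range(max_steps):
        model = train_step(model)     # the usual optimization move
        explanation = explain(model)  # the self-doubt move
        if explanation is None:
            # Effective but opaque is inelegant: the self-doubting
            # researcher stops rather than shipping a golem.
            return None
    return model

# Toy usage: a "model" that keeps improving but is never explainable.
result = research_loop(
    train_step=lambda m: (m or 0) + 1,
    explain=lambda m: None,
)
print(result)  # None: the artifact is rejected as inelegant
```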

Admittedly, this is a very difficult stance to take, because the appeal of the AI golems is that they effectively ‘speak’ back, proving that the optimization process worked.

Ugly and non-virtuous, albeit effective, research abounds: the defeat of Go grandmaster Lee Sedol by AlphaGo; the Realbotix AI-enabled sex doll; MelNet-enabled deepfakes; China’s social credit scoring regime; and the many classes of technology used for lifetime facial identification, population control, and surveillance, like the FLIR drone pictured above.

On another planet

Consider another hypothetical planet with equally advanced sapiens: one that preserved the adage ‘do not make false idols,’ that still believes in good and evil, and that uses the nature of computing to do just what it does, measure and compute, rather than to create synthetic intelligence, on the basis that intelligence is a human gift and it is ugly and obscene to consider that any machine could or should hold it. There, the self-doubt epistemology would allow a community to question the validity of any golem-type artifact that emerged, on the basis of its inelegance, and to remove, or never instate, the economic and valorization incentives for a research process that makes golems, because all AI golems are ugly.

Yet here, on this planet, or at least on the only planet with life that we know of, a belief in a superhuman order leads us to elevate inventions that are code to the same status as humans. Because of our superhumanness, optimized code supposedly cannot defeat humans. But it already has (Deep Blue, DeepMind’s AlphaGo), and it will continue to supplant human agents (Mitsuku, or even Waymo), while the economic and valorization [2] rewards seem to prove that the optimization epistemology is correct.

What will happen when the bright light that is Dr. Dennett passes, and there is no sound voice to question the direction and nature of these golems?

As Dr. Harari concludes, “the only thing we can try to do is influence the direction that scientists are taking.” As suggested here, problematizing AI research thinking is the good and sane thing to do at this time: working on ugly AI robot swarms for military purposes seems non-virtuous and insane to the man on the Clapham omnibus.

Footnotes

[1] Bertrand Russell, Free Thought and Official Propaganda, Project Gutenberg eBook.

[2] For example, Steve Worswick winning a world record, or Elon Musk being made a Fellow of the Royal Society.
