

Is AI Becoming the Gatekeeper and Mouthpiece of Knowledge?

Last Updated on July 17, 2023 by Editorial Team

Author(s): Building Blocks

Originally published on Towards AI.

AI vs. Human Expertise: The Growing Dependence on Machine-Provided Knowledge

Machines have been assisting humans in their day-to-day activities for a long time. So much so that we don’t even notice how ubiquitous they are and how reliant we are on them. I see you, Electrolux Pro Washing Machine 😁.

Elevating the efficiency of various industries, machines have displaced humans in many lines of work that are monotonous and/or physically intense. Think of machines in manufacturing plants, on construction sites, and even in fast-food kitchens.

Every day, it becomes more evident that machines have replaced, and will continue to replace, humans in tedious, repetitive, and physically intensive tasks.

https://www.youtube.com/embed/oJkQkr3Yy2Q

Automation comes with a lot of benefits. After all, who can deny the value of a reliable system that can carry out tasks a human would otherwise consider drudgery, or even dangerous? Not to mention the benefits for a business that no longer needs to keep employees around the clock.

Undoubtedly, one can argue that automation has pushed more and more people toward careers centered on knowledge, creativity, and other intangible human characteristics.

A new wave of automation, fueled by AI, brings fresh challenges.

AI Disrupting Knowledge Workers

One huge advantage that humans have had over machines is our (am I an AI? 😛) ability to learn new things, enabling us to acquire new skills and repurpose ourselves.

However, recent breakthroughs in generative AI have shown that a model’s capability isn’t necessarily limited to the sole task it was trained on. The emergent capabilities we see in today’s generative models are quite broad, spanning multiple tasks. Moreover, if required, these models can be fine-tuned to specialize in a narrow area of focus.

Today’s generative models have drawbacks: they spout false statements, struggle to establish causation, have no access to real-time data, and so on. But we must remember that these are very early days for generative AI, and the technology can only get better.

We can establish that AI has started to challenge the monopoly that humans have had on tasks that center around knowledge.

Humans Aren’t Great Skeptics

Today’s AI needs to be actively supervised to validate the things it generates, especially in sensitive use cases. Here’s where we need to take a step back and ponder. Most of us have heard about the shortcomings of these generative models, yet do we actually supervise and validate what they tell us?

There seems to be something hypnotic about text generation models that makes them sound very trustworthy. I work as a programmer, and over the past couple of months, I’ve regularly been assisted by AI while coding.

Generated Using Stable Diffusion: https://replicate.com/stability-ai/stable-diffusion

While I read and broadly understand the code it generates, I’m far from 100% sure that it isn’t buggy. At an initial glance, the suggestions, more often than not, do seem correct. I don’t go through the tedious task of consulting the official documentation for every line of code the AI generated, because the AI was built for the very purpose of saving me that time.

While I can’t speak for everyone with certainty, my wager is that most people wouldn’t seriously question what’s being generated unless they outright know it’s false. Why else would there be botched demos from organizations like Google and Microsoft, which have massive PR and media teams around them?

We’d already been living in an age of misinformation and fake news before AI turned up, showing that humans are pretty bad at choosing what to trust and believe. AI has only added to this problem.

We can establish that humans aren’t that great at validating the veracity of the information provided to them.

Handing Over the Reins

Soon, every person will be able to have their own personalized AI co-pilot. We’re already seeing this phenomenon in action today, with plenty of people using AI to learn and understand new concepts.

A major advantage of using AI as a teacher is that, unlike human teachers, it doesn’t have to divide its attention among a large number of students. It’s always ready to answer any question an eager student might have (until that wretched message about the servers being too busy pops up again 😁).

At an initial glance, this seems to be a great use of AI. Democratizing education and knowledge, after all, is a good thing, right?

However, the troubling thing is that we’re democratizing education by slowly making AI the gatekeeper and mouthpiece of knowledge.

AI will stand in front of the door to all the knowledge we have available for consumption, and it will also control what knowledge leaves that door, and how.

Generated Using Stable Diffusion: https://replicate.com/stability-ai/stable-diffusion

AI Takes Over Legacy Knowledge

As a species, we haven’t been that successful in preserving the knowledge we’ve acquired, letting it erode over time. For example, to this day, we don’t know exactly how the pyramids were constructed.

In more recent times, many US state government systems were unable to process unemployment claims during COVID-19 because they ran on legacy software written in COBOL, a language that is no longer taught in most schools around the world.

Dealing with legacy code is nothing new in the world of software, and it isn’t uncommon for only a handful of people in an entire company to know the codebase inside out.

This is another area where organizations can start using AI. Rather than making new employees learn the legacy codebase, they might as well use the enterprise offering of GitHub Copilot to take over the reins of the legacy code. This is a win for employees, too, because most folks wouldn’t be interested in learning something outdated and inapplicable elsewhere.

Soon, instead of having at least a handful of employees who act as the gatekeepers of knowledge, companies will have a single AI. Lo and behold, the keys to the door of knowledge have been handed to AI. Anyone who needs to work on such a legacy codebase will be taught by an AI, and after a point, there might not even be a way to validate what is being taught, because no human experts are left at the company. There is no alternative to blindly trusting what the AI suggests.

Human Subject-Matter Experts Become a Rarity

Generated Using Stable Diffusion: https://replicate.com/stability-ai/stable-diffusion

Now, one can make the case that AI only needs to be the gatekeeper of aging knowledge that isn’t widely used anymore; people will still be motivated to acquire knowledge about the things that remain relevant.

Would they, though?

Humans, like most other living organisms, are programmed to conserve energy. Most of us would jump at the chance to use a tool that makes our lives easier and lets us expend less mental energy. Unlike humans, an AI never tires or feels lazy. We’re already using AI to research topics instead of sifting through books, blogs, or articles.

Judging by recent announcements from Bing and Google, and by a bunch of other applications like Perplexity.ai, Neeva, and You.com, the future of search increasingly looks like an interactive chat. This interface is simple, easy to use, and ubiquitous, making it all the more tempting to rely on.

In a lot of cases, humans try to acquire just enough knowledge to get the job done. Programmers copying code from Stack Overflow without any idea of what it does isn’t just a joke; it isn’t that rare an event either. While programming usually has guardrails, thanks to test cases and peer review (see the sketch below), this might not be true in other industries.
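To make the guardrail point concrete, here is a minimal sketch of a test case catching a plausible-looking bug in AI-suggested code. The `days_between` helper and its off-by-one error are hypothetical, invented purely for illustration:

```python
from datetime import date

def days_between(start: date, end: date) -> int:
    # Hypothetical AI-suggested helper: it looks correct at a glance,
    # but the "+ 1" introduces an off-by-one bug.
    return (end - start).days + 1

def test_days_between():
    # The human-written guardrail: a single assertion exposes the bug
    # before the code ever gets merged.
    assert days_between(date(2023, 7, 1), date(2023, 7, 2)) == 1

if __name__ == "__main__":
    test_days_between()  # raises AssertionError, flagging the bad suggestion
```

A field without an equivalent of this cheap, automated check has to rely entirely on a human reading the AI’s output, which, as argued above, is a weak line of defense.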

In medicine, doctors more or less operate independently. If they start using AI for medical advice, there’ll be no one to scrutinize the decisions a doctor makes based on the information an AI provides.

All of this is to say that humans aren’t going to be as motivated to learn and acquire new knowledge, especially if a tool can hand them a solution on a silver platter at very little cost. As someone who writes articles, I use Grammarly to edit my work. I don’t bother learning about the nature of my grammatical mistakes or trying to perfect my English.

While it may not matter whether my grammar is perfect, it might matter if no human has a strong grasp of the grammatical concepts of English.

AI has also encroached upon more creative disciplines such as content creation, photography, music, etc. Having a solution available at one’s fingertips is going to take away the incentive for humans to truly learn a skill. We might soon live in a world where there are no human experts for a majority of disciplines, or they might become scarce resources.

Conclusion

  • Machines and automation have been taking over repetitive, mundane, and/or physically demanding tasks.
  • This has driven more and more people into knowledge-driven careers. The recent developments in AI mean that a machine can be trained to acquire the same knowledge.
  • Humans can evolve and grow their skills and knowledge over time; AI has now reached the stage where it, too, can generalize and specialize across multiple domains.
  • Humans aren’t great at validating the truthfulness of the information provided to them. AI is increasingly turning into the conveyor of knowledge that we trust.
  • AI tools are accessible, cheap, and offer ready-made solutions to problems that can disincentivize people from mastering skills and becoming experts.
  • The shortage of human experts coupled with the accessibility of training custom AI models means that AI is slowly turning into the gatekeeper of knowledge and the mouthpiece that disseminates knowledge.

Some people might have the impression that this wouldn’t be a problem if we fixed the bias and hallucination problems that exist in AI models. While that may solve the issues that arise from false information, it doesn’t change the fact that we are still relying heavily on machines for everything, ranging from labor-intensive tasks to creative ones.

In this article, we weren’t trying to take sides on whether AI becoming the gatekeeper and mouthpiece of knowledge is good or bad; that is a topic for another day. Rather, it is an attempt to bring to light this huge challenge that we’ll face.

The next few years are going to be pivotal in determining if society is going to be co-piloted by AI or directed by AI.

Information, and the narrative it takes, is what empowers people, organizations, and states. Increasingly, we are letting AI dictate the narrative and deliver information to us. It matters little that humans remain the decision-makers, since a decision can be heavily influenced by the information provided and the narrative it arrives in.

Whether this is beneficial or detrimental to the human species is a question that no one can concretely answer. However, being prepared for such an eventuality can help us better handle some of the hurdles we’ll face.

Thanks for listening to our ramblings on this topic; please share any thoughts you have. This article has been edited with the help of AI.

Until the next time, take care and be kind.


Published via Towards AI
