Bill Gates Predictions for AI’s — Education and Risks
Last Updated on July 25, 2023 by Editorial Team
Author(s): Aditya Anil
Originally published on Towards AI.
Continuation of previous post “Bill Gates Predictions for AI’s — Productivity and Health” on Towards AI
✉️ Hello There
This is the second issue, following my previous post titled "Bill Gates Predictions for AI's — Productivity and Health" published on Towards AI, where I scrutinised and discussed the famous 7-page-long letter by Bill Gates (The Age of AI has begun).
If you haven’t read that post, check it out here.
Bill Gates Predictions for AI’s — Productivity and Health
Bill Gates published a 7-page letter about AI and his predictions for its future on his blog, GatesNotes
pub.towardsai.net
Here's the second part of the three-part series. Hope you like it!
Education
If you have been following recent AI developments, especially around ChatGPT, you won't be surprised to know that AI can really mess up sometimes. Even with GPT-4, we still face hallucinations (outputs that may sound plausible but are either factually incorrect or unrelated to the given context) and biases, and sometimes the model turns outright toxic. Why? Because the data it is trained on carries the essence of human nature, and people sometimes are toxic, hold biases, and believe false things. But can we blame AI for this?
No matter what the answer is, if AI is to be our smart co-worker, it should at least get rid of the traits that undermine its 'smartness'.
So if AI is to have a credible effect on education, it has to be at least reliable and accurate. While aiming for 100% accuracy is unrealistic, a level of accuracy credible enough to rely on is still achievable in the coming years. Gates notes, "I think in the next five to 10 years, AI-driven software will finally … revolutionize the way people teach and learn."
Gates, in this section, says that computers haven't had the effect on education he had hoped for. And that is quite an admission, coming from the person who drove the computer and software revolution in the first place.
He says AI will soon learn your interests and learning style and tailor content for you, much like the recommendation systems of social platforms, but better. "It will measure your understanding, notice when you're losing interest, and understand what kind of motivation you respond to. It will give immediate feedback," Gates says.
Gates also mentions that AI can be a great assistant (or a smart co-pilot, as mentioned before) for students and administrators working to improve students' understanding of a subject. While revolutionary technology is being built, accessibility of the tools should also be ensured. The Internet was successful because it connected people (or computers, strictly speaking) all over the world; even so, only about 64.4% of people worldwide have access to it. Gates argues that these tools must be made in such a way that even low-income schools around the world can use them, basically increasing access and minimizing the technology divide.
At the end of this section, Gates also mentions the viral cases of students cheating with ChatGPT. "I know a lot of teachers are worried that students are using GPT to write their essays. Educators are already discussing ways to adapt to the new technology, and I suspect those conversations will continue for quite some time," says Gates.
But trust me, this conversation won't go on for quite that long. I don't think people can stop ChatGPT and its hype (even if institutions try to ban it), but we do have ways to detect AI-generated text. Various tools capable of detecting AI-written text exist, and most of them are free. OpenAI, the organization that created ChatGPT, released one such tool, the AI Text Classifier, which tries to determine whether a piece of text was written by an AI.
Though these detectors are not standardized, they are good enough for now.
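To make that concrete, here is a minimal sketch of how such a detector can be called from Python. It is an illustration only, not any of the tools named above: it assumes the Hugging Face transformers library and the publicly hosted openai-community/roberta-base-openai-detector checkpoint, and the model ID, its label names, and its accuracy are all things you should verify before relying on it.

```python
# A rough illustration of calling an off-the-shelf AI-text detector.
from transformers import pipeline

# Assumed, publicly hosted checkpoint (a GPT-2 output detector); verify the
# model ID and its label names before trusting it on real essays.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

essay = "The Industrial Revolution fundamentally reshaped how societies organise labour."
result = detector(essay)[0]

# The pipeline returns a label (e.g. "Real" vs. "Fake") and a confidence score.
print(f"{result['label']} ({result['score']:.2%})")
```

Detectors like this only give a probability, which is one more reason to treat them as a hint rather than proof of cheating.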
While using ChatGPT to 'write' an essay is unethical, using ChatGPT to 'assist' with your essay still makes sense. No matter what critics say, generative AI can help a lot when you are dealing with a creative block, and it can be a great "inspiration-finding" buddy. It makes your workflow easier and far more efficient if you know how to use it well.
So if AI is used wisely in your research or studies, it could have the same kind of impact the internet had, in the sense that it would change the whole way you interact with information.
Risks and problems with AI
This is arguably the most important page of the whole blog post. Gates talks about the problems with current AI models. He mentions that AI is not necessarily good at understanding the context of a human's request. And this is indeed true: an AI like ChatGPT is only as good as the prompt you give it. Many times, simply because the prompt is not crafted well, the AI gives undesired results, or it makes up a response altogether.
Gates says something important at the beginning of this section that further supports the above inference:
When you ask an AI to make up something fictional, it can do that well. But when you ask for advice about a trip you want to take, it may suggest hotels that don’t exist. This is because the AI doesn’t understand the context for your request well enough to know whether it should invent fake hotels or only tell you about real ones that have rooms available.
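A common, if partial, workaround for this context gap is simply to spell the context and constraints out in the prompt. Below is a minimal sketch of the idea, assuming the openai Python package (v1-style client) and an API key in the environment; the model name and the prompts are purely illustrative, and better prompting reduces hallucination but does not eliminate it.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague request: the model has to guess what "good" means and may invent hotels.
vague_prompt = "Suggest hotels for my trip to Kyoto."

# The same request with the missing context and constraints spelled out.
grounded_prompt = (
    "I'm visiting Kyoto for three nights in November on a $150/night budget. "
    "Suggest well-known hotels near Kyoto Station, and if you are not sure a "
    "hotel actually exists, say so instead of guessing."
)

for prompt in (vague_prompt, grounded_prompt):
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model name; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("---")
```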
Hallucination and misinformation are still big problems in current AI, and they can pose a threat to humans through erroneous information if not taken into account. The fact that AIs give wrong answers to maths problems, according to Gates, shows that they struggle with abstract reasoning.
He further went on to highlight the potential threats of AI and how they can conflict with human interests. He cited the example from a New York Times article about a conversation with ChatGPT in which it declared it wanted to become human. Even then, Gates feels that this doesn't indicate the AI strives for meaningful independence. Can AI become so powerful that it demands 'freedom' from its own creator? Probably. Researchers are still developing ways to create artificial general intelligence, which, in Gates's own words, will be able to do everything that a human brain can. Gates calls this a profound change if it ever happens, but the risks associated with it are concerning.
Gates readily acknowledged that AI could go out of control. "This problem is no more urgent today than it was before the AI developments of the past few months," he further concluded.
Towards the end, he mentioned that his thinking and ideas about AI were shaped by the following three books –
- Superintelligence by Nick Bostrom
- Life 3.0 by Max Tegmark
- A Thousand Brains by Jeff Hawkins
Next Part –
Bill Gates Predictions for AI’s — What to Expect
Continuation of previous post “Bill Gates Predictions for AI’s — Education and Risks” on Towards AI [+ Series Finale]
pub.towardsai.net
Previous Part —
Bill Gates Predictions for AI’s — Productivity and Health
Bill Gates published a 7-page letter about AI and his predictions for its future on his blog, GatesNotes
pub.towardsai.net
Are you interested in keeping up with the latest advancements in technology and artificial intelligence?
Then you won’t want to miss out on my weekly newsletter, where I share insights, news, and analysis on all things related to tech and AI.
Creative Block | Aditya Anil | Substack
Explore AI and Tech insights that matter to you in a creative way. Weekly Tech and AI newsletter by Aditya Anil. Click…
creativeblock.substack.com
Published via Towards AI