Why Most AI Platforms Won’t Actually Help You in College Admissions
Author(s): Ajay Natarajan
Originally published on Towards AI.
Large Language Models (LLMs) have been mainstream for close to two years since ChatGPT's November 2022 launch. Of the smattering of AI platforms that spun up in its wake, most, excluding the ones that "sell picks and shovels," fall into one of three buckets.
- Copilots for X industry: an AI assistant for a working professional, e.g. a physician, lawyer, or software engineer
- General tools for handling unstructured data, e.g. meeting summarization, internal corporate document search, Perplexity
- Chatbots for X industry, e.g. Khan Academy's Khanmigo platform and CollegeVine's Ivy platform
#1 … very promising
And we will see a few winners emerge over the next several years, potentially two to three per vertical. Whoever wins will do so because they schematized critical workflows in a large industry and leveraged AI to solve the pain points in those workflows.
An example is a demo I saw from Ambience Healthcare last week. Their flagship AI product is "Auto Scribe," which records the audio of a patient appointment and, once it ends, uses LLMs to fill out sections of the clinical note. These are served to the clinician as suggestions (symptoms, billing codes, etc.), and they can quickly click and drag to keep what is correct and discard what is not. Because LLMs can make mistakes and hallucinate, this review step limits the impact of those errors until the broader AI community finds a way to handle them at the foundational level.
These copilots take previously labor-intensive processes, let AI do the grunt work, and then serve the results to an end user who decides what is right and what is wrong. A minimal sketch of this pattern follows.
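To make the pattern concrete, here is a rough sketch of a human-in-the-loop copilot built with the OpenAI Python SDK. The model choices, prompt, and review flow are illustrative assumptions of mine, not Ambience Healthcare's actual system; the point is that the LLM only drafts suggestions and a human approves each one.

```python
# Sketch of the copilot pattern: transcribe a visit, have an LLM draft structured
# suggestions, and keep a human in the loop to accept or reject each one.
# Model names and prompts are illustrative assumptions, not any vendor's real pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe_visit(audio_path: str) -> str:
    """Turn the recorded appointment audio into raw text."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
    return transcript.text


def draft_suggestions(transcript: str) -> list[str]:
    """Ask the LLM for draft note entries; each line is only a suggestion."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "From this visit transcript, list candidate symptoms "
                           "and billing codes, one suggestion per line.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.splitlines()


def clinician_review(suggestions: list[str]) -> list[str]:
    """The human decides: keep only the suggestions the clinician confirms."""
    approved = []
    for item in suggestions:
        if input(f"Accept? {item!r} [y/N] ").strip().lower().startswith("y"):
            approved.append(item)
    return approved


if __name__ == "__main__":
    notes = clinician_review(draft_suggestions(transcribe_visit("visit.wav")))
    print("\n".join(notes))
```

The specific models matter less than the workflow: the AI drafts, and the clinician decides what goes into the record.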
#2 … the verdict is still out
OpenAI's launch of GPT-4o may have killed several of these. As AI models become increasingly powerful and multimodal, they will leave a trail of these startups behind them.
For example, meeting summarization platforms may go down in flames now that OpenAI offers an easy multimodal integration from audio → text → summary. And Zoom has baked its own generative AI toolkit into its platform at no additional cost.
The platforms in this #2 bucket inadvertently bet against the slow and steady advancement of AI, and that is not a wager anyone should take.
#3 … we need change
You may have noticed the examples I gave of platforms that fall into the #3 bucket were both in education.
My worries about these platforms only grow when students are the core users.
Students need guidance, not an open-ended chatbot.
When we give a chatbot to students, they are forced to guess and figure out the most effective way to interact with it. Instead, we should be serving up the best, most personalized advice automatically.
Take college application essays as a prime example. Every platform I have seen that tries to help looks something like an open-ended chat box: the student pastes in an essay and asks for feedback.
This is a well-intentioned approach, no doubt. But students are left guessing at how to prompt for the best possible feedback. Beyond that, it is tough to tell whether the advice being served up is even accurate.
In a topic like college admissions, where advice online is often incorrect, AI used in chatbot form with no additional data may be counterproductive.
LLMs are fundamentally just giant models that were exposed to almost everything on the internet. This means the biases and information from articles and public forums online are also reflected in the outputs the models spit out.
And therein lies the problem. In a subfield riddled with misinformation, platforms have already spun up claiming to give high-quality feedback. But who knows whether they are even pointing students in the right direction?
Bad data in means bad data out.
Unless these platforms are forced to ingest high-quality data, their AI-generated feedback will merely reflect the low-quality advice sprinkled throughout the internet.
And that is what I have seen in every AI platform that purports to help students with their college admissions essays.
This is a huge problem that I am disappointed no one is talking about — perhaps because AI is still so new to the college admissions space.
#4 … a different approach
My proposed fix was Athena. The platform does two things right:
- It's not a chatbot. A student uploads their essay and gets it graded with 2+ pages of actionable feedback. The student never has to figure out how to coax the best possible output from the model.
- I forced the LLM to ground its feedback in hundreds of application essays that got students into top universities. It's a data-driven approach instead of letting whatever happens to be on the internet dictate the feedback (a minimal sketch of this kind of grounding follows below).
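Here is a rough sketch of what that grounding could look like: embed a corpus of accepted essays, retrieve the examples closest to the student's draft, and grade against them in a single pre-built prompt. The model names, retrieval scheme, and rubric wording are illustrative assumptions, not Athena's actual implementation.

```python
# Sketch of data-grounded essay feedback: retrieve the most relevant accepted essays
# and grade the student's draft against them in one fixed prompt (no chat loop).
# Models, prompts, and the similarity scheme are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> list[list[float]]:
    """Embed essays so we can find the accepted examples closest to the draft."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]


def most_similar(draft_vec: list[float], corpus: list[str], k: int = 3) -> list[str]:
    """Rank the accepted essays by cosine similarity to the draft and keep the top k."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5
        return dot / norm

    corpus_vecs = embed(corpus)
    ranked = sorted(zip(corpus, corpus_vecs),
                    key=lambda pair: cosine(draft_vec, pair[1]),
                    reverse=True)
    return [essay for essay, _ in ranked[:k]]


def grade_essay(draft: str, accepted_essays: list[str]) -> str:
    """One pre-built prompt: the student only uploads a draft and reads the feedback."""
    examples = "\n\n---\n\n".join(most_similar(embed([draft])[0], accepted_essays))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are grading a college application essay. Ground your "
                           "feedback in the accepted example essays provided, not in "
                           "generic internet advice. Return a score and detailed, "
                           "actionable feedback.",
            },
            {"role": "user",
             "content": f"Accepted examples:\n{examples}\n\nStudent draft:\n{draft}"},
        ],
    )
    return response.choices[0].message.content
```

The design choice that matters is that the prompting and the grounding both happen behind the scenes; the student only uploads a draft and reads the result.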
I hope ecosystems of AI platforms just like Athena will spin up to support students. Not open-ended chatbots. But instead…
Easy-to-use, data-driven, guided platforms that let students extract the true value of these new AI advancements.
AI provides a path forward to bring more equity to the college admissions process. The same feedback private college consultants once charged tens of thousands of dollars for can now be served at scale at a significantly lower price point. But only if we recognize, and are rigorously transparent about, AI's strengths and weaknesses.
We are at this beautiful inflection point where
Human + AI is greater than either human or AI alone
And as a software community, the onus is on us to develop platforms that give students the right tools to interface with AI, and to squarely address its weaknesses instead of pulling the wool over our own eyes.
Until next time!