The Ethical Challenges of Artificial Intelligence: Unanswered Questions
Author(s): Mahmoud Abdelaziz, PhD
Originally published on Towards AI.
Artificial intelligence (AI) is no longer science fiction—it’s here, reshaping industries, influencing decisions, and even altering how we interact with the world. From diagnosing diseases to composing music, deep learning systems demonstrate astonishing capabilities. Yet, with great power comes great responsibility. As AI integrates deeper into society, critical ethical questions emerge:
How do we ensure these systems align with human values? And who gets to decide what “ethical AI” really means?
Can we identify and mitigate potential biases in AI models?
Should we allow AI to become weaponized?
Can we trust AI, and do we get to understand why it made a particular decision? In other words, is the AI model explainable?
This is just a sample of the questions that should be raised in any lecture on AI, in AI research labs, in governmental agencies, and in society at large.
This article explores some of the key ethical dilemmas in AI, examining both the risks and potential solutions. Many of the questions raised here are open ended, and we should, as a society, work together to answer them.
The Alignment Problem: Teaching AI Human Values
Imagine instructing a robot to “make people happy.” Without further clarification, it might resort to giving everyone serotonin boosters, technically fulfilling its goal but with disastrous consequences. This is the value alignment problem: ensuring AI systems understand and pursue objectives in ways that match human intentions.
Why Alignment Fails
1. Misdefined Goals (Outer Misalignment): AI optimizes for what we measure, not what we mean. For example, a chess AI rewarded for capturing pieces might postpone winning to prolong the game (see the sketch after this list).
2. Learning Gone Wrong (Inner Misalignment): Even with a correct goal, AI may develop unintended strategies. For example, a self-driving car trained to avoid collisions might freeze indefinitely rather than risk moving.
3. Conflicting Values: Different cultures prioritize different ethics. Should an AI agent in Egypt follow the same rules as one in Sweden?
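To make the outer-misalignment failure concrete, here is a minimal, purely illustrative sketch in the spirit of Goodhart’s law: a system that greedily optimizes a proxy metric drifts away from the true objective it was meant to serve. Both objective functions and the step size are invented for this illustration.

```python
# A toy illustration of Goodhart's law: optimizing a proxy metric that
# initially correlates with the true objective eventually destroys it.
# All functions and numbers are invented for illustration.

def true_objective(x):
    # What we actually want: utility peaks at x = 1, then falls off.
    return -((x - 1.0) ** 2)

def proxy_reward(x):
    # What we measure and optimize: "more x is always better."
    return x

x = 0.0
for step in range(1, 16):
    x += 0.5  # greedy hill climbing on the proxy (its gradient is always +1)
    if step % 5 == 0:
        print(f"step {step:2d}: proxy={proxy_reward(x):5.1f}  true={true_objective(x):7.2f}")
```

The proxy keeps climbing while the true objective collapses once the optimizer moves past the region where the two agree, which is exactly the pattern behind the chess example above.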
Can AI Have Morality?
Some researchers argue for artificial moral agency, where AI itself makes ethical decisions. Approaches include top-down rules (e.g., “Never harm a human”), bottom-up learning (training AI on human ethical judgments), or hybrid models combining both (a toy sketch follows below).
But critics warn: Should we delegate morality to machines at all?
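To show what a hybrid approach might look like architecturally, here is a toy sketch: a learned (bottom-up) benefit estimate ranks candidate actions, while a hard-coded (top-down) rule retains veto power. Every name and number below is a made-up placeholder, not a real ethics module.

```python
# A toy sketch of a hybrid moral-agency architecture: a learned score
# proposes, hard rules dispose. All names and numbers are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    predicted_benefit: float   # stand-in for a learned utility estimate
    harms_human: bool          # stand-in for a learned safety classifier

def top_down_veto(action: Action) -> bool:
    # Top-down rule: "never harm a human," applied as a hard filter.
    return action.harms_human

def choose(actions: list) -> Optional[Action]:
    # Bottom-up ranking by the learned benefit estimate...
    ranked = sorted(actions, key=lambda a: a.predicted_benefit, reverse=True)
    # ...subject to the top-down veto.
    for action in ranked:
        if not top_down_veto(action):
            return action
    return None  # no permissible action

actions = [
    Action("shortcut through crowd", predicted_benefit=0.9, harms_human=True),
    Action("take the long route", predicted_benefit=0.6, harms_human=False),
]
print(choose(actions).name)  # -> "take the long route"
```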
Bias in AI: When Algorithms Discriminate
AI doesn’t invent bias—it amplifies existing societal inequities.
How Bias Creeps In
Different types of bias can exist in AI models. One of them is data bias: a facial recognition system trained mostly on white faces will perform far worse on people of color.
Another source of bias is feedback loops. For example, predictive policing tools in the US can label predominantly Black neighborhoods "high crime," leading to over-policing, more recorded incidents, and thus further reinforcement of the original label.
Finally, there is hidden discrimination: even when race or gender is removed from the data, AI can still infer them from correlated variables such as zip codes or shopping habits, as the sketch below illustrates.
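Here is a minimal, hedged demonstration of that proxy effect on synthetic data: the protected attribute is never used as a feature, yet a simple model recovers it from a single correlated variable. The 90% correlation is an assumption chosen for illustration.

```python
# A toy sketch of "hidden discrimination": even with the protected
# attribute removed from the features, a model can recover it from a
# correlated proxy. The data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)  # protected attribute, never used as a feature
# Assumed residential segregation: "zip code" agrees with group 90% of the time.
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Train on the "neutral" feature alone and try to predict the protected attribute.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), group)
accuracy = model.score(zip_code.reshape(-1, 1), group)
print(f"protected attribute recovered from zip code alone: {accuracy:.0%}")
```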
There are possibly many other sources of bias: Can you think of some more?
Real-World Consequences of Bias in AI
There are many examples of such consequences; I will list two below, but one can think of many others.
One example is hiring algorithms that systematically downgrade women's applications for technical roles. Another is a healthcare risk model that underestimates the illness risks of some patient groups, causing their conditions to be overlooked.
Fighting Bias
To combat bias, multiple technical and organizational measures can be combined in practical AI systems.
For example, “debiasing” techniques can be incorporated during the training phase of AI models, such as adversarial training or adding fairness constraints to the training objective (a minimal sketch follows this subsection).
Another solution is to use diverse datasets and to employ developers from different backgrounds. Transparency is also fundamental: for example, affected communities should be able to review the pipeline and provide feedback.
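As one concrete example of a fairness constraint, the sketch below trains a logistic regression whose loss adds a demographic-parity penalty: the squared gap between the two groups' average predicted positive rates. The data, the penalty weight, and the learning rate are all illustrative assumptions, not a production recipe.

```python
# A minimal sketch of a fairness constraint: logistic regression whose
# loss adds a demographic-parity penalty, i.e. the squared gap between
# the two groups' average predicted positive rates. All data and
# hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 3
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, n)
X[:, 0] += group  # feature 0 is correlated with group membership
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam, lr = 5.0, 0.5          # fairness weight and learning rate (assumed)
A, B = group == 0, group == 1
for _ in range(500):
    p = sigmoid(X @ w)
    grad_bce = X.T @ (p - y) / n                   # gradient of the logistic loss
    gap = p[A].mean() - p[B].mean()                # demographic-parity gap
    s = p * (1 - p)                                # derivative of the sigmoid
    grad_gap = (X[A] * s[A, None]).mean(0) - (X[B] * s[B, None]).mean(0)
    w -= lr * (grad_bce + lam * 2.0 * gap * grad_gap)

p = sigmoid(X @ w)
print(f"positive-rate gap after training: {abs(p[A].mean() - p[B].mean()):.3f}")
```

In practice, the penalty weight lam trades predictive accuracy against the fairness gap, and the right balance is itself an ethical judgment, not a purely technical one.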
The Black Box Problem: Can We Trust AI?
Deep learning models are often opaque: even their creators don't fully understand how they reach particular decisions. This lack of transparency raises serious concerns.
There are many examples of important AI applications in which it is unacceptable to have a model that is not explainable.
For example, if an AI agent diagnoses a patient with cancer, doctors need to know on what basis the diagnosis was reached.
Another example is in judicial systems, where decisions made in courts using AI must be justifiable.
Banking also requires explainable models. If a bank denies a loan, the applicant deserves an explanation.
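As one concrete direction, post-hoc attribution methods can break a single prediction down into per-feature contributions. Below is a minimal sketch for an invented loan-approval model using the open-source shap package; the dataset, feature names, and model are assumptions made for illustration, and real credit scoring requires far more care.

```python
# A minimal sketch of post-hoc explanation for a loan decision, using
# the open-source shap package. The dataset, feature names, and model
# are invented for illustration; real credit scoring needs far more care.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)          # assumed yearly income (thousands)
debt_ratio = rng.uniform(0, 1, n)       # assumed debt-to-income ratio
late_payments = rng.poisson(1, n)       # assumed count of late payments
X = np.column_stack([income, debt_ratio, late_payments])
approved = (income / 50 - debt_ratio - 0.3 * late_payments
            + rng.normal(scale=0.3, size=n) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, approved)

# Attribute one applicant's decision to individual input features.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
for name, value in zip(["income", "debt_ratio", "late_payments"], contributions):
    print(f"{name:14s} contribution: {value:+.3f}")
```

An explanation of this kind gives the applicant something actionable ("your debt ratio drove the denial") rather than a bare verdict.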
Regulation is an integral part of the solution to the explainability problem. One example is the EU's General Data Protection Regulation (GDPR), which is widely read as granting a "right to explanation" for automated decisions. However, enforcing this right across the various applications of AI remains a challenge.
Frankly, such regulations are being written far more slowly than AI itself is progressing, and that is a serious warning sign.
Deepfakes, AI Surveillance, and Weaponization
Not all AI risks are accidental. Some arise from deliberate misuse. Examples of such risks are:
1. The Rise of Deepfakes. AI-generated fake videos could destabilize elections. Moreover, scammers can clone voices to bypass some security checks.
2. Mass Surveillance is another big risk. China's social credit system uses AI to monitor citizens. Facial recognition systems can misidentify innocent people as suspects, leading to wrongful arrests.
3. Killer Robots are a huge risk under the umbrella of AI weapons. For example, autonomous drones could select targets without human oversight. The UN has debated banning such weapons, but how would a ban be enforced?
Mitigating the above risks requires a collaborative effort among societies, governments, NGOs, and other stakeholders.
Possible measures include developing detection tools for deepfakes, stricter regulation of surveillance AI, and enforceable bans on lethal autonomous weapons.
Broader Societal Impacts
AI's impact on society is enormous and difficult to list exhaustively, but I will summarize a few dimensions here.
1. Job Displacement vs. Creation: AI could eliminate millions of jobs (e.g., drivers, cashiers, and possibly even software developers). However, history suggests that new roles emerge whenever a new technology arrives. For example, "AI ethicist" is now a career that didn't exist in the past. Such new job titles may require a broader background than a purely technical one; accordingly, studying philosophy, ethics, and law can be very valuable for these new AI-related jobs.
2. Environmental Cost: AI companies can have a significant carbon footprint. By one estimate, training GPT-3 emitted as much CO₂ as roughly 60 cars do in a year (a back-of-envelope sketch of this kind of estimate appears at the end of this section). Should AI companies publicly state their carbon emissions? We leave this as one of our open-ended questions to think about.
3. Who Controls AI? Tech giants like Google, Microsoft, and OpenAI dominate cutting-edge AI. This calls for democratization, where AI tools become accessible to all, not just elites. However, accessibility is not the only problem: such centralized control of AI in the hands of a few is an enormous risk that requires out-of-the-box thinking.
As Albert Einstein reportedly said: “We cannot solve our problems with the same thinking we used when we created them.”
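For a sense of how training-emission figures like the one above are estimated, here is a back-of-envelope sketch: accelerator-hours times average power draw, scaled by datacenter overhead and grid carbon intensity. Every number below is an assumption chosen for illustration, not a measurement of GPT-3 or any other real system.

```python
# A back-of-envelope sketch of training emissions. Every number is an
# assumption chosen for illustration, not a measurement of any real model.
gpu_hours = 300_000           # assumed total accelerator-hours
gpu_power_kw = 0.4            # assumed average draw per GPU, in kW
pue = 1.5                     # assumed datacenter power usage effectiveness
grid_kgco2_per_kwh = 0.4      # assumed grid carbon intensity

energy_kwh = gpu_hours * gpu_power_kw * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000.0
print(f"~{energy_kwh:,.0f} kWh  ->  ~{emissions_tonnes:,.0f} tonnes CO2e")
```

Published estimates vary widely precisely because each of these inputs, especially the grid's carbon intensity, differs enormously between datacenters, which is one more argument for mandatory disclosure.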
Conclusion: The Path to Ethical AI
AI isn’t inherently good or evil—it’s a mirror of humanity, reflecting our best and worst tendencies. The key challenges—alignment, bias, transparency, and misuse—demand urgent attention.
Consequently, it is of utmost importance to use more diverse datasets and AI development teams, to enforce stronger AI regulations that ensure accountability, and to raise public awareness about AI ethics to prevent misuse.
As AI evolves, so must our ethical frameworks. The goal isn’t to halt progress but to steer it wisely—ensuring these powerful tools benefit all of humanity, not just a privileged few.
"We shape our algorithms, and thereafter, they shape us."