Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Read by thought-leaders and decision-makers around the world. Phone Number: +1-650-246-9381 Email: [email protected]
228 Park Avenue South New York, NY 10003 United States
Website: Publisher: https://towardsai.net/#publisher Diversity Policy: https://towardsai.net/about Ethics Policy: https://towardsai.net/about Masthead: https://towardsai.net/about
Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Founders: Roberto Iriondo, , Job Title: Co-founder and Advisor Works for: Towards AI, Inc. Follow Roberto: X, LinkedIn, GitHub, Google Scholar, Towards AI Profile, Medium, ML@CMU, FreeCodeCamp, Crunchbase, Bloomberg, Roberto Iriondo, Generative AI Lab, Generative AI Lab Denis Piffaretti, Job Title: Co-founder Works for: Towards AI, Inc. Louie Peters, Job Title: Co-founder Works for: Towards AI, Inc. Louis-François Bouchard, Job Title: Co-founder Works for: Towards AI, Inc. Cover:
Towards AI Cover
Logo:
Towards AI Logo
Areas Served: Worldwide Alternate Name: Towards AI, Inc. Alternate Name: Towards AI Co. Alternate Name: towards ai Alternate Name: towardsai Alternate Name: towards.ai Alternate Name: tai Alternate Name: toward ai Alternate Name: toward.ai Alternate Name: Towards AI, Inc. Alternate Name: towardsai.net Alternate Name: pub.towardsai.net
5 stars – based on 497 reviews

Frequently Used, Contextual References

TODO: Remember to copy unique IDs whenever it needs used. i.e., URL: 304b2e42315e

Resources

Take our 85+ lesson From Beginner to Advanced LLM Developer Certification: From choosing a project to deploying a working product this is the most comprehensive and practical LLM course out there!

Publication

Data Science Essentials — AI Ethics (I)
Latest

Data Science Essentials — AI Ethics (I)

Last Updated on July 5, 2022 by Editorial Team

Author(s): Nitin Chauhan

Originally published on Towards AI the World’s Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses.

AI Ethics (I) — Data Science Essentials

I recently came across this question on determining our trust in AI systems. Given the current market constraints and enclosed nature of AI systems, it seems imperative that human needs are carefully considered before selecting data and training models for an AI system, and even then, whether or not such a system should be built in the first place.

Ever wondered what could be a possible approach to managing AI systems? Using human-centered design (HCD), systems can be designed to meet people’s needs.

The purpose of this blog is to instruct you on how to apply HCD to AI systems. Then, you will be able to use HCD to design issues in engaging real-world scenarios to test your knowledge.

Methodology

Let’s discuss the concept of Human-Centred Design; Courtesy: Author

An HCD approach to AI should be taken as early as possible — ideally, from when you begin to consider the possibility of building an AI system.

You will find the following six steps helpful as you explore how to apply HCD to the design of AI systems. Although, what HCD means for you will vary based on your industry, resources, organization, and the people you hope to serve.

1. Understand people’s needs to define the problem

The identification of unaddressed needs can be enhanced by working with people to understand their current journey pain points. Several methods can be used to accomplish this, including observing people as they navigate existing tools, conducting interviews, assembling focus groups, reading user feedback, and others. This step should involve the entire team — including your data scientists and engineers — so that each team member has a better understanding of the individuals they intend to serve. As a team, you should include and involve individuals with diverse perspectives and backgrounds, including people of different races, genders, and other characteristics. Use your problem definition to come up with creative and inclusive solutions.

To resolve the dosage errors associated with immunosuppressant drugs administered to patients after liver transplants, a company started by observing physicians, nurses, and other hospital staff during the liver transplant process. A video clip of the interview is shared with the entire development team. It discusses the current dosage determination process based on published guidelines and human judgment. Moreover, the company reviews research studies and assembles focus groups of former patients and their families. All team members take part in a freewheeling brainstorming session to develop potential solutions.

2. Ask if AI brings value to any potential solution.

Consider whether AI adds value once you are clear about which need you are addressing and how.

  • Do you think what you are trying to accomplish is a good outcome?
  • Is it likely that non-AI systems — such as rule-based solutions, which are easier to create, audit, and maintain — would be less effective than AI systems?
  • Would people find the task you are using AI boring, repetitive, or otherwise challenging to concentrate on?
  • In the past, have AI solutions proven more effective than other solutions for similar use cases?

An AI solution might not be necessary or appropriate if you answered no to any of these questions.

As a result of working with first responders, a disaster response agency is reducing the time required for rescues following disasters, such as floods. Humans must review drone and satellite images of stranded people, which increases rescue time. All agree that speeding up photo review would be a positive outcome since faster rescues could save more lives. Based on this determination, an AI-based image recognition system is likely more effective than an automated system that is not based on AI. The agency is also aware that AI-based image recognition tools have been successfully applied to aerial footage in other industries, such as agriculture. In light of this, the agency seeks to explore the possibility of an AI-based solution further.

3. Considering the potential harms of AI

As part of the design process, consider both the benefits and the potential harms of using artificial intelligence, from collecting and labeling data to training the model to deploy the system. Your privacy team can assist you in identifying hidden privacy issues and determining whether privacy-preserving techniques, such as differential privacy or federated learning, may be required. If you estimate that the harms will likely outweigh the benefits, you should not construct the system. You can reduce harm by integrating people — and therefore human judgment — more effectively into the data selection, the model’s training, and the system’s operation.

A company that offers online education wants to use artificial intelligence to ‘read’ and automatically score student essays while redirecting company employees to double-check random papers and review essays that are not understood by the AI system. This system would facilitate the company’s ability to return student scores quickly. The company formed a harms review committee to recommend that the system not be implemented. AI systems have the potential to pick up bias against specific patterns of language from training data and amplify it (harming people in groups that use these patterns of speech), students are encouraged to game the algorithm rather than improve their essays, and the role of education experts in the classroom is reduced. In contrast, the part of technology experts is increased.

4. Prototype, human-monitored solutions

To determine how people interact with your AI system, develop a non-AI prototype as quickly as possible. In addition to making prototyping easier, faster, and less expensive, it allows you to gain early insight into your users’ expectations and how to make their interactions more rewarding.

The user interface of your prototype should make it easy for people to understand how the system works, toggle settings, and provide feedback.

People who provide feedback should come from diverse backgrounds — including various ethnicities, genders, expertise, and other characteristics. They must also understand what they are doing and how they are doing it.

According to a movie streaming startup, artificial intelligence will be used to recommend movies to users based on their preferences and viewing history. A movie enthusiast then recommends movies that the users might be interested in based on a diverse group of users sharing their stated preferences and viewing history with the team. As a result of these conversations and feedback about recommended movies users enjoyed, the team adjusts the way movies are categorized based on feedback. A team can improve its product early by getting feedback from a diverse group of users and iterating often, rather than making expensive corrections in the future.

5. Transparency in the system

Suppose your AI system is implemented in the field. In that case, users should be able to challenge its recommendations or easily opt-out of using it. You should have systems and tools to accept, monitor, and respond to challenges.

Consider the perspective of a user: if you are curious about or dissatisfied with the system’s recommendations, would you like to challenge them by:

  • Would you like an explanation of how the recommendation was reached?
  • Would you like to make a change to the information you have entered?
  • Is it possible to turn off certain features?
  • Is it possible to contact the product team via social media?
  • Are you taking any other action?

As part of its online video conferencing service, a company uses artificial intelligence to blur the background during video calls automatically. While the company has successfully tested its product with a diverse group of people from various ethnic backgrounds, it is aware that there will be instances where the video will fail to focus correctly on a person’s face. Accordingly, the blurring feature is optional, and a button has been added for customers to report problems. Additionally, the company creates a customer service team to monitor social media and other online forums for complaints.

6. Build-in safety measures

It is vital to ensure that users are protected against harm by safety measures. This is accomplished by providing a system reliably delivers high-quality outcomes by minimizing unintended behavior and accidents. This can only be achieved through extensive and continuous evaluation and testing. Develop processes around your AI system to continuously monitor performance, delivery of intended benefits, reduction of harm, fairness metrics, and any changes in how people use it.

The kind of safety measures your system needs depends on its purpose and the types of harm it could cause. Examine the safety measures integrated into similar non-AI products and services. Next, review your earlier analysis of the potential liability associated with using artificial intelligence in your system.

You should ensure that a human oversees your AI system:

  • Create a human red team that plays the role of a person trying to manipulate your system into unintended behavior, and then strengthen your system against such manipulations.
  • Establish the best methods for monitoring the system’s safety once it has been deployed.
  • Explore ways for your AI system to quickly alert a human when it is faced with a challenging case.
  • Create ways for users and others to flag potential safety issues.

To ensure that its product is safe, a company developing a widely used artificial intelligence-enabled voice assistant creates a permanent internal ‘red team’ to play the role of bad actors who wish to manipulate the product. The red team is responsible for devising adversarial inputs that fool the voice assistant. The company then uses negative training to improve the product’s safety by guarding it against similar malicious inputs.

Business Use-Case Examples

Banking loans, credit card loans and limit setting are good cases to evaluate ML models Ethics; Courtesy: Author

1. Banking

A bank uses AI to identify suspicious international money transfers for potential money laundering, anti-terrorist financing, and sanctions concerns. Although the system has proved to be more effective than the bank’s current processes, it still frequently flags legitimate transactions for review.

Question: How can the bank reduce the potential harm the system could cause?

Solution: The AI system may be biased against certain groups, flagging, delaying, or denying their legitimate transactions at a higher rate than other groups, which could cause potential harm. By selecting data carefully, identifying and mitigating potential bias (see Lessons 3 and 4), ensuring appropriate and continuous human oversight of the system once it has been operational, and not operationalizing the strategy until potential bias has been addressed, the bank can reduce these harms.

Chatbots are a good example to evaluate the difference in sentiments and tone of the chat generate by ML models Ethics; Courtesy: Author

2. Chatbot

Public health agencies in a country usually deal with a large volume of telephone and email inquiries from people seeking health information during an ongoing pandemic outbreak. According to the agency, an interactive chatbot powered by artificial intelligence would assist people in quickly getting the specific information they need while reducing the workload on agency employees.

Question: When should the agency begin prototyping a chatbot?

  • Build out the AI solution to the best of its ability before testing it with a diverse group of potential users.
  • Build a non-AI prototype quickly and start testing it with a diverse group of potential users.

Solution: It is appropriate to build a non-AI prototype quickly and test it with a wide range of potential users as soon as possible. Compared to iterating on an AI prototype, iterating on a non-AI prototype is faster, easier, and less costly. In addition, iterating on a non-AI prototype provides information regarding user expectations, interactions, and needs. This information will guide the design of AI prototypes in the future.

Key Takeaways

HCD is one of the ways to manage your AI systems, ensuring the rule and regulations are followed per the business guide, and there are procedures set up to monitor and evaluate any discrepancies. In a follow-up article, I’ll discuss the concepts of understanding BIAS and ways to eliminate them from your predictions. Meanwhile, you can read through some of the references to HCD:

  1. Human-Centered Design: https://www.researchgate.net/publication/346785842_Human-Centred_Design_and_its_Inherent_Ethical_Qualities
  2. Ethics of Artificial Intelligence: https://intelligence.org/files/EthicsofAI.pdf
  3. Kaggle Intro to AI Ethics: https://www.kaggle.com/learn/intro-to-ai-ethics

If you like this article, follow me for more relevant content. For a new blog, or article alerts click subscribe. Also, feel free to connect with me on LinkedIn, and let’s be part of an engaging network.


Data Science Essentials — AI Ethics (I) was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.

Join thousands of data leaders on the AI newsletter. It’s free, we don’t spam, and we never share your email address. Keep up to date with the latest work in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI

Feedback ↓