
Amazon Scraps Secret AI Recruiting Engine that Showed Biases Against Women


Last Updated on September 17, 2024 by Editorial Team

Credit: The Verge | “It is the mission of our generation to build fair AI.” ~ Omar U. Florez

Distinguished Professor Stuart Evans mentioned during a lecture at Carnegie Mellon University how biases in machine learning algorithms can negatively affect our society, whether they are unconsciously introduced through supervised learning or missed during audits of other types of machine learning. In this case, Amazon’s AI research team had been building a machine-learning-based recruiting engine since 2014, which reviewed applicants’ resumes with the aim of intelligently automating the search for top talent.

Quoting an AI research scientist on the team: “Everyone wanted this Holy Grail. They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.” However, by 2015, Amazon realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

Amazon’s recruiting model was trained to vet applicants by analyzing parameters in resumes submitted to the company over a 10-year period. Because most of those resumes came from men, reflecting male dominance across the tech industry, the model learned to rank male candidates as the most ideal; the data fed to it was anything but neutral on gender.
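As a toy illustration of how this happens (entirely hypothetical data and tokens, not Amazon’s), a scoring rule fit to a male-skewed hiring history simply reproduces that history: any token that appears mostly on rejected resumes, such as “women’s chess club”, inherits a low score.

```python
from collections import defaultdict

# Hypothetical historical outcomes: each record is (resume tokens, hired?)
# drawn from a male-dominated hiring history.
history = [
    ({"java", "chess club"}, True),
    ({"python", "chess club"}, True),
    ({"java", "rugby"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's rugby"}, False),
    ({"python", "rugby"}, True),
]

# Score each token by the hire rate among resumes that contain it.
counts = defaultdict(lambda: [0, 0])  # token -> [hires, total]
for tokens, hired in history:
    for t in tokens:
        counts[t][1] += 1
        counts[t][0] += int(hired)

scores = {t: hires / total for t, (hires, total) in counts.items()}

# Tokens signalling gender inherit the historical skew:
print(scores["chess club"])          # 1.0
print(scores["women's chess club"])  # 0.0
```

The model never sees a “gender” column; the bias arrives entirely through the outcomes it is trained to imitate.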

Amazon’s research team stated that it modified the central algorithms to make the model neutral to these explicit gender signals; however, that was no guarantee the engine would not devise other ways of sorting candidates (i.e., male-dominant keywords in applicants’ resumes) that could prove discriminatory.
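One way to probe for such proxy features, sketched below on made-up resumes with hypothetical keywords, is to audit how strongly each remaining keyword is associated with the protected attribute. A real audit would use a proper statistical test; even raw count differences flag candidates for review.

```python
from collections import Counter

# Toy proxy audit (hypothetical resumes and keywords). Even after explicit
# gender terms are removed, keywords whose usage differs sharply between
# groups can still act as proxies for gender.
resumes = [
    ({"executed", "captured"}, "M"),
    ({"executed", "java"}, "M"),
    ({"collaborated", "python"}, "F"),
    ({"collaborated", "java"}, "F"),
]

by_group = {"M": Counter(), "F": Counter()}
for tokens, group in resumes:
    by_group[group].update(tokens)

vocab = set(by_group["M"]) | set(by_group["F"])
# Flag tokens whose counts differ between groups as candidate proxies.
proxies = {t for t in vocab if by_group["M"][t] != by_group["F"][t]}

print(sorted(proxies))  # "executed"/"collaborated" flagged; balanced "java" is not
```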

Employers have long dreamed of harnessing technology to widen the hiring process and reduce reliance on the subjective opinions of human recruiters. Nevertheless, ML research scientists such as Nihar Shah of the Machine Learning Department at Carnegie Mellon University, whose research spans statistical learning theory and game theory with a focus on learning from people, say there is still much work to do.

“How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable – that’s still quite far off,” Professor Shah mentioned.

Credits: Han Huang | Data Visualization Developer | Reuters Graphics

Even after the algorithms were modified, masculine-dominant keywords on resumes remained pivotal to how Amazon’s recruiting models ranked candidates. The research group created 500 models focused on specific job functions and locations, and taught each to recognize over 50,000 parameters that showed up on applicants’ resumes. The algorithms ultimately learned to assign little significance to skills that were common across all applicants, i.e., programming languages, platforms used, etc.
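That last behavior resembles standard inverse-document-frequency weighting: a skill listed on every resume carries no discriminative signal, so it gets a near-zero weight. A minimal sketch on made-up resumes:

```python
import math

# IDF-style weighting over toy resumes: terms appearing everywhere
# score ~0, while rare terms are weighted highly.
resumes = [
    {"python", "java", "leadership"},
    {"python", "java", "kubernetes"},
    {"python", "java", "chess"},
    {"python", "java", "public speaking"},
]

n = len(resumes)
vocab = set().union(*resumes)
df = {t: sum(t in r for r in resumes) for t in vocab}   # document frequency
idf = {t: math.log(n / df[t]) for t in vocab}           # inverse document frequency

print(idf["python"])           # 0.0  -- on every resume, no signal
print(round(idf["chess"], 2))  # 1.39 -- rare, weighted highly
```

The danger the article describes is that, under such weighting, whatever rare terms remain predictive can include gender-correlated ones.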

Final notes:

It is important for our society to continue its focus on machine learning, but with special attention to biases, which are sometimes unconsciously added to these programs. Thankfully, Amazon’s AI research team was able to recognize such biases and act upon them. Nevertheless, rhetorically speaking: what if these biases had gone unrecognized, and the biased ML decision engine had been rolled into general day-to-day talent recruiting at the company?

The impact, along with the consequences, would have been atrocious.

I am always open to feedback; please share in the comments if you see something that may need revisiting. Thank you for reading!

DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University or other companies (directly or indirectly) associated with the author. These writings are not intended to be final products but rather a reflection of current thinking, along with being a catalyst for discussion and improvement.

References:

“Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, www.reuters.com

“How Do Machine Learning Algorithms Learn Bias?” guest written by Rebecca Njeri, Towards Data Science, towardsdatascience.com

Nihar B. Shah, Assistant Professor in MLD and CSD at CMU, www.cs.cmu.edu

“Amazon built an AI tool to hire people but had to shut it down because it was discriminating…,” Business Insider, www.businessinsider.com
