
The Dark Side of AI — How Can The Creators Help?!

Last Updated on November 15, 2023 by Editorial Team

Author(s): Dr. Sreeram Mullankandy

Originally published on Towards AI.

Photo by Ramón Salinero on Unsplash

Not a single day goes by without us learning about something astonishing that an AI tool has done. Yes, we are in uncharted territory. The AI revolution is moving forward at a blistering speed. So are the concerns and fears associated with it. The truth is that many of those fears are real!

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”

— Ray Kurzweil

However, that doesn't mean we should be hesitant about developing AI. The overall effect is largely positive, be it in healthcare, autonomous driving, or any other application. Hence, with the right set of safeguards, we should be able to push the limits ethically and responsibly.

Here are a few considerations and frameworks that will aid in responsible AI development — for those who want to be part of the solution.

Agree upon the Principles

One of the first and most vital steps in addressing these dilemmas at an organizational level is to define your principles clearly. Once your principles are defined, decision-making becomes easier, and the probability of making decisions that violate your organizational values becomes lower. Google has created its 'Artificial Intelligence Principles', and Microsoft has created its 'Responsible AI principles'.

Photo by Brett Jordan on Unsplash

OECD (Organization for Economic Cooperation and Development) has created the OECD AI Principles, which promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. More than 90 countries have adopted these principles as of today.

In 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System.

The consulting firm PwC has consolidated more than 90 sets of ethical principles, containing over 200 individual principles, into nine core principles (see below). Check out their responsible AI toolkit here.

Source: PwC

Build in Diversity to Address Bias

1. Diversity in the AI workforce: To address bias effectively, organizations must ensure inclusion and participation in every facet of their AI portfolio: research, development, deployment, and maintenance. That is easier said than done. According to an AI Index survey in 2021, the two main factors contributing to underrepresentation are the lack of role models and the lack of community.

Source: AI Index report 2020

2. Diversity within the data sets: Ensure diverse representation in the data sets on which the algorithm is trained. It is not easy to obtain data sets that represent the diversity of the population.
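As a concrete starting point, below is a minimal sketch, assuming a tabular training set loaded with pandas and a hypothetical demographic column, of how one might audit group representation before training. It is an illustration rather than a complete fairness audit.

```python
# Minimal sketch: check how demographic groups are represented in a training
# set before fitting a model. The column name "ethnicity" is hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Return each group's row count and share of the dataset."""
    counts = df[group_col].value_counts(dropna=False)
    return pd.DataFrame({
        "count": counts,
        "share": (counts / len(df)).round(3),
    })

# Example usage with a hypothetical training file:
# train_df = pd.read_csv("training_data.csv")
# print(representation_report(train_df, group_col="ethnicity"))
```

Groups whose share is far below their share of the target population are a signal to collect more data or re-weight before training.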

Build in Privacy

How do we ensure that personally identifiable data is safe? Preventing the collection of data altogether is not realistic, so organizations must build privacy into data collection, data storage, and utilization.

Photo by Claudio Schwarz on Unsplash
  1. Consent — The collection of data must ensure that the subjects consent to its use. People should also be able to revoke their consent or have their personal data removed. The EU has set the course in this regard: via GDPR, it is already illegal to process even audio or video data containing personally identifiable information without the explicit consent of the people from whom it was collected. It is reasonable to assume that other nations will follow suit in due time.
  2. Minimum necessary data — The organizations should ensure that they define, collect, and use only the minimum required data to train an algorithm. Use only what is necessary.
  3. De-identify data — The data used must be in a de-identified format unless there is an explicit need to reveal personally identifiable information. Even in that case, the data disclosure should conform to the regulations of the specific jurisdiction. Healthcare is a leader in this regard: there are clearly stated laws and regulations governing access to PII (Personally Identifiable Information) and PHI (Protected Health Information). A minimal pseudonymization sketch follows this list.
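To make the de-identification point concrete, here is a minimal sketch that pseudonymizes direct identifiers with a salted hash before records reach a training pipeline. The field names are hypothetical, and hashing alone does not guarantee anonymity; real de-identification must follow the applicable rules (e.g., HIPAA Safe Harbor or GDPR pseudonymization guidance).

```python
# Minimal sketch: replace direct identifiers with salted hashes so records
# can still be linked internally without exposing who they refer to.
# Field names are hypothetical; this is not a complete anonymization scheme.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized."""
    clean = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            clean[field] = digest[:12]  # shortened pseudonym
        else:
            clean[field] = value
    return clean

# Example:
# pseudonymize({"name": "Jane Doe", "age": 42, "email": "j@example.org"}, salt="s3cret")
```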

Build in Safety

How do you make sure that the AI works as expected and does not end up doing anything unintended? Or what if someone hacks or misleads the AI system into conducting illegal acts?

DeepMind has made one of the most effective moves in this direction. It has laid out a three-pronged approach to make sure that AI systems work as intended and to mitigate adverse outcomes as much as possible. According to DeepMind, we can ensure technical AI safety by focusing on three pillars.

Photo by Towfiqu barbhuiya on Unsplash
  1. Specification — Define the purpose of the system and identify the gaps between the ideal specification (wishes), the design specification (blueprint), and the revealed specification (behavior).
  2. Robustness — Ensure that the system can withstand perturbations (a simple stability check is sketched after this list).
  3. Assurance — Actively monitor and control the behavior of the system and intervene when there are deviations.
Source: DeepMind
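As a small illustration of the robustness pillar (not DeepMind's own method), here is a minimal sketch that checks whether a trained classifier's predictions stay stable under small random perturbations of its inputs. The model is assumed to be any scikit-learn-style classifier, and the noise scale is an arbitrary choice for the example.

```python
# Minimal sketch of a robustness check: do predictions stay the same when
# small Gaussian perturbations are added to the inputs?
import numpy as np

def prediction_stability(model, X: np.ndarray, noise_scale: float = 0.01,
                         n_trials: int = 20, seed: int = 0) -> float:
    """Average fraction of predictions unchanged across perturbation trials."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    stable = 0.0
    for _ in range(n_trials):
        X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        stable += np.mean(model.predict(X_noisy) == baseline)
    return stable / n_trials

# A score well below 1.0 is a signal to investigate before deployment.
```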

Build in Accountability

Accountability is one of the hardest aspects of AI that we need to tackle, and it is hard because of its socio-technical nature. The following are the major pieces of the puzzle, according to Stephen Sanford, Claudio Novelli, Mariarosaria Taddeo, and Luciano Floridi.

  1. Governance structures — The goal is to ensure that there are clearly defined governance structures when it comes to AI. This includes clarity of goals, responsibilities, processes, documentation, and monitoring.
  2. Compliance standards — The goal is to clarify the ethical and moral standards that apply to the system and its application. This at least denotes the intention behind the behavior of the system.
  3. Reporting — The goal here is to make sure that the usage of the system and its impact are recorded so that they can be used for justification or explanation as needed (a minimal logging sketch follows this list).
  4. Oversight — The goal is to enable scrutiny on an ongoing basis. Internal and external audits are beneficial. This includes examining the data, obtaining evidence, and evaluating the conduct of the system. It may include judicial review as well, when necessary.
  5. Enforcement — The goal is to determine the consequences for the organization and the other stakeholders involved. These may include sanctions, authorizations, and prohibitions.
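To illustrate the reporting piece, here is a minimal sketch that appends one structured record per automated decision to an audit log. The field names and log destination are hypothetical; a production system would also need access controls, retention policies, and tamper protection.

```python
# Minimal sketch of decision reporting: one JSON line per automated decision,
# so usage and impact can be reviewed or justified later.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("decision_audit.jsonl")  # hypothetical log destination

def record_decision(model_version: str, inputs: dict, output, reviewer=None) -> None:
    """Append a single decision record to the audit log."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # None if the decision was fully automated
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example:
# record_decision("credit-risk-v3", {"income": 52000, "age": 31}, output="approve")
```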

Build in Transparency and Explainability

Explainability in AI (XAI) is an important field in itself and has gained a lot of attention in recent years. In simpler terms, it is the ability to bring transparency into the reasons and factors that lead an AI algorithm to reach a specific conclusion. GDPR has already added a 'Right to an Explanation' in Recital 71, which means that data subjects can ask a company to explain how an algorithm made an automated decision about them. This becomes tricky as we try to implement AI in industries and processes that require a high degree of trust, such as law enforcement and healthcare.

The problem is that the higher the accuracy and non-linearity of the model, the more difficult it is to explain.

Source: Machine Learning for 5G/B5G Mobile and Wireless Communications: Potential, Limitations, and Future Directions

Simpler models, such as rule-based classifiers, linear regression models, decision trees, k-NN, and Bayesian models, are mostly white boxes and hence directly explainable. Complex models are mostly black boxes.

  1. Specialized algorithms: Complex models such as recurrent neural networks are black-box models, but they can still be given post-hoc explainability via model-agnostic or tailored algorithms built for this purpose. The most popular are LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations); a short SHAP example follows this list. Other tools, such as the What-If Tool, DeepLIFT, and AIX360, are also widely used.
  2. Model choice: The tools above can bring explainability into AI algorithms, but there are also cases in which a black-box model is used when a white-box model would suffice. Directly explainable white-box models make life easier when it comes to explainability. If a simpler model meets the required sensitivity and specificity for the use case, consider a more linear, explainable model instead of a complex, hard-to-explain one.
  3. Transparency cards: Some companies, like Google and IBM, offer their own explainability tools for AI. Google's Explainable AI solution, for example, is available for use. Google has also launched Model Cards to go along with its AI models, which make the limitations of the corresponding models clear in terms of their training data, algorithm, and output.
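To make the post-hoc route concrete, here is a minimal sketch using the open-source shap package with a scikit-learn random forest. The dataset and model are purely illustrative; the same idea applies to other model types via other SHAP explainers.

```python
# Minimal sketch of post-hoc explainability with SHAP on a tree-based model.
# Assumes the shap and scikit-learn packages are installed.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Shows which features pushed individual predictions up or down.
shap.summary_plot(shap_values, X.iloc[:100])
```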

It must be noted that NIST differentiates between explainability, interpretability, and transparency. For the sake of simplicity, I have used the terms interchangeably under explainability.

When it comes to healthcare, CHAI (the Coalition for Health AI) has come up with a 'Blueprint for Trustworthy AI', a comprehensive approach to ensuring transparency in health AI. It is well worth a read for anyone in health tech working on AI systems.

Build in Risk Assessment and Mitigation

Organizations must adopt an end-to-end risk management strategy to prevent ethical pitfalls when implementing AI solutions. There are multiple isolated frameworks in use. The NIST (National Institute of Standards and Technology) AI Risk Management Framework was developed in collaboration with private- and public-sector organizations that work in the AI space. It is intended for voluntary use and is expected to boost the trustworthiness of AI solutions. A lightweight illustration of tracking such risks follows below.

Source: NIST
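As a lightweight illustration, and not part of the NIST framework itself, here is a sketch of a simple risk register whose entries reference the AI RMF's four core functions (Govern, Map, Measure, Manage). The fields and scoring scheme are assumptions made for the example.

```python
# Minimal sketch of an AI risk register. The entry fields and the
# likelihood-times-impact scoring are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str    # e.g., "training data under-represents a key group"
    rmf_function: str   # "Govern", "Map", "Measure", or "Manage"
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (negligible) .. 5 (severe)
    mitigation: str = ""
    owner: str = ""

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("Biased outcomes for under-represented groups", "Measure", 4, 5,
              mitigation="Re-balance training data; run a fairness audit",
              owner="ML lead"),
    RiskEntry("No documented owner for model incidents", "Govern", 3, 4,
              mitigation="Assign an accountable owner per deployed model",
              owner="CTO office"),
]

# Review the highest-scoring risks first.
for entry in sorted(register, key=lambda r: r.score, reverse=True):
    print(entry.score, entry.rmf_function, entry.description)
```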

Long story short…

Technology will move forward whether or not you like it. Such was the case with industrialization, electricity, and computers. Such will be the case with AI as well. AI is progressing too quickly for laws to catch up, and so are the potential dangers associated with it. Hence, it is incumbent upon those who develop it to take a responsible approach in the best interest of our society. What we must do is put the right frameworks in place so the technology can flourish in a safe and responsible manner.

“With great power comes great responsibility.” — Spider-Man

Now you have a great starting point above. The question is whether you are willing to step up to the plate and take responsibility, or wait for rules and regulations to force you to do so. You know what the right thing to do is. I rest my case!

  • 👏 If you like my article, please give it as many claps as you can and subscribe! It will mean the world to us content creators and lets us produce more awesome articles in the future ❤️
  • 🔔 Follow me on Medium | LinkedIn | Twitter

Thank you for your time and support. Much appreciated!


Published via Towards AI
