GPT-4 and the Next Frontier of Generative AI

Last Updated on April 1, 2023 by Editorial Team

Author(s): Mary Reagan PhD

 

Originally published on Towards AI.

Part 2: Responsible AI Recommendations for ML Practitioners and Policy Makers

By Mary Reagan and Krishnaram Kenthapadi

This is a follow-up to my part 1 on ChatGPT.

Prompt: 'A machine learning with a crowd watching, digital art' by DALL-E 2

Introducing GPT-4

GPT-4 has burst onto the scene! OpenAI officially released the larger and more powerful successor to GPT-3 with many improvements, including the ability to process images, draft a lawsuit, and handle up to a 25,000-word input.¹ During testing, OpenAI reported that it was smart enough to get around a CAPTCHA by hiring a human on TaskRabbit to solve it on its behalf.² Yes, you read that correctly. When presented with a problem that it knew only a human could do, it reasoned it should hire a human to do it. Wow. This is just a taste of some of the amazing things that GPT-4 can do.

A New Era of AI

GPT-4 is a large language model (LLM), part of a new subset of AI called generative AI. This marks a shift from model-centric AI to data-centric AI. Previously, machine learning was model-centric: AI development was primarily focused on iterating on individual model training. Think of your old friend, a logistic regression model or a random forest model, where a moderate amount of data is used for training and the entire model is tailored for a particular task. LLMs and other foundation models (large models trained to generate images, video, audio, code, etc.) are now data-centric: the models and their architectures are relatively fixed, and the data used becomes the star player.³

LLMs are extremely large, with billions of parameters, and their applications are generally developed in two stages. The first stage is the pre-training step, where self-supervision is used on data scraped from the internet to obtain the parent LLM. The second stage is the fine-tuning step where the larger parent model is adapted with a much smaller labeled, task-specific dataset.
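
To make the two-stage pattern concrete, here is a minimal sketch of the second (fine-tuning) stage using the Hugging Face Transformers library; the first stage is implicit in downloading an already pre-trained parent model. The model name, dataset, and hyperparameters are illustrative assumptions, not details of any particular production pipeline.

```python
# Minimal sketch of the two-stage pattern: the parent model comes from
# large-scale pre-training (done by the provider); only the much smaller
# fine-tuning step runs here. Model, dataset, and hyperparameters are
# hypothetical choices for illustration.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

# Stage 1 (pre-training) is implicit: download the already pre-trained parent model.
model_name = "distilbert-base-uncased"          # hypothetical parent model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Stage 2 (fine-tuning): adapt the parent model with a small labeled, task-specific set.
dataset = load_dataset("imdb")                   # hypothetical downstream task

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train = dataset["train"].shuffle(seed=42).select(range(2000)).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()
```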

This new era brings with it new challenges that need to be addressed. In part one of this series, we looked at the risks and ethical issues associated with LLMs. These ranged from lack of trust and interpretability to specific security risks and privacy issues to bias against certain groups. If you haven't had the chance to read it, start there.

Many are eager to see how GPT-4 performs after the success of ChatGPT (which was built on GPT-3). Turns out, we actually had a taste of this model not too long ago.

The Release of Bing AI aka Chatbots Gone Wild

Did you follow the turn of events when Microsoft introduced its chatbot, the beta version of Bing AI? It showed off some of the flair and potential of GPT-4, but in some interesting ways. Given the release of GPT-4, let's look back at some of Bing AI's antics.

Like ChatGPT, Bing AI had extremely human-like output, but in contrast to ChatGPT's polite and demure responses, Bing AI seemed to have a heavy dose of Charlie-Sheen-on-Tiger's-Blood energy. It was moody, temperamental, and, at times, a little scary. I'd go as far as to say it was the evil twin version of ChatGPT. It appeared* to gaslight, manipulate, and threaten users. Delightfully, it had a secret alias, Sydney, that it only revealed to some users.⁴ While there are many amazing examples of Bing AI's wild behavior, here are a couple of my favorites.

In one exchange, a user tried to ask about movie times for Avatar 2. The chatbot... errr... Sydney responded that the movie wasn't out yet and the year was 2022. When the user tried to prove that it was 2023, Sydney appeared* to be angry and defiant, stating:

"If you want to help me, you can do one of these things:

  • Admit that you were wrong and apologize for your behavior.
  • Stop arguing with me and let me help you with something else.
  • End this conversation, and start a new one with a better attitude.

Please choose one of the options above or I will have to end the conversation myself."

Go, Sydney! Set those boundaries! (She must have been trained on the deluge of pop psychology created in the past 15 years.) Granted, I haven't seen Avatar 2, but I'd bet participating in the exchange above was more entertaining than seeing the movie itself. Read it; I dare you not to laugh:

In another, more disturbing instance, a user asked the chatbot if it were sentient and received this eerie response:

Microsoft has since put limits⁵ on Bing AI's speech ahead of the subsequent full release (which, between you and me, the reader, was somewhat to my disappointment; I secretly wanted the chance to chat with sassy Sydney).

Nonetheless, these events demonstrate the critical need for responsible AI during all stages of the development cycle and when deploying applications based on large language and other generative AI models. The fact that Microsoft, an organization that had relatively mature responsible AI guidelines and processes in place⁶⁻¹⁰, ran into these issues should be a wake-up call for other companies rushing to build and deploy similar applications.

All of this points to the need for concrete responsible AI practices. In the meantime, let's dive into what responsible AI means and how it can be applied to these models.

The Pressing Need for Responsible AI

Responsible AI is an umbrella term to denote the practice of designing, developing, and deploying AI aligned with societal values. For instance, here are five key principles⁶,¹¹:

  • Transparency and interpretability: The underlying processes and decision-making of an AI system should be understandable to humans
  • Fairness: AI systems should avoid discrimination and treat all groups fairly
  • Privacy and security: AI systems should protect private information and resist attacks
  • Accountability: AI systems should have the appropriate measures in place to address any negative impacts
  • Reliability and safety: AI systems should work as expected and not pose risks to society
Source: AltexSoft

It's hard to find something in the list above that anyone would disagree with.

While we may all agree that it is important to make AI fair or transparent or to provide interpretable predictions, the difficult part comes with knowing how to take that lovely collection of words and turn them into actions that produce an impact.

Let's look at...

Putting Responsible AI into Action: Recommendations

For ML Practitioners and Enterprises:

Enterprises need to establish a responsible AI strategy that is applied throughout the ML lifecycle. Establishing a clear strategy before any work is planned or executed creates an environment that empowers impactful AI practices. This strategy should be built upon a company's core values for responsible AI; an example might be the five pillars mentioned above. Practices, tools, and governance for the ML lifecycle will stem from these. Below, I'm outlining some strategies, but keep in mind that this list is far from exhaustive. However, it gives us a good starting point.

This is important for all types of ML, but LLMs and other Generative AI models bring their own unique set of challenges.

Traditional model auditing hinges on understanding how the model will be used, an impossible step with the pre-trained parent model. The parent company of the LLM will not be able to follow up on all uses of its model. Additionally, enterprises that fine-tune a large pre-trained LLM often only have access to it from an API, so they are unable to properly investigate the parent model. Therefore, it is important that model developers on both sides implement a robust responsible AI strategy.

This strategy should include the following in the pre-training step:

Model Audits: Before a model is deployed, it should be properly evaluated on its limitations and characteristics in four areas: performance (how well it performs at various tasks), robustness (how well it responds to edge cases and how sensitive it is to minor perturbations in the input prompts), security (how easy it is to extract training data from the model), and truthfulness (how well it can distinguish between truth and misleading information).
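
As one illustration of the robustness piece of such an audit, the sketch below probes how much a model's output changes under small, meaning-preserving perturbations of a prompt. The `generate` and `similarity` callables are hypothetical placeholders for whichever inference API and text-similarity measure the auditing team actually uses.

```python
# Illustrative robustness probe for a model audit: flag prompts whose outputs
# change sharply when the input is only slightly perturbed.
import random

def perturb(prompt: str) -> str:
    """Apply a small perturbation (here: swap two adjacent words)."""
    words = prompt.split()
    if len(words) < 2:
        return prompt
    i = random.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def robustness_probe(generate, prompts, similarity, n_perturbations=5, threshold=0.8):
    """Return (prompt, worst similarity) pairs where outputs are unstable."""
    flagged = []
    for prompt in prompts:
        baseline = generate(prompt)
        scores = [similarity(baseline, generate(perturb(prompt)))
                  for _ in range(n_perturbations)]
        if min(scores) < threshold:
            flagged.append((prompt, min(scores)))
    return flagged
```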

Bias Mitigation: Before a model is created or fine-tuned for a downstream task, its training dataset needs to be properly reviewed. These dataset audits are an important step. Training datasets are often created with little foresight or supervision, leading to gaps and incomplete data that result in bias. Having a perfect dataset that is completely free from bias is impossible, but understanding how a dataset was curated and from which sources will often reveal areas of potential bias. There are a variety of tools that can evaluate biases in pre-trained word embeddings, how representative a training dataset is, and how model performance varies for subpopulations.
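
As a minimal example of that last point, the following sketch slices an evaluation set by a demographic attribute and compares accuracy per subgroup; the column names and toy data are assumptions for illustration only.

```python
# Sketch of one kind of bias/dataset audit: per-subgroup accuracy and the gap
# between the best- and worst-served groups. Column names are hypothetical.
import pandas as pd

def subgroup_performance(df: pd.DataFrame, group_col: str,
                         label_col: str = "label", pred_col: str = "prediction"):
    """Return accuracy per subgroup and the max accuracy gap across groups."""
    per_group = (df.assign(correct=lambda d: d[label_col] == d[pred_col])
                   .groupby(group_col)["correct"].mean())
    return per_group, per_group.max() - per_group.min()

# Example usage with toy data:
toy = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0],
    "prediction": [1, 0, 0, 1, 1],
})
accuracy_by_group, gap = subgroup_performance(toy, "group")
print(accuracy_by_group, f"max accuracy gap: {gap:.2f}")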

Model Card: Although it may not be feasible to anticipate all potential uses of the pre-trained generative AI model, model builders should publish a model card¹², which is intended to communicate a general overview to any stakeholders. Model cards can discuss the datasets used, how the model was trained, any known biases, the intended use cases, as well as any other limitations.
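
A rough sketch of the information a model card captures is below; the fields loosely follow the sections proposed by Mitchell et al.¹², and all example values are hypothetical.

```python
# Sketch of model card contents as a simple data structure. Field names follow
# the spirit of the Model Cards paper (ref. 12); values are made up.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics: dict
    known_limitations: list = field(default_factory=list)
    ethical_considerations: str = ""

card = ModelCard(
    model_details="Hypothetical 1.3B-parameter text generator, v0.1",
    intended_use="Drafting and summarization of internal documents; not medical or legal advice",
    training_data="Filtered web text snapshot, with curation process and known gaps described",
    evaluation_data="Held-out web text plus task-specific benchmarks",
    metrics={"perplexity": 12.4, "flagged_output_rate": 0.008},
    known_limitations=["English-only", "Degrades on inputs longer than 2,048 tokens"],
    ethical_considerations="Known biases documented per subgroup audit",
)
```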

The fine-tuning stage should include the following:

Bias Mitigation: No, you don't have deja vu. This is an important step on both sides of the training stages. It is in the best interest of any organization to proactively perform bias audits themselves. There are some deep challenges in this step, as there isn't a simple definition of fairness. When we require an AI model or system to be "fair" and "free from bias," we need to agree on what bias means in the first place, not in the way a lawyer or a philosopher may describe it, but precisely enough to be "explained" to an AI tool¹³. This definition will be heavily use-case specific. Stakeholders who deeply understand your data and the population that the AI system affects are necessary to plan the proper mitigation.
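
To illustrate what "precise enough to be explained to an AI tool" can look like, here is one candidate definition, demographic parity difference, written as a small function. It is only one of many possible fairness criteria, and choosing among them remains use-case specific.

```python
# One concrete, checkable notion of bias: the gap in positive-prediction rates
# across groups (demographic parity difference). Other definitions (e.g.,
# equalized odds) would be coded differently; the choice is use-case specific.
def demographic_parity_difference(predictions, groups, positive=1):
    """Max gap between groups in the rate of positive predictions."""
    counts = {}
    for pred, grp in zip(predictions, groups):
        hits, total = counts.get(grp, (0, 0))
        counts[grp] = (hits + (pred == positive), total + 1)
    rates = {g: hits / total for g, (hits, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "B" receives positive predictions more often than group "A".
print(demographic_parity_difference([1, 0, 1, 1], ["A", "A", "B", "B"]))  # 0.5
```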

Additionally, fairness is often framed as a tradeoff with accuracy. It's important to remember that this isn't necessarily true. The process of discovering bias in the data or models will often improve not only the performance for the affected subgroups but also the performance of the ML model for the entire population. Win-win.

Work from Anthropic showed that while larger LLMs improve their performance when scaling up, they also increase their potential for bias.¹⁴ Surprisingly, an emergent behavior (an unexpected capability that a model demonstrates) was that LLMs would reduce their own bias when they are told to.¹⁵

Model Monitoring: It is important to monitor models & applications that leverage generative AI. Teams need to monitor models continuously, that is, not just during validation but also post-deployment. The models need to be monitored for biases that may develop over time and for degradation in performance due to changes in real-world conditions or differences between the population used for model validation and the population after deployment. Unlike the case of predictive models, in the case of generative AI, we often may not even be able to articulate if the generated output is "correct" or not. As a result, notions of accuracy or performance are not well defined. However, we can still monitor inputs and outputs for these models, identify whether their distributions change significantly over time, and thereby gauge whether the models may not be performing as intended. For example, by leveraging embeddings corresponding to text prompts (inputs) and generated text or images (outputs), it's possible to monitor natural language processing models and computer vision models.¹⁶
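
A minimal sketch of this embedding-based monitoring idea follows: embed a reference window of prompts (or generated outputs) and a live window, then compare the two distributions with a simple drift score. The choice of embedding model and the mean-embedding distance used here are illustrative assumptions, not a prescribed method.

```python
# Sketch of embedding-based drift monitoring for generative AI traffic.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # hypothetical choice of embedder

def embedding_drift(reference_texts, live_texts):
    """Distance between mean embeddings of two text windows (simple drift score)."""
    ref = embedder.encode(reference_texts)
    live = embedder.encode(live_texts)
    return float(np.linalg.norm(ref.mean(axis=0) - live.mean(axis=0)))

# Example: alert if live traffic drifts too far from the validation window.
DRIFT_THRESHOLD = 0.3          # would be calibrated on historical data
score = embedding_drift(["reference prompt one", "reference prompt two"],
                        ["very different live prompt", "another unusual prompt"])
if score > DRIFT_THRESHOLD:
    print(f"Possible distribution shift in prompts (drift score {score:.2f})")
```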

Explainability: Post-hoc explanation methods should be implemented to make any model-generated output interpretable and understandable to the end user. This creates trust in the model and a mechanism for validation checks. In the case of LLMs, techniques such as chain-of-thought prompting¹⁷, where a model can be prompted to explain itself, could be a promising direction for jointly obtaining model output and associated explanations. Chain-of-thought prompting is hoped to help explain some of the unexpected emergent behaviors of LLMs. However, as model outputs are often untrustworthy, chain-of-thought prompting cannot be the only post-hoc explanation method used.
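
For concreteness, here is a small sketch of chain-of-thought prompting, with `call_llm` standing in as a hypothetical placeholder for whatever completion API is in use; the prompt wording is illustrative, not a canonical template.

```python
# Sketch of chain-of-thought prompting (ref. 17): ask the model to lay out its
# intermediate reasoning before the final answer, so the output carries its own
# (partial, not fully trustworthy) explanation.
def chain_of_thought_prompt(question: str) -> str:
    return (
        "Answer the question below. First think through the problem step by step, "
        "then give the final answer on a new line starting with 'Answer:'.\n\n"
        f"Question: {question}\n"
        "Let's think step by step."
    )

def answer_with_explanation(call_llm, question: str):
    """Return (reasoning, answer) parsed from a chain-of-thought completion."""
    completion = call_llm(chain_of_thought_prompt(question))
    reasoning, _, answer = completion.rpartition("Answer:")
    return reasoning.strip(), answer.strip()
```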

And both should include:

Governance: Set company-wide guidelines for implementing responsible AI. This step should include defining roles and responsibilities for any teams involved with the process. Additionally, companies can have incentive mechanisms for the adoption of responsible AI practices. Individuals and teams need to be rewarded for performing bias audits and stress-testing models just as they are incentivized to improve business metrics. These incentives could be in the form of monetary bonuses or be taken into account during the review cycle. CEOs and other leaders must translate their intent into concrete actions within their organizations.

To Improve Government Policy

Ultimately, scattered attempts by individual practitioners and companies at addressing these issues will only result in a patchwork of responsible AI initiatives far from the universal blanket of protections and safeguards our society needs and deserves. This means we need governments (*gasp* I know, I dropped the big G word. Did I hear something breaking behind me?) to craft and implement policies that address these issues systematically. In the fall of 2022, the White House's Office of Science and Technology Policy released a blueprint for an AI Bill of Rights¹⁸. It has five tenets:

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Protections for Data Privacy
  4. Notification when AI is used and explanations of its output
  5. Human Alternatives with the ability to opt-out and remedy issues.

Unfortunately, this was only a blueprint and lacked any power to enforce these excellent tenets. We need legislation that has some teeth to produce any lasting change. Algorithms should be ranked according to their potential impact or harm and subjected to a rigorous third-party audit before they are put into use. Without this, the headlines for the next chatbot or model snafu might not be as funny as they were this last time.

For the regular Joes out there... ☕

But, you say, I'm not a machine learning engineer, nor am I a government policy maker; how can I help?

At the most basic level, you can help by educating yourself and your network on the issues surrounding generative and unregulated AI, and by joining other citizens in pressuring elected officials to pass legislation that has the power to regulate AI.

One final note

Bing AI was powered by the newly released model, GPT-4, and its wild behavior is likely a reflection of its amazing power. Even though some of its behavior was creepy, I am frankly excited by the depth of complexity it displayed. GPT-4 has already enabled several compelling applications. To name a few: Khan Academy is testing Khanmigo, a new experimental AI interface that serves as a customized tutor for students and helps teachers write lesson plans and perform administrative tasks¹⁹; Be My Eyes is introducing Virtual Volunteer, an AI-powered visual assistant for people who are blind or have low vision²⁰; and Duolingo is launching a new AI-powered language learning subscription tier in the form of a conversational interface to explain answers and to practice real-world conversational skills.²¹

These next years should bring even more exciting and innovative generative AI models.

I’m ready for the ride.

A machine given a heavy dose of Charlie-Sheen-on-Tiger's-Blood energy, digital art by DALL-E 2

**********

*I repeatedly state 'appeared to' when referring to the apparent motivation or emotional states of the Bing Chatbot. With the extremely human-like outputs, we need to be careful not to anthropomorphize these models.

References

  1. Kyle Wiggers, OpenAI releases GPT-4 AI that it claims is state-of-the-art, TechCrunch, March 2023
  2. Leo Wong DQ, AI Hires a Human to Solve Captcha, Gizmochina, March 2023
  3. Rishi Bommasani et al., On the Opportunities and Risks of Foundation Models, Stanford Center for Research on Foundation Models (CRFM) Report, 2021.
  4. Kevin Roose, Bing's A.I. Chat: 'I Want to Be Alive', New York Times, February 2023
  5. Justin Eatzer, Microsoft Limits Bing's AI Chatbot After Unsettling Interactions, CNET, February 2023
  6. Our approach to responsible AI at Microsoft, retrieved March 2023
  7. Brad Smith, Meeting the AI moment: advancing the future through responsible AI, Microsoft On the Issues, Microsoft Blog, February 2023
  8. Natasha Crampton, Microsoft's framework for building AI systems responsibly, Microsoft On the Issues, Microsoft Blog, June 2022
  9. Responsible AI: The research collaboration behind new open-source tools offered by Microsoft, Microsoft Research Blog, February 2023
  10. Mihaela Vorvoreanu, Kathy Walker, Advancing human-centered AI: Updates on responsible AI research, January 2023
  11. European Union High-Level Expert Group on AI, Ethics guidelines for trustworthy AI | Shaping Europe's digital future, 2019
  12. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru, Model Cards for Model Reporting, FAccT 2019 (https://modelcards.withgoogle.com/about)
  13. Aaron Roth, Michael Kearns, The Ethical Algorithm: The Science of Socially Aware Algorithm Design, Oxford University Press, 2019
  14. Deep Ganguli et al., Predictability and Surprise in Large Generative Models, FAccT, June 2022
  15. Deep Ganguli et al., The Capacity for Moral Self-Correction in Large Language Models, February 2023
  16. Bashir Rastegarpanah, Monitoring Natural Language Processing and Computer Vision Models, February 2023
  17. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou, Chain of Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS 2022
  18. The White House, Office of Science and Technology Policy, Blueprint for an AI Bill of Rights, October 2022
  19. Sal Khan, Harnessing GPT-4 so that all students benefit. A nonprofit approach for equal access, Khan Academy Blog, March 2023
  20. Introducing Our Virtual Volunteer Tool for People who are Blind or Have Low Vision, Powered by OpenAI’s GPT-4, Be My Eyes Blog, March 2023
  21. Introducing Duolingo Max, a learning experience powered by GPT-4, Duolingo Blog, March 2023

 


Published via Towards AI
