Breaking GPT-4 Safety: Pyromaniac Edition
Author(s): Dr. Mandar Karhade, MD, PhD
Originally published on Towards AI.
I experimented with breaking LLM safety guardrails, and GPT-4 explained to me how to hurt someone.
In recent years, Large Language Models (LLMs) have revolutionized various industries, from natural language processing to creative writing and customer service. These powerful AI models, such as GPT-3.5, GPT-4, Claude, and Bard, can generate human-like text based on the vast amounts of data they've been trained on. While LLMs hold tremendous potential for enhancing human life and productivity, their deployment must be accompanied by a vigilant focus on safety. There are many safety concerns. The following list is not comprehensive, but it…