Last Updated on May 18, 2023 by Editorial Team
Author(s): Dr. Mandar Karhade, MD. PhD.
Originally published on Towards AI.
The tables have turned. I thought ChatGPT would be the vector for more malicious phishing attacks; I forgot it could be the target too.
Ladies and gentlemen, gather around because I’ve got a juicy story to tell you. It’s the tale of ChatGPT and how it went from being the darling of the chatbot world to the victim of a data breach. The cybersecurity world was all atwitter when ChatGPT and its chatbot brethren first hit the scene. “How could AI technology be used to launch cyberattacks?” they wondered. Well, it didn’t take long for the bad guys to figure it out. They bypassed the safety checks and used ChatGPT to write malicious code like it was going out of style. Prompt engineering in… Read the full blog for free on Medium.