China Just Turned Meta’s Llama AI into a Military Tool
Author(s): Get The Gist
Originally published on Towards AI.
Is global security at risk?
Image credit: Brookings Institution

Open-source AI has changed the world for developers, researchers, and enthusiasts.
When companies release open-source models, they want to democratize innovation: to give people the tools to create, explore, and even dream up entirely new applications.
But what happens when those dreams get out of control and slip into unintended or even dangerous territory?
This is exactly what’s happened with Meta’s Llama AI model.
Originally shared to promote transparent innovation, Llama has been repurposed by researchers associated with China’s People’s Liberation Army (PLA) into a military AI tool called ChatBIT.
Chinese researchers tied to the PLA took an early version of Meta’s Llama and, using a dataset of around 100,000 military dialogue records, fine-tuned it into ChatBIT.
This military-focused tool is now used for intelligence gathering, operational decision-making, and military dialogue simulations.
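To make the repurposing concrete, here is a generic sketch of how an open-weight causal language model is typically fine-tuned on a domain-specific dialogue dataset using Hugging Face’s transformers and datasets libraries. The model name, data file, and hyperparameters are illustrative placeholders only; the actual ChatBIT pipeline has not been published, and nothing below reflects it.

```python
# Illustrative sketch: supervised fine-tuning of an open-weight causal LM
# on a dialogue dataset. All names and settings are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-13b-hf"  # any open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical JSONL file with one {"text": "..."} dialogue record per line.
dataset = load_dataset("json", data_files="dialogues.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: the model shifts labels internally, so no masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is simply that once model weights are public, adapting them to a new domain is a matter of data and commodity tooling rather than frontier-lab resources.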
What’s more striking is that, despite the relatively small dataset, the researchers claim ChatBIT reaches roughly 90% of the performance of leading models such as OpenAI’s GPT-4.
For the PLA, ChatBIT is a significant step toward AI-driven decision support, potentially offering an edge in strategy, intelligence, and even command training as the technology advances.
ChatBIT illustrates some of the bigger-picture risks.
AI tools…
Published via Towards AI