Zero-Shot NER with LLMs
Last Updated on August 1, 2023 by Editorial Team
Author(s): Patrick Meyer
Originally published on Towards AI.
We are facing a major disruption of our NLP landscape with the emergence of large language models that surpass previous performance levels and enable tasks without task-specific training.
We are facing a major disruption in the natural language landscape with the emergence of large language models (LLMs) that deliver unmatched performance and can perform activities for which they were not trained. Language models have been used for many years in NLP tasks (e.g., BERT), but the increasing size of these models has given rise to new abilities the network was never explicitly trained for (compare, for instance, the range of tasks GPT-3 handles with what GPT-2 can do).
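As a concrete illustration of this zero-shot idea, an LLM can be asked to extract named entities purely through an instruction prompt, with no NER-specific fine-tuning. The sketch below builds such a prompt and parses a JSON reply; the model call itself is simulated here, since any chat-completion endpoint could stand in for it, and the entity types and reply format are illustrative assumptions rather than the article's exact setup.

```python
import json


def build_ner_prompt(text: str, entity_types: list[str]) -> str:
    """Build a zero-shot NER prompt: instructions only, no labelled examples."""
    types = ", ".join(entity_types)
    return (
        f"Extract all named entities of types [{types}] from the text below.\n"
        'Answer with a JSON list of {"text": ..., "type": ...} objects only.\n\n'
        f"Text: {text}"
    )


def parse_entities(reply: str) -> list[dict]:
    """Parse the model's JSON reply into entity records."""
    return json.loads(reply)


# Usage: in a real system the prompt would be sent to an LLM endpoint;
# `simulated_reply` stands in for that response here.
prompt = build_ner_prompt(
    "Patrick Meyer published an article on Towards AI.",
    ["PERSON", "ORG", "LOC"],
)
simulated_reply = (
    '[{"text": "Patrick Meyer", "type": "PERSON"},'
    ' {"text": "Towards AI", "type": "ORG"}]'
)
entities = parse_entities(simulated_reply)
```

Because the model receives only the instruction and the raw text, this is zero-shot by construction: swapping in different entity types or a new domain requires no retraining, only a change to the prompt.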
These models allow…