Build a Recommendation System using BERT & Pinecone 🔥
Last Updated on June 3, 2024 by Editorial Team
Author(s): Karan Kaul | カラン
Originally published on Towards AI.
Overview
We want to build a system that recommends similar news articles for users to read. We will follow these steps:
- Loading the news dataset
- Creating vectors/embeddings for the text
- Configuring Pinecone
- Inserting vectors in the Pinecone index
- Recommending similar articles based on titles
👉 Loading the dataset
Install & import the required packages:
!pip install pinecone-client
!pip install sentence-transformers
!pip install langchain
from pinecone import Pinecone, ServerlessSpec
import pandas as pd
from langchain.text_splitter import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer
from google.colab import drive
drive.mount('/content/drive') # data was on my Google Drive
Loading the data:
data = pd.read_csv("/content/drive/MyDrive/DataSets/Articles.csv", encoding='latin-1')
data.head()
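Before going further, it helps to know which columns the rest of the walkthrough relies on: the code later reads the 'Article' (body text) and 'Heading' (title) columns. A tiny mock DataFrame (hypothetical rows, just for illustration) shows the assumed schema:

```python
import pandas as pd

# Mock of the expected schema -- the real CSV is loaded from Drive above.
mock = pd.DataFrame({
    "Heading": ["kse 100 index sees sharp decline", "stock market regains points"],
    "Article": ["The Karachi Stock Exchange saw a sharp fall ...",
                "Stocks recovered after the previous day's plunge ..."],
})

# The rest of the walkthrough only relies on these two columns.
print(mock[["Heading", "Article"]].shape)  # (2, 2)
```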
👉 Setting up Pinecone
To configure Pinecone, we need to specify a few parameters. Take a look below:
# get your API KEY from the pinecone website
pinecone = Pinecone(api_key="YOUR-API-KEY-HERE")
# let's name our index
INDEX_NAME = "my-news-index"
# we will use the MiniLM model to embed, hence 384 dims for our index
DIMS = 384
Creating the index β
# delete the index if it already exists
if INDEX_NAME in [index.name for index in pinecone.list_indexes()]:
    pinecone.delete_index(INDEX_NAME)
    print("Deleted")
# create new index
if INDEX_NAME not in pinecone.list_indexes().names():
    pinecone.create_index(
        name=INDEX_NAME,
        dimension=DIMS,     # dims of our embeddings
        metric="cosine",    # metric to use when searching
        spec=ServerlessSpec(
            cloud='aws',
            region='us-east-1'
        )
    )
# check if index was created
pinecone.list_indexes().names()
# set pointer to the newly created index
index = pinecone.Index(INDEX_NAME)
👉 Adding vectors to Pinecone
For each article, we will break it into chunks & then insert each chunk's vector into our index.
# init the embedding model
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
# function to insert vectors & metadata into the index, in batches of 100
def embed(embeddings, title, prepped, embed_num, body):
    for embedding in embeddings:
        prepped.append({'id': str(embed_num), 'values': embedding, 'metadata': {'title': title, 'body': body}})
        embed_num += 1
        if len(prepped) >= 100:
            index.upsert(prepped, namespace="ns1")
            prepped.clear()
    return embed_num
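Note that the embed helper only upserts once 100 vectors have accumulated, so whatever is left in prepped at the end of the run never reaches the index unless it is flushed separately. A minimal pure-Python sketch, using a hypothetical FakeIndex stand-in instead of a real Pinecone index, illustrates the batching pattern and why a final flush matters:

```python
# Stand-in for a Pinecone index: it just counts how many vectors it received.
class FakeIndex:
    def __init__(self):
        self.count = 0

    def upsert(self, vectors, namespace="ns1"):
        self.count += len(vectors)

def batched_upsert(index, vectors, batch_size=100):
    prepped = []
    for v in vectors:
        prepped.append(v)
        if len(prepped) >= batch_size:
            index.upsert(prepped)
            prepped.clear()
    # Final flush: without this, up to batch_size - 1 vectors would be dropped.
    if prepped:
        index.upsert(prepped)

idx = FakeIndex()
batched_upsert(idx, [{"id": str(i)} for i in range(250)])
print(idx.count)  # 250: two full batches of 100 plus a final batch of 50
```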
# function to embed a list of texts
def get_embeddings(articles, model=model):
    return model.encode(articles)
Break each article into chunks and call the embed function to embed and insert the data into the index:
embed_num = 0  # keep track of embedding number for 'id'
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=400,
    chunk_overlap=20)
prepped = []
# for faster processing, only using the first 200 articles
articles_list = data['Article'].tolist()[:200]
titles_list = data['Heading'].tolist()[:200]
for i in range(0, len(articles_list)):
    print(".", end="")
    art = articles_list[i]
    title = titles_list[i]
    if art is not None and isinstance(art, str):
        texts = text_splitter.split_text(art)   # break article body into chunks
        embeddings = get_embeddings(texts)      # embed all chunks for curr article
        embed_num = embed(embeddings, title, prepped, embed_num, art)
# flush any leftover vectors (fewer than 100) still sitting in prepped
if prepped:
    index.upsert(prepped, namespace="ns1")
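Under the hood, RecursiveCharacterTextSplitter tries to split on natural boundaries (paragraphs, then sentences, then words) before falling back to raw characters. A much simpler fixed-window sketch (an approximation for intuition, not LangChain's actual algorithm) conveys what chunk_size and chunk_overlap mean:

```python
def chunk_text(text, chunk_size=400, chunk_overlap=20):
    # Slide a fixed-size window over the text; consecutive chunks share
    # chunk_overlap characters so context isn't lost at the boundaries.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 1000, chunk_size=400, chunk_overlap=20)
print(len(chunks))  # 3: windows start at characters 0, 380, and 760
```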
We can check the index stats to confirm everything worked:
index.describe_index_stats()
# output (vector count will depend on the total chunks we have)
{
'dimension': 384,
'index_fullness': 0.0,
'namespaces':
{'ns1': {'vector_count': 900}},
'total_vector_count': 900
}
👉 Recommending Articles
To recommend articles, we need to embed the new article & search the index for similar articles to return:
def get_recommendations(pinecone_index, search_term, top_k=10):
    # encode returns a batch; take the first (and only) vector as a flat list
    embed = get_embeddings([search_term])[0].tolist()
    res = pinecone_index.query(vector=embed, top_k=top_k, include_metadata=True, namespace="ns1")
    return res
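Since the index was created with metric="cosine", the scores in the query response are cosine similarities between the query vector and each stored chunk vector. A small pure-Python sketch shows how such a score (like the 0.70 values below) is computed:

```python
import math

def cosine_similarity(a, b):
    # cosine similarity = dot(a, b) / (|a| * |b|); 1.0 means same direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(round(cosine_similarity([1.0, 0.0], [1.0, 1.0]), 4))  # 0.7071
```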
# return recommendations to the user
reco = get_recommendations(index, 'Karachi Stock Exchange', top_k=5)
seen = {}
for r in reco.matches:
    title = r.metadata['title']
    body = r.metadata['body']
    if title not in seen:
        print(f'{r.score} : {title}')
        # print(body[:120], "\n--")  # print the body (optional)
        seen[title] = '.'
# output titles & scores
0.709796 : kse 100 index sees sharp decline of over 1000 poi
0.677253485 : stock market regains 1100 points to recover from previous days plung
0.659133136 : free fall continues as kse 100 plummets 1000 poi
0.589302838 : stocks tumble as kse 100 share index drops 817 poi
👉 To Summarize:
- We loaded the dataset & configured Pinecone (API key, index name, dims, search metric, etc.)
- Embeddings were created for each chunk of each article body. Article titles & bodies were sent as metadata to the index, for later retrieval when searching.
- We checked the stats to confirm everything was working.
- Finally, we embedded a search query & returned similar matching titles along with their scores.
Pinecone's quickstart guide is a great starting point for learning more; please go through it.
Thank you for reading! 😃
Drop some claps, comments & share the article if it was helpful.