
Natural Language Processing in Tensorflow

Last Updated on December 17, 2020 by Editorial Team

Author(s): Bala Priya C


Tokenization and Sequencing

Photo by Emma Matthews Digital Content Production on Unsplash

In this blog post, we shall learn how to implement tokenization and sequencing, two important text pre-processing steps, in TensorFlow.

Outline

  • Introduction to Tokenizer
  • Understanding Sequencing

Introduction to Tokenizer

Tokenization is the process of splitting the text into smaller units such as sentences, words, or subwords. In this section, we shall see how we can pre-process the text corpus by tokenizing the text into words in TensorFlow. We shall use the Keras API with the TensorFlow backend; the code snippet below shows the necessary imports.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer

And voila, we have all the modules imported! Let's initialize a list of sentences that we shall tokenize.

sentences = [
    'Life is so beautiful',
    'Hope keeps us going',
    'Let us celebrate life!'
]

The next step is to instantiate the Tokenizer and call its fit_on_texts method.

tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)

Well, when the text corpus is very large, we can specify an additional num_words argument to keep only the most frequent words. For example, if we'd like to keep the 100 most frequent words in the corpus, then tokenizer = Tokenizer(num_words=100) does just that!
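Here is a quick sketch of how num_words behaves (the cap of 5 and the variable name below are just for illustration): fitting still indexes every word, and the cap only takes effect later, when texts are converted to sequences.

# Keep only the most frequent words (Keras keeps the num_words - 1 most common, i.e. 4 here)
small_tokenizer = Tokenizer(num_words=5)
small_tokenizer.fit_on_texts(sentences)
# word_index still lists all 10 words; the num_words cap is applied only
# when calling texts_to_sequences (covered in the next section)
print(small_tokenizer.word_index)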

To see the tokens that have been created and the indices assigned to the words, we can use the word_index attribute.

word_index = tokenizer.word_index
print(word_index)

Here's the output:

{'life': 1, 'us': 2, 'is': 3, 'so': 4, 'beautiful': 5, 'hope': 6, 'keeps': 7, 'going': 8, 'let': 9, 'celebrate': 10}
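Notice that the indices follow word frequency: 'life' and 'us' each occur twice in the corpus, so they get the smallest indices. As a small optional check, the Tokenizer also exposes a word_counts attribute with the raw counts.

print(tokenizer.word_counts)
# OrderedDict([('life', 2), ('is', 1), ('so', 1), ('beautiful', 1), ('hope', 1),
#              ('keeps', 1), ('us', 2), ('going', 1), ('let', 1), ('celebrate', 1)])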

Well, so far so good! But what happens when the test data contains words that we've not accounted for in the vocabulary?

test_data = [
    'Our life is to celebrate',
    'Hoping for the best!',
    'Let peace prevail everywhere'
]

We have introduced sentences in test_data which contain words that are not in our earlier vocabulary.

How do we account for such words that are not in the vocabulary? We can pass the oov_token argument to handle such Out-Of-Vocabulary (OOV) tokens.

tokenizer = Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)

After fitting on the same sentences, word_index now contains the following:

{'<OOV>': 1, 'life': 2, 'us': 3, 'is': 4, 'so': 5, 'beautiful': 6, 'hope': 7, 'keeps': 8, 'going': 9, 'let': 10, 'celebrate': 11}

Understanding Sequencing

In this section, we shall build on the tokenized text and use the generated tokens to convert each sentence into a sequence of integers.

We can get the sequences by calling the texts_to_sequences method.

sequences = tokenizer.texts_to_sequences(sentences)

Here's the output: [[2, 4, 5, 6], [7, 8, 3, 9], [10, 3, 11, 2]]

Let's now take a step back. What happens when the sentences are of different lengths? We will then have to pad them so that they are all of the same length.

We shall import the pad_sequences function to pad our sequences and take a look at the padded output.

from tensorflow.keras.preprocessing.sequence import pad_sequences
padded = pad_sequences(sequences)
print("\nPadded Sequences:")
print(padded)
# Output
Padded Sequences:
[[ 2  4  5  6]
 [ 7  8  3  9]
 [10  3 11  2]]

By default, the length of the padded sequences equals the length of the longest sentence. However, we can limit the maximum length by explicitly setting the maxlen argument.

padded = pad_sequences(sequences, maxlen=5)
print("\nPadded Sequences:")
print(padded)
# Output
Padded Sequences:
[[ 0  2  4  5  6]
 [ 0  7  8  3  9]
 [ 0 10  3 11  2]]

Now, let's convert our test data to sequences and pad them.

test_seq = tokenizer.texts_to_sequences(test_data)
print("\nTest Sequence = ", test_seq)
padded = pad_sequences(test_seq, maxlen=10)
print("\nPadded Test Sequence: ")
print(padded)

And here's our output.

# Output
Test Sequence =  [[1, 2, 4, 1, 11], [1, 1, 1, 1], [10, 1, 1, 1]]
Padded Test Sequence:
[[ 0  0  0  0  0  1  2  4  1 11]
 [ 0  0  0  0  0  0  1  1  1  1]
 [ 0  0  0  0  0  0 10  1  1  1]]

We see that all the padded sequences are of length maxlen and are padded with 0s at the beginning. What if we would like to add trailing zeros instead of leading ones? We only need to specify padding='post'.

padded = pad_sequences(test_seq, maxlen=10, padding='post')
print("\nPadded Test Sequence: ")
print(padded)
# Output
Padded Test Sequence:
[[ 1  2  4  1 11  0  0  0  0  0]
 [ 1  1  1  1  0  0  0  0  0  0]
 [10  1  1  1  0  0  0  0  0  0]]

So far, none of the sentences is longer than maxlen, but in practice, we may have sentences that are much longer than maxlen. In that case, we have to truncate them, and we can set the argument truncating='pre' or 'post' to drop words from the beginning or from the end of sentences that exceed the specified maxlen (the default is 'pre'), as sketched below. Here's the link to the Colab notebook for the above example.
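Here is a quick sketch of truncation; the value maxlen=3 is an assumption chosen just to force truncation of the test_seq computed above.

padded = pad_sequences(test_seq, maxlen=3, truncating='post')
print(padded)
# Output: the trailing words of each longer sequence are dropped
# [[ 1  2  4]
#  [ 1  1  1]
#  [10  1  1]]
# With the default truncating='pre', the leading words would be dropped instead:
# [[ 4  1 11]
#  [ 1  1  1]
#  [ 1  1  1]]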

Happy learning and coding!

Reference

Natural Language Processing in TensorFlow


Natural Language Processing in Tensorflow was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
