Text Augmentation for detecting spear-phishing emails
Last Updated on October 23, 2020 by Editorial Team
Author(s): Edward Ma
Text augmentation techniques for phishing email detection
Information security is vital for any organization. Losing money is a minor problem; the serious one is having the enterprise system compromised. However, fraudulent and phishing emails make up only a small portion of the data compared to normal email. Augmenting fraud and phishing emails is one way to tackle this class-imbalance problem.
Therefore, Regina et al. proposed three different approaches to generating synthetic data for model training. As synthetic data is a kind of “fake” data, low-quality samples may hurt model performance, so validation is needed to keep the synthetic data quality high. The approaches also rest on the following assumptions:
- Synthetic data should share the same label as the original text. For example, augmentation should not turn a positive example into a negative one (for a binary classifier).
- Synthetic data should not be redundant. In other words, the augmented text should not be almost identical to the original text.
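The redundancy check could be sketched as a simple similarity filter. Note this is illustrative only: the character-level similarity measure and the 0.95 threshold are my assumptions, not details from the paper.

```python
from difflib import SequenceMatcher

def is_valid_augmentation(original: str, augmented: str,
                          max_similarity: float = 0.95) -> bool:
    """Reject augmented text that is identical or nearly identical to the original.

    The 0.95 threshold and character-level similarity are illustrative
    choices, not taken from the paper. Label preservation would still need
    to be checked separately (e.g. by a trusted classifier or a human).
    """
    if augmented.strip().lower() == original.strip().lower():
        return False
    similarity = SequenceMatcher(None, original, augmented).ratio()
    return similarity < max_similarity

print(is_valid_augmentation("The quick brown fox", "The quick brown fox"))     # False
print(is_valid_augmentation("The quick brown fox", "Little quick brown fox"))  # True
```

In practice, a stricter pipeline might compare embeddings instead of raw characters, but the idea is the same: drop samples that add no new information.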
Word Replacement
Abbreviations Replacement
Abbreviations are very common in daily conversation; they let the speaker and audience communicate more easily. For example, “F/W” and “FW” mean “forward”. However, some abbreviations are ambiguous and need context to interpret. For instance, “PM” can mean “Project Manager” or “Prime Minister”.
Although this method is easy to understand and implement, the drawback is that the conversions or mappings need to be defined one by one.
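A minimal sketch of the mapping approach might look like the following. The abbreviation table here is a tiny illustrative sample I made up; a real system would need the much larger, curated list the method depends on.

```python
import re

# Illustrative mapping; a real system needs a much larger, hand-curated list.
ABBREVIATIONS = {
    "F/W": "forward",
    "FW": "forward",
    "ASAP": "as soon as possible",
    "FYI": "for your information",
}

def expand_abbreviations(text: str) -> str:
    """Replace known abbreviations with their full forms (whole tokens only)."""
    for abbr, full in ABBREVIATIONS.items():
        pattern = r"\b" + re.escape(abbr) + r"\b"
        text = re.sub(pattern, full, text)
    return text

print(expand_abbreviations("FYI, please FW this email ASAP."))
# → for your information, please forward this email as soon as possible.
```

Ambiguous abbreviations like “PM” are exactly where this naive lookup breaks down, which is why the mapping has to be defined and vetted one entry at a time.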
Misspellings Replacement
Although auto-complete helps to correct misspellings, typos still appear in email and social media. For example, “bargin” is a typo of “bargain”. Regina et al. mentioned that misspellings are important because:
- Misspellings can convey a sense of urgency.
- Misspellings can fool security technologies based on text analysis.
This method helps the model handle potentially unseen text at inference time, since the model may have been trained on those misspelled tokens.
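One cheap way to generate misspellings is to perturb characters directly. This is my own simplified stand-in, not the paper's approach, which relies on real, observed misspellings rather than random noise:

```python
import random

def add_typo(word: str, rng: random.Random) -> str:
    """Introduce one character-level perturbation: swap, drop, or duplicate.

    A simple stand-in for a corpus of real observed misspellings; random
    perturbations will not always look like typos humans actually make.
    """
    if len(word) < 4:                      # leave short words untouched
        return word
    i = rng.randrange(1, len(word) - 2)    # avoid the first and last characters
    op = rng.choice(["swap", "drop", "double"])
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "drop":
        return word[:i] + word[i + 1:]
    return word[:i] + word[i] + word[i:]

rng = random.Random(7)
print([add_typo(w, rng) for w in ["bargain", "urgent", "account"]])
```

Keeping the first and last characters intact mimics the common observation that readers (and simple filters) key on word boundaries.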
Synonym Replacement
Replacing words with similar-meaning words yields new training examples for models. Regina et al. used both WordNet and BERT to find synonyms or near-synonyms. For example, “The quick brown fox jumps over the lazy dog.” and “Little quick brown fox jumps over the lazy dog.” have similar meanings; the second sentence was generated by the BERT model.
Leveraging WordNet is a typical way to generate synthetic data, while leveraging BERT to find near-synonyms is a better way to achieve it. The reasons are:
- BERT, or any contextual word embeddings model, can generate near-synonyms. It introduces more varied synthetic data without requiring a pre-defined list of synonyms (as WordNet does).
- As BERT can be fine-tuned on domain-specific text, it can be applied to specific domain data.
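The replacement step itself can be sketched as below. The tiny synonym lexicon here is a toy stand-in for WordNet lookups or BERT suggestions; a real pipeline would query those resources instead.

```python
# Toy lexicon standing in for WordNet lookups or BERT mask-fill suggestions.
SYNONYMS = {
    "quick": ["fast", "speedy"],
    "jumps": ["leaps", "hops"],
    "lazy": ["idle", "sluggish"],
}

def synonym_replace(text: str) -> str:
    """Replace each known word with its first listed synonym (deterministic sketch)."""
    out = []
    for token in text.split():
        key = token.lower().rstrip(".,!?")
        if key in SYNONYMS:
            suffix = token[len(key):]          # keep trailing punctuation
            out.append(SYNONYMS[key][0] + suffix)
        else:
            out.append(token)
    return " ".join(out)

print(synonym_replace("The quick brown fox jumps over the lazy dog."))
# → The fast brown fox leaps over the idle dog.
```

A production version would also sample among candidates rather than always taking the first, and would let a contextual model rank which replacements preserve the sentence's meaning.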
Take Away
- Generating training data helps to tackle low-resource problems. However, bear in mind that you should select appropriate methods to generate synthetic data.
- Some free, open-source libraries provide an easy way to generate synthetic data. nlpaug is one example that lets you generate data with a few lines of code.
About Me
I am a Data Scientist in the Bay Area, focusing on state-of-the-art work in Data Science and Artificial Intelligence, especially in NLP and platform-related topics. Feel free to connect with me on LinkedIn or follow me on Medium or GitHub.
Further Reading
- A library of Data Augmentation for NLP (nlpaug)
Reference
- M. Regina, M. Meyer and S. Goutal. Text Data Augmentation: Towards Better Detection of Spear-Phishing Emails. 2020.
Text Augmentation for Detecting Spear-phishing Emails was originally published in Towards AI on Medium.