

DNA Sequence Classification Using the Naive Bayes Algorithm

Last Updated on January 5, 2024 by Editorial Team

Author(s): Rakesh M K

Originally published on Towards AI.

Photo by Warren Umoh on Unsplash

Table of Contents

  1. Introduction
  2. About the Data
  3. Data Preprocessing
  4. Modeling
  5. Performance Analysis
  6. Conclusion
  7. References

Introduction

As genomics and bioinformatics thrive, the analysis of DNA and RNA sequences is becoming a crucial part of medical research. The information obtained from these genetic materials offers a better understanding of the molecular basis of life and disease.

A DNA molecule is a double-helix structure consisting of two twisted, paired strands, as you can see in the cover photo. DNA is made up of four bases: adenine [A], cytosine [C], guanine [G], and thymine [T]. DNA sequencing is a laboratory process for determining the order of these four bases in a DNA molecule [1].

In this article, I discuss how to use machine learning to predict the gene family associated with a particular DNA sequence. A Multinomial Naive Bayes model, trained on more than 4,000 DNA sequences, is used for this task.

About the Data

The data is downloaded from https://github.com/nageshsinghc4/DNA-Sequence-Machine-learning/blob/master/human_data.txt, which is publicly accessible. The text file is imported and converted to a Pandas DataFrame as shown below, and a portion of the data is displayed.

import pandas as pd

with open('human_data.txt', 'r') as file:  # import text file
    text_content = file.read()

lines = text_content.split('\n')  # split the file into lines
features = []  # empty list for sequences
labels = []    # empty list for labels

for line in lines[1:]:  # skip the header line
    # split each line into sequence and class using '\t' as the separator
    line_elements = line.split('\t')
    if len(line_elements) == 2:  # check if there are enough elements
        sequence, label = line_elements
        features.append(sequence)  # append the values to the lists
        labels.append(int(label))
    else:
        print("Skipping line due to insufficient elements.")

data = pd.DataFrame({'feature': features, 'label': labels})
pd.set_option('display.max_colwidth', None)
data.head(100)  # display the first 100 rows

There are 4380 rows in the data, and the sequences are of varying lengths. The classes are also imbalanced, as you can see in the plot below. I am proceeding without any resampling to check how well the model works on the original data.
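The class distribution can be checked with a quick bar plot; a minimal sketch, assuming matplotlib is available (the original figure may have been produced differently):

import matplotlib.pyplot as plt

# count how many sequences fall into each gene-family label and plot as bars
data['label'].value_counts().sort_index().plot(kind='bar')
plt.xlabel('class label')
plt.ylabel('number of sequences')
plt.title('Class distribution')
plt.show()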

Imbalance in classes.

Gene families corresponding to each label are given below.

0: 'G protein coupled receptors',
1: 'Tyrosine kinase',
2: 'Tyrosine phosphatase',
3: 'Synthetase',
4: 'Synthase',
5: 'Ion channel',
6: 'Transcription factor'

Data Preprocessing

The sequences need to be converted into numbers in a way that captures short-range patterns, since such local patterns carry most of the signal in a DNA/RNA sequence. For preprocessing, therefore, k-mer extraction and count vectorization are used.

k-mers: k-mers are substrings of length k of a DNA/RNA sequence, widely used in genomics and bioinformatics. For example, for the DNA sequence ATGCCCCAACTA, the k-mers for k = 6 are ATGCCC, TGCCCC, GCCCCA, CCCCAA, CCCAAC, CCAACT, CAACTA; that is, a window of length 6 is shifted one position to the right until the end of the sequence is reached. The value of k should be chosen so that it captures the patterns in the data properly. I have chosen k = 6 for this work.

from Bio.Seq import Seq

'''function to generate k-mers'''
def generate_kmers(sequence, k):
    return [str(sequence[i:i+k]) for i in range(len(sequence) - k + 1)]

'''function to add a k-mer column to the dataframe'''
def create_dataframe_column_kmers(df, sequence_column, k):
    df['kmers'] = df[sequence_column].apply(lambda x: generate_kmers(Seq(x), k))
    return df

k_value = 6
'''create dataframe with sequence and k-mers as columns'''
dataDNA = create_dataframe_column_kmers(data, 'feature', k_value)

dataDNA.head()  # display the first rows of the dataframe
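As a quick sanity check, running the helper on the short example sequence from the previous section reproduces the k-mers listed there:

# k-mers of the example sequence from the text, with k = 6
print(generate_kmers(Seq("ATGCCCCAACTA"), 6))
# ['ATGCCC', 'TGCCCC', 'GCCCCA', 'CCCCAA', 'CCCAAC', 'CCAACT', 'CAACTA']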

Count vectorization: The count vectorizer converts the text data to a sparse matrix that represents the occurrence or frequency of words in each document (substrings of the sequence, in this case).

from sklearn.feature_extraction.text import CountVectorizer
'''create a column of text with k-mers'''
dataDNA['text'] = dataDNA['kmers'].apply(lambda x: ' '.join(x))
vectorizer = CountVectorizer(analyzer='word', ngram_range=(1, 3))
'''fit and transform data to vectorizer'''
vectorizer.fit(dataDNA['text'])
X = vectorizer.transform(dataDNA['text'])
'''convert vectors to array'''
X_array = X.toarray()
y = data['label'].to_numpy() # extract labels

print(X_array) # print and see vectorized data

[[1 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[4 1 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
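For a sense of the dimensionality this produces, the fitted vectorizer can be inspected. This is a small illustrative check; the exact numbers depend on the data and the chosen ngram_range:

print(len(vectorizer.get_feature_names_out()))  # size of the learned n-gram vocabulary
print(X.shape)                                  # (number of sequences, vocabulary size)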

As you can see above, the count vectorizer produces a mostly zero matrix of n-gram counts. As the next step, the data is split into train and test sets.

from sklearn.model_selection import train_test_split

trainSeq, testSeq, trainLabels, testLabels = train_test_split(X_array, y,
                                                              test_size=0.2,
                                                              random_state=101)

Modeling

The Naive Bayes model is based on Bayes' theorem, which describes conditional probability. For discrete features, Multinomial NB (MNB) is used for classification. The hyperparameter alpha is the smoothing parameter that handles zero-probability issues while estimating class probabilities. fit_prior=True makes sure that prior probabilities are learned by the model. alpha=0 is set to avoid additive smoothing, and force_alpha=False ensures that alpha will not fall below 1e-10, which could otherwise cause numerical errors. For further details, refer to [2].

from sklearn.naive_bayes import MultinomialNB

'''set up the model'''
model_0 = MultinomialNB(fit_prior=True, alpha=0, force_alpha=False)
'''fit the model'''
model_0.fit(trainSeq, trainLabels)
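Once fitted, the model can predict the gene family of unseen sequences. A minimal illustrative sketch (the exact outputs depend on the random split):

'''predict gene-family labels for a few held-out sequences'''
predicted = model_0.predict(testSeq[:5])
print('predicted:', predicted)
print('actual:   ', testLabels[:5])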

Performance Analysis

'''train accuracy'''
model_0.score(trainSeq, trainLabels)
0.997431506849315

'''test accuracy'''
model_0.score(testSeq, testLabels)
0.9372146118721462

A test accuracy of 0.937 is quite impressive for a multi-class problem, given that the data is highly imbalanced. Let’s look at the classification report of the model.

Classification report of MNB
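The report shown above can be reproduced with scikit-learn's classification_report; a minimal sketch (the per-class figures depend on the split):

from sklearn.metrics import classification_report

'''per-class precision, recall and F1 on the held-out test set'''
print(classification_report(testLabels, model_0.predict(testSeq)))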

Conclusion

With sequences of varying lengths and an imbalanced dataset, the test result is quite satisfactory. With upsampling of the data, we may obtain better results, since the predictions for the minority classes are somewhat weak, as can be seen in the classification report above. Also, the ngram_range parameter of the vectorizer plays a major role in preprocessing: increasing the range up to a point may improve the model's performance, since it captures the patterns in the sequences better.

So, the performance of the model can be improved by tuning the above hyperparameters. Further analysis of model performance could also use a validation split of the data, which I have omitted in this work.
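As a starting point for such tuning, a small grid search over the smoothing parameter is one option. The grid values and cross-validation settings below are illustrative assumptions, not part of the original experiment:

from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB

'''small, illustrative grid over the additive-smoothing parameter alpha'''
param_grid = {'alpha': [0.0, 0.01, 0.1, 1.0]}
search = GridSearchCV(MultinomialNB(fit_prior=True, force_alpha=False),
                      param_grid, cv=5, scoring='f1_macro')
search.fit(trainSeq, trainLabels)
print(search.best_params_, search.best_score_)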

References

  1. Apply Machine Learning Algorithms for Genomics Data Classification | by Ernest Bonat, Ph.D. | MLearning.ai | Medium
  2. sklearn.naive_bayes.MultinomialNB — scikit-learn 1.3.2 documentation

