Google created a transformer-based machine learning approach for natural language processing pre-training called Bidirectional Encoder Representations from Transformers (BERT). The model has a huge number of parameters, so training it from scratch on a small dataset would lead to overfitting. This is why we use a pre-trained BERT model that has already been trained on a huge dataset. We then "tune" the pre-trained model for the current dataset, i.e. we transfer the learning from that huge dataset to our dataset and fine-tune BERT from that point onwards.
In this article, we will fine-tune BERT by adding a few neural network layers of our own and freezing the actual layers of the BERT architecture. The problem statement we take up here is classifying sentences as POSITIVE or NEGATIVE using the fine-tuned BERT model.
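As a rough sketch of this idea (using the Hugging Face transformers library and PyTorch; the checkpoint name, layer sizes and class name below are illustrative assumptions, not necessarily the exact architecture built later in the article), freezing BERT and stacking a small classification head on top looks like this:
import torch.nn as nn
from transformers import AutoModel
# download the pre-trained encoder (checkpoint name is an assumption)
bert = AutoModel.from_pretrained('bert-base-uncased')
# freeze every parameter of the pre-trained encoder so only our new layers are trained
for param in bert.parameters():
    param.requires_grad = False
class BERT_Arch(nn.Module):
    # hypothetical classification head stacked on top of the frozen encoder
    def __init__(self, bert):
        super().__init__()
        self.bert = bert
        self.fc = nn.Linear(768, 2)          # 768 is the hidden size of bert-base
        self.softmax = nn.LogSoftmax(dim=1)
    def forward(self, sent_id, mask):
        # use the pooled [CLS] representation produced by BERT
        pooled = self.bert(sent_id, attention_mask=mask).pooler_output
        return self.softmax(self.fc(pooled))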
The sentence column contains the text and the label column contains the sentiment of the text – 0 for negative and 1 for positive. We first load the dataset and then do some preprocessing before tuning the model.
import pandas as pd
import numpy as np
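To make the loading step concrete, a minimal sketch might look like the following (the file name is hypothetical; the column names come from the description above):
# hypothetical file name; the CSV is assumed to have 'sentence' and 'label' columns
df = pd.read_csv('sentiment.csv')
df.head()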
After loading the data, split it into train, validation and test sets. We take a 70:15:15 ratio for this division. The inbuilt train_test_split function of sklearn is used below to split the data. We pass the labels to the stratify parameter to ensure that the proportion of the categories remains the same after splitting the data.
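A sketch of this split, assuming the text lives in df['sentence'] and the labels in df['label'] (the random seed is arbitrary and only fixes reproducibility):
from sklearn.model_selection import train_test_split
# 70% train, then the remaining 30% split equally into validation and test (15% each);
# stratify keeps the class proportions identical across the three splits
train_text, temp_text, train_labels, temp_labels = train_test_split(
    df['sentence'], df['label'],
    test_size=0.3, stratify=df['label'], random_state=42)
val_text, test_text, val_labels, test_labels = train_test_split(
    temp_text, temp_labels,
    test_size=0.5, stratify=temp_labels, random_state=42)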
Next, we proceed with loading the pre-trained BERT model and tokenizer. We use the tokenizer to convert the text into a format (with input ids and attention masks) that can be fed to the model.
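For example, with the Hugging Face transformers library (the bert-base-uncased checkpoint is a common choice; the exact checkpoint used here is an assumption):
from transformers import AutoModel, BertTokenizerFast
# load the pre-trained encoder and its matching tokenizer
bert = AutoModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')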
If we set the padding length to the maximum text length found in the training texts, most of the training data would be padding, leaving it sparse. Taking the least length would, in turn, lead to loss of information through truncation. Hence, we plot the distribution of text lengths, look at the "average" length, and set that as the padding length as a trade-off between the two extremes.
train_lens = [len(i.split()) for i in train_text]
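To actually see this distribution, a quick histogram sketch (matplotlib assumed; the bin count is arbitrary):
import matplotlib.pyplot as plt
# histogram of word counts per training sentence
plt.hist(train_lens, bins=30)
plt.xlabel('number of words per sentence')
plt.ylabel('number of sentences')
plt.show()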
From the graph above, we take 17 as the padding length.
Tokenizing the data
Tokenize the data and encode sequences using the BERT tokenizer.
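A sketch of this step, using the tokenizer's batch_encode_plus with the padding length of 17 chosen above (the exact arguments are an assumption):
# pad or truncate every sequence to the chosen length of 17
tokens_train = tokenizer.batch_encode_plus(
    train_text.tolist(), max_length=17, padding='max_length', truncation=True)
tokens_val = tokenizer.batch_encode_plus(
    val_text.tolist(), max_length=17, padding='max_length', truncation=True)
tokens_test = tokenizer.batch_encode_plus(
    test_text.tolist(), max_length=17, padding='max_length', truncation=True)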