How to Fine Tune a 🤗 (Hugging Face) Transformer Model

by Akis Loumpourdis, July 6th, 2021

Too Long; Didn't Read

In this post, we start with a pre-trained BERT model and fine-tune it on the ‘Hate Speech and Offensive Language’ dataset, which contains text that can be considered racist, sexist, homophobic, or generally offensive. We then test the fine-tuned model by classifying unseen tweets as hate speech, offensive language, or neither. All coding is done in Google Colab.


Photo by Mick De Paola on Unsplash

The “Maybe just a quick one” series title is inspired by my most common reply to “Fancy a drink?”, which may or may not end up in a long night. Likewise, these posts are intended to be short, but I get carried away sometimes, so apologies in advance.

About 🤗 Transformers

🤗 Transformers (Hugging Face transformers) is a collection of state-of-the-art NLU (Natural Language Understanding) and NLG (Natural Language Generation) models. They offer a wide variety of architectures to choose from (BERT, GPT-2, RoBERTa, etc.) as well as a hub of pre-trained models uploaded by users and organisations.
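
For a first taste, the pipeline API will download a pre-trained model from the hub and run it on raw text in a couple of lines. A minimal sketch (the library picks a default English sentiment-analysis checkpoint here):

from transformers import pipeline

# Downloads a default pre-trained model and tokenizer from the hub
classifier = pipeline("sentiment-analysis")
print(classifier("Fine-tuning transformers is easier than it sounds."))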

Fine-tuning a model

One of the things that makes this library such a powerful tool is that we can use the models as a basis for transfer learning tasks. In other words, they can be a starting point to apply some fine-tuning using our own data. The library is designed to work with both TensorFlow and PyTorch, as the sketch below shows.
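
To make the framework point concrete, the same pre-trained checkpoint can be loaded either as a PyTorch model or as a TensorFlow one. A small sketch, assuming both frameworks are installed:

from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification

# The PyTorch version of the model...
pt_model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
# ...and its TensorFlow counterpart (the one used later in this post)
tf_model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)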

🤗 Datasets

Hugging Face Datasets is a wrapper library that provides some tools to load and process data in many commonly used formats (CSV, JSON, etc.). It also makes sharing datasets and metrics for Natural Language Processing extremely easy.
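
For example, pulling a ready-made dataset straight from the hub takes a single call. A quick sketch (using the public imdb dataset purely as an illustration):

from datasets import load_dataset

# Download a ready-made dataset from the hub
imdb = load_dataset("imdb")
print(imdb)              # a DatasetDict with its splits
print(imdb["train"][0])  # one example, returned as a plain Python dict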

🤗 Datasets originated from a fork of the awesome TensorFlow Datasets, and the Hugging Face team wants to deeply thank the TensorFlow Datasets team for building this amazing library.

Well, let’s write some code

In this example, we will start with a pre-trained BERT (cased) model and fine-tune it on the Hate Speech and Offensive Language dataset. We will then test it on classifying tweets as hate speech, offensive language, or neither. All coding is done in Google Colab.

Please note: this dataset contains text that can be considered racist, sexist, homophobic, or generally offensive.

So let’s start by installing the necessary packages, importing them, and loading the dataset. The dataset is stored in Google Drive, and the path to load it from is /content/drive/MyDrive/Data/labeled_data.csv. If you code along, please make sure you change the path to point to your own dataset file.

We are using the load_dataset function to load it and then split it into train, validation, and test sets. The three sets are then gathered together to form a DatasetDict. This is a dictionary class that offers us many methods to process the data. We then remove the columns we don’t need for our classification task.

#Install the necessary packages
!pip install transformers
!pip install datasets

from datasets import load_dataset,DatasetDict
from transformers import AutoTokenizer,TFAutoModelForSequenceClassification
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
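
One thing the snippet above doesn't show: if you are following along in Colab, Google Drive needs to be mounted before the CSV can be read. A minimal sketch:

# Mount Google Drive so that paths under /content/drive/MyDrive/... resolve
from google.colab import drive
drive.mount('/content/drive')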

The dataset is located in our Google Drive data folder.

DATA_PATH = "/content/drive/MyDrive/Data/labeled_data.csv"


# Load the CSV as a single split, then carve it into train / test / validation sets
dataset = load_dataset('csv', data_files=DATA_PATH, split='train')
train_testvalid = dataset.train_test_split()
test_valid = train_testvalid['test'].train_test_split()
train_test_valid_dataset = DatasetDict({
    'train': train_testvalid['train'],
    'test': test_valid['test'],
    'valid': test_valid['train']})
# Drop the columns we don't need for the classification task
dataset = train_test_valid_dataset.remove_columns(['hate_speech', 'offensive_language', 'neither', 'Unnamed: 0', 'count'])


So now we need to preprocess the data. The tool responsible for this is a Tokenizer. What do tokenizers do? Very simply put, they split the data into tokens (these can be characters, words, or parts of words, depending on the model) and convert them into tensors of numeric ids, which is the form the model can read. For this task, we are using the tokenizer from the pre-trained model we selected (bert-base-cased). But let’s see how we achieve this:

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["tweet"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)
train_dataset = tokenized_datasets["train"]
eval_dataset = tokenized_datasets["valid"]
test_dataset = tokenized_datasets['test']
tf_train_dataset = train_dataset.remove_columns(["tweet"]).with_format("tensorflow")
tf_eval_dataset = eval_dataset.remove_columns(["tweet"]).with_format("tensorflow")
tf_test_dataset = test_dataset.remove_columns(["tweet"]).with_format("tensorflow")

# Build tf.data.Dataset objects (features + labels) for training, validation, and testing
train_features = {x: tf_train_dataset[x].to_tensor() for x in tokenizer.model_input_names}
train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset["class"]))
train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)

eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}
eval_tf_dataset = tf.data.Dataset.from_tensor_slices((eval_features, tf_eval_dataset["class"]))
eval_tf_dataset = eval_tf_dataset.batch(8)

test_features = {x: tf_test_dataset[x].to_tensor() for x in tokenizer.model_input_names}
test_tf_dataset = tf.data.Dataset.from_tensor_slices((test_features, tf_test_dataset["class"]))
test_tf_dataset =test_tf_dataset.batch(8)

Notice how we used the dataset’s map function to apply our user-defined tokenize_function to all the elements of the dataset.
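
To get a feel for what the tokenizer actually produces, here is a quick illustration on a single made-up example (the exact ids depend on the checkpoint's vocabulary), reusing the tokenizer created above:

# Encode one sentence and peek at the numeric ids the model will see
sample = tokenizer("This is just a test tweet", padding="max_length", truncation=True)
print(sample["input_ids"][:10])       # token ids, starting with the [CLS] token's id
print(sample["attention_mask"][:10])  # 1 for real tokens, 0 for padding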

After applying the tokenization, we created three TensorFlow datasets to feed the model. We are now ready to train. Again, we are using the selected pre-trained model, transferring the “knowledge” it already has, but replacing its head with one that is suited to our task. We are using the TFAutoModelForSequenceClassification class, which represents a generic TensorFlow (hence the TF prefix) model with a sequence classification head. Also, notice the num_labels parameter, which is set to 3, as this is a multi-class task with three distinct labels. After the training is finished, we plot the sparse categorical accuracy and the loss for both the train and the validation dataset.

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=tf.metrics.SparseCategoricalAccuracy(),
)

history = model.fit(train_tf_dataset, validation_data=eval_tf_dataset, epochs=2)

plt.plot(history.history['sparse_categorical_accuracy'])
plt.plot(history.history['val_sparse_categorical_accuracy'])
plt.title('model sparse categorical accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

We can now evaluate the model on the test dataset we created earlier:

test_loss, test_acc = model.evaluate(test_tf_dataset,verbose=2)
print('\nTest accuracy:', test_acc)

194/194 - 62s - loss: 0.2596 - sparse_categorical_accuracy: 0.9135

Test accuracy: 0.9134925603866577

model.save_pretrained("/content/drive/MyDrive/Data/hate-speech-bert")


Notice that we save the model with the save_pretrained function offered by Transformers. This action generates a directory with two files by default: a .json file that contains the model configuration and a .h5 file with the model weights. We can also push the model to the Hugging Face Model Hub should we want to, in order to make it available to the public.
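
Reloading the fine-tuned model later is just a matter of pointing from_pretrained at that directory. A small sketch (note that the tokenizer is not saved by this call, so it would need to be saved separately with tokenizer.save_pretrained or recreated from the original checkpoint):

# Load the fine-tuned weights and configuration back from Google Drive
reloaded_model = TFAutoModelForSequenceClassification.from_pretrained(
    "/content/drive/MyDrive/Data/hate-speech-bert"
)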

Does it work though?

Let’s see how our model does in classifying some unseen text. I will use some stereotypical racist/offensive/sexist texts posted on social media. 

Warning: Due to the nature of this task, the language used here can be racist, sexist, and offensive. However, this is the only way to evaluate the model’s ability.

pred2label = {0: 'Hate Speech',
 1: 'Offensive Language',
 2: 'Neither'}
preds = model(tokenizer(
    ["Jews are useless , I don't see why they even exist",
     "Gay people suck",
     "Women are dressed up like whores these days"],
    return_tensors="tf", padding=True, truncation=True))['logits']
print(preds)
class_preds = np.argmax(preds, axis=1)

for pred in class_preds:
  print(pred2label[pred])

The output:

tf.Tensor(
[[ 0.37532297  0.14053927 -0.8647832 ]
 [ 0.04699412 -0.17951615 -0.3738104 ]
 [ 0.2524849   2.586107   -2.8212454 ]], shape=(3, 3), dtype=float32)
Hate Speech
Hate Speech
Offensive Language

Looks like our model managed to classify the texts correctly (well, “correctly” is subjective, but generally speaking these look right).
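
One last note: the numbers printed above are raw logits, not probabilities. Taking the argmax picks the same class either way, but if you want probability-like scores you can push the logits through a softmax. A quick sketch, reusing preds and pred2label from above:

# Convert logits to per-class probabilities
probs = tf.nn.softmax(preds, axis=1).numpy()
for row in probs:
    print({pred2label[i]: round(float(p), 3) for i, p in enumerate(row)})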

Further reading:

If the above seems interesting to you, there is a lot more that this library can do. I would start by checking their documentation, which is quite extensive, as well as the quick video courses they provide:

🤗 Transformers

Model Hub

🤗 Datasets

Crash course (with YouTube videos, we always like them)

Fine-tuning with custom datasets

Happy coding.