3. Methodology
In this study, we investigate the early-bird ticket hypothesis in Transformer models using the masked distance metric. For vision transformers, we examine the early-bird phenomenon over the full training run; for language models, we restrict the analysis to the fine-tuning stage. The methodology consists of the following steps:
1. Iterative Pruning: We perform iterative pruning on the Transformer models to identify subnetworks that can potentially serve as early-bird tickets [13]. The pruning process gradually removes the least important weights based on their magnitude.
2. Masked Distance Calculation: To determine the point at which the early-bird ticket emerges, we calculate the masked distance between the pruning masks of two consecutive epochs during training or fine-tuning. This metric measures the similarity between consecutive masks, providing insight into the stability and convergence of the subnetworks.
3. Early-Bird Ticket Selection: We select the early-bird ticket by identifying the first pruning mask whose masked distance crosses a chosen threshold, which is determined by observing how the masked distance changes across epochs [13]. For vision transformers, we set the threshold to 0.1; for text transformers, to 0.01. (A simplified code sketch of steps 1-3 follows this list.)
4. Retraining and Fine-tuning: After obtaining the final pruned models from the selected early-bird tickets, we retrain the vision transformers and fine-tune the language models for the full epoch budget. The pruned vision models are retrained from scratch using the same hyperparameters as the original models, while the pruned language models are fine-tuned on downstream tasks [1].
5. Performance Evaluation: We evaluate the pruned models obtained from the early-bird tickets and compare their validation accuracy with the unpruned baseline models.
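To make steps 1-3 concrete, the following is a minimal PyTorch-style sketch of the pruning mask construction, the masked distance between consecutive epochs, and the threshold check. The function names, the unstructured global magnitude-pruning granularity, and the comparison of only adjacent epochs are illustrative assumptions for exposition, not an exact description of our implementation.

```python
import torch

def magnitude_prune_mask(model, prune_ratio):
    """Binary mask keeping the largest-magnitude weights (1 = keep, 0 = prune)."""
    # Gather magnitudes of all weight matrices that are candidates for pruning.
    scores = [p.detach().abs().flatten()
              for name, p in model.named_parameters()
              if p.dim() >= 2 and name.endswith("weight")]
    all_scores = torch.cat(scores)
    # Global magnitude cutoff: the prune_ratio-quantile of |w|.
    k = max(int(prune_ratio * all_scores.numel()), 1)
    cutoff = torch.kthvalue(all_scores, k).values
    return {name: (p.detach().abs() > cutoff).float()
            for name, p in model.named_parameters()
            if p.dim() >= 2 and name.endswith("weight")}

def masked_distance(mask_a, mask_b):
    """Fraction of mask entries that disagree between two epochs (0 = identical)."""
    diff = sum((mask_a[name] != mask_b[name]).sum().item() for name in mask_a)
    total = sum(m.numel() for m in mask_a.values())
    return diff / total

def find_early_bird_epoch(mask_history, threshold):
    """First epoch whose mask is sufficiently close to the previous epoch's mask,
    i.e. the epoch at which the early-bird ticket is drawn."""
    for epoch in range(1, len(mask_history)):
        if masked_distance(mask_history[epoch - 1], mask_history[epoch]) < threshold:
            return epoch
    return None
```

In this sketch, a mask would be recorded at the end of each training or fine-tuning epoch, and the early-bird ticket is drawn at the first epoch whose distance to the preceding mask falls below the chosen threshold (0.1 for vision transformers, 0.01 for text transformers).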
To conduct a comparative analysis and investigate the applicability of the early-bird ticket hypothesis across different Transformer architectures, we experiment with the following models:
- ViT (Vision Transformer)
- Swin-T (Shifted Window Transformer)
- GPT-2 (Generative Pre-trained Transformer)
- RoBERTa (Robustly Optimized BERT Pretraining Approach) [7]
By applying our methodology to these diverse Transformer models, we aim to provide a comprehensive understanding of the early-bird ticket phenomenon in both vision and language domains.
The proposed methodology addresses the limitations of existing works by introducing a more efficient approach compared to the traditional train-prune-retrain methodology. By leveraging the masked distance metric and selective pruning, we can identify early-bird tickets without the need for extensive retraining. Furthermore, our comparative analysis across different Transformer architectures provides insights into the generalizability of the early-bird ticket hypothesis. Through this methodology, we aim to demonstrate the existence of early-bird tickets in Transformer models and explore their potential for resource optimization and cost reduction in training.
Author:
(1) Shravan Cheekati, Georgia Institute of Technology ([email protected]).