Transfer Learning vs. Fine-Tuning

Transfer Learning




Transfer learning is the practice of applying knowledge gained from one task to solve related tasks that have not been encountered before. In machine learning, this means knowledge learned in one domain or task can be transferred to another.


Fine-Tuning





Transfer learning can be achieved through fine-tuning. Pretraining, by contrast, starts from random weights with no prior knowledge: the model is trained from scratch, usually on a very large corpus. As a result, pretraining requires a large amount of data, can take several weeks of compute, and carries a correspondingly large environmental cost.
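To make the contrast concrete, here is a minimal sketch using the Hugging Face Transformers library (the library choice and the "bert-base-uncased" checkpoint are my assumptions for illustration; the post does not name a framework). It shows the difference between instantiating a model from scratch with random weights and loading one that has already been pretrained.

```python
from transformers import AutoConfig, AutoModelForMaskedLM

# Pretraining from scratch: the architecture is defined by a config and the
# weights are randomly initialized -- the model starts with no prior knowledge.
config = AutoConfig.from_pretrained("bert-base-uncased")
scratch_model = AutoModelForMaskedLM.from_config(config)

# Loading pretrained weights instead: the model already encodes knowledge
# learned from a large text corpus and can be adapted with far less data.
pretrained_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
```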

Fine-tuned models, in contrast, are trained on top of models that have already been pretrained. The fine-tuning process starts from a pretrained language model and continues training it on the chosen dataset. Fine-tuning a model costs less in terms of time, data, and money, and because the training is less demanding than full pretraining, it is easier and faster to experiment with different fine-tuning schemes. Transfer learning occurs when a model is pretrained on a large data corpus for one task and then fine-tuned for a specific downstream task. Pretraining transformers on huge datasets followed by quick fine-tuning on downstream tasks has proven successful even when only a limited number of training examples is available.
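The sketch below illustrates this fine-tuning recipe with the Hugging Face Transformers and Datasets libraries. The checkpoint ("bert-base-uncased"), the IMDB sentiment dataset, and the hyperparameters are illustrative assumptions, not anything prescribed in the post.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# 1. Acquire a pretrained language model and its tokenizer.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# 2. Prepare the chosen downstream dataset (here: IMDB sentiment labels).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

# 3. Continue training on the downstream task -- far shorter and cheaper
#    than pretraining from scratch.
args = TrainingArguments(output_dir="finetuned-bert",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
trainer.train()
```

Because only a few epochs over a comparatively small dataset are needed, it is practical to rerun this loop with different datasets or hyperparameters to test alternative fine-tuning schemes.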
