Suppose a neural network has been trained on some dataset, and later a new task arises in a similar field with its own dataset. If the new dataset is too small to train a big NN from scratch, one good approach is to start from a pretrained NN. This is called transfer learning.
One of the best-known uses of transfer learning involves ImageNet and the CNNs trained on it. ImageNet is a huge labeled image dataset. Each year a competition was held to recognize the objects in its images, and many famous companies such as Google and Facebook used to participate. The winning networks have a name followed by a number, which usually indicates the number of layers (for example ResNet-50 or VGG-16).
It is claimed that the convolutional layers of a CNN are well trained by the ImageNet images, so for a specific image-processing task it is common to retrain only the last layers, which are usually dense linear layers. The convolutional layers have already learned general image patterns, while the final linear layers are adapted to the task-specific patterns and classes.
PyTorch (through torchvision) provides almost all of the well-known networks. They can be applied to image-processing tasks directly, without retraining the backbone. The only change needed is the last layer, whose output size must equal the number of classes in the new task.