Learning from Multiple Tasks

Transfer Learning

Transfer learning reuses a network that has already been trained on one task as the starting point for learning a new task.

To perform transfer learning, we replace the last layer with a new one sized for the new task and train on the new dataset, either updating only the new layer's weights or fine-tuning the earlier layers as well. This is much faster than training from scratch.
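As an illustration, here is a minimal Keras sketch of this procedure. The MobileNetV2 base, the 224x224 input size, and the five-class new task are assumptions chosen for the example, not part of the notes above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_NEW_CLASSES = 5  # assumption: number of classes in the new task

# Load a network pretrained on a large dataset (ImageNet), dropping its
# original classification layer (include_top=False).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)

# Freeze the pretrained weights so that, at first, only the new layer is trained.
base.trainable = False

# Swap in a new last layer sized for the new task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_NEW_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(new_train_ds, epochs=5)  # new_train_ds: the new task's (smaller) dataset
```

Setting `base.trainable = True` afterwards and training with a small learning rate fine-tunes the earlier layers as well, which can help when the new dataset is not too small.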

Transfer learning is most suitable when the network has already been trained on a large amount of data and we have comparatively little data for the new task. The low-level features the network learned during its original training can then be helpful for the new task.

Multi-Task Learning

Multi-task learning lets a single network identify more than one class in a given image. The last layer has C neurons, one for each of the C classes, and each output is scored independently so that several classes can be present in the same image.
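A minimal Keras sketch of such a multi-task head follows. The choice of C = 4 tasks and the small convolutional body are assumptions for illustration; the key point is the final Dense layer with C sigmoid outputs trained with binary cross-entropy, so each label is predicted independently.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

C = 4  # assumption: number of tasks/labels per image

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed input image size
    layers.Conv2D(16, 3, activation="relu"),  # shared lower-level features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    # C independent sigmoid outputs: each neuron predicts one label,
    # so several labels can be "on" for the same image.
    layers.Dense(C, activation="sigmoid"),
])

# Binary cross-entropy averaged over the C outputs trains all tasks
# jointly on the shared features.
model.compile(optimizer="adam", loss="binary_crossentropy")
```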

Multi-task learning usually works well when all the tasks can share lower-level features and when we have similar amounts of data for each task.
