Less formal than a class lecture, a seminar allows small groups to meet and discuss academic topics or required reading, as well as to set goals for research and continuing investigation.
The following seminar was conducted on Thursday 09-12-2021 at 3:00 PM by Dr. Abubakar M. Ashir, a lecturer in the Computer Engineering Department.
Seminar abstract:
Transfer learning, used in machine learning, is the reuse of a pre-trained model on a new problem. In transfer learning, a machine exploits the knowledge gained from a previous task to improve generalization on another. For example, a classifier trained to predict whether an image contains food could reuse the knowledge gained during that training to help recognize drinks.
In transfer learning, the knowledge of an already-trained machine learning model is applied to a different but related problem. We try to exploit what has been learned in one task to improve generalization in another: the weights that a network has learned on "task A" are transferred to a new "task B."
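The short Python sketch below illustrates this weight transfer with PyTorch. The seminar did not prescribe a library or architecture; the small networks, layer sizes, and class counts here are assumptions for illustration only.

# A minimal sketch of transferring learned weights from "task A" to "task B"
# (hypothetical networks, not a model discussed in the seminar).
import torch.nn as nn

class TaskANet(nn.Module):
    """Network trained on task A; its feature extractor is what we reuse."""
    def __init__(self, num_classes_a: int = 10):
        super().__init__()
        # Shared feature extractor: these are the weights we transfer.
        self.features = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes_a)  # task-A-specific output layer

    def forward(self, x):
        return self.head(self.features(x))

class TaskBNet(nn.Module):
    """Network for task B: same feature extractor shape, new output head."""
    def __init__(self, num_classes_b: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes_b)  # trained from scratch on task B

    def forward(self, x):
        return self.head(self.features(x))

task_a = TaskANet()   # in practice, this would hold weights trained on task A
task_b = TaskBNet()

# Transfer: copy the learned feature-extractor weights from task A into task B.
task_b.features.load_state_dict(task_a.features.state_dict())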
The general idea is to use the knowledge a model has learned from a task with plenty of labeled training data on a new task that doesn't have much data. Instead of starting the learning process from scratch, we start with the patterns learned from solving a related task. Transfer learning is mostly used in computer vision and in natural language processing tasks such as sentiment analysis, because of the huge amounts of labeled data and computational power these tasks would otherwise require.
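As a second sketch, the snippet below shows this "don't start from scratch" idea by fine-tuning a model pretrained on a large labeled dataset. ResNet-18, the five-class head, and the dummy batch are assumptions for illustration, not details from the seminar; the pretrained weights are downloaded by torchvision on first use.

# A minimal fine-tuning sketch with a pretrained torchvision model.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on a large labeled dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their learned patterns are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new, data-poor task.
num_new_classes = 5  # hypothetical number of classes in the new task
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only the new layer's parameters are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)                 # placeholder input batch
labels = torch.randint(0, num_new_classes, (8,))     # placeholder labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()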
Transfer learning isn't really a machine learning technique in its own right; it is better seen as a "design methodology" within the field, much like active learning. Nor is it an exclusive part or sub-area of machine learning. Nevertheless, it has become quite popular in combination with neural networks, which require huge amounts of data and computational power.