Knowledge Distillation via Weighted Ensemble of Teaching Assistants

Durga Prasad Ganta, Himel Das Gupta, Victor S. Sheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. It is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. As the size gap between the teacher and the student networks grows, the performance of the student network decreases. To solve this problem, an intermediate model, known as a teaching assistant model, is placed between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this research, we have shown that using multiple teaching assistant models can further improve the student model (the smaller model). We combined these multiple teaching assistant models using weighted ensemble learning, where we used a differential evolution optimization algorithm to generate the weight values.
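
To make the combination step concrete, below is a minimal, illustrative sketch of one way the weighted ensemble could work: the softened outputs of several teaching assistants are mixed with nonnegative weights, and differential evolution (here SciPy's differential_evolution) searches for the weights that minimize a validation loss. The variable names, the synthetic data, and the negative log-likelihood objective are assumptions made for illustration, not the paper's exact formulation.

# Illustrative sketch only: weighted ensemble of teaching-assistant (TA)
# outputs, with the mixture weights found by differential evolution.
# The data here is synthetic and the objective is assumed, not taken
# from the paper.
import numpy as np
from scipy.optimize import differential_evolution

def softmax(logits, temperature=4.0):
    """Softened class probabilities, as used in standard knowledge distillation."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical precomputed TA logits on a validation set:
# ta_logits has shape (num_tas, num_samples, num_classes);
# y_true holds the integer labels of the validation samples.
rng = np.random.default_rng(0)
num_tas, num_samples, num_classes = 3, 200, 10
ta_logits = rng.normal(size=(num_tas, num_samples, num_classes))
y_true = rng.integers(0, num_classes, size=num_samples)

ta_probs = softmax(ta_logits.reshape(-1, num_classes)).reshape(
    num_tas, num_samples, num_classes)

def ensemble_nll(weights):
    """Negative log-likelihood of the weighted TA ensemble on validation data."""
    w = np.abs(weights)
    w = w / (w.sum() + 1e-12)                 # normalize so the weights sum to 1
    mixed = np.tensordot(w, ta_probs, axes=1) # (num_samples, num_classes)
    return -np.log(mixed[np.arange(num_samples), y_true] + 1e-12).mean()

# Differential evolution searches over the weight vector for the best mixture.
result = differential_evolution(ensemble_nll, bounds=[(0.0, 1.0)] * num_tas,
                                seed=0, tol=1e-6)
best_w = np.abs(result.x) / np.abs(result.x).sum()
print("ensemble weights:", best_w)

In a sketch like this, the optimized mixture of TA probabilities would then serve as the soft-target distribution when training the student, mirroring the role that a single teacher's softened output plays in standard knowledge distillation.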

Original language: English
Title of host publication: Proceedings - 12th IEEE International Conference on Big Knowledge, ICBK 2021
Editors: Zhiguo Gong, Xue Li, Sule Gunduz Oguducu, Lei Chen, Baltasar Fernandez Manjon, Xindong Wu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 30-37
Number of pages: 8
ISBN (Electronic): 9781665438582
State: Published - 2021
Event: 12th IEEE International Conference on Big Knowledge, ICBK 2021 - Virtual, Auckland, New Zealand
Duration: Dec 7, 2021 - Dec 8, 2021

Publication series

Name: Proceedings - 12th IEEE International Conference on Big Knowledge, ICBK 2021

Conference

Conference: 12th IEEE International Conference on Big Knowledge, ICBK 2021
Country/Territory: New Zealand
City: Virtual, Auckland
Period: 12/7/21 - 12/8/21

Keywords

  • Ensemble learning
  • Knowledge distillation
  • Optimization
  • Teaching assistant
