Improving heterogeneous network knowledge transfer based on the principle of generative adversarial

Feifei Lei, Jieren Cheng, Yue Yang, Xiangyan Tang, Victor S. Sheng, Chunzao Huang

Research output: Contribution to journal › Article › peer-review

9 Scopus citations


Deep learning requires large datasets to train deep neural network models for specific tasks, so training a new model from scratch is very costly. Research on transfer networks that reduce training costs is likely to be the next turning point in deep learning research. We study how a source task model can be used to reduce the training cost of a target task model, especially across heterogeneous networks. To quickly obtain a strong target task model driven by the source task model, we propose a novel transfer learning approach. The model linearly transforms the feature mapping of the target domain and increases the weight values used for feature matching to realize knowledge transfer between heterogeneous networks, and it adds a domain discriminator based on the generative adversarial principle to speed up feature mapping and learning. Most importantly, this paper proposes a new objective-function optimization scheme to train the model. It combines the generative adversarial network with the weighted feature-matching method to ensure that the target model learns the features from the source domain that are most beneficial to its task. Compared with previous transfer algorithms, our approach achieves excellent training results under the same benchmarks for image recognition tasks.
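The abstract describes three ingredients: a linear transform that maps target-domain features into the source feature space, a weighted feature-matching loss, and a domain discriminator trained adversarially against that transform. The NumPy sketch below illustrates the combination under stated assumptions; the dimensions, uniform matching weights, logistic discriminator, and learning rates are all illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative, not from the paper): heterogeneous
# networks produce feature maps of different widths.
d_src, d_tgt, n = 8, 5, 64
F_src = rng.normal(size=(n, d_src))   # features from the source model
F_tgt = rng.normal(size=(n, d_tgt))   # features from the target model

w = np.full(d_src, 1.0 / d_src)                 # feature-matching weights (assumed uniform)
W = rng.normal(scale=0.1, size=(d_tgt, d_src))  # linear transform: target -> source space
v = np.zeros(d_src)                             # logistic domain discriminator weights
b = 0.0                                         # discriminator bias

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def matching_loss(W):
    # Weighted squared error between source features and transformed target features.
    diff = F_src - F_tgt @ W
    return float(np.mean((diff ** 2) @ w))

lr, lam = 0.05, 0.1   # step size and adversarial weight (illustrative)
history = [matching_loss(W)]
for _ in range(300):
    G = F_tgt @ W     # transformed target features

    # Discriminator step: separate source (label 1) from transformed target (label 0).
    X = np.vstack([F_src, G])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    p = sigmoid(X @ v + b)
    v -= lr * X.T @ (p - y) / (2 * n)
    b -= lr * float(np.mean(p - y))

    # Transform step: weighted feature matching plus an adversarial term
    # that pushes transformed target features to fool the discriminator.
    diff = F_src - G
    grad_match = -2.0 / n * F_tgt.T @ (diff * w)
    p_g = sigmoid(G @ v + b)
    grad_adv = F_tgt.T @ ((p_g - 1.0)[:, None] * v[None, :]) / n
    W -= lr * (grad_match + lam * grad_adv)
    history.append(matching_loss(W))

print(f"weighted matching loss: {history[0]:.3f} -> {history[-1]:.3f}")
```

The alternating updates mirror the standard generative adversarial recipe: the discriminator improves its domain classification, while the transform is optimized both to match source features (weighted by `w`) and to make its output indistinguishable from the source domain.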

Original language: English
Article number: 1525
Journal: Electronics (Switzerland)
Issue number: 13
State: Published - Jul 1 2021


Keywords:
  • Deep learning
  • Generative adversarial nets
  • Heterogeneous network
  • Transfer learning


