TY - GEN
T1 - Seeds of SEED
T2 - 1st International Symposium on Secure and Private Execution Environment Design, SEED 2021
AU - Cai, Kunbei
AU - Chowdhuryy, Md Hafizul Islam
AU - Zhang, Zhenkai
AU - Yao, Fan
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - The rapid development of deep learning has significantly bolstered the performance of natural language processing (NLP) in the form of language modeling. Recent advances in hardware security studies have demonstrated that hardware-based threats can severely jeopardize the integrity of computing systems (e.g., fault attacks on data at rest). Internal adversaries exploiting such hardware vulnerabilities are becoming a major security concern. Yet the impact of hardware faults on systems running NLP models has not been fully understood. In this paper, we perform the first investigation of hardware-based fault injections in modern neural machine translation (NMT) models. We find that, compared to neural network classifiers (e.g., CNNs), fault attacks on NMT models present unique challenges. We propose a novel attack framework, NMT-Stroke, that can maliciously divert the translation of a victim NMT model by modeling memory fault injections with the rowhammer attack vector. We design a fault injection strategy that minimizes the number of bit flips needed to mislead the translation to an arbitrary natural output sentence. Our evaluation on state-of-the-art Transformer-based NMT models shows that NMT-Stroke can effectively induce the attacker-desired and linguistically sound translation by faulting minimal parameter bits. Our work highlights the significance of understanding the robustness of emerging NLP models in the presence of hardware vulnerabilities, which could open new research directions.
AB - The rapid development of deep learning has significantly bolstered the performance of natural language processing (NLP) in the form of language modeling. Recent advances in hardware security studies have demonstrated that hardware-based threats can severely jeopardize the integrity of computing systems (e.g., fault attacks on data at rest). Internal adversaries exploiting such hardware vulnerabilities are becoming a major security concern. Yet the impact of hardware faults on systems running NLP models has not been fully understood. In this paper, we perform the first investigation of hardware-based fault injections in modern neural machine translation (NMT) models. We find that, compared to neural network classifiers (e.g., CNNs), fault attacks on NMT models present unique challenges. We propose a novel attack framework, NMT-Stroke, that can maliciously divert the translation of a victim NMT model by modeling memory fault injections with the rowhammer attack vector. We design a fault injection strategy that minimizes the number of bit flips needed to mislead the translation to an arbitrary natural output sentence. Our evaluation on state-of-the-art Transformer-based NMT models shows that NMT-Stroke can effectively induce the attacker-desired and linguistically sound translation by faulting minimal parameter bits. Our work highlights the significance of understanding the robustness of emerging NLP models in the presence of hardware vulnerabilities, which could open new research directions.
UR - http://www.scopus.com/inward/record.url?scp=85123301713&partnerID=8YFLogxK
U2 - 10.1109/SEED51797.2021.00019
DO - 10.1109/SEED51797.2021.00019
M3 - Conference contribution
AN - SCOPUS:85123301713
T3 - Proceedings - 2021 International Symposium on Secure and Private Execution Environment Design, SEED 2021
SP - 76
EP - 82
BT - Proceedings - 2021 International Symposium on Secure and Private Execution Environment Design, SEED 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 20 September 2021 through 21 September 2021
ER -