Thwarting Replication Attack Against Memristor-Based Neuromorphic Computing System

abstract

  • Neuromorphic architectures are widely used in many applications for advanced data processing, and often implement proprietary algorithms. However, in an adversarial scenario, such systems may face elaborate security attacks, including learning attacks. In this work, we prevent an attacker with physical access from learning the proprietary algorithm implemented by the neuromorphic hardware. For this purpose, we leverage the obsolescence effect in memristors to judiciously reduce the accuracy of outputs for any unauthorized user. For a legitimate user, we regulate the obsolescence effect, thereby maintaining the accuracy of outputs in a suitable range. We extensively examine the feasibility of our proposed method with four datasets, experiment under different settings (e.g., activation functions) and constraints (e.g., process variations), and estimate the calibration overhead. The security-vs.-cost and performance-vs.-resistance-range trade-offs for different applications are also analyzed. We then prove that the defense is still valid even if the attacker has prior knowledge of the defense mechanism. Overall, our methodology is compatible with mainstream classification applications, memristor devices, and security and performance constraints.
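
The defense described above rests on two behaviors: unregulated memristor obsolescence degrades inference accuracy for an unauthorized user, while periodic calibration preserves it for a legitimate one. The minimal Python sketch below illustrates that dynamic with a linear classifier whose weights stand in for memristor conductances; the drift model, decay rate, noise level, and calibration period are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class task and a "pre-trained" linear classifier,
# standing in for the proprietary model the paper protects.
X = rng.normal(size=(1000, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(int)
w_ideal = w_true.copy()  # conductance pattern programmed at deployment

def accuracy(w):
    return float(np.mean((X @ w > 0).astype(int) == y))

def drift(w, rate=0.03, sigma=0.05):
    # Hypothetical obsolescence model: conductances decay toward the
    # off state and accumulate random perturbations per usage batch.
    return (1 - rate) * w + sigma * rng.normal(size=w.shape)

# Unauthorized user: the device drifts and is never recalibrated.
# Legitimate user: conductances are reprogrammed every `period` batches.
w_adv, w_leg = w_ideal.copy(), w_ideal.copy()
period = 10
for t in range(1, 51):
    w_adv = drift(w_adv)
    w_leg = drift(w_leg)
    if t % period == 0:
        w_leg = w_ideal.copy()  # calibration restores accuracy
        print(f"batch {t:2d}: adversary acc={accuracy(w_adv):.2f}, "
              f"legitimate acc={accuracy(w_leg):.2f}")
```

Under this toy model the adversary's accuracy decays toward chance as drift accumulates, while the periodically calibrated copy stays at the deployed accuracy, mirroring the security-vs.-performance trade-off the abstract analyzes.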

published proceedings

  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

author list (cited authors)

  • Yang, C., Liu, B., Li, H., Chen, Y., Barnell, M., Wu, Q., Wen, W., & Rajendran, J.

citation count

  • 8

complete list of authors

  • Yang, Chaofei; Liu, Beiye; Li, Hai; Chen, Yiran; Barnell, Mark; Wu, Qing; Wen, Wujie; Rajendran, Jeyavijayan

publication date

  • October 2020