Adversarial Attacks and Defenses on Graphs Academic Article

abstract

  • Deep neural networks (DNNs) have achieved strong performance on a wide range of tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations to the input, known as adversarial attacks.
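To make the notion of a small input perturbation concrete, the sketch below shows a one-step gradient-sign attack (in the spirit of FGSM) on a toy logistic-regression "network" using NumPy. All names (`fgsm_attack`, `loss`, the random weights) are illustrative assumptions, not taken from the article; the idea is only that nudging the input by `eps` in the direction of the loss gradient's sign increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w):
    # Binary cross-entropy of a linear logistic model on input x.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_attack(x, y, w, eps=0.2):
    # One-step sign-gradient perturbation (illustrative, not the
    # paper's method): move x by eps in the sign of dLoss/dx.
    p = sigmoid(w @ x)
    grad = (p - y) * w  # gradient of the cross-entropy w.r.t. x
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # fixed "trained" weights (hypothetical)
x = rng.normal(size=5)   # clean input
y = 1.0                  # true label

x_adv = fgsm_attack(x, y, w)
print(loss(x, y, w), loss(x_adv, y, w))  # adversarial loss is strictly larger
```

Because the loss of a linear model is convex in the input, a step along the sign of the gradient is guaranteed to increase it, which is why even a tiny `eps` can flip a prediction near the decision boundary.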

published proceedings

  • ACM SIGKDD Explorations Newsletter

author list (cited authors)

  • Jin, W., Li, Y., Xu, H., Wang, Y., Ji, S., Aggarwal, C., & Tang, J.

citation count

  • 24

complete list of authors

  • Jin, Wei||Li, Yaxing||Xu, Han||Wang, Yiqi||Ji, Shuiwang||Aggarwal, Charu||Tang, Jiliang

publication date

  • January 2021