Enhancing Neural Text Detector Robustness with μAttacking and RR-Training (Academic Article)

abstract

  • With advanced neural network techniques, language models can now generate content that looks genuinely human-written. Such progress benefits society in numerous ways, but it may also bring threats we have not seen before. A neural text detector is a classification model that separates machine-generated text from human-written text. Unfortunately, a pretrained neural text detector may be vulnerable to adversarial attacks that aim to fool it into making wrong classification decisions. In this work, we propose μAttacking, a mutation-based general framework for systematically evaluating the robustness of neural text detectors. Our experiments demonstrate that μAttacking identifies the detectors' flaws effectively. Inspired by the insights revealed by μAttacking, we also propose RR-training, a straightforward but effective strategy for improving the robustness of neural text detectors through finetuning. Compared with the standard finetuning method, our experiments show that RR-training increases model robustness by up to 11.33% without adding much effort when finetuning a neural text detector. We believe μAttacking and RR-training are useful tools for developing and evaluating neural language models.
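  • The abstract does not spell out the mutation operators or the detector interface; the snippet below is only a minimal, hypothetical sketch of how a mutation-based robustness probe of the kind described might be run against a pretrained detector. The homoglyph mutation rule, the `detector` callable, and the accuracy-under-mutation metric are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a mutation-based robustness probe for a neural text
# detector. The detector interface, mutation rule, and metric are illustrative
# assumptions, not the paper's actual framework.

import random
from typing import Callable, List

# Example character-level mutation: swap selected Latin letters for visually
# similar Cyrillic homoglyphs, a perturbation that a human reader barely notices.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

def mutate(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Return a copy of `text` with a fraction of mutable characters replaced."""
    rng = random.Random(seed)
    chars = list(text)
    for i, ch in enumerate(chars):
        if ch in HOMOGLYPHS and rng.random() < rate:
            chars[i] = HOMOGLYPHS[ch]
    return "".join(chars)

def robustness(detector: Callable[[str], int], samples: List[str], label: int) -> float:
    """Fraction of samples still classified with the expected label after mutation."""
    kept = sum(detector(mutate(s)) == label for s in samples)
    return kept / len(samples)

if __name__ == "__main__":
    # Toy stand-in for a pretrained classifier (e.g., a RoBERTa-based detector):
    # it naively flags any text containing the substring "model" as machine-generated (1).
    toy_detector = lambda text: 1 if "model" in text else 0
    machine_texts = ["the model writes fluent text", "a model generated this sentence"]
    print(f"accuracy under mutation: {robustness(toy_detector, machine_texts, label=1):.2f}")
```

  • A robustness-oriented finetuning strategy such as RR-training could then reuse the same mutation routine to augment the finetuning data, so the detector sees perturbed examples during training rather than only at evaluation time; the exact procedure is described in the paper itself.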

published proceedings

  • ELECTRONICS

author list (cited authors)

  • Liang, G., Guerrero, J., Zheng, F., & Alsmadi, I.

citation count

  • 1

publication date

  • 2023

publisher