Keywords: machine learning security, neural text generation, machine text detection, mutation testing


Abstract: With advanced neural network techniques, language models can generate content that reads as if it were written by humans. This progress benefits society in numerous ways, but it may also introduce threats we have not faced before. A neural text detector is a classification model that separates machine-generated text from human-written text. Unfortunately, a pretrained neural text detector may be vulnerable to adversarial attacks that aim to fool it into making wrong classification decisions. In this work, we propose µAttacking, a mutation-based general framework for systematically evaluating the robustness of neural text detectors. Our experiments demonstrate that µAttacking identifies a detector's flaws effectively. Building on the insights revealed by µAttacking, we also propose RR-training, a straightforward but effective strategy for improving the robustness of neural text detectors through finetuning. Compared with normal finetuning, our experiments show that RR-training increases model robustness by up to 11.33% with little additional finetuning effort. We believe µAttacking and RR-training are useful tools for developing and evaluating neural language models.
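The mutation-based idea described in the abstract can be sketched in a few lines: apply small, systematic perturbations (mutation operators) to an input text and record which ones flip a detector's decision. The sketch below is illustrative only; the homoglyph operator and the `toy_detector` stand-in are assumptions for demonstration, not the actual operators or detectors evaluated in the paper.

```python
# Illustrative sketch of a mutation-based robustness probe for a neural
# text detector. The mutation operator here (swapping Latin characters
# for visually similar Cyrillic homoglyphs) is an assumed example, not
# necessarily one of the paper's actual µAttacking operators.

def toy_detector(text: str) -> str:
    """Hypothetical detector stand-in: any callable mapping text -> label.

    This toy flags any text containing the ASCII word "model"."""
    return "machine" if "model" in text else "human"


# Cyrillic look-alikes for three Latin letters.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}


def mutate(text: str, char: str) -> str:
    """Apply one mutation operator: replace every occurrence of `char`."""
    return text.replace(char, HOMOGLYPHS[char])


def attack(detector, text: str):
    """Return the mutations that flip the detector's original decision."""
    original = detector(text)
    flips = []
    for char in HOMOGLYPHS:
        mutated = mutate(text, char)
        if detector(mutated) != original:
            flips.append((char, mutated))
    return flips


if __name__ == "__main__":
    sample = "this model writes fluent text"
    print(attack(toy_detector, sample))
```

In a real evaluation, `toy_detector` would be replaced by a pretrained neural text detector, and the fraction of mutations that flip its decision serves as a simple robustness score.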


Liang, G.; Guerrero, J.; Zheng, F.; Alsmadi, I. Enhancing Neural Text Detector Robustness with μAttacking and RR-Training. Electronics 2023, 12, 1948.

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.