Document Type

Conference Proceeding

Publication Date



Since the creation of the perceptron in the late 1950s, neural networks have served as a theoretical model for machine learning but were long limited by the computational capabilities of available hardware. Over the past three decades, increasing computational power has allowed neural network research to flourish at an unprecedented rate. In this research, we explore the detection of machine-generated text using a neural network that learns how to read language. Specifically, we took a pre-trained model, RoBERTa, and used it to distinguish between human, mutation, and synthetic text. Machine-generated text detection has been scarcely researched, making the detection of AI-generated content (e.g., ChatGPT) a very active topic in academia.
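As a rough illustration of the setup described above, the sketch below wires a pre-trained RoBERTa encoder to a three-way classification head, one class per text source (human, mutation, synthetic). The checkpoint name `roberta-base`, the label mapping, and the helper names are assumptions for illustration; the abstract does not specify the paper's actual fine-tuning configuration.

```python
# Hypothetical sketch of three-way text-source classification with RoBERTa,
# using the Hugging Face transformers library. The label scheme and the
# "roberta-base" checkpoint are assumptions, not the paper's stated setup.
from typing import Dict

# Assumed label mapping: one class per text source discussed in the abstract.
LABELS: Dict[int, str] = {0: "human", 1: "mutation", 2: "synthetic"}


def load_classifier(checkpoint: str = "roberta-base"):
    """Load a RoBERTa encoder with a freshly initialized 3-way head.

    The classification head is untrained; it must be fine-tuned on labeled
    human/mutation/synthetic examples before predictions are meaningful.
    """
    from transformers import RobertaForSequenceClassification, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained(checkpoint)
    model = RobertaForSequenceClassification.from_pretrained(
        checkpoint, num_labels=len(LABELS)
    )
    return tokenizer, model


def classify(text: str, tokenizer, model) -> str:
    """Return the most likely source label for a passage of text."""
    import torch

    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]


# Usage (downloads weights; fine-tune the head before trusting the output):
#   tok, mdl = load_classifier()
#   label = classify("Some passage of text to check.", tok, mdl)
```

Imports are deferred into the helper functions so the label scheme can be inspected without pulling in the heavyweight model dependencies.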