ChatGPT, bias, social media, analytics, pre-trained model
In this study, we analyze the public perception of Twitter users regarding the use of ChatGPT and the potential bias in its responses, complemented by sentiment and emotion analysis. Analysis of 5,962 English tweets showed that Twitter users were concerned about six main types of bias, namely: political & ideological, data & algorithmic, gender, racial, cultural, and confirmation biases. Sentiment analysis showed that most users expressed a neutral sentiment, followed by negative and then positive sentiment. Emotion analysis mainly reflected anger, disgust, and sadness with respect to concerns about bias in ChatGPT use.
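To illustrate the kind of sentiment labeling described above, here is a minimal sketch of a lexicon-based tweet classifier. This is purely illustrative: the word lists, the `classify_sentiment` function, and the example tweets are assumptions for demonstration, not the authors' actual pipeline or data.

```python
from collections import Counter

# Toy lexicons (illustrative only; real analyses use curated resources).
POSITIVE = {"great", "helpful", "impressive", "love"}
NEGATIVE = {"biased", "unfair", "angry", "disappointing"}

def classify_sentiment(tweet: str) -> str:
    """Label a tweet positive, negative, or neutral by counting lexicon hits."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Hypothetical example tweets, tallied the way an aggregate analysis would be.
tweets = [
    "ChatGPT gave a biased and unfair answer",
    "ChatGPT is helpful and impressive",
    "ChatGPT answered my question",
]
print(Counter(classify_sentiment(t) for t in tweets))
```

A production study would instead use a trained sentiment model or a validated lexicon, but the aggregation step (classify each tweet, then tally labels) follows the same shape.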
Wahbeh, Abdullah; Al-Ramahi, Mohammad A.; El-Gayar, Omar; El Noshokaty, Ahmed; and Nasralah, Tareq, "Perception of Bias in ChatGPT: Analysis of Social Media Data" (2023). Computer Information Systems Faculty Publications. 17.