Document Type

Conference Proceeding

Publication Date



Keywords

ChatGPT, bias, social media, analytics, pre-trained model


Abstract

In this study, we analyze the public perception of ChatGPT among Twitter users, with a focus on potential bias in its responses. Sentiment and emotion analyses were also performed. Analysis of 5,962 English tweets showed that Twitter users were concerned about seven main types of bias: political, ideological, data and algorithmic, gender, racial, cultural, and confirmation bias. Sentiment analysis showed that most users expressed a neutral sentiment, followed by negative and then positive sentiment. Emotion analysis mainly reflected anger, disgust, and sadness with respect to bias concerns in ChatGPT use.
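The abstract does not specify the sentiment models used, but the general idea of bucketing tweets into positive, neutral, and negative classes can be illustrated with a toy lexicon-based sketch. The word lists and `classify_sentiment` function below are purely illustrative assumptions, not the paper's actual pipeline (real studies typically use trained models such as VADER or transformer classifiers).

```python
# Toy lexicon-based sentiment bucketing, purely illustrative:
# the paper's actual sentiment/emotion models are not specified here.
POSITIVE = {"great", "helpful", "love", "amazing", "useful"}
NEGATIVE = {"biased", "unfair", "angry", "wrong", "disgusting"}

def classify_sentiment(tweet: str) -> str:
    """Label a tweet positive, negative, or neutral by keyword counts."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweets = [
    "ChatGPT is amazing and so helpful",
    "ChatGPT feels biased and unfair",
    "Tried ChatGPT today",
]
print([classify_sentiment(t) for t in tweets])
# → ['positive', 'negative', 'neutral']
```

A production pipeline would replace the hand-built word sets with a trained sentiment model, but the classification interface (tweet in, label out) is the same shape.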


Accepted conference paper for IEEE GCAIoT. Citation information and links to the final, published version are coming soon.