Recent research has shown that large language models consistently capture and replicate undesirable societal biases relating to race, religion, and gender; political bias, however, remains comparatively underexplored. This study investigates the political bias present in the state-of-the-art large language model GPT-3. To do so, I apply natural language processing techniques to develop a political sentiment analysis model, which I use to analyze the ideological slant of political essays written by GPT-3. I find that GPT-3 exhibits a moderate left-leaning bias and tends to replicate the ideological bias of its prompt text.
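The abstract does not detail the sentiment analysis model itself, but the general approach it names — classifying text by political ideology — can be illustrated with a minimal sketch. The code below is a hypothetical example, not the author's model: a multinomial Naive Bayes classifier trained on a tiny toy corpus with assumed "left"/"right" labels, using only the Python standard library.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and split on whitespace; a real pipeline would use a proper tokenizer.
    return text.lower().split()

class NaiveBayesClassifier:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.label_counts = Counter(labels)          # documents per label
        self.word_counts = defaultdict(Counter)      # token counts per label
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus smoothed log likelihood of each token.
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data with hypothetical ideology labels (illustration only).
texts = [
    "expand public healthcare and raise the minimum wage",
    "strengthen unions and fund social programs",
    "cut taxes and reduce government regulation",
    "protect gun rights and shrink federal spending",
]
labels = ["left", "left", "right", "right"]

clf = NaiveBayesClassifier().fit(texts, labels)
print(clf.predict("raise the minimum wage and fund healthcare"))  # → left
```

Applied at scale to model-generated essays, a classifier of this kind yields per-essay ideology labels whose distribution can then be compared across prompts, which is the general shape of the analysis the abstract describes.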
"Political Bias in Large Language Models," The Commons: Puget Sound Journal of Politics: Vol. 4, Iss. 1, Article 2.
Available at: https://soundideas.pugetsound.edu/thecommons/vol4/iss1/2