Decoding AI: Politics, Inequality and the Promise of a Better Society
Aarhus University, Department of Political Science
General Information
Time: Spring semester 2026, Tuesdays 09:00 - 12:00
Location: 1330-018 Undervisningslokale
Instructor: Tobias Widmann (widmann@ps.au.dk)
Course Content
In a moment of rapid technological change, artificial intelligence is emerging as one of the most consequential innovations for politics and society. From the ways governments govern, citizens engage, and economies function, to the future of work, global inequality, and even climate action, AI is reshaping the foundations of democratic life and social organization. This course examines how artificial intelligence is transforming contemporary societies, highlighting both its risks and its promises.
The first part of the course introduces the foundations: what AI is (and what it is not), its ideological roots, and its history as a technology. We then turn to pressing issues of fairness, bias, and representational harms, asking how AI systems encode and reproduce existing inequalities across cultures and identities.
The second part of the course explores AI’s impact across key social domains. We focus on how AI impacts politics, but also analyze how AI reshapes media systems, the future of labor and data work, the environmental costs and opportunities of large-scale AI deployment, as well as AI in artistic production, where debates around creativity, authorship, and knowledge take center stage.
The course concludes by situating AI in broader debates about regulation, governance, and global power. Throughout, we critically assess not only the dangers of AI—surveillance, disinformation, exploitation, and inequality—but also its potential to support sustainability, democratize knowledge, and expand human creativity. Students will gain a nuanced understanding of how AI intersects with politics and society, and the tools to critically analyze its role in shaping our collective future.
Learning Outcomes
After actively participating in the course, the student will be able to:
Understand and define the key concepts of artificial intelligence and its role as a general-purpose technology, including its ideological roots and historical development.
Critically engage with debates about fairness, bias, and harms in AI, and evaluate how these systems reproduce or challenge existing inequalities across the world.
Compare, analyze, and evaluate the impact of AI across key domains such as politics, media, labor, and the environment.
Examine the implications of AI for democratic governance, inequality, and global power, including both risks and opportunities.
Integrate and apply theoretical and empirical research from multiple disciplines (e.g., political science, sociology, communication, computer science) to analyze concrete cases of AI in society.
Independently formulate a relevant research question relating to the political and societal consequences of AI.
Communicate their knowledge of the field and the results of their own research in clear, structured, academic language.
General Sources About the Tech World
The AI/tech world moves fast, and it can be difficult to keep track of it all. If you nonetheless want to try, or simply would like to learn more about related topics (whether directly connected to this class or not), here are some media sources I can recommend. The list is entirely subjective! If you know good resources that are not included here, please share them on the online forum of this class!
• 404 Media
• The Verge
• Wired
• TechCrunch
Another great resource on GenAI (specifically LLMs): this interactive website from University of Washington professors is well worth exploring in parallel to our course. Some topics will overlap, but don't try to map it one-to-one onto the syllabus (it won't work like that); you can even just explore the whole site in one evening.
Course Structure
Week 1: Introduction - What is AI?
This opening week lays the foundation by demystifying artificial intelligence. We explore what machine learning and large language models actually are, moving beyond hype to understand their core mechanisms. We examine the capabilities and fundamental limitations of current AI systems, and investigate how these models engage in sophisticated pattern matching that can sometimes appear remarkably human-like.
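To make "sophisticated pattern matching" concrete before the readings, here is a deliberately tiny sketch: a bigram model that predicts the next word purely from counted co-occurrences in a toy corpus. Real LLMs use neural networks over subword tokens at vastly larger scale, but the underlying task, predicting the next token from statistical patterns in training data, is the same. Everything in the snippet (the corpus, the function name) is invented for illustration.

```python
# A minimal sketch of next-token prediction (illustrative only, not how
# production LLMs work): a bigram "language model" that predicts the next
# word from counted patterns in its training text.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> "on": "sat on" is the only pattern seen
```

Scaled up from a three-sentence corpus to trillions of tokens, and from raw counts to learned neural representations, this same predict-the-next-token objective is what produces output that can appear remarkably human-like.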
Mandatory Readings
Lee, T. B. (2024, November). Large language models, explained with a minimum of math and jargon. Understanding AI. https://www.understandingai.org/p/large-language-models-explained-with
Brown, S. (2025, January). Machine learning, explained. MIT Sloan Ideas Made to Matter. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press. Chapter 1.
Optional Readings
Serrano, Sofia, Zander Brumbaugh, and Noah A. Smith. 2023. “Language Models: A Guide for the Perplexed.” arXiv:2311.17301. Preprint, arXiv, November 29. https://doi.org/10.48550/arXiv.2311.17301
Week 2: What AI Can and Cannot Do: Biases, Risks, and Harms
Moving beyond the basics, this week confronts the darker realities of AI systems. We examine how predictive systems often fail in high-stakes contexts, explore the pervasive biases embedded in image generation and language models, and analyze the broader taxonomy of risks these technologies pose. From demographic stereotypes to the challenge of detecting AI-generated content, we critically assess what can go wrong.
Mandatory Readings
Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press. Chapters 2 & 3.
Bloomberg. 2023. “Humans Are Biased. Generative AI Is Even Worse.” Accessed January 28, 2026. https://www.bloomberg.com/graphics/2023-generative-ai-bias/.
Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., … & Caliskan, A. (2023, June). Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1493-1504).
Hofmann, Valentin, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King. 2024. “AI Generates Covertly Racist Decisions about People Based on Their Dialect.” Nature 633 (8028): 147–54. https://doi.org/10.1038/s41586-024-07856-5
Optional Readings
Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P. S., Mellor, J., … & Gabriel, I. (2022, June). Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 214-229).
Haenlein, Michael, and Andreas Kaplan. 2019. “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence.” California Management Review 61 (4): 5–14. https://doi.org/10.1177/0008125619864925
Al-Amin, Md, Mohammad Shazed Ali, Abdus Salam, et al. 2024. “History of Generative Artificial Intelligence (AI) Chatbots: Past, Present, and Future Development.” arXiv:2402.05122. Preprint, arXiv, February 4. https://doi.org/10.48550/arXiv.2402.05122
Vlasceanu, Madalina, and David M. Amodio. 2022. “Propagation of Societal Gender Inequality by Internet Search Algorithms.” Proceedings of the National Academy of Sciences 119 (29): e2204529119. https://doi.org/10.1073/pnas.2204529119
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (New York, NY, USA), FAccT ’21, March 1, 610–23. https://doi.org/10.1145/3442188.3445922
Week 3: AI and Democracy
Can AI-generated content persuade voters? Might synthetic media undermine trust in democratic institutions? This week examines AI’s impact on democratic processes, from propaganda and persuasion to deepfakes and synthetic media. We explore how AI both threatens and potentially strengthens democratic governance, questioning whether citizens can maintain sovereignty in an age when reality itself becomes contested.
Mandatory Readings
Jungherr, Andreas. 2023. “Artificial Intelligence and Democracy: A Conceptual Framework.” Social Media + Society 9 (3): 20563051231186353. https://doi.org/10.1177/20563051231186353.
Summerfield, Christopher, Lisa P. Argyle, Michiel Bakker, et al. 2025. “The Impact of Advanced AI Systems on Democracy.” Nature Human Behaviour 9 (12): 2420–30. https://doi.org/10.1038/s41562-025-02309-z.
Coeckelbergh, Mark. 2024. Why AI Undermines Democracy and What to Do about It. John Wiley & Sons. Chapters 3 & 4. PDF available on Brightspace.
Optional Readings
Kreps, Sarah, and Doug Kriner. 2023. “How AI Threatens Democracy.” Journal of Democracy 34 (4): 122–31.
Week 6: Research Design Session
This week provides time for brainstorming ideas for the final assignment and for discussing research design.
Mandatory Readings
Toshkov, Dimiter. 2016. Research Design in Political Science. Macmillan Education UK. Chapters 1–3. https://doi.org/10.1007/978-1-137-34284-3_3
Week 7: AI, Journalism, and Media
As newsrooms automate content production and AI systems generate increasingly sophisticated text, fundamental questions arise about truth, trust, and the future of journalism. This week explores public attitudes toward AI in news, the problem of hallucinations in AI-generated content, and how automation is transforming journalistic practice.
Mandatory Readings
Newman, N., Fletcher, R., & Nielsen, R. K. (2024). What does the public in six countries think of generative AI in news? Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news
Brigham, Natalie Grace, Chongjiu Gao, Tadayoshi Kohno, Franziska Roesner, and Niloofar Mireshghallah. 2024. “Developing Story: Case Studies of Generative AI’s Use in Journalism.” arXiv:2406.13706. Preprint, arXiv, December 3. https://doi.org/10.48550/arXiv.2406.13706.
Sikorski, Christian von, and Michael Hameleers. 2025. “Disinformation in the Age of Artificial Intelligence (AI): Implications for Journalism and Mass Communication.” Journalism & Mass Communication Quarterly 102 (4): 941–57. https://doi.org/10.1177/10776990251375097.
Hanley, Hans W. A., and Zakir Durumeric. 2024. “Machine-Made Media: Monitoring the Mobilization of Machine-Generated Articles on Misinformation and Mainstream News Websites.” Proceedings of the International AAAI Conference on Web and Social Media 18 (May): 542–56. https://doi.org/10.1609/icwsm.v18i1.31333.
Optional Readings
Noain-Sánchez, Amaya. 2022. “Addressing the Impact of Artificial Intelligence on Journalism: The Perception of Experts, Journalists and Academics.” Communication & Society 35 (3): 105–21. https://doi.org/10.15581/003.35.3.105-121.
Week 8: AI and Labor/Work
Will AI create mass unemployment or usher in an era of abundance? This week examines AI’s impact on work, from the macroeconomic implications to the hidden labor of data workers in the Global South.
Mandatory Readings
Noy, Shakked, and Whitney Zhang. 2023. “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence.” Science 381 (6654): 187–92. https://doi.org/10.1126/science.adh2586.
Gallego, Aina, and Thomas Kurer. 2022. “Automation, Digitalization, and Artificial Intelligence in the Workplace: Implications for Political Behavior.” Annual Review of Political Science 25: 463–84. https://doi.org/10.1146/annurev-polisci-051120-104535.
Perrigo, Billy. 2023. “Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer.” TIME, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/.
Agrawal, Ajay, Joshua S. Gans, and Avi Goldfarb. 2019. “Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction.” Journal of Economic Perspectives 33 (2): 31–50. https://doi.org/10.1257/jep.33.2.31.
Week 9: AI and Inequalities
AI development is concentrated in the Global North, raising urgent questions about global equity and power. This week examines the “AI divide” between rich and poor nations, explores how to institutionalize ethics in AI development, and investigates the asymmetries of AI-driven globalization. We consider whether AI’s promised impact will deepen or potentially reduce global inequalities, and what mechanisms might ensure more equitable outcomes.
Mandatory Readings
World Economic Forum. 2023. “The ‘AI Divide’ between the Global North and Global South.” January 16. https://www.weforum.org/stories/2023/01/davos23-ai-divide-global-north-global-south
Colón Vargas, Nelson. 2025. “Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups.” AI and Ethics 5 (2): 1871–76. https://doi.org/10.1007/s43681-024-00502-w.
UNCTAD. 2025. “AI’s $4.8 Trillion Future: UN Trade and Development Alerts on Divides, Urges Action.” April 7. https://unctad.org/news/ais-48-trillion-future-un-trade-and-development-alerts-divides-urges-action.
Zajko, Mike. 2022. “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to Contemporary Debates.” Sociology Compass 16 (3): e12962. https://doi.org/10.1111/soc4.12962.
Optional Readings
Panwar, Aklovya. 2025. “Generative AI and Copyright Issues Globally: ANI Media v OpenAI.” Tech Policy Press, January 8. https://techpolicy.press/generative-ai-and-copyright-issues-globally-ani-media-v-openai.
Week 10: AI and Climate Change
AI presents a paradox for climate action: it consumes massive amounts of energy while potentially enabling climate solutions. This week investigates AI’s environmental footprint, from data center water usage to carbon emissions from training large models. We also explore how AI might help combat climate change through better predictions, optimization, and scientific discovery. Can the benefits outweigh the costs, or does AI represent another accelerant of environmental crisis?
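The footprint estimates debated in this week's readings rest on a simple accounting identity: energy equals power draw times duration, and emissions equal energy times the grid's carbon intensity. The sketch below shows that arithmetic with entirely hypothetical placeholder numbers (the GPU count, wattage, training duration, and grid mix are invented for illustration); it is not a measurement of any real model.

```python
# Back-of-envelope sketch of the kind of footprint estimate the readings
# discuss (e.g. Luccioni et al. on BLOOM). ALL numbers below are
# hypothetical placeholders, not measurements; the point is the method:
#   energy (kWh)      = power draw (kW) x hours
#   emissions (kg)    = energy (kWh) x grid carbon intensity (kg CO2e/kWh)

def training_emissions_kg(gpus, watts_per_gpu, hours, kg_co2e_per_kwh):
    """Estimate training emissions from power draw, duration, and grid mix."""
    energy_kwh = gpus * watts_per_gpu / 1000 * hours  # watts -> kilowatts
    return energy_kwh * kg_co2e_per_kwh

# Hypothetical run: 1,000 GPUs at 400 W for 30 days, on a grid emitting
# 0.4 kg CO2e per kWh (all placeholder values).
print(round(training_emissions_kg(1000, 400, 30 * 24, 0.4)))  # -> 115200 kg CO2e
```

Note how sensitive the result is to the grid-mix parameter: the same training run on a low-carbon grid can emit an order of magnitude less, which is one reason published estimates for similar models diverge so widely.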
Mandatory Readings
Cowls, Josh, Andreas Tsamados, Mariarosaria Taddeo, and Luciano Floridi. 2023. “The AI Gambit: Leveraging Artificial Intelligence to Combat Climate Change—Opportunities, Challenges, and Recommendations.” AI & SOCIETY 38 (1): 283–307. https://doi.org/10.1007/s00146-021-01294-x.
MIT News. 2025. “Explained: Generative AI’s Environmental Impact.” January 17. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.
Stern, Nicholas, Mattia Romani, Roberta Pierfederici, et al. 2025. “Green and Intelligent: The Role of AI in the Climate Transition.” Npj Climate Action 4 (1): 56. https://doi.org/10.1038/s44168-025-00252-3.
UN News. 2023. “Explainer: How AI Helps Combat Climate Change.” November 3. https://news.un.org/en/story/2023/11/1143187.
Wiggers, Kyle. 2025. “ChatGPT May Not Be as Power-Hungry as Once Assumed.” TechCrunch, February 11. https://techcrunch.com/2025/02/11/chatgpt-may-not-be-as-power-hungry-as-once-assumed/.
Optional Readings
Bashir, Noman, Priya Donti, James Cuff, et al. 2024. “The Climate and Sustainability Implications of Generative AI.” An MIT Exploration of Generative AI, March 27. https://mit-genai.pubpub.org/pub/8ulgrckc/release/2.
Vincent, James. 2024. “How Much Electricity Does AI Consume?” The Verge, February 16. https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption.
Rolnick, David, Priya L. Donti, Lynn H. Kaack, et al. 2022. “Tackling Climate Change with Machine Learning.” ACM Comput. Surv. 55 (2): 42:1-42:96. https://doi.org/10.1145/3485128.
Luccioni, Alexandra Sasha, Sylvain Viguier, and Anne-Laure Ligozat. 2023. “Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model.” Journal of Machine Learning Research 24 (253): 1–15.
Zhang, Mary. 2024. “Data Center Water Usage: A Comprehensive Guide.” Dgtl Infra, January 17. https://dgtlinfra.com/data-center-water-usage/.
Week 11: A Brighter Future Through Regulation?
Can regulation tame AI’s risks while preserving its benefits? This week examines emerging frameworks for AI governance, from the EU’s comprehensive regulatory approach to principles of human-centered AI design. We explore what trustworthy AI might look like in practice, debate whether current regulatory proposals are adequate, and consider who should have the power to shape AI’s future. The readings ask whether democratic societies can collectively steer this technology toward the common good.
Mandatory Readings
Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press. Chapter 8.
Coeckelbergh, Mark. 2024. Why AI Undermines Democracy and What to Do about It. John Wiley & Sons. Chapters 6 & 7. A copy of the book is available on the semester shelf.
Shneiderman, Ben. 2020. “Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy.” International Journal of Human–Computer Interaction 36 (6): 495–504. https://doi.org/10.1080/10447318.2020.1741118.
Optional Readings
Zuboff, Shoshana. 2021. “The Coup We Are Not Talking About.” Opinion, The New York Times, January 29. https://www.nytimes.com/2021/01/29/opinion/sunday/facebook-surveillance-society-technology.html.
European Commission. 2026. “AI Act.” Shaping Europe’s Digital Future, January 21. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
White & Case LLP. n.d. “AI Watch: Global Regulatory Tracker.” Accessed January 28, 2026. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker.
Week 12: Project Presentations I
No readings this week
Week 13: Project Presentations II
No readings this week
Week 14: AI and Laziness, Arts, and Science
Does AI make us lazy thinkers, or does it free us for higher pursuits? This week explores AI’s impact on human creativity, deliberation, and knowledge production. We discuss human laziness, art, and scientific production. The readings challenge us to think about what makes art “art,” who deserves credit for AI-assisted creation, and whether AI represents a threat or opportunity for human flourishing.
Mandatory Readings
Ahmad, Sayed Fayaz, Heesup Han, Muhammad Mansoor Alam, et al. 2023. “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education.” Humanities and Social Sciences Communications 10 (1): 311. https://doi.org/10.1057/s41599-023-01787-8.
Hao, Qianyue, Fengli Xu, Yong Li, and James Evans. 2026. “Artificial Intelligence Tools Expand Scientists’ Impact but Contract Science’s Focus.” Nature, January 14, 1–7. https://doi.org/10.1038/s41586-025-09922-y.
Chiang, Ted. 2024. “Why A.I. Isn’t Going to Make Art.” The New Yorker.
Mikalonytė, Elzė Sigutė, and Markus Kneer. 2022. “Can Artificial Intelligence Make Art?: Folk Intuitions as to Whether AI-Driven Robots Can Be Viewed as Artists and Produce Art.” J. Hum.-Robot Interact. 11 (4): 43:1-43:19. https://doi.org/10.1145/3530875.
Epstein, Ziv, Sydney Levine, David G. Rand, and Iyad Rahwan. 2020. “Who Gets Credit for AI-Generated Art?” iScience 23 (9): 101515. https://doi.org/10.1016/j.isci.2020.101515.
Optional Readings
Nair, S. (2022). German artist Mario Klingemann on his creation ‘Botto’, an NFT revolution. STIR World. https://www.stirworld.com/see-features-german-artist-mario-klingemann-on-his-creation-botto-an-nft-revolution
Zylinska, Joanna. 2020. AI Art: Machine Visions and Warped Dreams. Open Humanities Press. https://www.openhumanitiespress.org/books/titles/ai-art/.
NVIDIA. “Dive into AI Innovations in Art and Fashion at NVIDIA GTC Paris 2025.” Accessed January 28, 2026. https://www.nvidia.com/en-us/research/ai-art-gallery/.
Lo, Chung Kwan. 2023. “What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature.” Education Sciences 13 (4). https://doi.org/10.3390/educsci13040410.
Fan, Yizhou, Luzhen Tang, Huixiao Le, et al. 2025. “Beware of Metacognitive Laziness: Effects of Generative Artificial Intelligence on Learning Motivation, Processes, and Performance.” British Journal of Educational Technology 56 (2): 489–530. https://doi.org/10.1111/bjet.13544.
Sample, Ian. 2023. “Programs to Detect AI Discriminate against Non-Native English Speakers, Shows Study.” The Guardian, July 10. https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study.
Course Topic Network
This interactive network shows how the different themes and topics in this course connect to each other. Node size represents reading load (bigger = more pages).