EMMA LUCE SCALI
Small Group Project 2022-23
Artificial intelligence (AI) has assumed a progressively greater role in knowledge discovery and production, and we increasingly rely on AI applications to make decisions in the most diverse fields. However, the way in which AI understands, constructs, and attempts to acquire what it considers relevant knowledge has not yet been thoroughly explored. This research proposal starts from the conjecture that AI’s epistemology may be intimately connected to what some authors have described as a distinctive ‘neoliberal epistemology’, grounded in a specific vision of the nature and (im)possibilities of human knowledge, with normative implications for the role of the individual in decision-making and for the social institutions responsible for the ordering of social relations.
The proposed project will bring together, for a one-day interdisciplinary workshop, an international group of emerging and established scholars who have conducted innovative work on neoliberalism, AI and epistemology in their respective disciplinary fields (law, sociology, philosophy and politics). It will encourage interdisciplinary dialogue by offering participants a site, both in the months leading up to the workshop and at the workshop itself, to share their disciplinary perspectives, tools and findings, and to develop a conceptually enriched understanding of the topic of research.
The proposed research will represent the first attempt 1) to develop a theoretical framework (of transdisciplinary relevance) on the epistemological connections between neoliberalism and AI, and 2) to address the implications of neoliberal/AI conceptions of, and processes for, knowing – especially for conceptions of the human person (and qualities generally attached to it, such as agency, autonomy, freedom, responsibility and human rights) and of social phenomena and institutions, including the State and the law – and of their (in some cases, automated) application to individual and collective decision-making. Such an analysis may be particularly relevant to informing ongoing attempts to regulate AI for the benefit of humanity.