I am a PhD candidate in Philosophy at Central European University. My work focuses on moral responsibility, implicit attitudes, and the ethical implications of artificial intelligence. I am particularly interested in how we should understand responsibility for unintentional actions and outcomes, especially when those outcomes are shaped by complex technological or social systems.
My research draws from philosophy of mind, moral psychology, and philosophy of technology. I am currently developing projects on responsibility for AI-generated outcomes and on the nature of implicit bias.
You can learn more about my research on my PhilPapers profile.
My CV is available here.

I am currently interested in questions about moral responsibility, especially in cases where an agent causes harm unintentionally. These questions first arose in my research on implicit attitudes, which often lead agents to cause harm they do not intend. The project has since expanded to cover a broader range of unintended actions, including those mediated by technological artifacts, particularly advanced AI systems. In such contexts, what can we say about moral responsibility? Is blame justified, or are agents excused because their causal contribution cannot be traced back to bad intentions, negligence, or recklessness?
I am also interested in user–artefact relations from the perspectives of metaphysics, philosophy of action, and responsibility. The dominant approach in the metaphysics of artifacts—the intention-based view—ties artifacts and their functions to the intentions of their makers. My work shifts attention to the user instead: how users engage with artifacts, how artifacts become integrated into users’ actions, and what this means for agency and responsibility.
Autonomous technologies, particularly self-learning AI systems, are often said to create responsibility gaps: cases where harm is caused, yet no one is responsible, because no one appears to meet the control and epistemic conditions typically required for moral responsibility. In this paper, I argue that this problem is better understood as a challenge of attributing moral responsibility for unintentional actions. I suggest that unintended, harmful AI-based outcomes should be characterized as unintentional actions that can be traced back to human agents. On this basis, I argue that while such actions may be unintentional under some description, and thus potentially excusable, their unintentional character does not negate moral responsibility. Instead, it modifies it: designers and users remain responsible due to the moral residue left by their involvement, and they may bear reparative obligations, such as offering explanations, apologies, or compensation for the harm caused. In high-stakes cases, moral agents may still need to take responsibility, and may, in some contexts, be appropriate targets of blame.
Paper on Implicit Bias – argues that implicit attitudes are better understood by comparison with certain types of memory, especially those expressed through behavioural and emotional dispositions. Email me for a draft.
Paper on Moral Distance
Paper on Relational Approach to Responsibility Gaps (with Matteo Pascucci). Email me for a draft.
Pelin Kasar
Kasar_Pelin (@) phd.ceu.edu
Central European University (CEU)
Quellenstraße 51
1100 Vienna
Austria