Abstract
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet even on this assumption many issues remain, which are discussed starting from two Aristotelian conditions for responsibility. Alongside the well-known problem of many hands, the problem of “many things” is identified, and the temporal dimension of the control condition is emphasized. Special attention is given to the epistemic condition, which raises the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based not on agency but on patiency.
References
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
Aristotle. (1984). Nicomachean ethics. In J. Barnes (Ed.), The complete works of Aristotle (Vol. 2, pp. 1729–1867). Princeton: Princeton University Press.
Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.
Bryson, J. (2016). Patiency is not a virtue: AI and the design of ethical systems. In AAAI spring symposium series: Ethical and moral considerations in non-human agents. Retrieved 4 Sept 2018, from http://www.aaai.org/ocs/index.php/SSS/SSS16/paper/view/12686.
Caliskan, A., Bryson, J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356, 183–186.
Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility. AI & SOCIETY, 24(2), 181–189.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.
Coeckelbergh, M. (2011). Moral responsibility, technology, and experiences of the tragic: From Kierkegaard to offshore engineering. Science and Engineering Ethics, 18(1), 35–48.
Dignum, V., Baldoni, M., Baroglio, C., Caon, M., Chatila, R., Dennis, L., & Génova, G., et al. (2018). Ethics by design: Necessity or curse? Association for the Advancement of Artificial Intelligence. Retrieved 21 Jan 2019, from http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper68.pdf.
Duff, R. A. (2005). Who is responsible, for what, to whom? Ohio State Journal of Criminal Law, 2, 441–461.
European Commission AI HLEG (High-Level Expert Group on Artificial Intelligence). (2019). Ethics guidelines for trustworthy AI. Retrieved 22 Aug 2019, from https://ec.europa.eu/futurium/en/aialliance-consultation/guidelines#Top.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge: Cambridge University Press.
Floridi, L., & Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Gunkel, D. J. (2018a). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2.
Gunkel, D. J. (2018b). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87–99.
Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and Information Technology, 11(1), 91–99.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630.
Horowitz, M., & Scharre, P. (2015). An introduction to autonomy in weapon systems. CNAS working paper. https://www.cnas.org/publications/reports/an-introduction-to-autonomy-in-weapon-systems.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2019). Discrimination in the age of algorithms. Journal of Legal Analysis, 10, 1–62.
Levinas, E. (1969). Totality and infinity: An essay on exteriority (A. Lingis, Trans.). Pittsburgh: Duquesne University.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
McKenna, M. (2008). Putting the lie on the control condition for moral responsibility. Philosophical Studies, 139(1), 29–37.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society, 3, 1–21.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. Intelligent Systems (IEEE), 21(4), 18–21.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.
Rudy-Hiller, F. (2018). The epistemic condition for moral responsibility. Stanford Encyclopedia of Philosophy. Retrieved 26 Aug 2019, from https://plato.stanford.edu/entries/moral-responsibility-epistemic/.
Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. https://arxiv.org/pdf/1708.08296.pdf.
Sommaggio, P., & Marchiori, S. (2018). Break the chains: A new way to consider machine’s moral problems. Biolaw Journal, 3, 241–257.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8, 205–213.
Suárez-Gonzalo, S., Mas-Manchón, L., & Guerrero-Solé, F. (2019). Tay is you. The attribution of responsibility in the algorithmic culture. Observatorio, 13(2), 1–14.
Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–29.
Sunstein, C. R. (2018). Algorithms, correcting biases. Social Research (forthcoming). Available at SSRN: https://ssrn.com/abstract=3300171.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
Turner, J. (2018). Robot rules: Regulating artificial intelligence. Cham: Palgrave Macmillan.
Van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.
Verbeek, P. P. (2006). Materializing morality. Science, Technology and Human Values, 31(3), 361–380.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2026 International Institute of Islamic Thought
