Abstract
This study discusses the problem of "responsibility attribution" raised by the use of artificial intelligence technologies. It is presumed that humans alone are the responsible agents, yet this assumption raises many difficulties, which we discuss in light of the two Aristotelian conditions for responsibility: control and knowledge. With respect to the control condition, we add to the well-known problem of many hands a further problem we call the problem of many things, and we also stress the temporal dimension. We then turn to the knowledge condition, which draws attention to two important issues: transparency and explainability. In contrast to the prevailing debates, however, we argue that the problem of "knowing the responsible agent" is bound up with the other party in the responsibility relation: the moral patient, the one to whom responsibility is owed; that is, the party acted upon, who is entitled to demand an explanation of the actions performed on them, and of the decisions taken about them, by means of artificial intelligence. Following this relational approach, responsibility understood as answerability provides an important, indeed fundamental, additional justification for the demand to explain automated actions and decisions: a justification grounded not in moral agency but in moral patiency.
References
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
Aristotle. (1984). Nicomachean ethics. In J. Barnes (Ed.), The complete works of Aristotle (Vol. 2, pp. 1729–1867). Princeton: Princeton University Press.
Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.
Bryson, J. (2016). Patiency is not a virtue: AI and the design of ethical systems. In AAAI Spring Symposium Series: Ethical and Moral Considerations in Non-Human Agents. Retrieved September 4, 2018, from http://www.aaai.org/ocs/index.php/SSS/SSS16/paper/view/12686.
Caliskan, A., Bryson, J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356, 183–186.
Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility. AI & Society, 24(2), 181–189.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.
Coeckelbergh, M. (2011). Moral responsibility, technology, and experiences of the tragic: From Kierkegaard to offshore engineering. Science and Engineering Ethics, 18(1), 35–48.
Dignum, V., Baldoni, M., Baroglio, C., Caon, M., Chatila, R., Dennis, L., Génova, G., et al. (2018). Ethics by design: Necessity or curse? Association for the Advancement of Artificial Intelligence. Retrieved January 21, 2019, from http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper68.pdf.
Duff, R. A. (2005). Who is responsible, for what, to whom? Ohio State Journal of Criminal Law, 2, 441–461.
European Commission AI HLEG (High-Level Expert Group on Artificial Intelligence). (2019). Ethics guidelines for trustworthy AI. Retrieved August 22, 2019, from https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge: Cambridge University Press.
Floridi, L., & Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Gunkel, D. J. (2018a). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2.
Gunkel, D. J. (2018b). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87–99.
Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and Information Technology, 11(1), 91–99.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630.
Horowitz, M., & Scharre, P. (2015). An introduction to autonomy in weapon systems. CNAS Working Paper. https://www.cnas.org/publications/reports/an-introduction-to-autonomy-in-weapon-systems.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2019). Discrimination in the age of algorithms. Journal of Legal Analysis, 10, 1–62.
Levinas, E. (1969). Totality and infinity: An essay on exteriority (A. Lingis, Trans.). Pittsburgh: Duquesne University.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
McKenna, M. (2008). Putting the lie on the control condition for moral responsibility. Philosophical Studies, 139(1), 29–37.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society, 3, 1–21.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. Intelligent Systems (IEEE), 21(4), 18–21.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.
Rudy-Hiller, F. (2018). The epistemic condition for moral responsibility. Stanford Encyclopedia of Philosophy. Retrieved August 26, 2019, from https://plato.stanford.edu/entries/moral-responsibility-epistemic/.
Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. https://arxiv.org/pdf/1708.08296.pdf.
Sommaggio, P., & Marchiori, S. (2018). Break the chains: A new way to consider machine’s moral problems. Biolaw Journal, 3, 241–257.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8, 205–213.
Suárez-Gonzalo, S., Mas-Manchón, L., & Guerrero-Solé, F. (2019). Tay is you. The attribution of responsibility in the algorithmic culture. Observatorio, 13(2), 1–14.
Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–29.
Sunstein, C. R. (2018). Algorithms, correcting biases. Forthcoming, Social Research. Available at SSRN: https://ssrn.com/abstract=3300171.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
Turner, J. (2018). Robot rules: Regulating artificial intelligence. Cham: Palgrave Macmillan.
Van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.
Verbeek, P. P. (2006). Materializing morality. Science, Technology and Human Values, 31(3), 361–380.
Wallach, W., & Allen, C. (2009). Moral machines, teaching robots right from wrong. Oxford: Oxford University Press.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright (c) 2026 International Institute of Islamic Thought
