Dorna Behdadi: Anthropomorphism and AI - Rethinking the 'Notification Strategy'
Wednesday 19 March 2025, 13:15 - 15:00
HUM.H.119 (HD108)
The Research Seminar Series in Philosophy invites you to a seminar with Dorna Behdadi. Behdadi will speak on the topic of Anthropomorphism and AI - Rethinking the 'Notification Strategy'.
Abstract
The design of AI systems often involves anthropomorphic elements, intended to enhance usability and acceptability by tapping into human social cognition. For instance, giving a chatbot empathic responses, social norm compliance, or human-like interaction patterns can make interactions smoother and more appealing. However, anthropomorphic design also raises ethical concerns and risks unintended harms. People are reportedly already (mis)attributing mental states, personality traits, or moral agency to chatbots and AI assistants (Pelau et al., 2021; Waytz et al., 2014). For example, some users already see and treat these entities as friends (Newman, 2014) or as romantic or sexual companions (Döring et al., 2020), and blame them for harmful outputs (Sullivan & Fosso Wamba, 2022; Bigman & Gray, 2018).
Policy strategies aimed at mitigating problematic or harmful anthropomorphism of AI systems largely utilize what can be referred to as the ‘Notification Strategy’, namely, that “natural persons should be notified that they are interacting with an AI system” (European Parliament and Council of the European Union, 2024, Recital 132; see also Article 50). However, this approach assumes that explicit notification effectively counteracts the implicit, context-dependent cognitive processes that drive anthropomorphic perceptions. Even “once the facts are known” (Tigard, 2021, p. 594), the appearance, placement, and behavior of an AI system may still elicit mind-attribution in users.
Thus, the assumption that merely informing users about the nature and limitations of AI systems is sufficient to prevent problematic attributions appears inadequately justified. This raises doubts about the effectiveness of the 'Notification Strategy' and highlights the need for alternative approaches that more directly address the cognitive mechanisms underlying anthropomorphism and mental state attribution to AI systems.
References:
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34.
Döring, N., Mohseni, M. R., & Walter, R. (2020). Design, use, and effects of sex dolls and sex robots: Scoping review. Journal of Medical Internet Research, 22(7), e18551.
European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
Newman, J. (2014, October 17). To Siri, with love. The New York Times.
Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855.
Sullivan, Y. W., & Fosso Wamba, S. (2022). Moral judgments in the age of artificial intelligence. Journal of Business Ethics, 178(4), 917-943.
Tigard, D. W. (2021). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589-607.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117.