Theoretical Philosophy Reading Group
In the Theoretical Philosophy Reading Group, Filippo Santoni de Sio (Delft)
will present his paper "Four Responsibility Gaps with Artificial Intelligence:
Why they Matter and How to Address them" on Tuesday, 13 July. Everyone
interested is welcome to attend.
The talk runs from 1:00 to 2:30 p.m. and is pre-read. If you would like to
take part, please contact paul.klur@tu-dortmund.de to receive the text and
the Zoom link.
Abstract:
The notion of “responsibility gap” with artificial intelligence (AI) was
originally introduced in the philosophical debate to indicate the concern
that “learning automata” may make it more difficult or impossible to
attribute moral culpability to persons for untoward events. Building on
literature in moral and legal philosophy, and ethics of technology, the
paper proposes a broader and more comprehensive analysis of the
responsibility gap. The responsibility gap, it is argued, is not one
problem but a set of at least four interconnected problems – gaps in
culpability, moral and public accountability, and active responsibility –
caused by different sources, some technical, others organisational, legal,
ethical, and societal. Responsibility gaps may also happen with
non-learning systems. The paper clarifies which aspect of AI may cause
which gap in which form of responsibility, and why each of these gaps
matters. It proposes a critical review of partial and unsatisfactory
attempts to address the responsibility gap: those which present it as a
new and intractable problem (“fatalism”), those which dismiss it as a
false problem (“deflationism”), and those which reduce it to only one of
its dimensions or sources and/or present it as a problem that can be
solved by simply introducing new technical and/or legal tools
(“solutionism”). The paper also outlines a more comprehensive approach to
address the responsibility gaps with AI in their entirety, based on the
idea of designing socio-technical systems for “meaningful human control",
that is systems aligned with the relevant human reasons and capacities.