<div class="csl-bib-body">
<div class="csl-entry">Gao, H., Lorini, E., Olivetti, N., & Tesi, M. (2024). A Proof Calculus for Ethical Reasoning. In <i>PRIMA 2024: Principles and Practice of Multi-Agent Systems</i> (pp. 189–205). Springer. https://doi.org/10.1007/978-3-031-77367-9_15</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/209723
-
dc.description.abstract
To endow an autonomous agent with ethical reasoning and the capacity to represent ethical dilemmas, it is crucial to model the interplay between its knowledge, values, and preferences. A multi-agent logic of evaluation, LEV, was recently introduced to explore the connection between these three aspects. In the semantics of this logic, states are partially ordered by a preference relation that reflects the agent's values, while the values themselves are interpreted by a neighbourhood function. In the present paper we provide a proof-theoretic analysis of the mono-agent version of this logic by introducing a hypersequent proof system. We then consider a proof-search oriented version of this calculus, which yields a decision procedure for the logic and a direct extraction of a finite countermodel from a failed proof. Finally, we show how the logic and the calculus can be used to model and perform ethical reasoning.