On the Adoption of Counterexample for Classification Task Explainability
Mercaldo, Francesco; Santone, Antonella
2024-01-01
Abstract
Artificial intelligence is pervading our lives: from voice assistants to self-driving cars, many aspects of our daily lives are influenced by decisions made by systems trained through artificial intelligence techniques such as machine learning. Many of these decision support systems have been developed as black boxes, i.e., systems that hide their internal logic from the end user. This lack of explanation is a practical as well as an ethical issue. Given how critical the decisions made by these systems can be, the need is emerging for artificial intelligence techniques able to explain why a certain decision is taken by these algorithms. We propose an approach that adopts the counterexample, automatically generated by model checking techniques, to explain the rationale behind a decision made by the system. Moreover, we present a case study showing how the proposed approach can provide explainability in classification tasks.
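The abstract does not describe the paper's formal machinery, but the core idea of counterexample-based explanation can be illustrated with a minimal sketch: a system's behavior is encoded as a small transition system, a safety property ("the bad outcome is never reached") is checked by exhaustive exploration, and a violating trace, the counterexample, serves as a step-by-step explanation of how the decision arises. All state names, labels, and the property below are hypothetical, not taken from the paper.

```python
from collections import deque

# Hypothetical transition system for illustration only: states, their
# atomic-proposition labels, and transitions between states.
transitions = {
    "s0": ["s1", "s2"],
    "s1": ["s3"],
    "s2": ["s3"],
    "s3": [],
}
labels = {
    "s0": set(),
    "s1": {"feature_high"},
    "s2": set(),
    "s3": {"class_positive"},
}

def find_counterexample(start, bad_prop):
    """Check the safety property 'bad_prop is never reached' by
    breadth-first exploration. If the property fails, return the
    violating path (the counterexample); otherwise return None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if bad_prop in labels[state]:
            return path  # counterexample trace: reads as an explanation
        for nxt in transitions[state]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # property holds: the bad outcome is unreachable

print(find_counterexample("s0", "class_positive"))
```

Here the returned trace (e.g. passing through the `feature_high` state before reaching `class_positive`) plays the explanatory role the abstract attributes to counterexamples: it shows which intermediate steps lead the system to its decision, rather than only reporting the decision itself.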


