
Intelligenza artificiale e responsabilità

TESTA, Annalisa
2023-10-06

Abstract

The research addresses the relationship between artificial intelligence and its legal regulation, focusing on so-called algorithm damage. The first chapter outlines the main characteristics of artificial intelligence and of the machine learning and deep learning mechanisms that allow an intelligent system to learn and process data autonomously, but that also make its operation opaque and unpredictable. The second chapter analyzes the main interventions adopted by the European legislator on the subject - starting from the Resolution of 16 February 2017 - which highlight the need for regulation that takes into account the ethical implications of artificial intelligence without hindering innovation, while providing a legal framework that offers protection and security to all operators within an anthropocentric, reliable and sustainable artificial intelligence. The third chapter explores the doctrinal dialogue on artificial intelligence and liability, addressing the possible solutions that have emerged in the legal debate. At present, the appealing proposal of granting a specific legal status to software agents cannot be endorsed; rather, liability must be traced back to the human subject charged with exercising control over the machine and preventing harmful consequences. The inquiry into the legislation applicable to algorithm damage focuses on two options: on the one hand, the rules on producer liability; on the other, the Civil Code rules on tort liability, which have proven capable of adapting to social and technological changes unthinkable at the time of their introduction. Among these, the research focuses in particular on Article 2050 of the Italian Civil Code, which governs liability for dangerous activities and appears capable of offering a satisfactory solution to the issue of algorithmic liability, albeit with due caution.
With regard to product liability, the critical issues arising from applying that discipline to goods equipped with a self-learning algorithm are analyzed. The fourth chapter, divided into two parts, investigates the operational sectors of artificial intelligence in which the most significant results in terms of innovation and development have been recorded: medicine and transport systems. The first part analyzes the various applications of robotics and artificial intelligence in healthcare, highlighting their advantages as well as their potential risks. The second part addresses so-called intelligent transport systems, outlining the characteristics of the six levels of autonomy a device can achieve. It is highlighted how, as the autonomy of these devices increases, there is a radical shift from a driver-focused vision, based on the driver's conduct, to a product-focused approach, based on the product and the responsibility of the manufacturer.
Artificial intelligence and liability
Artificial intelligence; Liability; Algorithm damage
Files in this item:
Tesi_A_Testa.pdf (Description: Doctoral thesis; open access; format: Adobe PDF; size: 1.83 MB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11695/127409