
Three Liability Regimes for Artificial Intelligence
Algorithmic Actants, Hybrids, Crowds

  • ISBN: 9781509949335
  • Publisher: Hart Publishing
  • Place of publication: Oxford, United Kingdom
  • Binding: Hardback
  • Dimensions: 24 cm
  • Pages: 208
  • Language: English

Print edition: Hardback
126,70 €
In stock. Ships within 24/48 hours.

Summary

This book proposes three liability regimes to combat the wide responsibility gaps caused by AI systems - vicarious liability for autonomous software agents (actants); enterprise liability for inseparable human-AI interactions (hybrids); and collective fund liability for interconnected AI systems (crowds). Based on information technology studies, the book first develops a threefold typology that distinguishes individual, hybrid and collective machine behaviour. A subsequent social science analysis specifies the socio-digital institutions related to this threefold typology. Then it determines the social risks that emerge when algorithms operate within these institutions. Actants raise the risk of digital autonomy, hybrids the risk of double contingency in human-algorithm encounters, crowds the risk of opaque interconnections. The book demonstrates that the law needs to respond to these specific risks, by recognising personified algorithms as vicarious agents, human-machine associations as collective enterprises, and interconnected systems as risk pools - and by developing corresponding liability rules. The book relies on a unique combination of information technology studies, sociological institution and risk analysis, and comparative law. This approach uncovers recursive relations between types of machine behaviour, emergent socio-digital institutions, their concomitant risks, legal conditions of liability rules, and ascription of legal status to the algorithms involved.

1. Digitalisation: The Responsibility Gap
I. The Problem: The Dangerous Homo Ex Machina
A. Growing Liability Gaps
B. Scenarios
C. Current Law's Denial of Reality
II. The Overshooting Reaction: Full Legal Subjectivity for E-Persons?
III. Our Solution: Differential Legal Status Ascriptions for Algorithms
A. Algorithms in Social and Economic Contexts
B. (Legal) Form Follows (Social) Function
IV. Our Approach: Three Digital Risks
A. 'Socio-Digital Institutions' as Intermediaries between Technology and Law
B. A Typology of Machine Behaviour
C. A Typology of Socio-Digital Institutions
D. A Typology of Liability Risks

2. Autonomy and Personification
I. Artificial Intelligence as Actants
A. Anthropomorphism?
B. Actants and Action Attribution
C. Communication with Actants
II. Gradualised Digital Autonomy
A. Social Attribution of Autonomy
B. Legal Criteria of Autonomy
C. Our Solution: Decision under Uncertainty
III. Autonomy and Legal Personhood
A. Against Personification?
B. Uniform Personification?
C. Socio-Digital Institutions and Legal Status

3. Actants: Autonomy Risk
I. Socio-Digital Institution: Digital Assistance
II. The Autonomy Risk
III. Algorithmic Contract Formation
A. Invalidity of Algorithmic Contracts?
B. Algorithms as Mere Tools?
C. Our Solution: Agency Law for Electronic Agents
D. Limited Legal Personhood – Constellation One
E. The Digital Agent's Legal Declaration
F. Digital Assistance and the Principal-Agent Relation
G. Overstepping of Authority?
IV. Contractual Liability
A. The Dilemma of the Tool-Solution
B. Our Solution: Vicarious Performance
C. Limited Legal Personhood – Constellation Two
V. Non-Contractual Liability
A. Fault-Based Liability?
B. Product Liability?
C. Strict Causal Liability for Dangerous Objects and Activities?
D. Our Solution: Vicarious Liability in Tort
E. Limited Legal Personhood – Constellation Three
F. The 'Reasonable Algorithm'
G. Who is Liable?

4. Hybrids: Association Risk
I. Socio-Digital Institution: Human-Machine Associations
A. Emergent Properties
B. Hybridity
C. The Organisational Analogy
II. The Association Risk
III. Solution de lege ferenda: Hybrids as Legal Entities?
IV. Our Solution de lege lata: Enterprise Liability for Human-Machine Networks
A. Human-Machine Interactions as Networks
B. Networks and Enterprise Liability
C. Action Attribution and Liability Attribution in Hybrids
D. Liable Actors
E. Pro-Rata Network Share Liability
F. External Liability Concentration: 'One-Stop-Shop' Approach
G. Internal Liability Distribution: Pro Rata Network Share
V. Conclusion

5. Multi-Agent Crowds: Interconnectivity Risk
I. Socio-Digital Institution: Exposure to Interconnectivity
A. Non-Communicative Contacts
B. Distributed Cognitive Processes
II. The Interconnectivity Risk
III. Mismatch of New Risks and Existing Solutions
A. Applying Existing Categories
B. Vicarious or Product Liability of the Whole Interconnected AI-System?
C. Collective Liability of Actors Connected to the 'Technical Unit'?
IV. Our Solution: Socialising the Interconnectivity Risk
A. Entire or Partial Socialisation?
B. Risk Pools Decreed by Law
C. Public Administration of the Risk Pool: The Fund Solution
D. Financing: Ex-Ante and Ex-Post Components
E. Participation and Administration
F. Compensation and Recovery Action
G. Global Interconnectivity and National Administration

6. Conclusion: Three Liability Regimes and Their Interrelations
I. Synopsis and Rules
A. Digital Assistance: Vicarious Liability
B. Human-Machine Associations: Enterprise Liability
C. Interconnectivity: Fund Liability
II. Socio-Digital Institutions and Liability Law
A. One-Size-Fits-All or Sector-Specific Piecemeal Approach?
B. Socio-Digital Institutions
C. Institution-Generating Liability Law
D. Differential Treatment of Liable Actors
E. Calibrating Legal Personhood for AI
III. Interactions between Three Liability Regimes
A. Criteria for Delineating the Applicable Liability Regime
B. No Priority for a Liability Regime
IV. Exemplary Cases
A. Vicarious Liability: Robo-Advice
B. Enterprise Liability: Hybrid Journalism
C. Collective Funds: Flash Crash
D. Finale: Google Autocomplete
