The Blind Watcher: Accountability mechanisms in the Artificial Intelligence Act

Nicola Palladino

Abstract

The paper examines the crucial issue of accountability in the realm of artificial intelligence (AI), focusing specifically on the European Union's proposed legislation, the Artificial Intelligence Act (AIA). After highlighting the transformative impact of AI on society and the need for robust governance mechanisms to mitigate the potential misuses of and risks associated with AI systems, the paper underscores the importance of building trust and public acceptance of AI, given its potential to reshape decision-making processes across various sectors. The paper then investigates the concept of accountability, differentiating between internal and external accountability in the context of AI systems. It emphasizes that AI's multi-stakeholder nature necessitates a comprehensive accountability framework encompassing developers, providers, users, and regulatory bodies. The discussion turns to the AIA's regulatory approach, which classifies AI applications based on risk and mandates compliance with distinct sets of requirements. The AIA's accountability mechanisms are analyzed in depth, from risk categorization to conformity assessments, with a focus on high-risk applications. The paper concludes by acknowledging the significance of the AIA as a pioneering regulation in the AI governance landscape. However, it raises concerns about potential shortcomings, such as the limited application of accountability requirements and the potential for vested interests to influence evaluations.

See the full text here
