Project's Objectives

The central objective of this project is to offer a business ethics perspective on how social, commercial, and political actors on both a local and global scale can ensure accountability in algorithmic decision-making processes.

The project aims to tackle issues associated with algorithmic opacity, such as the lack of resources on the user side, bias, and discrimination.

The project will conduct a multi-method, multi-stakeholder investigation to develop a comprehensive framework of the affordances, responsibilities, and outcomes of algorithmic decision-making. This framework will help explore what explainable and accountable AI means in the context of organizations, which actors drive solutions, what the (unintended) consequences might be, and how harms might be diminished through organizational mitigation.

Short description of the project

Balancing Progress and Ethics

The diffusion of rapidly improving, powerful artificial intelligence (AI) technologies is making automated decision-making an increasingly common component of organizational processes. Employees, managers, clients, and the public increasingly face a reality in which machines make ‘decisions’ that are implemented, often without meaningful avenues for questioning and redress. While AI and its use in organizations have made great progress over the last decade, often for the better, these advances frequently carry countervailing harms that tend to affect the more vulnerable members of the workforce and society. Preventing these harms often goes beyond technical matters; it goes hand in hand with organizational implementation, managerial ethics and responsibility, and societal systems that ensure the accountability of such autonomous systems.

Project's Deliverables

  • A systematic overview of relevant academic, industry, and policy literature to identify research gaps, theoretical foundations, and targeted research agendas.
  • An adoption map of algorithmic decision-making systems, focusing on a specific European and Romanian cross-section of algorithm developers and adopters. In particular, we will investigate what explainable and accountable AI means in the context of organizations, which actors drive solutions, what the (unintended) consequences might be, and how harms might be diminished through organizational mitigation.
  • An audit and a framework for ‘Algorithmic Accountability’, developed to help raise awareness of the topic and build relationships with key stakeholders.
If you need more information about the project, contact us.