Objectives
The central objective of this project is to offer a business ethics perspective on how social, commercial, and political actors, at both local and global scales, can ensure accountability in algorithmic decision-making processes.
The project aims to tackle issues associated with algorithmic opacity, such as limited resources on the user side, bias, and discrimination.
The project will conduct a multi-method, multi-stakeholder investigation to develop a comprehensive framework of the affordances, responsibilities, and outcomes of algorithmic decision-making. This framework will help explore what explainable and accountable AI means in the context of organizations, which actors drive solutions, what the (unintended) consequences might be, and how harms might be diminished through organizational mitigations.
Balancing Progress and Ethics
The diffusion of rapidly improving, powerful artificial intelligence (AI) technologies is making automated decision-making an increasingly common component of organizational processes. Employees, managers, clients, and the public increasingly face a reality in which machines make ‘decisions’ that are implemented without always offering meaningful avenues for questioning and redress. While AI and its employment in organizations have made great progress over the last decade, often for the better, there is frequently a countervailing harm that tends to affect the more vulnerable members of the workforce and society. Preventing these harms often goes beyond technical matters: it goes hand in hand with organizational implementation, managerial ethics and responsibility, and societal systems that ensure the accountability of such autonomous systems.
Deliverables
Blog
Read and discuss articles related to the project’s main subject