Assessing the AI that assesses us: the work of civil society and research in setting auditing standards for the public sector
European governments are increasingly using algorithms to automate or support decision-making in public services such as welfare, taxation, health and social care, and law enforcement. A series of recent scandals in Europe, such as the Dutch Toeslagenaffaire, has shown that algorithmic systems in public services can cause serious harm to society when their results are automatically treated as objective and when their implementation lacks transparency. In response, civil society and academia are working to increase algorithmic accountability in government, both by conducting external audits as an advocacy practice and by working directly with institutions. But how can these practices be turned into standards across Europe? How can civil society contribute to the auditing of public algorithms and promote oversight? And what role will the AI Act play?