Abstract
This thesis explores how public agencies and non-governmental organizations (NGOs) interpret and negotiate a proposed expansion of automated decision-making in the Norwegian Directorate of Immigration (UDI). Automation generally refers to making a process operate without human intervention. While often associated with industrial production, automation is increasingly being introduced into other areas of working life, such as professional decision-making. Its introduction is often framed in technologically determinist terms, as an inevitable development in which technology takes over jobs and eliminates the need for professional discretion and judgment. This thesis instead draws inspiration from the Social Construction of Technology (SCOT) approach, which holds that the introduction of a technology always involves negotiation between actors with different understandings. Drawing on SCOT, the thesis examines an attempt to expand the use of automated decision-making in UDI. In 2019, the Norwegian Ministry of Justice proposed to ease existing restrictions on the use of automation. To pass, the proposal had to go through a hearing process in which different actors could comment and voice their concerns. Taking the hearing process as its case, the thesis asks the overall research question: How do relevant social groups interpret and negotiate the proposed expansion of automated decision-making in UDI? To address this question, the thesis draws on documents from the hearing process and qualitative interviews with eight key participants in the debate. Using concepts from discourse analysis as an analytical framework, I identify three discursive struggles surrounding the proposal, concerning: the relationship between discretion and objective criteria; the respective strengths and weaknesses of humans and machines as decision-makers; and whether children’s cases can be subject to automation.
In debating these issues, the actors roughly divide into two sides. On the one hand, a “supportive” group argues that “simple” and “straightforward” cases can be decided using objective criteria; that machines are the better decision-makers in some cases; and that children’s cases can be subject to automation. On the other hand, a more “critical” group rejects the distinction between objective criteria and discretion; sees humans as the most suitable and trustworthy decision-makers; and argues that children’s cases should not be automated. Whereas the supportive side draws mainly on a modernistic discourse, presenting automated decision-making as an objective and reliable solution to the challenges in UDI, the critical side draws more on a humanistic discourse, which sees automation as a poor substitute for human decision-making. The hearing process thus sees two groups struggling for hegemony over the definition of “automation”. Importantly, their discursive struggles can affect how the technology is understood, developed, and implemented in UDI, especially as the struggles take place in a democratically institutionalized negotiation process whose outcome will be materialized into legal regulations. The thesis thus highlights the importance of examining how public agencies and non-governmental organizations debate proposals to introduce or expand the use of technologies, as this can teach us valuable lessons about how technologies are negotiated through democratic processes more generally.