Abstract
This essay concerns the epistemology of beliefs formed on the basis of the output of AI systems. To delimit the subject, I follow the definition of AI given by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG). I discuss some epistemically problematic implications of that definition and propose a tentative epistemological taxonomy of AI. The taxonomy is based on the dynamic between opacity and autonomy, placed on a scale of the total complexity of a system that acts as an external source of knowledge. Using this taxonomy, I then suggest how theories of justification developed for other sources of knowledge may suit different kinds of AI, depending on where each fits in the taxonomy. I conclude that social elements are the most important part of any plausible theory of justification for reliance on, or trust in, AI.