The human judge and artificial intelligence: the administrative process put to the test of impartiality in the era of automated decisions

Abstract

The development of artificial intelligence continues unabated, and the recent approval of the European Artificial Intelligence (AI) Act confirms a regulatory scope without precedent for legal science. In this context, administrative policymakers are debating the methods and criteria for governing an instrument of extraordinary legal potential. Administrative adjudication is not immune to this far-reaching phenomenon; on the contrary, it is deeply affected by the widespread adoption of so-called "automated decisions". This contribution offers a systematic investigation of the matter, proposing a conceptual review of the legal clause of impartiality as a powerful legal tool through which automated decisions can be made to serve the protection of individual and collective rights in harmony with the constitutional order. To this end, the article examines the use of artificial intelligence in order to assess whether this paradigm respects the principles underlying the legal system, notably legality and equality, whose corollary, impartiality, appears, under a historicist and anthropological approach, not to be properly respected, owing to the creativity deficits of algorithmic intelligence. It is therefore argued that human partiality proves more impartial than the asserted impartiality of the machine. Further principles are then analysed, including instrumentality, transparency (and knowability, which requires complex solutions to manage), imputability, and the non-exclusivity of the automated decision. The conclusions propose a constitutionally oriented reading of the legal principle of impartiality.


This work is licensed under a Creative Commons Attribution 4.0 International License.