%0 Journal Article
%A Millière, Raphaël
%A Rathkopf, Charles
%T Anthropocentric bias in language model evaluation
%J Computational Linguistics
%@ 0891-2017
%C Cambridge, MA
%I MIT Press
%M FZJ-2025-04990
%P 1-10
%D 2025
%X Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have been neglected: overlooking how auxiliary factors can impede LLM performance despite competence (auxiliary oversight), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent (mechanistic chauvinism). Mitigating these biases requires an empirical, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, achieved by supplementing behavioral experiments with mechanistic studies.
%F PUB:(DE-HGF)16
%9 Journal Article
%R 10.1162/COLI.a.582
%U https://juser.fz-juelich.de/record/1048885