Anthropocentric bias in language model evaluation
Journal Article | FZJ-2025-04990
MIT Press, Cambridge, MA, 2025
Please use the persistent identifier in citations: doi:10.1162/COLI.a.582
Abstract: Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have been neglected: overlooking how auxiliary factors can impede LLM performance despite competence (auxiliary oversight), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent (mechanistic chauvinism). Mitigating these biases requires an empirical, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, achieved by supplementing behavioral experiments with mechanistic studies.