Talk (non-conference) (Other) FZJ-2025-05123

Anthropocentric bias in language model evaluation



2025

Tübingen-Nancy Seminar on Philosophical Aspects of Computer Sciences, Berlin, Germany, 27 Nov 2025

Abstract: Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This talk identifies two types of anthropocentric bias that have been neglected: (i) overlooking how auxiliary factors can impede LLM performance despite underlying competence, which we call auxiliary oversight, and (ii) dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent, which we call mechanistic chauvinism. Mitigating these biases requires an empirically driven, iterative approach to mapping cognitive tasks onto LLM-specific capacities and mechanisms, which can be achieved by supplementing carefully designed behavioral experiments with mechanistic studies.

Paper coauthored with Raphaël Millière.


Contributing Institute(s):
  1. Gehirn & Verhalten (INM-7)
Research Program(s):
  1. 5255 - Neuroethics and Ethics of Information (POF4-525)

Appears in the scientific report 2025

The record appears in these collections:
Document types > Presentations > Talks (non-conference)
Institute Collections > INM > INM-7
Workflow collections > Public records
Publications database

 Record created 2025-12-09, last modified 2026-02-20
