TY  - CONF
AU  - Rathkopf, Charles
TI  - Do Large Language Models Understand Meaning?
M1  - FZJ-2023-01493
PY  - 2023
AB  - It is curiously difficult to articulate the capacities of large language models without getting yourself into philosophically controversial terrain. In this talk I explain why. The talk has three parts. In the first, I give a sketch of how large language models are built, with particular attention to the way words are represented as vector quantities. In the second, I describe the various ways in which the capacities of language models have been tested empirically. In the third, I provide the main philosophical argument. I argue that, in order to understand what large language models are, we must reject the seemingly innocent metaphysical principle that everything in the world either has a mind or it does not.
T2  - Kimball Union Academy
CY  - 13 Jan 2023, online event (USA)
Y2  - 13 Jan 2023
M2  - online event, USA
LB  - PUB:(DE-HGF)31
UR  - https://juser.fz-juelich.de/record/1005477
ER  -