%0 Conference Paper
%A Rathkopf, Charles
%T Do Large Language Models Understand Meaning?
%M FZJ-2023-01493
%D 2023
%X It is curiously difficult to articulate the capacities of large language models without getting yourself into philosophically controversial terrain. In this talk I explain why. The talk has three parts. In the first, I give a sketch of how large language models are built, with particular attention to the way words are represented as vector quantities. In the second, I describe the various ways in which the capacities of language models have been tested empirically. In the third, I provide the main philosophical argument. I argue that, in order to understand what large language models are, we must reject the seemingly innocent metaphysical principle that everything in the world either has a mind or it does not.
%B Kimball Union Academy
%C 13 Jan 2023, online event (USA)
%Y2 13 Jan 2023
%M2 online event, USA
%F PUB:(DE-HGF)31
%9 Talk (non-conference)
%U https://juser.fz-juelich.de/record/1005477