Talk (non-conference) (Other) FZJ-2023-01493

Do Large Language Models Understand Meaning?



2023

Kimball Union Academy, online event, USA, 13 Jan 2023

Abstract: It is curiously difficult to articulate the capacities of large language models without getting yourself into philosophically controversial terrain. In this talk I explain why. The talk has three parts. In the first, I give a sketch of how large language models are built, with particular attention to the way words are represented as vector quantities. In the second, I describe the various ways in which the capacities of language models have been tested empirically. In the third, I provide the main philosophical argument. I argue that, in order to understand what large language models are, we must reject the seemingly innocent metaphysical principle that everything in the world either has a mind or it does not.


Contributing Institute(s):
  1. Gehirn & Verhalten (INM-7)
Research Program(s):
  1. 5255 - Neuroethics and Ethics of Information (POF4-525)

Appears in the scientific report 2023

The record appears in these collections:
Document types > Presentations > Talks (non-conference)
Institute Collections > INM > INM-7
Workflow collections > Public records
Publications database

 Record created 2023-03-15, last modified 2023-03-24


