001049026 001__ 1049026
001049026 005__ 20251209202152.0
001049026 037__ $$aFZJ-2025-05121
001049026 1001_ $$0P:(DE-Juel1)176538$$aRathkopf, Charles$$b0$$eCorresponding author$$ufzj
001049026 1112_ $$aBerlin Philosophy of AI Group$$cBerlin$$d2025-12-04 - $$wGermany
001049026 245__ $$aShallow Belief in LLMs
001049026 260__ $$c2025
001049026 3367_ $$033$$2EndNote$$aConference Paper
001049026 3367_ $$2DataCite$$aOther
001049026 3367_ $$2BibTeX$$aINPROCEEDINGS
001049026 3367_ $$2ORCID$$aLECTURE_SPEECH
001049026 3367_ $$0PUB:(DE-HGF)31$$2PUB:(DE-HGF)$$aTalk (non-conference)$$btalk$$mtalk$$s1765288552_1972$$xOther
001049026 3367_ $$2DINI$$aOther
001049026 520__ $$aDo large language models have beliefs? Interpretationist theories hold that belief attribution depends on predictive utility rather than on internal representational format. Because LLMs display impressive linguistic fluency, a straightforward interpretationist view seems to imply that they are doxastic equivalents of humans. This paper argues that this implication is mistaken. I separate two questions. First, do propositional-attitude (PA) models predict LLM behavior better than non-PA alternatives? Second, do PA models yield similar predictive utility for LLMs and for humans? LLMs meet the first condition: PA models outperform n-gram baselines. However, PA models achieve much lower predictive utility for LLMs than for humans. This deficit arises from architectural constraints that prevent LLMs from reconciling contradictions across context boundaries. This limitation produces a form of indeterminacy that is largely absent in human belief. Although humans also face indeterminacy, they possess mechanisms such as embodied action, long-term memory, and continual learning that mitigate it over time. LLMs lack these mechanisms. Parallel considerations apply to desire ascription, which undermines attempts to locate an asymmetry between belief and desire in LLMs. The paper develops a predictive-profile framework that captures this reduced utility as a form of shallow belief. The framework preserves the quasi-rational character of LLMs while avoiding both eliminativism and overattribution.
001049026 536__ $$0G:(DE-HGF)POF4-5255$$a5255 - Neuroethics and Ethics of Information (POF4-525)$$cPOF4-525$$fPOF IV$$x0
001049026 909CO $$ooai:juser.fz-juelich.de:1049026$$pVDB
001049026 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)176538$$aForschungszentrum Jülich$$b0$$kFZJ
001049026 9131_ $$0G:(DE-HGF)POF4-525$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5255$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vDecoding Brain Organization and Dysfunction$$x0
001049026 9141_ $$y2025
001049026 9201_ $$0I:(DE-Juel1)INM-7-20090406$$kINM-7$$lGehirn & Verhalten$$x0
001049026 980__ $$atalk
001049026 980__ $$aVDB
001049026 980__ $$aI:(DE-Juel1)INM-7-20090406
001049026 980__ $$aUNRESTRICTED