TY  - JOUR
AU  - Böke, Annkathrin
AU  - Hacker, Hannah
AU  - Chakraborty, Millennia
AU  - Baumeister-Lingens, Luise
AU  - Vöckel, Jasper
AU  - Koenig, Julian
AU  - Vogel, David HV
AU  - Lichtenstein, Theresa Katharina
AU  - Vogeley, Kai
AU  - Kambeitz-Ilankovic, Lana
AU  - Kambeitz, Joseph
TI  - Observer-Independent Assessment of Content Overlap in Mental Health Questionnaires: Large Language Model–Based Study
JO  - JMIR AI
VL  - 4
SN  - 2817-1705
CY  - Toronto, Ont.
PB  - JMIR Publications
M1  - FZJ-2025-05706
SP  - e79868
EP  - e79868
PY  - 2025
AB  - Background: Mental disorders are frequently evaluated using questionnaires, which have been developed over the past decades for the assessment of different conditions. Despite the rigorous validation of these tools, high levels of content divergence have been reported for questionnaires measuring the same psychopathological construct. Previous studies examining content overlap required manual symptom labeling, which is observer-dependent and time-consuming. Objective: In this study, we used large language models (LLMs) to analyze the content overlap of mental health questionnaires in an observer-independent way and compared our results with clinical expertise. Methods: We analyzed questionnaires from a range of mental health conditions, including adult depression (n=7), childhood depression (n=15), clinical high risk for psychosis (CHR-P; n=11), mania (n=7), obsessive-compulsive disorder (n=7), and sleep disorder (n=12). Two different LLM-based approaches were tested. First, we used sentence Bidirectional Encoder Representations from Transformers (sBERT) to derive numerical representations (embeddings) for each questionnaire item, which were then clustered using k-means to group semantically similar symptoms. Second, questionnaire items were provided as prompts to a Generative Pretrained Transformer (GPT) to identify underlying symptom clusters. Clustering results were compared with a manual categorization by experts using the adjusted Rand index. Further, we assessed the content overlap within each diagnostic domain based on the LLM-derived clusters. Results: We observed varying degrees of similarity between expert-based and LLM-based clustering across diagnostic domains. Overall, agreement between experts was higher than between experts and LLMs. Of the 2 LLM approaches, GPT showed greater alignment with expert ratings than sBERT, ranging from weak to strong similarity depending on the diagnostic domain.
Using GPT-based clustering of questionnaire items to assess the content overlap within each diagnostic domain revealed weak (CHR-P: 0.344) to moderate (adult depression: 0.574; childhood depression: 0.433; mania: 0.419; obsessive-compulsive disorder [OCD]: 0.450; sleep disorder: 0.445) content overlap among questionnaires. Compared with previous studies that manually investigated content overlap among these scales, the results of this study showed some variation, though the differences were not substantial. Conclusions: These findings demonstrate the feasibility of using LLMs to objectively assess content overlap in diagnostic questionnaires. Notably, the GPT-based approach showed particular promise in aligning with expert-derived symptom structures.
KW  - GPT
KW  - content overlap
KW  - large language models
KW  - questionnaires
KW  - sBERT
KW  - scales
KW  - sentence Bidirectional Encoder Representations from Transformers
KW  - symptom overlap
LB  - PUB:(DE-HGF)16
DO  - 10.2196/79868
UR  - https://juser.fz-juelich.de/record/1049992
ER  -