Preprint FZJ-2026-01886

Cytoarchitecture in Words: Weakly Supervised Vision-Language Modeling for Human Brain Microscopy


2026
arXiv

arXiv:2602.23088 [doi:10.48550/arXiv.2602.23088]


Please use a persistent id in citations: doi:10.48550/arXiv.2602.23088

Abstract: Foundation models increasingly offer potential to support interactive, agentic workflows that assist researchers during analysis and interpretation of image data. Such workflows often require coupling vision to language to provide a natural-language interface. However, the paired image-text data needed to learn this coupling are scarce and difficult to obtain in many research and clinical settings. One such setting is microscopic analysis of cell-body-stained histological human brain sections, which enables the study of cytoarchitecture: cell density and morphology and their laminar and areal organization. Here, we propose a label-mediated method that generates meaningful captions from images by linking images and text only through a label, without requiring curated paired image-text data. Given the label, we automatically mine area descriptions from related literature and use them as synthetic captions reflecting canonical cytoarchitectonic attributes. An existing cytoarchitectonic vision foundation model (CytoNet) is then coupled to a large language model via an image-to-text training objective, enabling microscopy regions to be described in natural language. Across 57 brain areas, the resulting method produces plausible area-level descriptions and supports open-set use through explicit rejection of unseen areas. It matches the cytoarchitectonic reference label for in-scope patches with 90.6% accuracy and, with the area label masked, its descriptions remain discriminative enough to recover the area in an 8-way test with 68.6% accuracy. These results suggest that weak, label-mediated pairing can suffice to connect existing biomedical vision foundation models to language, providing a practical recipe for integrating natural-language interfaces in domains where fine-grained paired annotations are scarce.
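The label-mediated pairing idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: all names, labels, and caption strings below are hypothetical placeholders. The key point is that images are never paired with text directly; each patch's area label mediates the link to a literature-mined description, and patches whose label has no mined description are dropped, which also supports open-set rejection of unseen areas.

```python
# Hypothetical sketch of label-mediated image-text pairing.
# Captions here are invented placeholders standing in for
# descriptions automatically mined from the literature.
mined_captions = {
    "hOc1": "Area with a prominent, clearly delineated layer IV ...",
    "44": "Dysgranular area with large layer III pyramidal cells ...",
}

def make_training_pairs(patches):
    """Pair each image patch with a synthetic caption via its label only.

    `patches` is a list of (patch_id, area_label) tuples; the image
    content itself never touches the text side of the pairing.
    """
    pairs = []
    for patch_id, area_label in patches:
        caption = mined_captions.get(area_label)
        if caption is None:
            # Unseen area: no caption available, so the patch is
            # excluded, consistent with open-set rejection.
            continue
        pairs.append((patch_id, caption))
    return pairs

pairs = make_training_pairs([("p0", "hOc1"), ("p1", "44"), ("p2", "unknown")])
```

The resulting (patch, caption) pairs would then serve as weak supervision for an image-to-text training objective coupling the vision encoder to a language model.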

Keyword(s): Computer Vision and Pattern Recognition (cs.CV) ; FOS: Computer and information sciences ; I.2.6; I.2.7; I.4.9; I.5.1; I.5.4


Contributing Institute(s):
  1. Strukturelle und funktionelle Organisation des Gehirns (INM-1)
Research Program(s):
  1. 5254 - Neuroscientific Data Analytics and AI (POF4-525) (POF4-525)
  2. Helmholtz AI - Helmholtz Artificial Intelligence Coordination Unit – Local Unit FZJ (E.40401.62) (E.40401.62)
  3. HIBALL - Helmholtz International BigBrain Analytics and Learning Laboratory (HIBALL) (InterLabs-0015) (InterLabs-0015)
  4. EBRAINS 2.0 - EBRAINS 2.0: A Research Infrastructure to Advance Neuroscience and Brain Health (101147319) (101147319)
  5. X-BRAIN (ZT-I-PF-4-061) (ZT-I-PF-4-061)
  6. DFG project G:(GEPRIS)501864659 - NFDI4BIOIMAGE - Nationale Forschungsdateninfrastruktur für Mikroskopie und Bildanalyse (501864659) (501864659)

Appears in the scientific report 2026

The record appears in these collections:
Institute collections > INM > INM-1
Document types > Reports > Preprints
Workflow collections > Public entries
Workflow collections > In progress
Online First

Record created 2026-03-03, last modified 2026-04-17


Restricted:
Download full text (PDF)
External link:
Full text