ConText Transformer: Text-guided Instance Segmentation in Scientific Imaging
Poster (After Call) | FZJ-2025-02904
2025
Abstract: Scientific imaging gives rise to a multitude of different segmentation tasks, many of which involve manually annotated datasets. We have collected a large number of such heterogeneous datasets, comprising over 10 million instance annotations, and demonstrate that in a multi-task setting, segmentation models at this scale cannot be effectively trained using image-based supervised learning alone. A major reason is that images from the same domain may be used to address different research questions, with varying annotation procedures and styles. For example, images of biological tissues may be evaluated for nuclei or cell bodies, even though they use the same image modality. To overcome these challenges, we propose using simple text-based task descriptions to provide models with the necessary context for solving a given objective. We introduce the ConText Transformer, which implements a dual-stream architecture, processing and fusing both image and text data. Based on the provided textual descriptions, the model learns to adapt its internal feature representations to effectively switch between segmenting different classes and annotation styles observed in the datasets. These descriptions can range from simple class names (e.g., “white blood cells”), prompting the model to segment only the referenced class, to more nuanced formulations such as toggling the use of overlapping segmentations in model predictions or segmenting a nucleus even in the absence of cytoplasm or membrane, as is common in datasets like TissueNet but omitted in Cellpose. Since interpreting these descriptions is part of model training, it is also possible to define dedicated terms that abbreviate very complex descriptions. ConText Transformer is designed for compatibility: it can be used with existing segmentation frameworks, including the Contour Proposal Network (CPN) and Mask R-CNN. Our experiments on over 10 million instance annotations show that ConText Transformer models achieve competitive segmentation performance and outperform specialized models in several benchmarks, confirming that a single, unified model can effectively handle a wide spectrum of segmentation tasks and may eventually replace specialist models in scientific image segmentation.
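The abstract does not include implementation details, but the dual-stream fusion it describes can be illustrated with a minimal sketch. The block below shows one plausible way to condition image features on an embedded task description via cross-attention, so that the text prompt steers which classes and annotation styles are segmented; all module names, dimensions, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of text-conditioned feature fusion (assumed design,
# not the published ConText Transformer code). Image tokens attend to the
# encoded task description, letting the prompt modulate the features that
# a downstream instance head (e.g., CPN or Mask R-CNN) consumes.
import torch
import torch.nn as nn

class TextConditionedFusion(nn.Module):
    """Fuses image features with a task-description embedding via cross-attention."""

    def __init__(self, img_dim: int, txt_dim: int, n_heads: int = 8):
        super().__init__()
        # Project text tokens into the image feature space
        self.txt_proj = nn.Linear(txt_dim, img_dim)
        self.cross_attn = nn.MultiheadAttention(img_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(img_dim)

    def forward(self, img_tokens: torch.Tensor, txt_tokens: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N_img, img_dim) flattened spatial features
        # txt_tokens: (B, N_txt, txt_dim) encoded task description
        txt = self.txt_proj(txt_tokens)
        # Image tokens query the text prompt; the description thereby
        # adapts the internal representation to the requested task
        attended, _ = self.cross_attn(query=img_tokens, key=txt, value=txt)
        return self.norm(img_tokens + attended)  # residual connection

# Usage with dummy tensors; real inputs would come from an image backbone
# and a text encoder applied to a prompt such as "white blood cells".
fusion = TextConditionedFusion(img_dim=256, txt_dim=512)
img_feats = torch.randn(2, 64 * 64, 256)  # e.g., a flattened 64x64 feature map
txt_feats = torch.randn(2, 16, 512)       # e.g., 16 embedded prompt tokens
out = fusion(img_feats, txt_feats)        # shape: (2, 4096, 256)
```

Under this assumed design, the fused features keep the spatial layout of the image stream, which is what makes the module drop-in compatible with frameworks like CPN or Mask R-CNN, as the abstract claims.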