001     1031452
005     20241107210038.0
037 _ _ |a FZJ-2024-05671
041 _ _ |a English
100 1 _ |a Upschulte, Eric
|0 P:(DE-Juel1)177675
|b 0
|e Corresponding author
|u fzj
111 2 _ |a 8th BigBrain Workshop
|c Padua
|d 2024-09-09 - 2024-09-11
|w Italy
245 _ _ |a Towards Universal Instance Segmentation Models in Biomedical Imaging
260 _ _ |c 2024
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a Other
|2 DataCite
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a LECTURE_SPEECH
|2 ORCID
336 7 _ |a Conference Presentation
|b conf
|m conf
|0 PUB:(DE-HGF)6
|s 1730976784_30533
|2 PUB:(DE-HGF)
|x After Call
502 _ _ |c HHU Düsseldorf
520 _ _ |a Precise instance segmentation is critical to many fields of research in biomedical imaging. One key challenge is applying models to new data domains, which typically involves pre-training a model on a larger corpus of data and fine-tuning it with new annotations for each specific domain. This process is labor-intensive and requires creating and maintaining multiple branched versions of the model. Working towards universal instance segmentation models in biomedical imaging, we propose to unify domain-adapted model branches into a single multi-expert model, following a foundation model paradigm. Our goal is to replace most existing fine-tuning scenarios with prompt-based user instructions, allowing the user to clearly state the task and object classes of interest. We hypothesize that such a combined approach improves generalization, as the base model can benefit from datasets that were previously only used for fine-tuning. A key challenge in creating such models is to resolve training conflicts and ambiguity in a pragmatic fashion when combining different segmentation tasks, datasets, and data domains. Such conflicts can occur if datasets focus on different classes in the same domain. For example, some datasets annotate all cells in microscopy images, while others focus on cells of a specific cell type. A naïve combination of such sets would create an ill-posed learning problem for most models, requiring them to infer their task from their input, which is undesirable in a universal setting. Models like SAM and MedSAM highlight the potential of prompting, but often require external detectors and fine-tuning. Here, we propose to leverage prompt-based task descriptions as a tool to manipulate general model behavior, such that user instructions yield domain expert models. We test our approach by training a Contour Proposal Network (CPN) on a multi-modal data collection, including the TissueNet dataset. Prompts such as “cell segmentation” or simply “nuclei” modulate underlying features, allowing the CPN to segment the respective object classes in TissueNet with a mean F1 score of 0.90 (0.88 for cells, 0.92 for nuclei), compared to 0.84 (0.81, 0.87) without prompting. Overall, the proposed approach introduces an interactive linguistic component that allows the conflict-free composition of various segmentation datasets, thus making it possible to unify previously separate segmentation tasks. We therefore consider it an important step towards universal models.
536 _ _ |a 5254 - Neuroscientific Data Analytics and AI (POF4-525)
|0 G:(DE-HGF)POF4-5254
|c POF4-525
|f POF IV
|x 0
536 _ _ |a HIBALL - Helmholtz International BigBrain Analytics and Learning Laboratory (HIBALL) (InterLabs-0015)
|0 G:(DE-HGF)InterLabs-0015
|c InterLabs-0015
|x 1
536 _ _ |a EBRAINS 2.0 - EBRAINS 2.0: A Research Infrastructure to Advance Neuroscience and Brain Health (101147319)
|0 G:(EU-Grant)101147319
|c 101147319
|f HORIZON-INFRA-2022-SERV-B-01
|x 2
536 _ _ |a Helmholtz AI - Helmholtz Artificial Intelligence Coordination Unit – Local Unit FZJ (E.40401.62)
|0 G:(DE-Juel-1)E.40401.62
|c E.40401.62
|x 3
700 1 _ |a Harmeling, Stefan
|0 P:(DE-HGF)0
|b 1
700 1 _ |a Amunts, Katrin
|0 P:(DE-Juel1)131631
|b 2
|u fzj
700 1 _ |a Dickscheid, Timo
|0 P:(DE-Juel1)165746
|b 3
|u fzj
856 4 _ |u https://events.hifis.net/event/1416/contributions/11283/
909 C O |o oai:juser.fz-juelich.de:1031452
|p openaire
|p VDB
|p ec_fundedresources
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)177675
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)131631
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 3
|6 P:(DE-Juel1)165746
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-525
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Decoding Brain Organization and Dysfunction
|9 G:(DE-HGF)POF4-5254
|x 0
914 1 _ |y 2024
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)INM-1-20090406
|k INM-1
|l Strukturelle und funktionelle Organisation des Gehirns
|x 0
980 _ _ |a conf
980 _ _ |a VDB
980 _ _ |a I:(DE-Juel1)INM-1-20090406
980 _ _ |a UNRESTRICTED

