Journal Article FZJ-2025-04211

Accelerated quantification of reinforcement degradation in additively manufactured Ni-WC metal matrix composites via SEM and vision transformers


2025
ScienceDirect, New York, NY

Materials Characterization 229(Part B), 115645 (2025) [doi:10.1016/j.matchar.2025.115645]

Please use a persistent id in citations: doi:10.1016/j.matchar.2025.115645

Abstract: Machine learning (ML) applications have shown potential in analyzing complex patterns in additively manufactured (AMed) structures. Metal matrix composites (MMCs) offer the potential to enhance functional parts by combining a metal matrix with reinforcement particles. However, their processing can induce several co-existing anomalies in the microstructure, which are difficult to analyze through optical metallography. Scanning electron microscopy (SEM) can better highlight the degradation of reinforcement particles, but the analysis can be labor-intensive, time-consuming, and highly dependent on expert knowledge. Deep learning-based semantic segmentation has the potential to expedite the analysis of SEM images and hence support their characterization in industry. This capability is particularly desirable for rapid and precise quantification of defect features in SEM images. In this study, key state-of-the-art semantic segmentation methods based on self-attention vision transformers (ViTs) are investigated for their segmentation performance on SEM images, with a focus on segmenting defect pixels. Specifically, the SegFormer, MaskFormer, Mask2Former, UPerNet, DPT, Segmenter, and SETR models were evaluated. A reference fully convolutional model, DeepLabV3+, widely used in semantic segmentation tasks, was also included in the comparison. An SEM dataset representing AMed MMCs was generated through extensive experimentation and is made available in this work. Our comparison shows that several transformer-based models perform better than the reference CNN model, with UPerNet (94.33 % carbide dilution accuracy) and SegFormer (93.46 % carbide dilution accuracy) consistently outperforming the other models in segmenting damage to the carbide particles in the SEM images. The findings on the validation and test sets highlight that the most frequent misclassification errors occur at the boundaries between defective and defect-free pixels.
The models were also evaluated based on their prediction confidence as a practical measure to support decision-making and model selection. As a result, the UPerNet model with the Swin backbone is recommended for segmenting SEM images from AMed MMCs in scenarios where accuracy and robustness are desired, whereas the SegFormer model is recommended for its lighter design and competitive performance. In the future, the analysis can be extended by including both higher-capacity and smaller models in the comparison. Similarly, variations in specific hyperparameters can be investigated to reinforce the rationale for selecting a specific configuration.
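The abstract's use of per-pixel prediction confidence as a model-selection measure can be illustrated with a minimal sketch. This is not the paper's exact protocol; it simply assumes a segmentation model emits per-pixel class logits (e.g. for matrix, intact carbide, and degraded carbide) and takes the maximum softmax probability as the confidence of each pixel's prediction, averaged over the image.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prediction_confidence(logits):
    """Per-pixel confidence = max softmax probability.

    logits: array of shape (H, W, C) with C classes
    (illustrative: 0 = matrix, 1 = intact carbide, 2 = degraded carbide).
    Returns (predicted class map, confidence map, mean confidence).
    """
    probs = softmax(logits)
    pred = probs.argmax(axis=-1)   # hard segmentation map
    conf = probs.max(axis=-1)      # confidence of the chosen class
    return pred, conf, float(conf.mean())

# Toy 2x2 "image" with 3 classes; in practice logits come from the model.
logits = np.array([[[4.0, 1.0, 0.0], [0.5, 0.4, 0.3]],
                   [[0.0, 3.0, 0.0], [1.0, 1.0, 5.0]]])
pred, conf, mean_conf = prediction_confidence(logits)
print(pred)
print(mean_conf)
```

Comparing the mean (or per-class) confidence of candidate models on a held-out set gives the kind of practical decision-support signal the study uses alongside accuracy, since low-confidence pixels tend to concentrate at defect boundaries.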

Contributing Institute(s):
  1. Materials Data Science and Informatics (IAS-9)
Research Program(s):
  1. 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511) (POF4-511)

Appears in the scientific report 2025
Database coverage:
Medline ; Creative Commons Attribution-NonCommercial-NoDerivs CC BY-NC-ND 4.0 ; OpenAccess ; Clarivate Analytics Master Journal List ; Current Contents - Engineering, Computing and Technology ; Ebsco Academic Search ; Essential Science Indicators ; IF < 5 ; JCR ; SCOPUS ; Science Citation Index Expanded ; Web of Science Core Collection

The record appears in these collections:
Document types > Articles > Journal articles
Institute collections > IAS > IAS-9
Workflow collections > Public entries
Publication database
Open Access

Record created 2025-10-20, last modified 2025-10-23


OpenAccess:
Download full text PDF