Cellular level 3D reconstructed volumes at 1µm resolution within the BigBrain
Poster (After Call) | FZJ-2024-06479
2024
Abstract:

**Background & Summary**

Analyzing cells and their distributions in histological sections is the basis for computing cytoarchitectonic maps of the human brain (Amunts and Zilles, 2015). While cell distributions are inherently 3-dimensional, microscopic analysis of cell-body stained tissue sections is usually performed on individual 2D sections. The first 3D reconstruction of an entire human brain from histology was generated at 20µm isotropic resolution and shared as 'BigBrain' (Amunts et al., 2013). However, investigating the distribution of individual cells in 3D requires even higher resolution and precision.

While previous work has exploited the trajectories of vessels for cross-section alignment (Dickscheid et al., 2019), bisected cells provide significantly more alignment constraints and better overall coverage of the tissue, allowing improved precision of image registration and correction of 3D cell distributions with respect to redundant detections (Huysegoms et al., 2019). Based on this strategy, we processed 300 histological sections from the occipital cortex of the BigBrain dataset, which were scanned at 1µm isotropic resolution. We reconstructed two 6×6×6 mm³ volumes of interest in 3D, one in V1 (h0c1) and the other in V2 (h0c2). Both volumes were subsequently anchored into the 20µm 3D BigBrain space using an affine transformation based on manually selected landmarks defined with VoluBA (https://ebrains.eu/service/voluba). The provided resources support the evaluation of workflows that analyze 3D cell distributions.

**Data acquisition**

300 coronal, cell-body stained histological sections (20µm thickness) of the BigBrain were imaged with a high-throughput scanning system (Tissue Scope, Huron Technologies, Inc.) at 20 different focal planes, each with an in-plane resolution of 1µm. A nearly isotropic image stack was thus obtained within the occipital cortex, covering sections 429 to 728 of the BigBrain dataset. As described in detail in Amunts et al. (2013), sections were embedded in paraffin and stained with a modified silver staining; consequently, the images show cell bodies with strong dark contrast. Each section was visually inspected for histological artifacts. Eight sections were excluded from volume 1 and nine sections from volume 2.

**Detection and matching of microstructures**

The workflow used for reconstructing the stack of histological images is based on the detection and matching of corresponding microstructures between consecutive sections. Specifically, we trained a Support Vector Machine to detect blood vessels and developed a deep learning based cell segmentation algorithm that is robust to staining inhomogeneities and overlapping cells (Upschulte et al., 2022). Separating touching and overlapping cells is an especially challenging segmentation problem; it is resolved in the present work by incorporating a priori knowledge of cellular contours. We were subsequently able to identify corresponding pairs of bisected cells between adjacent sections using their centroid positions. The matching task is complicated by the limited distinctiveness of cell shapes as well as the relatively low prevalence of bisected cells (20-40% of cells at 20µm tissue thickness). To overcome these issues, our workflow computes the Largest Common Pointset (LCP) between the cell centroids of adjacent sections, using purely geometric constraints under locally affine transforms.
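The published matching implementation is not reproduced in this record, but the LCP objective can be illustrated with a short sketch. The Python example below is a simplified, RANSAC-style stand-in, not the actual code: it assumes an approximately rigid in-plane transform, and all function names, tolerances and iteration counts are illustrative choices. It hypothesizes a transform from a small congruent base of centroids and keeps the hypothesis that brings the most centroids of one section into correspondence with the next.

```python
# Simplified sketch of largest-common-pointset (LCP) matching between the cell
# centroids of two adjacent sections. Illustrative only: the real workflow uses
# a hierarchical 4-Points Congruent Sets search with locally affine transforms.
import numpy as np
from scipy.spatial import cKDTree

def estimate_rigid(a0, a1, b0, b1):
    """Rigid 2D transform (rotation R, translation t) mapping segment a0-a1 onto b0-b1."""
    va, vb = a1 - a0, b1 - b0
    angle = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    t = b0 - R @ a0
    return R, t

def lcp_match(pts_a, pts_b, tol=3.0, n_iter=2000, seed=0):
    """RANSAC-style approximation of the LCP between two centroid sets (N x 2 arrays)."""
    rng = np.random.default_rng(seed)
    tree_b = cKDTree(pts_b)
    best_pairs = []
    for _ in range(n_iter):
        i0, i1 = rng.choice(len(pts_a), size=2, replace=False)
        j0, j1 = rng.choice(len(pts_b), size=2, replace=False)
        # a congruent base requires the two segments to have a similar length
        if abs(np.linalg.norm(pts_a[i1] - pts_a[i0]) -
               np.linalg.norm(pts_b[j1] - pts_b[j0])) > tol:
            continue
        R, t = estimate_rigid(pts_a[i0], pts_a[i1], pts_b[j0], pts_b[j1])
        mapped = pts_a @ R.T + t                      # transform all centroids of section A
        dist, idx = tree_b.query(mapped, distance_upper_bound=tol)
        pairs = [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx)) if np.isfinite(d)]
        if len(pairs) > len(best_pairs):              # keep the largest consensus set
            best_pairs = pairs
    return best_pairs
```

The actual workflow replaces this rigid, single-scale search with the hierarchical 4-Points Congruent Sets strategy under locally affine transforms described in the following paragraph.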
Our LCP search builds on the 4-Points Congruent Sets strategy (Aiger, Mitra and Cohen-Or, 2008), which repeatedly selects 4 random points in one pointset and finds approximately congruent point constellations in the other, effectively identifying valid pairs of bisected cells. To deal with large human brain sections, we extended the algorithm to operate hierarchically across multiple scales and incorporated a sliding-window approach to handle non-linear tissue deformations (Huysegoms et al., 2019).

**Linear 3D reconstruction**

Based on the extracted cell matches, an optimal 2D affine transformation was computed for each pair of consecutive images using a least-squares approach. The resulting transformations were concatenated iteratively to yield absolute positions. During this process, we applied a user-defined ROI of size 6×6 mm² to filter the matches and to obtain affine parameters tailored to the local tissue deformations (see the illustrative sketch below).

**Acknowledgements**

Pavel Chervakov and Xiao Gui (both INM-1) built and maintain the infrastructure that provides interactive online visualization of the reconstructed datasets. This project received funding from the European Union's Horizon 2020 Research and Innovation Programme [grant agreements 785907 (HBP SGA2) and 945539 (HBP SGA3)] and from the Helmholtz Association's Initiative and Networking Fund through the Helmholtz International BigBrain Analytics and Learning Laboratory (HIBALL) [grant agreement InterLabs-0015]. Computing time was granted through JARA on the supercomputer JURECA at Jülich Supercomputing Centre (JSC) as part of project CJINM14.
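To make the pairwise alignment step of the 'Linear 3D reconstruction' section concrete, the sketch below fits a 2D affine transform to matched centroids by least squares and chains the pairwise transforms into absolute transforms relative to the first section. It is a minimal illustration under our own naming assumptions (fit_affine_2d, absolute_transforms), not the published implementation, and the 6×6 mm² ROI filtering of matches is omitted.

```python
# Minimal sketch of pairwise least-squares affine alignment and its iterative
# concatenation into absolute transforms; illustrative, not the published code.
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform, as a 3x3 homogeneous matrix mapping src onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])        # design matrix [x y 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # shape (3, 2)
    A = np.eye(3)
    A[:2, :] = params.T                                  # rows [a b tx], [c d ty]
    return A

def absolute_transforms(pairwise_matches):
    """Chain pairwise affines into absolute transforms relative to the first section.

    pairwise_matches: list where element k is a tuple (src_pts, dst_pts) of matched
    centroids between section k+1 (src) and section k (dst)."""
    absolute = [np.eye(3)]                               # section 0 defines the reference frame
    for src, dst in pairwise_matches:
        A = fit_affine_2d(src, dst)                      # maps section k+1 -> section k
        absolute.append(absolute[-1] @ A)                # maps section k+1 -> section 0
    return absolute
```

The same least-squares fit, applied to 3D landmark pairs, could in principle also provide the affine anchoring of the reconstructed volumes into the 20µm BigBrain space; the abstract only states that a landmark-based affine transformation (defined with VoluBA) was used, so the fitting method here is an assumption.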