001     1034454
005     20241223113912.0
024 7 _ |2 doi
|a 10.48550/arXiv.2411.18164
037 _ _ |a FZJ-2024-07220
041 _ _ |a English
088 _ _ |2 arXiv
|a https://doi.org/10.48550/arXiv.2411.18164
100 1 _ |0 P:(DE-HGF)0
|a Abubaker, Mohammed
|b 0
245 _ _ |a RPEE-Heads: A Novel Benchmark For Pedestrian Head Detection in Crowd Videos
260 _ _ |b arXiv
|c 2024
336 7 _ |0 PUB:(DE-HGF)25
|2 PUB:(DE-HGF)
|a Preprint
|b preprint
|m preprint
|s 1734595884_7058
336 7 _ |2 ORCID
|a WORKING_PAPER
336 7 _ |0 28
|2 EndNote
|a Electronic Article
336 7 _ |2 DRIVER
|a preprint
336 7 _ |2 BibTeX
|a ARTICLE
336 7 _ |2 DataCite
|a Output Types/Working Paper
520 _ _ |a The automatic detection of pedestrian heads in crowded environments is essential for crowd analysis and management tasks, particularly in high-risk settings such as railway platforms and event entrances. These environments, characterized by dense crowds and dynamic movements, are underrepresented in public datasets, posing challenges for existing deep learning models. To address this gap, we introduce the Railway Platforms and Event Entrances-Heads (RPEE-Heads) dataset, a novel, diverse, high-resolution, and accurately annotated resource. It includes 109,913 annotated pedestrian heads across 1,886 images from 66 video recordings, with an average of 56.2 heads per image. Annotations include bounding boxes for visible head regions. In addition to introducing the RPEE-Heads dataset, this paper evaluates eight state-of-the-art object detection algorithms using the RPEE-Heads dataset and analyzes the impact of head size on detection accuracy. The experimental results show that You Only Look Once v9 and Real-Time Detection Transformer outperform the other algorithms, achieving mean average precisions of 90.7% and 90.8%, with inference times of 11 and 14 milliseconds, respectively. Moreover, the findings underscore the need for specialized datasets like RPEE-Heads for training and evaluating accurate models for head detection in railway platforms and event entrances. The dataset and pretrained models are available at https://doi.org/10.34735/ped.2024.2.
536 _ _ |0 G:(DE-HGF)POF4-5111
|a 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
|c POF4-511
|f POF IV
|x 0
536 _ _ |0 G:(BMBF)01DH16027
|a Pilot project for the development of a Palestinian-German research and doctoral programme 'Palestinian-German Science Bridge' (01DH16027)
|c 01DH16027
|x 1
588 _ _ |a Dataset connected to DataCite
650 _ 7 |2 Other
|a Computer Vision and Pattern Recognition (cs.CV)
650 _ 7 |2 Other
|a Machine Learning (cs.LG)
650 _ 7 |2 Other
|a FOS: Computer and information sciences
700 1 _ |0 P:(DE-HGF)0
|a Alsadder, Zubayda
|b 1
700 1 _ |0 P:(DE-HGF)0
|a Abdelhaq, Hamed
|b 2
|e Corresponding author
700 1 _ |0 P:(DE-Juel1)132064
|a Boltes, Maik
|b 3
|e Corresponding author
|u fzj
700 1 _ |0 P:(DE-Juel1)185971
|a Alia, Ahmed
|b 4
|u fzj
773 _ _ |a 10.48550/arXiv.2411.18164
856 4 _ |u https://arxiv.org/abs/2411.18164
909 C O |o oai:juser.fz-juelich.de:1034454
|p VDB
910 1 _ |0 I:(DE-588b)5008462-8
|6 P:(DE-Juel1)132064
|a Forschungszentrum Jülich
|b 3
|k FZJ
910 1 _ |0 I:(DE-588b)5008462-8
|6 P:(DE-Juel1)185971
|a Forschungszentrum Jülich
|b 4
|k FZJ
913 1 _ |0 G:(DE-HGF)POF4-511
|1 G:(DE-HGF)POF4-510
|2 G:(DE-HGF)POF4-500
|3 G:(DE-HGF)POF4
|4 G:(DE-HGF)POF
|9 G:(DE-HGF)POF4-5111
|a DE-HGF
|b Key Technologies
|l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action
|v Enabling Computational- & Data-Intensive Science and Engineering
|x 0
914 1 _ |y 2024
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)IAS-7-20180321
|k IAS-7
|l Civil Security Research
|x 0
980 _ _ |a preprint
980 _ _ |a VDB
980 _ _ |a I:(DE-Juel1)IAS-7-20180321
980 _ _ |a UNRESTRICTED

