001     1019197
005     20231220201928.0
024 7 _ |a 10.34734/FZJ-2023-05241
|2 datacite_doi
037 _ _ |a FZJ-2023-05241
041 _ _ |a English
100 1 _ |a Alia, Ahmed
|0 P:(DE-Juel1)185971
|b 0
|e Corresponding author
111 2 _ |a 2023 the 3rd International Conference on Computers and Automation
|g CompAuto 2023
|c Paris
|d 2023-12-07 - 2023-12-09
|w France
245 _ _ |a Artificial Intelligence-based Early Pushing Detection in Live Video Streams of Crowds
260 _ _ |c 2023
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a Other
|2 DataCite
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a LECTURE_SPEECH
|2 ORCID
336 7 _ |a Conference Presentation
|b conf
|m conf
|0 PUB:(DE-HGF)6
|s 1703053906_12022
|2 PUB:(DE-HGF)
|x After Call
502 _ _ |c Wuppertal University
520 _ _ |a Entrances of crowded events are often set up as bottlenecks for several reasons, such as access control, ticket validation, or security checks. In these scenarios, some pedestrians may start pushing others or exploiting gaps in the crowd to reduce their waiting time. Such behavior not only reduces comfort but also threatens people’s safety. Early detection of pushing behavior can help security personnel and organizers make timely decisions, enhancing comfort and safety at entrances. Unfortunately, the existing works reported in the literature on detecting pushing in crowds are limited and do not satisfy the requirements of early detection. For instance, Lügering et al. [1] developed a manual rating system to understand when, where, and why pushing appears in video recordings of crowded entrance areas. To overcome the limitations of manual analysis, Alia et al. [2] proposed an automatic deep-learning system for pushing detection. However, this system does not meet the requirements of early detection. To fulfill these requirements, we present an Artificial Intelligence framework for automatically identifying pushing in a live camera stream in real time. Our framework consists of two main components: the first uses a pretrained deep optical flow model and the color wheel method to extract pixel motion from the live stream of a crowd and represent this information visually. The second includes an adapted and trained EfficientNetV2B0 model, which extracts deep features from the motion information and then identifies and annotates pushing patches within the live stream. We created a labeled dataset from five real-world experiments [3] with their associated ground truths to train the adapted model and evaluate the framework. The experimental setups mimic crowded event entrances, and two experts created the ground truths based on the manual rating system [1].
According to the experimental results, our framework identified pushing patches with an accuracy of 87% and within a reasonable delay time. --- References: [1] Üsten, E., Lügering, H. & Sieben, A., Pushing and Non-pushing Forward Motion in Crowds: A Systematic Psychological Observation Method for Rating Individual Behavior in Pedestrian Dynamics, Collective Dynamics, 7, pp. 1-16, 2022. --- [2] Alia, A., Maree, M. & Chraibi, M., A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics, Sensors, 22, 4040, 2022. --- [3] Pedestrian Dynamics Data Archive hosted by Forschungszentrum Jülich, Crowds in front of bottlenecks from the perspective of physics and social psychology, 2018.
536 _ _ |a 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
|0 G:(DE-HGF)POF4-5111
|c POF4-511
|f POF IV
|x 0
536 _ _ |a Pilotprojekt zur Entwicklung eines palästinensisch-deutschen Forschungs- und Promotionsprogramms 'Palestinian-German Science Bridge' (01DH16027)
|0 G:(BMBF)01DH16027
|c 01DH16027
|x 1
700 1 _ |a Maree, Mohammed
|0 P:(DE-HGF)0
|b 1
700 1 _ |a Chraibi, Mohcine
|0 P:(DE-Juel1)132077
|b 2
856 4 _ |y OpenAccess
|u https://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.pdf
856 4 _ |y OpenAccess
|x icon
|u https://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.gif?subformat=icon
856 4 _ |y OpenAccess
|x icon-1440
|u https://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.jpg?subformat=icon-1440
856 4 _ |y OpenAccess
|x icon-180
|u https://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.jpg?subformat=icon-180
856 4 _ |y OpenAccess
|x icon-640
|u https://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.jpg?subformat=icon-640
909 C O |o oai:juser.fz-juelich.de:1019197
|p openaire
|p open_access
|p VDB
|p driver
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)185971
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)132077
913 1 _ |a DE-HGF
|b Key Technologies
|l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action
|1 G:(DE-HGF)POF4-510
|0 G:(DE-HGF)POF4-511
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Enabling Computational- & Data-Intensive Science and Engineering
|9 G:(DE-HGF)POF4-5111
|x 0
914 1 _ |y 2023
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)IAS-7-20180321
|k IAS-7
|l Zivile Sicherheitsforschung
|x 0
980 _ _ |a conf
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)IAS-7-20180321
980 1 _ |a FullTexts

