001019197 001__ 1019197
001019197 005__ 20231220201928.0
001019197 0247_ $$2datacite_doi$$a10.34734/FZJ-2023-05241
001019197 037__ $$aFZJ-2023-05241
001019197 041__ $$aEnglish
001019197 1001_ $$0P:(DE-Juel1)185971$$aAlia, Ahmed$$b0$$eCorresponding author
001019197 1112_ $$a2023 the 3rd International Conference on Computers and Automation$$cParis$$d2023-12-07 - 2023-12-09$$gCompAuto 2023$$wFrance
001019197 245__ $$aArtificial Intelligence-based Early Pushing Detection in Live Video Streams of Crowds
001019197 260__ $$c2023
001019197 3367_ $$033$$2EndNote$$aConference Paper
001019197 3367_ $$2DataCite$$aOther
001019197 3367_ $$2BibTeX$$aINPROCEEDINGS
001019197 3367_ $$2DRIVER$$aconferenceObject
001019197 3367_ $$2ORCID$$aLECTURE_SPEECH
001019197 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1703053906_12022$$xAfter Call
001019197 502__ $$cWuppertal University
001019197 520__ $$aEntrances to crowded events are often set up as bottlenecks for several reasons, such as access control, ticket validation, or security checks. In these scenarios, some pedestrians may start pushing others or exploiting gaps in the crowd to reduce their waiting time. Such behavior not only limits comfort but also threatens people's safety. Early detection of pushing behavior can help security staff and organizers make timely decisions, enhancing the comfort and safety of entrances. Unfortunately, the works reported in the literature on detecting pushing in crowds are limited and do not satisfy the requirements of early detection. For instance, Lügering et al. [1] developed a manual rating system to understand when, where, and why pushing appears in video recordings of crowded entrance areas. To overcome the limitations of manual analysis, Alia et al. [2] proposed an automatic deep-learning system for pushing detection; however, that system does not meet the requirements of early detection either. To fulfill these requirements, we present an Artificial Intelligence framework for automatically identifying pushing in live camera streams in real time. Our framework consists of two main components: the first uses a pretrained deep optical-flow model and the color wheel method to extract pixel motion from the live stream of a crowd and represent this information visually; the second comprises an adapted and trained EfficientNetV2B0 model that extracts deep features from the motion information and then identifies and annotates pushing patches within the live stream. We created a labeled dataset from five real-world experiments [3] with their associated ground truths to train the adapted model and evaluate the framework. The experimental setups mimic crowded event entrances, and two experts created the ground truths based on the manual rating system [1].
According to the experimental results, our framework identified pushing patches with an accuracy of 87% and within a reasonable delay time. --- References: [1] Üsten, E., Lügering, H. & Sieben, A., Pushing and Non-pushing Forward Motion in Crowds: A Systematic Psychological Observation Method for Rating Individual Behavior in Pedestrian Dynamics, Collective Dynamics, 7, pp. 1-16, 2022. --- [2] Alia, A., Maree, M. & Chraibi, M., A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics, Sensors, 22, 4040, 2022. --- [3] Pedestrian Dynamics Data Archive hosted by Forschungszentrum Jülich: Crowds in front of bottlenecks from the perspective of physics and social psychology, 2018.
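The abstract's first component encodes each pixel's optical-flow vector as a color whose hue gives the motion direction and whose saturation gives the speed (the "color wheel" method). The paper's pretrained deep optical-flow model is not reproduced here; the sketch below, which is an illustrative assumption rather than the authors' implementation, takes a hypothetical `flow` field (one 2-D vector per pixel) as a stand-in for that model's output and maps it to RGB:

```python
# Minimal sketch of the color-wheel motion visualization: hue encodes
# flow direction, saturation encodes flow magnitude. The `flow` input is
# a hypothetical stand-in for the output of a deep optical-flow model.
import math
import colorsys

def flow_to_color(flow, max_mag=None):
    """Map a 2-D flow field [[(dx, dy), ...], ...] to RGB triples in 0..255."""
    mags = [math.hypot(dx, dy) for row in flow for dx, dy in row]
    max_mag = max_mag or max(mags) or 1.0
    image = []
    for row in flow:
        out_row = []
        for dx, dy in row:
            hue = (math.atan2(dy, dx) / (2 * math.pi)) % 1.0   # direction -> hue
            sat = min(math.hypot(dx, dy) / max_mag, 1.0)       # speed -> saturation
            r, g, b = colorsys.hsv_to_rgb(hue, sat, 1.0)
            out_row.append((round(r * 255), round(g * 255), round(b * 255)))
        image.append(out_row)
    return image

# Tiny synthetic flow field: left column moves right, right column moves up.
flow = [[(1.0, 0.0), (0.0, -1.0)],
        [(1.0, 0.0), (0.0, -1.0)]]
rgb = flow_to_color(flow)
```

In the framework described above, the resulting motion images would then be cut into patches and passed to the adapted EfficientNetV2B0 classifier.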
001019197 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001019197 536__ $$0G:(BMBF)01DH16027$$aPilotprojekt zur Entwicklung eines palästinensisch-deutschen Forschungs- und Promotionsprogramms 'Palestinian-German Science Bridge' (01DH16027)$$c01DH16027$$x1
001019197 7001_ $$0P:(DE-HGF)0$$aMaree, Mohammed$$b1
001019197 7001_ $$0P:(DE-Juel1)132077$$aChraibi, Mohcine$$b2
001019197 8564_ $$uhttps://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.pdf$$yOpenAccess
001019197 8564_ $$uhttps://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.gif?subformat=icon$$xicon$$yOpenAccess
001019197 8564_ $$uhttps://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.jpg?subformat=icon-1440$$xicon-1440$$yOpenAccess
001019197 8564_ $$uhttps://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.jpg?subformat=icon-180$$xicon-180$$yOpenAccess
001019197 8564_ $$uhttps://juser.fz-juelich.de/record/1019197/files/CompAuto2023-CA322-A-Abstract.jpg?subformat=icon-640$$xicon-640$$yOpenAccess
001019197 909CO $$ooai:juser.fz-juelich.de:1019197$$popenaire$$popen_access$$pVDB$$pdriver
001019197 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)185971$$aForschungszentrum Jülich$$b0$$kFZJ
001019197 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132077$$aForschungszentrum Jülich$$b2$$kFZJ
001019197 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001019197 9141_ $$y2023
001019197 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001019197 920__ $$lyes
001019197 9201_ $$0I:(DE-Juel1)IAS-7-20180321$$kIAS-7$$lZivile Sicherheitsforschung$$x0
001019197 980__ $$aconf
001019197 980__ $$aVDB
001019197 980__ $$aUNRESTRICTED
001019197 980__ $$aI:(DE-Juel1)IAS-7-20180321
001019197 9801_ $$aFullTexts