001008638 001__ 1008638
001008638 005__ 20230714203525.0
001008638 0247_ $$2datacite_doi$$a10.34734/FZJ-2023-02439
001008638 037__ $$aFZJ-2023-02439
001008638 041__ $$aEnglish
001008638 1001_ $$0P:(DE-Juel1)185971$$aAlia, Ahmed$$b0$$eCorresponding author
001008638 1112_ $$aHelmholtz AI Conference 2023$$cHamburg$$d2023-06-12 - 2023-06-14$$wGermany
001008638 245__ $$aA Novel Voronoi-based Convolutional Neural Network Approach for Crowd Video Analysis and Pushing Person Detection
001008638 260__ $$c2023
001008638 3367_ $$033$$2EndNote$$aConference Paper
001008638 3367_ $$2DataCite$$aOther
001008638 3367_ $$2BibTeX$$aINPROCEEDINGS
001008638 3367_ $$2DRIVER$$aconferenceObject
001008638 3367_ $$2ORCID$$aLECTURE_SPEECH
001008638 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1689316661_12807$$xAfter Call
001008638 520__ $$aAt crowded event entrances, some individuals attempt to push others in order to move more quickly and enter the event faster. Such behavior can increase the density over time, which not only threatens the comfort of pedestrians but can also cause life-threatening situations. To prevent such incidents, event organizers and security personnel need to understand the pushing dynamics in crowds. One effective way to achieve this is by detecting pushing individuals from video recordings of crowds. Recently, several automatic approaches have been developed to help researchers identify pushing behavior in crowd videos. However, these approaches only detect the regions where pushing occurs rather than the pushing individuals, limiting their contribution to understanding pushing dynamics in crowds. To overcome the limitations of previous methods, this work presents a novel Voronoi-based Convolutional Neural Network (CNN) approach for pushing person detection in crowd videos. As depicted in Figure 1, the proposed approach comprises two main phases: feature extraction and labeling. In the first phase, a new Voronoi-based method is developed to identify the local regions of individuals, using both the video and the associated trajectory data as inputs; an EfficientNetB0 CNN then extracts deep features of individual behavior from the identified regions. The labeling phase applies a fully connected layer with a sigmoid activation function to the extracted deep features to identify the pushing persons and, finally, annotates them in the video. Furthermore, this work produces a novel dataset from five real-world experiments with their associated ground truths, which is used for training and evaluating the proposed approach. The resulting dataset consists of 11,717 local regions, of which 3,067 represent pushing samples and 8,650 represent non-pushing samples. The experimental outcomes demonstrate that the proposed approach attained an accuracy of 83% and an F1-score of 80%.
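The abstract above outlines a two-phase pipeline: Voronoi-based extraction of per-person local regions from trajectory data, EfficientNetB0 deep-feature extraction, and a sigmoid-activated fully connected layer for pushing/non-pushing labeling. The following is a minimal, hypothetical sketch of that pipeline; the function names, input shape, and training settings are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the described pipeline, not the authors' code.
    import numpy as np
    from scipy.spatial import Voronoi
    import tensorflow as tf

    def local_regions(positions: np.ndarray) -> Voronoi:
        """Partition one video frame into per-person Voronoi cells.

        positions: (N, 2) array of pedestrian positions from the trajectory
        data for a single frame. Cropping each cell from the frame (not shown)
        yields one local-region image per person.
        """
        return Voronoi(positions)

    def build_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
        """EfficientNetB0 backbone as deep-feature extractor, followed by a
        fully connected layer with a sigmoid output (pushing vs. non-pushing)."""
        backbone = tf.keras.applications.EfficientNetB0(
            include_top=False, weights="imagenet",
            input_shape=input_shape, pooling="avg")
        output = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
        model = tf.keras.Model(backbone.input, output)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Usage sketch: crop each person's Voronoi cell from the frame, resize it
    # to 224x224, and classify; scores above 0.5 are labeled as pushing.
    # model = build_classifier()
    # scores = model.predict(region_batch)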
001008638 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001008638 536__ $$0G:(BMBF)01DH16027$$aPilotprojekt zur Entwicklung eines palästinensisch-deutschen Forschungs- und Promotionsprogramms 'Palestinian-German Science Bridge' (01DH16027)$$c01DH16027$$x1
001008638 7001_ $$0P:(DE-HGF)0$$aMaree, Mohammed$$b1
001008638 7001_ $$0P:(DE-Juel1)132077$$aChraibi, Mohcine$$b2
001008638 8564_ $$uhttps://helmholtzai-conference2023.de/program/
001008638 8564_ $$uhttps://juser.fz-juelich.de/record/1008638/files/Ahmed%20Alia.pdf$$yOpenAccess
001008638 909CO $$ooai:juser.fz-juelich.de:1008638$$popenaire$$popen_access$$pVDB$$pdriver
001008638 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)185971$$aForschungszentrum Jülich$$b0$$kFZJ
001008638 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132077$$aForschungszentrum Jülich$$b2$$kFZJ
001008638 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001008638 9141_ $$y2023
001008638 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001008638 920__ $$lyes
001008638 9201_ $$0I:(DE-Juel1)IAS-7-20180321$$kIAS-7$$lZivile Sicherheitsforschung$$x0
001008638 980__ $$aconf
001008638 980__ $$aVDB
001008638 980__ $$aUNRESTRICTED
001008638 980__ $$aI:(DE-Juel1)IAS-7-20180321
001008638 9801_ $$aFullTexts