000903465 001__ 903465
000903465 005__ 20211210142715.0
000903465 0247_ $$2Handle$$a2128/29436
000903465 037__ $$aFZJ-2021-05138
000903465 041__ $$aEnglish
000903465 1001_ $$0P:(DE-Juel1)185971$$aAlia, Ahmed$$b0$$eCorresponding author$$ufzj
000903465 1112_ $$a10th Pedestrian and Evacuation Dynamics Conference$$cMelbourne & Sydney (Online)$$d2021-11-29 - 2021-11-30$$gPED2021$$wAustralia
000903465 245__ $$aTwo Methods for Detecting Pushing Behavior from Videos: A Psychological Rating System and a Deep Learning-based Approach
000903465 260__ $$c2021
000903465 3367_ $$033$$2EndNote$$aConference Paper
000903465 3367_ $$2DataCite$$aOther
000903465 3367_ $$2BibTeX$$aINPROCEEDINGS
000903465 3367_ $$2DRIVER$$aconferenceObject
000903465 3367_ $$2ORCID$$aLECTURE_SPEECH
000903465 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1639134322_24512$$xInvited
000903465 520__ $$aIn crowded entrances, some people try to be faster and therefore start pushing others. This pushing behavior potentially increases density and decreases comfort as well as the safety of events. From research and practical perspectives, it is interesting to know where, why, and when pushing appears and, thereby, to understand the heterogeneity of movements in crowds. This paper presents two methods for identifying pushing in videos of crowds. The first one is a newly developed psychological rating system. It categorizes the forward motion of people into four classes: 1) falling behind, 2) just walking, 3) mild pushing, and 4) strong pushing. The rating is performed by trained human observers using the software PeTrack. This procedure allows individual behavior to be annotated every second, resulting in a high time resolution. However, this approach is time-consuming. The second method is an automated tool that can assist ad-hoc recognition of pushing behavior. We propose a novel deep learning-based technique that automatically detects pushing behavior scenarios from videos. In particular, we combine deep optical flow information with wheel visualization techniques to extract useful motion features from video sequences and generate a motion feature map between every two consecutive frames that visualizes the motion speed, motion direction, spaces in the crowd, and interactions between pedestrians. Then, convolutional neural networks are used to extract the most relevant features (deep features) from these maps. Afterwards, additional supervised convolutional neural networks are used to automatically learn from the deep features to classify frames into pushing or non-pushing behavior classes. To evaluate this approach, we conducted experiments using videos manually annotated with the first method. Results demonstrated a high congruence between both approaches and a promising performance in identifying pushing behavior from videos.
000903465 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
000903465 7001_ $$0P:(DE-HGF)0$$aMaree, Mohammed$$b1
000903465 7001_ $$0P:(DE-Juel1)161429$$aHaensel, David$$b2$$ufzj
000903465 7001_ $$0P:(DE-Juel1)132077$$aChraibi, Mohcine$$b3$$ufzj
000903465 7001_ $$0P:(DE-Juel1)185054$$aLügering, Helena$$b4$$eCorresponding author$$ufzj
000903465 7001_ $$0P:(DE-Juel1)178979$$aSieben, Anna$$b5$$ufzj
000903465 7001_ $$0P:(DE-Juel1)185878$$aÜsten, Ezel$$b6$$eCorresponding author$$ufzj
000903465 8564_ $$uhttps://juser.fz-juelich.de/record/903465/files/Two%20Methods%20for%20Detecting%20Pushing%20Behavior%20from%20Videos-1.pdf$$yOpenAccess
000903465 909CO $$ooai:juser.fz-juelich.de:903465$$popenaire$$popen_access$$pVDB$$pdriver
000903465 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)185971$$aForschungszentrum Jülich$$b0$$kFZJ
000903465 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)161429$$aForschungszentrum Jülich$$b2$$kFZJ
000903465 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132077$$aForschungszentrum Jülich$$b3$$kFZJ
000903465 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)185054$$aForschungszentrum Jülich$$b4$$kFZJ
000903465 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)178979$$aForschungszentrum Jülich$$b5$$kFZJ
000903465 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)185878$$aForschungszentrum Jülich$$b6$$kFZJ
000903465 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
000903465 9141_ $$y2021
000903465 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000903465 920__ $$lyes
000903465 9201_ $$0I:(DE-Juel1)IAS-7-20180321$$kIAS-7$$lZivile Sicherheitsforschung$$x0
000903465 980__ $$aconf
000903465 980__ $$aVDB
000903465 980__ $$aUNRESTRICTED
000903465 980__ $$aI:(DE-Juel1)IAS-7-20180321
000903465 9801_ $$aFullTexts