001     903465
005     20211210142715.0
024 7 _ |a 2128/29436
|2 Handle
037 _ _ |a FZJ-2021-05138
041 _ _ |a English
100 1 _ |a Alia, Ahmed
|0 P:(DE-Juel1)185971
|b 0
|e Corresponding author
|u fzj
111 2 _ |a 10th Pedestrian and Evacuation Dynamics Conference
|g PED2021
|c Melbourne & Sydney (Online)
|d 2021-11-29 - 2021-11-30
|w Australia
245 _ _ |a Two Methods for Detecting Pushing Behavior from Videos: A Psychological Rating System and a Deep Learning-based Approach
260 _ _ |c 2021
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a Other
|2 DataCite
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a LECTURE_SPEECH
|2 ORCID
336 7 _ |a Conference Presentation
|b conf
|m conf
|0 PUB:(DE-HGF)6
|s 1639134322_24512
|2 PUB:(DE-HGF)
|x Invited
520 _ _ |a In crowded entrances, some people try to be faster and therefore start pushing others. This pushing behavior potentially increases density and decreases the comfort as well as the safety of events. From research and practical perspectives, it is interesting to know where, why, and when pushing appears and, thereby, to understand the heterogeneity of movements in crowds. This paper presents two methods for identifying pushing in videos of crowds. The first one is a newly developed psychological rating system. It categorizes the forward motion of people into four classes: 1) falling behind, 2) just walking, 3) mild pushing, and 4) strong pushing. The rating is performed by trained human observers using the software PeTrack. This procedure allows annotating individual behavior every second, resulting in a high time resolution. However, this approach is time-consuming. The second method is an automated tool that can assist ad-hoc recognition of pushing behavior. We propose a novel deep learning-based technique that automatically detects pushing behavior scenarios from videos. In particular, we combine deep optical flow information with wheel visualization techniques to extract useful motion features from video sequences and generate a motion feature map between every two consecutive frames that visualizes the motion speed, motion direction, spaces in the crowd, and interactions between pedestrians. Then, convolutional neural networks are used to extract the most relevant features (deep features) from these maps. Afterwards, additional supervised convolutional neural networks are used to automatically learn from the deep features to classify frames into pushing or non-pushing behavior classes. To evaluate this approach, we have conducted experiments using videos manually annotated by the first method. The results demonstrated a high congruence between the two approaches and a promising performance in identifying pushing behavior from videos.
536 _ _ |a 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
|0 G:(DE-HGF)POF4-5111
|c POF4-511
|f POF IV
|x 0
700 1 _ |a Maree, Mohammed
|0 P:(DE-HGF)0
|b 1
700 1 _ |a Haensel, David
|0 P:(DE-Juel1)161429
|b 2
|u fzj
700 1 _ |a Chraibi, Mohcine
|0 P:(DE-Juel1)132077
|b 3
|u fzj
700 1 _ |a Lügering, Helena
|0 P:(DE-Juel1)185054
|b 4
|e Corresponding author
|u fzj
700 1 _ |a Sieben, Anna
|0 P:(DE-Juel1)178979
|b 5
|u fzj
700 1 _ |a Üsten, Ezel
|0 P:(DE-Juel1)185878
|b 6
|e Corresponding author
|u fzj
856 4 _ |u https://juser.fz-juelich.de/record/903465/files/Two%20Methods%20for%20Detecting%20Pushing%20Behavior%20from%20Videos-1.pdf
|y OpenAccess
909 C O |o oai:juser.fz-juelich.de:903465
|p openaire
|p open_access
|p VDB
|p driver
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)185971
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)161429
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 3
|6 P:(DE-Juel1)132077
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 4
|6 P:(DE-Juel1)185054
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 5
|6 P:(DE-Juel1)178979
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 6
|6 P:(DE-Juel1)185878
913 1 _ |a DE-HGF
|b Key Technologies
|l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action
|1 G:(DE-HGF)POF4-510
|0 G:(DE-HGF)POF4-511
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Enabling Computational- & Data-Intensive Science and Engineering
|9 G:(DE-HGF)POF4-5111
|x 0
914 1 _ |y 2021
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)IAS-7-20180321
|k IAS-7
|l Zivile Sicherheitsforschung
|x 0
980 _ _ |a conf
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)IAS-7-20180321
980 1 _ |a FullTexts