000877488 001__ 877488
000877488 005__ 20220930130241.0
000877488 0247_ $$2doi$$a10.3758/s13428-020-01428-x
000877488 0247_ $$2altmetric$$aaltmetric:86428604
000877488 0247_ $$2pmid$$apmid:32710238
000877488 0247_ $$2WOS$$aWOS:000552180600001
000877488 037__ $$aFZJ-2020-02240
000877488 082__ $$a150
000877488 1001_ $$0P:(DE-HGF)0$$aDar, Asim H.$$b0
000877488 245__ $$aREMoDNaV: robust eye-movement classification for dynamic stimulation
000877488 260__ $$aAustin, Tex.$$bPsychonomic Society Publ.$$c2020
000877488 3367_ $$2DRIVER$$aarticle
000877488 3367_ $$2DataCite$$aOutput Types/Journal article
000877488 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1596525782_9645
000877488 3367_ $$2BibTeX$$aARTICLE
000877488 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000877488 3367_ $$00$$2EndNote$$aJournal Article
000877488 520__ $$aTracking of eye movements is an established measurement for many types of experimental paradigms. Increasingly complex and prolonged visual stimuli have made algorithmic approaches to eye-movement event classification the most pragmatic option. A recent analysis revealed that many current algorithms perform poorly on data from viewing dynamic stimuli such as video sequences. Here we present an event classification algorithm, built on an existing velocity-based approach, that is suitable for both static and dynamic stimulation and is capable of classifying saccades, post-saccadic oscillations, fixations, and smooth pursuit events. We validated classification performance and robustness on three public datasets: 1) manually annotated, trial-based gaze trajectories for viewing static images, moving dots, and short video sequences, 2) lab-quality gaze recordings for a feature-length movie, and 3) gaze recordings acquired under suboptimal lighting conditions inside the bore of a magnetic resonance imaging (MRI) scanner for the same full-length movie. We found that the proposed algorithm performs on par with or better than state-of-the-art alternatives for static stimulation. Moreover, it yields eye-movement events with biologically plausible characteristics on prolonged dynamic recordings. Lastly, algorithm performance is robust on data acquired under suboptimal conditions that exhibit a temporally varying noise level. These results indicate that the proposed algorithm is a robust tool with improved classification accuracy across a range of use cases. The algorithm is cross-platform compatible, implemented in the Python programming language, and readily available as free and open-source software from public sources.
000877488 536__ $$0G:(DE-HGF)POF3-574$$a574 - Theory, modelling and simulation (POF3-574)$$cPOF3-574$$fPOF III$$x0
000877488 588__ $$aDataset connected to CrossRef
000877488 7001_ $$0P:(DE-Juel1)178612$$aWagner, Adina S.$$b1
000877488 7001_ $$0P:(DE-Juel1)177087$$aHanke, Michael$$b2$$eCorresponding author
000877488 773__ $$0PERI:(DE-600)2212635-1$$a10.3758/s13428-020-01428-x$$p1-16$$tBehavior research methods$$v52$$x0005-7878$$y2020
000877488 8564_ $$uhttps://juser.fz-juelich.de/record/877488/files/Dar2020_Article_REMoDNaVRobustEye-movementClas.pdf$$yRestricted
000877488 8564_ $$uhttps://juser.fz-juelich.de/record/877488/files/Dar2020_Article_REMoDNaVRobustEye-movementClas.pdf?subformat=pdfa$$xpdfa$$yRestricted
000877488 8767_ $$d2020-06-09$$eHybrid-OA$$jDEAL$$lDEAL: Springer$$pBR-Org-19-171.R3$$zapproved in dashboard
000877488 909CO $$ooai:juser.fz-juelich.de:877488$$pOpenAPC_DEAL$$pVDB$$popenCost
000877488 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS
000877488 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bBEHAV RES METHODS : 2017
000877488 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline
000877488 915__ $$0StatID:(DE-HGF)0600$$2StatID$$aDBCoverage$$bEbsco Academic Search
000877488 915__ $$0StatID:(DE-HGF)0030$$2StatID$$aPeer Review$$bASC
000877488 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List
000877488 915__ $$0StatID:(DE-HGF)0130$$2StatID$$aDBCoverage$$bSocial Sciences Citation Index
000877488 915__ $$0StatID:(DE-HGF)1180$$2StatID$$aDBCoverage$$bCurrent Contents - Social and Behavioral Sciences
000877488 915__ $$0StatID:(DE-HGF)1050$$2StatID$$aDBCoverage$$bBIOSIS Previews
000877488 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5
000877488 9141_ $$y2020
000877488 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)178612$$aForschungszentrum Jülich$$b1$$kFZJ
000877488 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)177087$$aForschungszentrum Jülich$$b2$$kFZJ
000877488 9131_ $$0G:(DE-HGF)POF3-574$$1G:(DE-HGF)POF3-570$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lDecoding the Human Brain$$vTheory, modelling and simulation$$x0
000877488 9201_ $$0I:(DE-Juel1)INM-7-20090406$$kINM-7$$lBrain & Behaviour$$x0
000877488 980__ $$ajournal
000877488 980__ $$aVDB
000877488 980__ $$aI:(DE-Juel1)INM-7-20090406
000877488 980__ $$aAPC
000877488 980__ $$aUNRESTRICTED
000877488 9801_ $$aAPC