000845485 001__ 845485
000845485 005__ 20220930130147.0
000845485 0247_ $$2doi$$a10.1155/2018/1350692
000845485 0247_ $$2ISSN$$a2314-4904
000845485 0247_ $$2ISSN$$a2314-4912
000845485 0247_ $$2Handle$$a2128/18369
000845485 0247_ $$2WOS$$aWOS:000432190600001
000845485 0247_ $$2altmetric$$aaltmetric:40766806
000845485 037__ $$aFZJ-2018-02719
000845485 082__ $$a620
000845485 1001_ $$0P:(DE-HGF)0$$aHasasneh, Ahmad$$b0
000845485 245__ $$aDeep Learning Approach for Automatic Classification of Ocular and Cardiac Artifacts in MEG Data
000845485 260__ $$aNew York, NY$$bHindawi Publishing$$c2018
000845485 3367_ $$2DRIVER$$aarticle
000845485 3367_ $$2DataCite$$aOutput Types/Journal article
000845485 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1525330008_15478
000845485 3367_ $$2BibTeX$$aARTICLE
000845485 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000845485 3367_ $$00$$2EndNote$$aJournal Article
000845485 520__ $$aWe propose an artifact classification scheme based on a combined deep and convolutional neural network (DCNN) model to automatically identify cardiac and ocular artifacts in neuromagnetic data, without the need for additional electrocardiogram (ECG) and electrooculogram (EOG) recordings. The model uses both the spatial and temporal information of the independent components obtained from the decomposed magnetoencephalography (MEG) data. Task and nontask related MEG recordings from 48 subjects served as the database for this study, yielding 7122 samples in total after data augmentation. Artifact rejection using the combined model achieved a sensitivity of 91.8% and a specificity of 97.4%. The overall accuracy of the model was assessed using a cross-validation test and revealed a median accuracy of 94.4%, indicating high reliability of DCNN-based artifact removal in task and nontask related MEG experiments. The major advantages of the proposed method are as follows: (1) it provides a fully automated and user-independent workflow for artifact classification in MEG data; (2) once the model is trained, no auxiliary signal recordings are needed; (3) the flexibility of the model design and training allows application to various modalities (MEG/EEG) and various sensor types.
000845485 536__ $$0G:(DE-HGF)POF3-573$$a573 - Neuroimaging (POF3-573)$$cPOF3-573$$fPOF III$$x0
000845485 588__ $$aDataset connected to CrossRef
000845485 7001_ $$0P:(DE-HGF)0$$aKampel, Nikolas$$b1
000845485 7001_ $$0P:(DE-Juel1)165677$$aSripad, Praveen$$b2
000845485 7001_ $$0P:(DE-Juel1)131794$$aShah, N. J.$$b3$$ufzj
000845485 7001_ $$0P:(DE-Juel1)131757$$aDammers, Jürgen$$b4$$eCorresponding author
000845485 773__ $$0PERI:(DE-600)2736230-9$$a10.1155/2018/1350692$$gVol. 2018, p. 1 - 10$$p1 - 10$$tJournal of Engineering$$v2018$$x2314-4912$$y2018
000845485 8564_ $$uhttps://juser.fz-juelich.de/record/845485/files/1350692.pdf$$yOpenAccess
000845485 8767_ $$92018-03-29$$d2018-03-29$$eAPC$$jDeposit$$lDeposit: Hindawi$$p1350692$$zFZJ-2018-02174, USD 1125,-
000845485 909CO $$ooai:juser.fz-juelich.de:845485$$popenCost$$pVDB$$pdriver$$pOpenAPC$$popen_access$$popenaire$$pdnbdelivery
000845485 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)165677$$aForschungszentrum Jülich$$b2$$kFZJ
000845485 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)131794$$aForschungszentrum Jülich$$b3$$kFZJ
000845485 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)131757$$aForschungszentrum Jülich$$b4$$kFZJ
000845485 9131_ $$0G:(DE-HGF)POF3-573$$1G:(DE-HGF)POF3-570$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lDecoding the Human Brain$$vNeuroimaging$$x0
000845485 9141_ $$y2018
000845485 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection
000845485 915__ $$0LIC:(DE-HGF)CCBY4$$2HGFVOC$$aCreative Commons Attribution CC BY 4.0
000845485 915__ $$0StatID:(DE-HGF)0501$$2StatID$$aDBCoverage$$bDOAJ Seal
000845485 915__ $$0StatID:(DE-HGF)0112$$2StatID$$aWoS$$bEmerging Sources Citation Index
000845485 915__ $$0StatID:(DE-HGF)0500$$2StatID$$aDBCoverage$$bDOAJ
000845485 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000845485 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bThomson Reuters Master Journal List
000845485 9201_ $$0I:(DE-Juel1)INM-4-20090406$$kINM-4$$lPhysik der Medizinischen Bildgebung$$x0
000845485 9201_ $$0I:(DE-82)080010_20140620$$kJARA-BRAIN$$lJARA-BRAIN$$x1
000845485 9801_ $$aFullTexts
000845485 980__ $$ajournal
000845485 980__ $$aVDB
000845485 980__ $$aUNRESTRICTED
000845485 980__ $$aI:(DE-Juel1)INM-4-20090406
000845485 980__ $$aI:(DE-82)080010_20140620
000845485 980__ $$aAPC