Item Information

Full metadata record
DC Field | Value | Language
dc.contributor.author | Zitong, Yu | -
dc.contributor.author | Yuming, Shen | -
dc.contributor.author | Jingang, Shi | -
dc.date.accessioned | 2023-04-24T02:13:40Z | -
dc.date.available | 2023-04-24T02:13:40Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | https://link.springer.com/article/10.1007/s11263-023-01758-1 | -
dc.identifier.uri | https://dlib.phenikaa-uni.edu.vn/handle/PNK/8238 | -
dc.description | CC BY | vi
dc.description.abstract | Remote photoplethysmography (rPPG), which aims at measuring heart activities and physiological signals from facial video without any contact, has great potential in many applications (e.g., remote healthcare and affective computing). Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields, which neglect the long-range spatio-temporal perception and interaction needed for rPPG modeling. In this paper, we propose two end-to-end video transformer based architectures, namely PhysFormer and PhysFormer++, to adaptively aggregate both local and global spatio-temporal features for rPPG representation enhancement. | vi
dc.language.iso | vi | vi
dc.publisher | Springer | vi
dc.subject | rPPG | vi
dc.subject | PhysFormer and PhysFormer++ | vi
dc.title | PhysFormer++: Facial Video-Based Physiological Measurement with SlowFast Temporal Difference Transformer | vi
dc.type | Book | vi
Appears in Collections: OER - Công nghệ thông tin (Information Technology)

Files in This Item:
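
Note: the abstract above mentions temporal difference transformer architectures for rPPG. As a rough illustration only (not code from this record, nor the authors' published implementation), the PyTorch sketch below shows a minimal temporal-difference 3D convolution of the kind such architectures build on; the class name `TemporalDifferenceConv3d` and the `theta` mixing weight are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a 3D convolution whose output
# mixes a vanilla response with a difference term, emphasising subtle
# frame-to-frame changes in a facial video clip.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalDifferenceConv3d(nn.Module):
    """3D conv blending a standard response with a local-mean-removed term (sketch)."""

    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta  # weight of the difference component (assumed value)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        out_normal = self.conv(x)
        # Collapse each kernel to its summed weight (a 1x1x1 kernel) and convolve;
        # subtracting this term approximately removes the local mean response,
        # highlighting the subtle colour variations driven by the pulse.
        kernel_sum = self.conv.weight.sum(dim=(2, 3, 4), keepdim=True)
        out_mean = F.conv3d(x, kernel_sum)
        return out_normal - self.theta * out_mean


if __name__ == "__main__":
    clip = torch.randn(1, 3, 16, 64, 64)    # a 16-frame RGB face clip
    feats = TemporalDifferenceConv3d(3, 8)(clip)
    print(feats.shape)                       # torch.Size([1, 8, 16, 64, 64])
```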