{"id":131095,"date":"2022-05-20T14:31:14","date_gmt":"2022-05-20T05:31:14","guid":{"rendered":"http:\/\/192.249.19.202\/?post_type=press-post&#038;p=131095"},"modified":"2026-04-18T01:34:53","modified_gmt":"2026-04-17T16:34:53","slug":"ee-human-multimodal-research-team-prof-ro-yong-man-interviewed-with-ieee-journal-news-letter","status":"publish","type":"press-post","link":"http:\/\/ee.presscat.kr\/en\/presses\/ee-human-multimodal-research-team-prof-ro-yong-man-interviewed-with-ieee-journal-news-letter\/","title":{"rendered":"EE Human Multimodal Research Team (Prof. Ro, Yong Man) interviewed in IEEE journal newsletter."},"content":{"rendered":"<div>An interview with the KAIST EE Human Multimodal Research Team (advisor: Prof. Yong Man Ro) is featured in a prominent IEEE journal newsletter.<\/div>\n<div>(Featured People section of the May issue of the IEEE Consumer Technology Society News.)<\/div>\n<div>Over the past year, the Human Multimodal research team has published the following papers at top-tier AI conferences and in IEEE journals, including NeurIPS, AAAI, CVPR, and ICCV.<\/div>\n<div>Please refer to the attachment for details of the IEEE CTSoc News on Consumer Technology (NCT).<\/div>\n<div><\/div>\n<div><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-131092\" src=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2022\/05\/\ub178\uc6a9\ub9cc\uad50\uc218_\ucc98.png\" alt=\"\" width=\"734\" height=\"643\" title=\"\" srcset=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2022\/05\/\ub178\uc6a9\ub9cc\uad50\uc218_\ucc98.png 734w, http:\/\/ee.presscat.kr\/wp-content\/uploads\/2022\/05\/\ub178\uc6a9\ub9cc\uad50\uc218_\ucc98-300x263.png 300w\" sizes=\"(max-width: 734px) 100vw, 734px\" \/><\/div>\n<div><\/div>\n<div>Interview link: <a href=\"https:\/\/ctsoc.ieee.org\/images\/CTSOC-NCT-2022-05-FP.pdf\" rel=\"noopener\">https:\/\/ctsoc.ieee.org\/images\/CTSOC-NCT-2022-05-FP.pdf<\/a><\/div>\n<div><\/div>\n<div>\n<div>&#8211; Journal 
list<\/div>\n<ul>\n<li>&#8220;Distinguishing Homophenes using Multi-head Visual-audio Memory for Lip Reading.&#8221; Minsu Kim, Jeong Hun Yeo, and Yong Man Ro.\u00a0<em>AAAI<\/em>. 2022.<\/li>\n<li>&#8220;SyncTalkFace: Talking Face Generation with Precise Lip-syncing via Audio-Lip Memory.&#8221; Se Jin Park, Minsu Kim, Joanna Hong, Jeongsoo Choi, and Yong Man Ro.\u00a0<em>AAAI<\/em>. 2022.<\/li>\n<li>&#8220;Lip to Speech Synthesis with Visual Context Attentional GAN.&#8221; Minsu Kim, Joanna Hong, and Yong Man Ro.\u00a0<em>NeurIPS<\/em>. 2021.<\/li>\n<li>&#8220;Multi-modality associative bridging through memory: Speech sound recollected from face video.&#8221; Minsu Kim*, Joanna Hong*, Se Jin Park, and Yong Man Ro.\u00a0<em>ICCV<\/em>. 2021.<\/li>\n<li>&#8220;Video Prediction Recalling Long-Term Motion Context via Memory Alignment Learning.&#8221; S. Lee, H. G. Kim, D. H. Choi, H. I. Kim, and Y. M. Ro.\u00a0<em>CVPR<\/em>. 2021.<\/li>\n<li>&#8220;Speech Reconstruction with Reminiscent Sound Via Visual Voice Memory.&#8221; Joanna Hong, Minsu Kim, Se Jin Park, and Yong Man Ro.\u00a0<em>IEEE Transactions on Audio, Speech, and Language Processing<\/em> 29 (2021).<\/li>\n<li>&#8220;CroMM-VSR: Cross-modal Memory Augmented Visual Speech Recognition.&#8221; Minsu Kim, Joanna Hong, Se Jin Park, and Yong Man Ro.\u00a0<em>IEEE Transactions on 
Multimedia<\/em> (2021).<\/li>\n<\/ul>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>1371<\/p>\n","protected":false},"featured_media":131092,"template":"","class_list":["post-131095","press-post","type-press-post","status-publish","has-post-thumbnail","hentry"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/press-post\/131095","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/press-post"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/press-post"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media\/131092"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=131095"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}