{"id":118503,"date":"2021-10-31T23:46:15","date_gmt":"2021-10-31T14:46:15","guid":{"rendered":"http:\/\/175.125.95.178\/ai-in-signal\/18503\/"},"modified":"2026-04-07T15:38:07","modified_gmt":"2026-04-07T06:38:07","slug":"18503","status":"publish","type":"ai-in-signal","link":"http:\/\/ee.presscat.kr\/en\/ai-in-signal\/18503\/","title":{"rendered":"High-quality Frame Interpolation via Tridirectional Inference (Prof. In-So Kweon)"},"content":{"rendered":"<p style=\"text-align:justify;margin-bottom:11px\"><span style=\"font-size:10pt\"><span style=\"line-height:107%\"><span>Conference\/Journal, Year: WACV 2021<\/span><\/span><\/span><\/p>\n<p style=\"text-align:justify;margin-bottom:11px\"><span style=\"font-size:10pt\"><span style=\"line-height:107%\"><span>Videos have recently become an omnipresent form of media, attracting much attention from industry as well as academia. In the video enhancement field, video frame interpolation is a long-studied topic that has improved dramatically with the advancement of deep convolutional neural networks (CNNs). However, conventional approaches that utilize two successive frames often exhibit ghosting or tearing artifacts around moving objects. We argue that this phenomenon stems from the lack of reliable information when only two frames are provided. With this motivation, we propose a frame interpolation method that utilizes tridirectional information obtained from three input frames. Information extracted from frame triplets allows our model to learn rich and reliable inter-frame motion representations, including subtle nonlinear movement, and the model can be easily trained on any video frames in a self-supervised manner. 
We demonstrate that our method generalizes well to high-resolution content by evaluating it at FHD resolution, and illustrate our approach\u2019s effectiveness via comparisons to state-of-the-art methods on challenging video content.<\/span><\/span><\/span><\/p>\n<div class=\"\"><img decoding=\"async\" class=\"\" src=\"\/wp-content\/uploads\/drupal\/\uad8c\uc778\uc18c\uad50\uc218\ub2d822.png\" alt=\"\" title=\"\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>602<\/p>\n","protected":false},"featured_media":0,"template":"","class_list":["post-118503","ai-in-signal","type-ai-in-signal","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-signal\/118503","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-signal"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/ai-in-signal"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=118503"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}