{"id":118458,"date":"2021-10-11T21:37:51","date_gmt":"2021-10-11T12:37:51","guid":{"rendered":"http:\/\/175.125.95.178\/ai-in-signal\/18458\/"},"modified":"2026-04-07T21:11:42","modified_gmt":"2026-04-07T12:11:42","slug":"18458","status":"publish","type":"ai-in-signal","link":"http:\/\/ee.presscat.kr\/en\/ai-in-signal\/18458\/","title":{"rendered":"Cross-Active Connection for Image-Text Multimodal Feature Fusion (\uae40\ub300\uc2dd \uad50\uc218\ub2d8)"},"content":{"rendered":"<p style=\"text-align:justify;margin-bottom:11px\"><span style=\"font-size:10pt\"><span style=\"line-height:107%\"><span>Burst image super-resolution is an ill-posed problem that aims to restore a high-resolution (HR) image from a sequence of low-resolution (LR) burst images. To restore a photo-realistic HR image using their abundant information, it is essential to align each burst of frames containing random hand-held motion. Some kernel prediction networks (KPNs) that operate without external motion compensation, such as optical flow estimation, have been applied to burst image processing as implicit image alignment modules. However, the existing methods do not consider the interdependencies among the kernels of different sizes that have a significant effect on each pixel. In this paper, we propose a novel weighted multi-kernel prediction network (WMKPN) that can learn the discriminative features on each pixel for burst image super-resolution. Our experimental results demonstrate that WMKPN improves the visual quality of super-resolved images.
To the best of our knowledge, it outperforms state-of-the-art kernel prediction methods and multiple-frame super-resolution (MFSR) methods on both the Zurich RAW to RGB and BurstSR datasets.<\/span><\/span><\/span><\/p>\n<p style=\"text-align:justify;margin-bottom:11px\">&nbsp;<\/p>\n<p style=\"text-align:justify;margin-bottom:11px\"><span style=\"font-size:10pt\"><span style=\"line-height:107%\"><span><\/p>\n<div class=\"\"><img decoding=\"async\" class=\"\" src=\"\/wp-content\/uploads\/drupal\/\uae40\ub300\uc2dd\uad50\uc218\ub2d83.png\" alt=\"\" title=\"\"><\/div>\n<p><\/span><\/span><\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>633<\/p>\n","protected":false},"featured_media":0,"template":"","class_list":["post-118458","ai-in-signal","type-ai-in-signal","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-signal\/118458","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-signal"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/ai-in-signal"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=118458"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}