{"id":116895,"date":"2019-07-01T10:35:00","date_gmt":"2019-07-01T01:35:00","guid":{"rendered":"http:\/\/175.125.95.178\/research-achieve\/16895\/"},"modified":"2026-04-17T17:30:31","modified_gmt":"2026-04-17T08:30:31","slug":"16895","status":"publish","type":"research-achieve","link":"http:\/\/ee.presscat.kr\/en\/research-achieve\/16895\/","title":{"rendered":"KAIST EE Presented Cutting-edge AI Research Results At CVPR 2019"},"content":{"rendered":"<p><main id=\"main\" role=\"main\"><\/p>\n<blockquote>\n<p><span style=\"font-size:16px\"><strong>CVPR is the premier computer vision conference in the world. The quality and impact of an institution&#8217;s research activity in computer vision and artificial intelligence are often measured by the number of papers accepted to this conference, and KAIST EE has been highly prolific in this regard. At CVPR 2019 alone, KAIST EE researchers published 12 papers, making the department one of the most productive institutions in the world in computer vision and artificial intelligence research. 
These papers are listed below:<\/strong><\/span><\/p>\n<\/blockquote>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence<\/strong><\/a><\/p>\n<p><strong>Dahun Kim, Sanghyun Woo, Joon-Young Lee, In So Kweon<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Deep Video Inpainting<\/strong><\/a><\/p>\n<p><strong>Dahun Kim, Sanghyun Woo, Joon-Young Lee, In So Kweon<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Dense Relational Captioning: Triple-Stream Networks for Relationship-Based Captioning<\/strong><\/a><\/p>\n<p><strong>Dong-Jin Kim, Jinsoo Choi, Tae-Hyun Oh, In So Kweon<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Learning Loss for Active Learning<\/strong><\/a><\/p>\n<p><strong>Donggeun Yoo, In So Kweon<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Variational Prototyping-Encoder: One-Shot Learning with Prototypical Images<\/strong><\/a><\/p>\n<p><strong>Junsik Kim, Tae-Hyun Oh, Seokju Lee, Fei Pan, In So Kweon<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Edge-Labeling Graph Neural Network for Few-shot Learning<\/strong><\/a><\/p>\n<p><strong>Jongmin Kim, Taesup Kim, Sungwoong Kim, Chang D. Yoo<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Progressive Attention Memory Network for Movie Story Question Answering<\/strong><\/a><\/p>\n<p><strong>Junyeong Kim, Minuk Ma, Kyungsu Kim, Sungjin Kim, Chang D. 
Yoo<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Diversify and Match: A Domain Adaptive Representation Learning Paradigm for Object Detection&nbsp;<\/strong><\/a><\/p>\n<p><strong>Taekyung Kim, Minki Jeong, Seunghyeon Kim, Seokeon Choi, Changick Kim<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Learning Not to Learn: Training Deep Neural Networks with Biased Data<\/strong><\/a><\/p>\n<p><strong>Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, Junmo Kim<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>RL-GAN-Net: A Reinforcement Learning Agent Controlled GAN Network for Real-Time Point Cloud Shape Completion&nbsp;<\/strong><\/a><\/p>\n<p><strong>Muhammad Sarmad, Hyunjoo Jenny Lee, Young Min Kim<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Efficient Neural Network Compression&nbsp;<\/strong><\/a><\/p>\n<p><strong>Hyeji Kim, Muhammad Umar Karim Khan, Chong-Min Kyung<\/strong><\/p>\n<hr \/>\n<p><a href=\"http:\/\/cvpr2019.thecvf.com\/program\/main_conference#awards\" rel=\"noopener\"><strong>Variational Information Distillation for Knowledge Transfer&nbsp;<\/strong><\/a><\/p>\n<p><strong>Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. 
Lawrence, Zhenwen Dai<\/strong><\/p>\n<p><\/main><\/p>\n","protected":false},"excerpt":{"rendered":"<p>707<\/p>\n","protected":false},"featured_media":126376,"template":"","research_category":[],"class_list":["post-116895","research-achieve","type-research-achieve","status-publish","has-post-thumbnail","hentry"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/research-achieve\/116895","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/research-achieve"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/research-achieve"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media\/126376"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=116895"}],"wp:term":[{"taxonomy":"research_category","embeddable":true,"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/research_category?post=116895"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}