{"id":118497,"date":"2021-10-31T23:26:52","date_gmt":"2021-10-31T14:26:52","guid":{"rendered":"http:\/\/175.125.95.178\/ai-in-signal\/18497\/"},"modified":"2026-04-09T14:55:59","modified_gmt":"2026-04-09T05:55:59","slug":"18497","status":"publish","type":"ai-in-signal","link":"http:\/\/ee.presscat.kr\/en\/ai-in-signal\/18497\/","title":{"rendered":"Two-Phase Pseudo Label Densification for Self-training based Domain Adaptation  (Prof. In-So Kweon)"},"content":{"rendered":"<p style=\"text-align:justify;margin-bottom:11px\"><span style=\"font-size:10pt\"><span style=\"line-height:107%\"><span>Conference\/Journal, Year: ECCV, 2020<\/span><\/span><\/span><\/p>\n<p align=\"left\" style=\"text-align:left\"><span style=\"font-size:10pt\"><span style=\"line-height:normal\"><span><span><span>Recently, deep self-training approaches emerged as a powerful solution to the unsupervised domain adaptation. The self-training scheme involves iterative processing of target data; it generates target pseudo labels and retrains the network. However, since only the confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice. We see this is critical because the resulting insufficient training-signals lead to a suboptimal, error-prone model. In order to tackle this problem, we propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD. In the first phase, we use sliding window voting to propagate the confident predictions, utilizing intrinsic spatial-correlations in the images. In the second phase, we perform a confidence-based easy-hard classification. For the easy samples, we now employ their full pseudolabels. For the hard ones, we instead adopt adversarial learning to enforce hard-to-easy feature alignment. To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss. 
We show that the proposed TPLD can be easily integrated into existing self-training based approaches and improves performance significantly. Combined with the recently proposed CRST self-training framework, we achieve new state-of-the-art results on two standard UDA benchmarks.<\/span><\/span><\/span><\/span><\/span><\/p>\n<p align=\"left\" style=\"text-align:left\"><span style=\"font-size:10pt\"><span style=\"line-height:normal\"><span><span><span><\/p>\n<div class=\"\"><img decoding=\"async\" class=\"\" src=\"\/wp-content\/uploads\/drupal\/\uad8c\uc778\uc18c\uad50\uc218\ub2d816.png\" alt=\"\" title=\"\"><\/div>\n<p><\/span><\/span><\/span><\/span><\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>624<\/p>\n","protected":false},"featured_media":0,"template":"","class_list":["post-118497","ai-in-signal","type-ai-in-signal","status-publish","hentry"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-signal\/118497","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-signal"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/ai-in-signal"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=118497"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}