{"id":116811,"date":"2019-06-04T20:54:41","date_gmt":"2019-06-04T11:54:41","guid":{"rendered":"http:\/\/175.125.95.178\/ai-in-communication\/16811\/"},"modified":"2026-04-13T06:20:41","modified_gmt":"2026-04-12T21:20:41","slug":"16811","status":"publish","type":"ai-in-communication","link":"http:\/\/ee.presscat.kr\/en\/ai-in-communication\/16811\/","title":{"rendered":"Ph.D. student Seung-Yul Han\u2019s paper (advised by Young-Chul Sung) was accepted to ICML 2019"},"content":{"rendered":"<p><strong>Title:&nbsp;<\/strong>Dimension-Wise Importance Sampling Weight Clipping for Sample-Efficient Reinforcement Learning<\/p>\n<p><strong>Authors:<\/strong>&nbsp;Seung-Yul Han &amp;&nbsp;Young-Chul Sung<\/p>\n<p>In importance sampling (IS)-based reinforcement learning algorithms such as Proximal Policy Optimization (PPO), IS weights are typically clipped to avoid large variance in learning. However, policy updates from clipped statistics induce large bias in tasks with high action dimensions, and this bias from clipping makes it difficult to reuse old samples with large IS weights. In this work, we propose the Dimension-wise Importance Sampling Weight Clipping (DISC) algorithm, built on PPO, a representative on-policy algorithm. DISC clips the IS weight of each action dimension separately to avoid large bias and adaptively controls the IS weight to bound the policy update from the current policy. This technique enables efficient learning in tasks with high action dimensions and allows old samples to be reused as in off-policy learning, significantly increasing sample efficiency. 
Numerical results show that the proposed DISC algorithm outperforms other state-of-the-art RL algorithms on various OpenAI Gym tasks.<\/p>\n<p><strong>High-Dimensional Continuous Action Robot Learning:<\/strong><\/p>\n<p><img fetchpriority=\"high\" decoding=\"async\" alt=\"\" class=\"media-element file-default\" data-delta=\"1\" data-fid=\"6606\" data-media-element=\"1\" height=\"188\" width=\"700\" src=\"http:\/\/ee.presscat.kr\/sites\/default\/files\/robot_learning.png\" title=\"\"><\/p>\n<p><strong>Results: <\/strong>Comparison with state-of-the-art algorithms on MuJoCo robot simulation<\/p>\n<p><img decoding=\"async\" alt=\"\" class=\"media-element file-default\" data-delta=\"2\" data-fid=\"6607\" data-media-element=\"1\" height=\"118\" width=\"700\" src=\"http:\/\/ee.presscat.kr\/sites\/default\/files\/result_1.png\" title=\"\"><\/p>\n<p><img decoding=\"async\" alt=\"\" class=\"media-element file-default\" data-delta=\"3\" data-fid=\"6608\" data-media-element=\"1\" height=\"181\" width=\"700\" src=\"http:\/\/ee.presscat.kr\/sites\/default\/files\/result_2.png\" title=\"\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>762<\/p>\n","protected":false},"featured_media":126748,"template":"","class_list":["post-116811","ai-in-communication","type-ai-in-communication","status-publish","has-post-thumbnail","hentry"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-communication\/116811","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/ai-in-communication"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/ai-in-communication"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media\/126748"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=116811"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}