{"id":202067,"date":"2025-08-01T17:41:30","date_gmt":"2025-08-01T08:41:30","guid":{"rendered":"http:\/\/ee.presscat.kr\/?post_type=research-achieve&#038;p=202067"},"modified":"2026-04-13T02:28:17","modified_gmt":"2026-04-12T17:28:17","slug":"ee-prof-changdong-yoos-research-team-develops-next-generation-reinforcement-learning-frameworks-for-physical-ai-erl-vlm-and-plare","status":"publish","type":"research-achieve","link":"http:\/\/ee.presscat.kr\/en\/research-achieve\/ee-prof-changdong-yoos-research-team-develops-next-generation-reinforcement-learning-frameworks-for-physical-ai-erl-vlm-and-plare\/","title":{"rendered":"EE Prof. Changdong Yoo\u2019s research team develops next-generation reinforcement learning frameworks for Physical AI: \u2018ERL-VLM\u2019 and \u2018PLARE\u2019"},"content":{"rendered":"<figure id=\"attachment_202235\" aria-describedby=\"caption-attachment-202235\" style=\"width: 900px\" class=\"wp-caption aligncenter\"><img fetchpriority=\"high\" decoding=\"async\" class=\"wp-image-202235\" src=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2025\/08\/\uc720\ucc3d\ub3d9-\uad50\uc218\ub2d8-\ud300-1.jpg\" alt=\"\" width=\"900\" height=\"292\" title=\"\"><figcaption id=\"caption-attachment-202235\" class=\"wp-caption-text\">&lt; (From left) PhD candidate Luu Minh Tung, MS student Younghwan Lee, MS student Donghoon Lee, and Professor Chang D. Yoo &gt;<\/figcaption><\/figure>\n<p><span style=\"font-size: 14pt\">With recent advancements in artificial intelligence&#8217;s ability to understand both language and visual information, there is growing interest in Physical AI<em>,\u00a0<\/em>AI systems that can comprehend high-level human instructions and perform physical tasks such as object manipulation or navigation in the real world. 
Physical AI integrates large language models (LLMs), vision-language models (VLMs), reinforcement learning (RL), and robot control technologies, and is expected to become a cornerstone of next-generation intelligent robotics.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt\">To advance research in Physical AI, an EE research team led by Professor Chang D. Yoo (U-AIM: Artificial Intelligence &amp; Machine Learning Lab) has developed two novel reinforcement learning frameworks leveraging large vision-language models. The first, introduced in ICML 2025, is titled ERL-VLM (Enhancing Rating-based Learning to Effectively Leverage Feedback from Vision-Language Models). In this framework, a VLM provides absolute rating-based feedback on robot behavior, which is used to train a reward function. That reward is then used to learn a robot control AI model. This method removes the need for manually crafting complex reward functions and enables the efficient collection of large-scale feedback, significantly reducing the time and cost required for training.<\/span><\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_202060\" aria-describedby=\"caption-attachment-202060\" style=\"width: 900px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" class=\"wp-image-202060\" src=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2025\/08\/Inline-image-2025-07-31-14.39.29.062.jpg\" alt=\"\" width=\"900\" height=\"419\" title=\"\"><figcaption id=\"caption-attachment-202060\" class=\"wp-caption-text\"><span style=\"font-size: 12pt\">&lt;Figure 1. ERL-VLM framework&gt;<\/span><\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt\">The second, published in IROS 2025, is titled PLARE (Preference-based Learning from Vision-Language Model without Reward Estimation). Unlike previous approaches, PLARE skips reward modeling entirely and instead uses pairwise preference feedback from a VLM to directly train the robot control AI model. 
This makes the training process simpler and more computationally efficient, without compromising performance.<\/span><\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_202062\" aria-describedby=\"caption-attachment-202062\" style=\"width: 900px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" class=\"wp-image-202062\" src=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2025\/08\/Inline-image-2025-07-31-14.41.28.258.jpg\" alt=\"\" width=\"900\" height=\"418\" title=\"\"><figcaption id=\"caption-attachment-202062\" class=\"wp-caption-text\"><span style=\"font-size: 12pt\">&lt;Figure 2. PLARE framework&gt;<\/span><\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt\">Both frameworks demonstrated superior performance not only in simulation environments but also in real-world experiments using physical robots, achieving higher success rates and more stable behavior than existing methods\u2014thereby verifying their practical applicability.<\/span><\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_202064\" aria-describedby=\"caption-attachment-202064\" style=\"width: 900px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-202064\" src=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2025\/08\/Inline-image-2025-07-31-15.34.45.846.png\" alt=\"\" width=\"900\" height=\"189\" title=\"\"><figcaption id=\"caption-attachment-202064\" class=\"wp-caption-text\"><span style=\"font-size: 12pt\">&lt;Figure 4. 
(From left) PLARE experimental results (Success Rate) and example of real-world robot experiment setup&gt;<\/span><\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt\">This research provides a more efficient and practical approach to enabling robots to understand and act upon human language instructions by leveraging large vision-language models\u2014bringing us a step closer to the realization of Physical AI.\u00a0Moving forward, Professor Chang D. Yoo\u2019s team plans to continue advancing research in robot control, vision-language-based interaction, and scalable feedback learning to further develop key technologies in Physical AI.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>329<\/p>\n","protected":false},"featured_media":202066,"template":"","research_category":[347],"class_list":["post-202067","research-achieve","type-research-achieve","status-publish","has-post-thumbnail","hentry","research_category-ai-machine-learning-en"],"acf":[],"_links":{"self":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/research-achieve\/202067","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/research-achieve"}],"about":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/types\/research-achieve"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media\/202066"}],"wp:attachment":[{"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/media?parent=202067"}],"wp:term":[{"taxonomy":"research_category","embeddable":true,"href":"http:\/\/ee.presscat.kr\/en\/wp-json\/wp\/v2\/research_category?post=202067"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}