Highlights


Sihyun Joo, a graduate of our School’s Artificial Intelligence & Machine Learning Lab (U-AIM) under the supervision of Professor Chang D. Yoo, has been promoted to the youngest Executive Director at Hyundai Motor Group.
Mr. Joo obtained his master’s degree in artificial intelligence and machine learning from the U-AIM Lab in February 2011. Following graduation, he joined Hyundai Motor Group, where he has served as the Robotics Intelligence Software Team Leader (Principal Researcher). Throughout his career, he has demonstrated outstanding research capabilities and leadership, significantly contributing to the advancement of robotics and software technologies within the group. This recent promotion recognizes the deep trust and respect he has earned within the organization.
His current research focuses on AI-driven software technologies for robotic intelligence, particularly vision and voice-based AI models. He has implemented and validated various robotic systems and services such as indoor and outdoor delivery, patrol, and factory maintenance, while also working towards enhancing the mass production readiness of these technologies. Earlier in his career, he successfully developed deep learning and machine learning-based gesture and handwriting recognition technologies, which were effectively integrated into production vehicles.
Mr. Joo is expected to play a crucial role in strengthening Hyundai Motor Group’s competitiveness in robotics and AI technologies.


Miniaturization and weight reduction of medical wearable devices for continuous health monitoring such as heart rate, blood oxygen saturation, and sweat component analysis remain major challenges. In particular, optical sensors consume a significant amount of power for LED operation and wireless transmission, requiring heavy and bulky batteries. To overcome these limitations, KAIST EE researchers have developed a next-generation wearable platform that enables 24-hour continuous measurement by using ambient light as an energy source and optimizing power management according to the power environment.

The first core technology, the Photometric Method, is a technique that adaptively adjusts LED brightness depending on the intensity of the ambient light source. By combining ambient natural light with LED light to maintain a constant total illumination level, it automatically dims the LED when natural light is strong and brightens it when natural light is weak.
Whereas conventional sensors had to keep the LED on at a fixed brightness regardless of the environment, this technology optimizes LED power in real time according to the surrounding environment. Experimental results showed that it reduced power consumption by as much as 86.22% under sufficient lighting conditions.
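The feedback idea behind the photometric method can be illustrated with a small control sketch. The target illuminance, the LED's maximum output, and the example lux values below are all hypothetical, chosen only to show how the LED tops up ambient light:

```python
# Illustrative sketch of the photometric method: the LED duty cycle is
# adjusted so that ambient light plus LED light stays at a constant target
# illuminance. All names and values here are assumptions for illustration.

TARGET_LUX = 500.0      # desired total illuminance at the sensor (assumed)
LED_MAX_LUX = 500.0     # illuminance the LED provides at 100% duty (assumed)

def led_duty(ambient_lux: float) -> float:
    """Return the LED duty cycle (0..1) needed to top up ambient light."""
    deficit = max(TARGET_LUX - ambient_lux, 0.0)
    return min(deficit / LED_MAX_LUX, 1.0)

def power_saving(ambient_lux: float) -> float:
    """Fraction of LED power saved relative to always-on full brightness."""
    return 1.0 - led_duty(ambient_lux)

# Bright room: the LED dims almost completely; darkness: full power.
for lux in (450.0, 250.0, 0.0):
    print(f"ambient={lux:5.0f} lx -> duty={led_duty(lux):.2f}, "
          f"saving={power_saving(lux):.0%}")
```

Under this toy model, strong ambient light drives the duty cycle (and thus LED power) toward zero, mirroring the large savings reported under sufficient lighting.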
The second is the Photovoltaic Method using high-efficiency multijunction solar cells. This goes beyond simple solar power generation to convert light in both indoor and outdoor environments into electricity. In particular, the adaptive power management system automatically switches among 11 different power configurations based on ambient conditions and battery status to achieve optimal energy efficiency.
The third innovative technology is the Photoluminescent Method. By mixing strontium aluminate microparticles* into the sensor’s silicone encapsulation structure, light from the surroundings is absorbed and stored during the day and slowly released in the dark. As a result, after being exposed to 500 W/m² of sunlight for 10 minutes, continuous measurement is possible for 2.5 minutes even in complete darkness.
* Strontium aluminate microparticles: A photoluminescent material used in glow-in-the-dark paint or safety signs, which absorbs light and emits it in the dark for an extended time.
These three technologies work complementarily—during bright conditions, the first and second methods are active, and in dark conditions, the third method provides additional support—enabling 24-hour continuous operation.
The research team applied this platform to various medical sensors to verify its practicality. The photoplethysmography sensor monitors heart rate and blood oxygen saturation in real time, allowing early detection of cardiovascular diseases. The blue light dosimeter accurately measures blue light, which causes skin aging and damage, and provides personalized skin protection guidance. The sweat analysis sensor uses microfluidic technology to simultaneously analyze salt, glucose, and pH in sweat, enabling real-time detection of dehydration and electrolyte imbalances.
Additionally, introducing in-sensor data computing significantly reduced wireless communication power consumption. Previously, all raw data had to be transmitted externally, but now only the necessary results are calculated and transmitted within the sensor, reducing data transmission requirements from 400B/s to 4B/s—a 100-fold decrease.
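The gain from in-sensor computing can be sketched as follows: instead of streaming every raw photoplethysmography (PPG) sample, the sensor derives the heart rate locally and transmits only that one value per window. The sampling rate, window length, and peak-counting estimator below are illustrative assumptions, not the paper's actual pipeline:

```python
# Hedged sketch of in-sensor data computing: rather than transmitting every
# raw PPG sample, the sensor estimates heart rate locally and sends only the
# result. Sampling rate and the simple estimator are assumptions.

import math
from typing import List

def heart_rate_bpm(samples: List[float], fs_hz: float) -> float:
    """Estimate heart rate by counting upward crossings of the signal mean."""
    mean = sum(samples) / len(samples)
    beats = sum(
        1 for prev, cur in zip(samples, samples[1:])
        if prev < mean <= cur          # rising edge through the mean
    )
    duration_s = len(samples) / fs_hz
    return beats * 60.0 / duration_s

fs = 100.0                                   # 100 Hz sampling (assumed)
t = [i / fs for i in range(int(fs * 10))]    # 10-second window
ppg = [math.sin(2 * math.pi * 1.2 * x) for x in t]   # 1.2 Hz test signal

print(f"estimated heart rate: {heart_rate_bpm(ppg, fs):.0f} bpm")
```

Transmitting one small result per window instead of the full sample stream is what shrinks the wireless payload by orders of magnitude, as in the 400 B/s to 4 B/s reduction described above.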
To validate performance, the researchers tested the device on healthy adult subjects in four different environments: bright indoor lighting, dim lighting, infrared lighting, and complete darkness. The results showed measurement accuracy equivalent to that of commercial medical devices in all conditions. A mouse model experiment confirmed accurate blood oxygen saturation measurement in hypoxic conditions.

Professor Kyeongha Kwon of KAIST, who led the research, stated, “This technology will enable 24-hour continuous health monitoring, shifting the medical paradigm from treatment-centered to prevention-centered,” adding that “cost savings through early diagnosis as well as strengthened technological competitiveness in the next-generation wearable healthcare market are anticipated.”
This research was published on July 1 in the international journal Nature Communications, with Do Yun Park, a doctoral student in the AI Semiconductor Graduate Program, as co–first author.
※ Paper title: Adaptive Electronics for Photovoltaic, Photoluminescent and Photometric Methods in Power Harvesting for Wireless and Wearable Sensors
※ DOI: https://doi.org/10.1038/s41467-025-60911-1
※ URL: https://www.nature.com/articles/s41467-025-60911-1
This research was supported by the National Research Foundation of Korea (Outstanding Young Researcher Program and Regional Innovation Leading Research Center Project), the Ministry of Science and ICT and Institute of Information & Communications Technology Planning & Evaluation (IITP) AI Semiconductor Graduate Program, and the BK FOUR Program (Connected AI Education & Research Program for Industry and Society Innovation, KAIST EE).


Our department’s Professor Young Min Song, in collaboration with Professor Hyeon‑Ho Jeong’s research team at GIST School of EECS, has developed a replication‑impossible security authentication technology based on nature‑inspired nanophotonic structures.
This technology can be easily embedded into physical products such as ID cards or QR codes and, being visually indistinguishable from existing items, provides strong tamper‑proof protection without compromising design. It holds broad potential for applications requiring genuine‑product authentication, including premium consumer goods, pharmaceuticals, and electronics.
Until now, anti‑counterfeiting measures such as QR codes and barcodes have been limited by their ease of replication and the difficulty of assigning truly unique identifiers to each item. A recently spotlighted solution is the physically unclonable function (PUF)*, which leverages the natural randomness arising during manufacturing to grant each device a unique physical signature, thereby enhancing security and authentication reliability.
However, existing PUF technologies, while achieving randomness and uniqueness, have struggled with color consistency control and are easily identified (and thus attacked) from the outside.
* Physically Unclonable Function (PUF): A technique that uses physical variations formed during the manufacturing process to generate a unique authentication key. Because these variations are inherently random and unclonable, even if the authentication data is stolen, constructing the exact hardware for authentication is effectively impossible.
In response, the research team turned its attention to the unique phenomenon of structural color* observed in natural organisms. For example, the wings of butterflies, feathers of birds, and leaves of seaweed all contain nanoscale microstructures arranged in a form of quasiorder*—a pattern that is neither completely ordered nor entirely random. These structures appear to exhibit uniform coloration to the naked eye, but internally contain subtle randomness that enables survival functions such as camouflage, communication, and predator evasion.
* Quasi‑order: A structural arrangement that is neither fully ordered nor fully disordered. In nature, nano‑scale elements are arranged in a pattern that blends order with randomness—found, for example, in butterfly wings, seaweed leaves, and bird feathers—producing uniform color at a macroscopic scale while embedding unique optical features.
* Structural Color: Color produced not by pigments but by nano‑meter‑scale structures that interact with light, commonly seen in living organisms. Classic examples include the iridescent wings of butterflies and the feathers of peacocks.
The researchers drew inspiration from these natural phenomena. They deposited a thin dielectric layer of HfO₂ onto a metallic mirror and then used electrostatic self‑assembly to arrange gold nanoparticles (tens of nanometers in size) into a quasi‑ordered plasmonic metasurface*. Visually, this nanostructure exhibits a uniform reflection color; under a high‑magnification optical microscope, however, each region reveals a distinct random scattering pattern—an “optical fingerprint*”—that is impossible to replicate.
* Plasmonic Metasurface: An ultrathin optical structure comprising precisely arranged metallic nano‑elements that exploit surface plasmon resonance to locally enhance electromagnetic fields, enabling far more compact and precise light–matter interaction than conventional optics.
* Optical Fingerprint: A unique pattern of reflection, scattering, and interference produced when light interacts with a micro‑ or nano‑scale structure. Because these patterns arise from random structural variations that cannot be exactly duplicated, they serve as a practically unclonable security feature.
The team confirmed that leveraging these nano‑scale stochastic patterns enhances PUF performance compared to conventional approaches.
In a hypothetical hacking scenario where an attacker attempts to recreate the device, the time required to decrypt the optical fingerprint would exceed the age of the Earth, rendering replication virtually impossible. Through demonstration experiments on pharmaceuticals, semiconductors, and QR codes, the researchers validated the technology’s practical industrial applicability.
Analysis of over 500 generated PUF keys showed an average bit‑value distribution of 0.501, which is remarkably close to the ideal balance of 0.5, and an average inter‑key Hamming distance of 0.494, demonstrating high uniqueness and reliability. Additionally, the scattering patterns remained stable under various environmental stresses, including high temperature, high humidity, and friction, confirming excellent durability.
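The two metrics reported above can be computed straightforwardly: bit uniformity is the average bit value of a key (ideally 0.5), and uniqueness is measured as the mean normalized Hamming distance over all key pairs (also ideally 0.5). The random keys in this sketch merely stand in for keys read out from real devices:

```python
# Sketch of how the reported PUF metrics are computed: bit uniformity and
# mean pairwise Hamming distance. The randomly generated keys below are
# placeholders for keys extracted from real optical fingerprints.

import random
from itertools import combinations

def uniformity(key: list) -> float:
    """Average bit value of a single key (ideal: 0.5)."""
    return sum(key) / len(key)

def mean_hamming(keys: list) -> float:
    """Mean normalized Hamming distance over all key pairs (ideal: 0.5)."""
    dists = [
        sum(a != b for a, b in zip(k1, k2)) / len(k1)
        for k1, k2 in combinations(keys, 2)
    ]
    return sum(dists) / len(dists)

random.seed(0)
keys = [[random.randint(0, 1) for _ in range(256)] for _ in range(50)]

print(f"mean uniformity:       {sum(map(uniformity, keys)) / len(keys):.3f}")
print(f"mean Hamming distance: {mean_hamming(keys):.3f}")
```

Values close to 0.5 for both metrics, as reported for the fabricated devices (0.501 and 0.494), indicate unbiased bits and mutually distinguishable keys.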
Professor Young Min Song emphasized, “Whereas conventional security labels can be deformed by even minor damage, our technology secures both structural stability and unclonability. In particular, by separating visible color information from the invisible unique‑key information, it offers a new paradigm in security authentication.”
Professor Hyeon‑Ho Jeong added, “By reproducing structures in which order and disorder coexist in nature through nanotechnology, we have created optical information that appears identical externally yet is fundamentally unclonable. This technology can serve as a powerful anti‑counterfeiting measure across diverse fields, from premium consumer goods to pharmaceutical authentication and even national security.”
This work, guided by Professor Young Min Song (KAIST School of Electrical Engineering) and Professor Hyeon‑Ho Jeong (GIST School of EECS), and carried out by Gyurin Kim, Doeun Kim, JuHyeong Lee, Juhwan Kim, and Se‑Yeon Heo, was supported by the Ministry of Science and ICT and the National Research Foundation’s Early‑Career Research Program, the Regional Innovation Mega Project in R&D Special Zones, and the GIST‑MIT AI International Collaboration Project.
The results were published online on July 8, 2025, in the international journal Nature Communications.
* Paper title: Quasi‑ordered plasmonic metasurfaces with unclonable stochastic scattering for secure authentication


With recent advancements in artificial intelligence’s ability to understand both language and visual information, there is growing interest in Physical AI: AI systems that can comprehend high-level human instructions and perform physical tasks such as object manipulation or navigation in the real world. Physical AI integrates large language models (LLMs), vision-language models (VLMs), reinforcement learning (RL), and robot control technologies, and is expected to become a cornerstone of next-generation intelligent robotics.
To advance research in Physical AI, an EE research team led by Professor Chang D. Yoo (U-AIM: Artificial Intelligence & Machine Learning Lab) has developed two novel reinforcement learning frameworks leveraging large vision-language models. The first, introduced in ICML 2025, is titled ERL-VLM (Enhancing Rating-based Learning to Effectively Leverage Feedback from Vision-Language Models). In this framework, a VLM provides absolute rating-based feedback on robot behavior, which is used to train a reward function. That reward is then used to learn a robot control AI model. This method removes the need for manually crafting complex reward functions and enables the efficient collection of large-scale feedback, significantly reducing the time and cost required for training.
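The rating-based idea can be illustrated with a minimal sketch: a (simulated) VLM assigns an absolute rating to each observed state, and a reward model is regressed onto those ratings, after which it could score states during RL. The linear model, the stand-in VLM, and all numbers below are assumptions for illustration, not the ERL-VLM implementation:

```python
# Minimal illustration of rating-based reward learning: fit a reward model
# to absolute ratings produced by a (fake) VLM. Everything here is a
# simplified stand-in, not the paper's actual architecture.

import random

random.seed(1)

def fake_vlm_rating(state):
    """Stand-in for a VLM rating: higher when the 'goal feature' is large."""
    return 2.0 * state[0] - 1.0 * state[1]

# Collect (state, rating) pairs, as large-scale VLM feedback would provide.
states = [(random.random(), random.random()) for _ in range(200)]
data = [(s, fake_vlm_rating(s)) for s in states]

# Fit a linear reward model r(s) = w0*s0 + w1*s1 by stochastic gradient
# descent on the squared rating error.
w = [0.0, 0.0]
lr = 0.1
for _ in range(500):
    for state, rating in data:
        pred = w[0] * state[0] + w[1] * state[1]
        err = pred - rating
        w[0] -= lr * err * state[0]
        w[1] -= lr * err * state[1]

print(f"learned reward weights: {w[0]:.2f}, {w[1]:.2f}")
```

The learned weights recover the rating function, and the resulting reward model could then supply the reward signal for a standard RL algorithm, replacing a hand-crafted reward.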

The second, published in IROS 2025, is titled PLARE (Preference-based Learning from Vision-Language Model without Reward Estimation). Unlike previous approaches, PLARE skips reward modeling entirely and instead uses pairwise preference feedback from a VLM to directly train the robot control AI model. This makes the training process simpler and more computationally efficient, without compromising performance.
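The reward-free idea can be sketched with a toy preference loop: a (simulated) VLM states which of two behaviors it prefers, and the policy's scores are updated directly with a Bradley-Terry-style preference objective, with no intermediate reward model. The discrete behaviors, the oracle, and all constants are illustrative assumptions, not the PLARE implementation:

```python
# Toy sketch of preference-based learning without reward estimation: push
# the policy's score for the preferred behavior above the other one's.
# The preference oracle and behavior set are hypothetical.

import math
import random

random.seed(2)

theta = [0.0, 0.0, 0.0]   # policy scores for 3 discrete behaviors

def fake_vlm_prefers(a: int, b: int) -> bool:
    """Stand-in preference oracle: always prefers higher behavior indices."""
    return a > b

lr = 0.5
for _ in range(300):
    a, b = random.sample(range(3), 2)
    if not fake_vlm_prefers(a, b):
        a, b = b, a                     # ensure a is the preferred behavior
    # P(a preferred) under the policy (Bradley-Terry); push it toward 1.
    p = 1.0 / (1.0 + math.exp(theta[b] - theta[a]))
    grad = 1.0 - p                      # d(log p)/d(theta[a])
    theta[a] += lr * grad
    theta[b] -= lr * grad

best = max(range(3), key=lambda i: theta[i])
print(f"preferred behavior index: {best}")
```

Because the preference gradient updates the policy parameters directly, the extra reward-model fitting stage of the previous sketch disappears, which is the source of the simplicity and efficiency claimed above.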

Both frameworks demonstrated superior performance not only in simulation environments but also in real-world experiments using physical robots, achieving higher success rates and more stable behavior than existing methods—thereby verifying their practical applicability.

This research provides a more efficient and practical approach to enabling robots to understand and act upon human language instructions by leveraging large vision-language models—bringing us a step closer to the realization of Physical AI. Moving forward, Professor Chang D. Yoo’s team plans to continue advancing research in robot control, vision-language-based interaction, and scalable feedback learning to further develop key technologies in Physical AI.




Smartphones must stay connected to mobile networks at all times to function properly. The core component that enables this constant connectivity is the communication modem (Baseband) inside the device. KAIST researchers, using their self-developed testing framework called ‘LLFuzz (Lower Layer Fuzz),’ have discovered security vulnerabilities in the lower layers of smartphone communication modems and demonstrated the necessity of standardizing ‘mobile communication modem security testing.’
*Standardization: In mobile communication, conformance testing, which verifies normal operation in normal situations, has been standardized. However, standards for handling abnormal packets have not yet been established, hence the need for standardized security testing.
The research team utilized their self-developed ‘LLFuzz’ analysis framework to analyze the lower layer state transitions and error handling logic of the modem to detect security vulnerabilities. LLFuzz was able to precisely extract vulnerabilities caused by implementation errors by comparing and analyzing 3GPP* standard-based state machines with actual device responses.
*3GPP: An international collaborative organization that creates global mobile communication standards.
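The differential idea behind this kind of testing can be sketched conceptually: the expected behavior derived from the standard's state machine is compared against the device's actual response to each (possibly malformed) input, and mismatches are flagged as candidate implementation bugs. The states, inputs, and "device" below are hypothetical stand-ins, not real modem behavior or the actual LLFuzz code:

```python
# Conceptual sketch of differential state-machine testing: compare a
# spec-derived expected transition table against a device's observed
# responses. All states, messages, and the fake device are hypothetical.

SPEC = {
    # (state, input) -> expected next state per a (simplified) standard
    ("IDLE", "setup_request"): "CONNECTED",
    ("CONNECTED", "release"): "IDLE",
    ("CONNECTED", "malformed_mac_pdu"): "CONNECTED",  # spec: discard & stay
}

def buggy_device(state: str, msg: str) -> str:
    """Hypothetical modem that crashes on a malformed PDU instead of
    discarding it, mimicking the class of bug described in the article."""
    if msg == "malformed_mac_pdu":
        return "CRASHED"
    return SPEC.get((state, msg), state)

def fuzz(device) -> list:
    """Return (state, input, expected, actual) tuples where the device
    deviates from the specification."""
    findings = []
    for (state, msg), expected in SPEC.items():
        actual = device(state, msg)
        if actual != expected:
            findings.append((state, msg, expected, actual))
    return findings

for state, msg, expected, actual in fuzz(buggy_device):
    print(f"deviation at ({state}, {msg}): expected {expected}, got {actual}")
```

A real over-the-air framework must additionally generate malformed packets, drive the device through its states via radio, and infer the device's state from its responses, but the core check (spec transition vs. observed transition) is the same.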
The research team conducted experiments on 15 commercial smartphones from global manufacturers, including Apple, Samsung Electronics, Google, and Xiaomi, and discovered a total of 11 vulnerabilities. Among these, seven were assigned official CVE (Common Vulnerabilities and Exposures) numbers, and manufacturers applied security patches for these vulnerabilities. However, the remaining four have not yet been publicly disclosed.
While previous security research primarily focused on higher layers of mobile communication, such as NAS (Non-Access Stratum) and RRC (Radio Resource Control), the research team concentrated on analyzing the error handling logic of mobile communication’s lower layers, which manufacturers have often neglected.

These vulnerabilities occurred in the lower layers of the communication modem (RLC, MAC, PDCP, PHY*), and because these layers structurally lack encryption and authentication, operational errors could be induced simply by injecting external signals.
*RLC, MAC, PDCP, PHY: Lower layers of LTE/5G communication, responsible for wireless resource allocation, error control, encryption, and physical layer transmission.
The research team released a demo video showing that when a manipulated wireless packet (malformed MAC packet), generated on an experimental laptop, was injected into commercial smartphones via a Software-Defined Radio (SDR) device, the smartphone’s communication modem (Baseband) immediately crashed.
※ Experiment video: https://drive.google.com/file/d/1NOwZdu_Hf4ScG7LkwgEkHLa_nSV4FPb_/view?usp=drive_link
The video shows data being normally transmitted at 23MB per second on the fast.com page, but immediately after the manipulated packet is injected, the transmission stops and the mobile communication signal disappears. This intuitively demonstrates that a single wireless packet can cripple a commercial device’s communication modem.

The vulnerabilities were found in the ‘modem chip,’ a core smartphone component responsible for calls, texts, and data communication.
- Qualcomm: Affects over 90 chipsets, including CVE-2025-21477, CVE-2024-23385.
- MediaTek: Affects over 80 chipsets, including CVE-2024-20076, CVE-2024-20077, CVE-2025-20659.
- Samsung: CVE-2025-26780 (targets the latest chipsets like Exynos 2400, 5400).
- Apple: CVE-2024-27870 (shares the same vulnerability as Qualcomm CVE).
The problematic modem chips (communication components) are used not only in premium smartphones but also in low-end smartphones, tablets, smartwatches, and IoT devices; their broad distribution creates widespread potential for user harm.
Furthermore, the research team experimentally tested the 5G lower layers and found two vulnerabilities in just two weeks. Considering that 5G vulnerability checks have not generally been conducted, many more vulnerabilities may exist in the lower layers of baseband chips.
Professor Yongdae Kim explained, “The lower layers of smartphone communication modems are not subject to encryption or authentication, creating a structural risk where devices can accept arbitrary signals from external sources.” He added, “This research demonstrates the necessity of standardizing mobile communication modem security testing for smartphones and other IoT devices.”
The research team is continuing additional analysis of the 5G lower layers using LLFuzz and is also developing tools for testing LTE and 5G upper layers. They are also pursuing collaborations for future tool disclosure. The team’s stance is that “as technological complexity increases, systemic security inspection systems must evolve in parallel.”
First author Tuan Dinh Hoang, a Ph.D. student in the School of Electrical Engineering, will present the research results in August at USENIX Security 2025, one of the world’s most prestigious conferences in cybersecurity.
※ Paper Title: LLFuzz: An Over-the-Air Dynamic Testing Framework for Cellular Baseband Lower Layers (Tuan Dinh Hoang and Taekkyung Oh, KAIST; CheolJun Park, Kyung Hee Univ.; Insu Yun and Yongdae Kim, KAIST)
※ Lab homepage paper: https://syssec.kaist.ac.kr/pub/2025/LLFuzz_Tuan.pdf
※ Open-source repository: https://github.com/SysSec-KAIST/LLFuzz (To be released)
This research was conducted with support from the Institute of Information & Communications Technology Planning & Evaluation (IITP) funded by the Ministry of Science and ICT.


Eungchang Mason Lee, a Ph.D. candidate in Professor Hyun Myung’s lab in our School, received the Best Paper Award presented by KFMES (The Korea Federation of Mechanical Engineering Societies; 한국기계기술단체총연합회) at the 2025 Conference of the Institute of Control, Robotics and Systems (ICROS).
Out of a total of 554 papers presented at the conference, 9 papers were selected for the Excellent Paper Award and 8 for the Best Paper Award. Among them, only one paper was honored with the KFMES Best Paper Award.
The award-winning paper, titled “Degeneracy-Robust LiDAR-Inertial Odometry with Adaptive Schmidt-Kalman Filter,” proposes a novel method for accurately and reliably estimating the pose of a robot in extreme environments where LiDAR measurements are sparse or imbalanced.


Professor Chang-Sik Choi will join our department on August 1, 2025. Congratulations!
Professor Choi’s office is located in Room 715, Building N1. He conducts research on next-generation communication systems and their applications, including satellite communication, vehicular communication, cellular communication, and communication for autonomous driving. His interests lie in analyzing network performance using mathematical theories such as stochastic geometry, as well as machine learning and system simulation techniques, and leveraging these analyses to develop or optimize algorithms.
For more details about Professor Chang-Sik Choi’s research, please refer to his homepage.
https://sites.google.com/view/ccsik77/