Highlights


Students from Professor Kwon Kyeongha's laboratory in our School – Lee Uikyeong (master's student), Hwang Byeongho (doctoral student), and Shin Jihan and Park Jinho (master's students) – won the Grand Prize at the Semiconductor Live Demonstration Competition held during the Korean Society of Semiconductor & Display Technology Summer Conference.
The winning team developed an impedance measurement device for battery health diagnosis and demonstrated it in real time on-site. The system offers an innovative way to precisely evaluate automotive battery performance and lifespan through high-precision impedance measurement.
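For context, impedance-based battery diagnosis (electrochemical impedance spectroscopy) works by injecting a small sinusoidal current and measuring the resulting voltage; the complex ratio V/I at each test frequency reveals the cell's internal state. The following is a minimal Python sketch of that principle using a toy Randles-style battery model with made-up values; it illustrates the measurement concept only and is not the team's implementation.

```python
import numpy as np

def impedance_at(freq, t, current, voltage):
    """Estimate complex impedance Z(f) = V(f) / I(f) at one test
    frequency via a single-bin DFT (lock-in style) over an integer
    number of periods."""
    ref = np.exp(-2j * np.pi * freq * t)   # complex reference at f
    return np.sum(voltage * ref) / np.sum(current * ref)

# Toy Randles-style cell: series resistance R0 plus an R1 || C1 pair
# (illustrative values, not measured data).
fs, f = 10_000, 1.0                        # sample rate [Hz], test freq [Hz]
t = np.arange(0, 1.0, 1 / fs)              # exactly one period of f
R0, R1, C1 = 0.05, 0.02, 10.0              # ohms, ohms, farads
Z_true = R0 + R1 / (1 + 2j * np.pi * f * R1 * C1)

i_exc = 0.5 * np.sin(2 * np.pi * f * t)    # small-signal current excitation
v_resp = 0.5 * abs(Z_true) * np.sin(2 * np.pi * f * t + np.angle(Z_true))

print(impedance_at(f, t, i_exc, v_resp))   # should closely match Z_true
print(Z_true)
```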
The achievement is particularly significant as a practical solution in energy management, a field crucial to the electric vehicle era. The award recognizes the research team's hardware development capabilities, and their work is expected to contribute to technological advancement in related fields.


Students Dae Hyun Kang and Seung Hoon Kim (M.S. & Ph.D. integrated program) from Professor Byung Jin Cho's Research Lab have been honored with the Best Oral Presentation Award at the 2025 Summer Conference of the Korean Institute of Electrical and Electronic Material Engineers (KIEEME).


Carmela Michelle Esteban, a Ph.D. candidate in the research group of Professor Seunghyup Yoo at KAIST School of Electrical Engineering, received the Young Researcher Award for Best Poster Presentation at the 18th International Symposium on Flexible Organic Electronics (ISFOE25), held from July 7 to 10 in Thessaloniki, Greece.
Michelle was recognized for her outstanding research presentation titled “Multi-Functional Polymeric Substrate with Integrated Optical Layers for Flexible Organic Photodetectors.”
ISFOE is a prestigious international symposium in the field of flexible organic and printed electronics, held annually to foster innovation in next-generation electronics. Each year, the Young Researcher Awards are presented to graduate students who demonstrate academic excellence and exceptional research achievements in the field.
Awardees receive a certificate and a complimentary publication in Nanomaterials, a journal published by MDPI.



With the joint advancement of artificial intelligence and robotics technologies, enabling robots to perceive and respond to their environments as efficiently as humans has become a critical challenge. Recently, a Korean research team has attracted attention by implementing, without any complex software or circuitry, an artificial sensory nervous system that mimics biological sensory nerves. This technology minimizes energy consumption while reacting intelligently to external stimuli, promising applications in ultra-miniature robots, prosthetic hands, and robotics for medical or extreme environments.
A joint research team led by Shinhyun Choi, KAIST Endowed Chair Professor, and Jongwon Lee, Professor in the Department of Semiconductor Convergence at Chungnam National University, together with See‑On Park of the integrated MS-PhD program in the KAIST School of Electrical Engineering, has developed a next‑generation, neuromorphic‑semiconductor‑based artificial sensory nervous system. They experimentally demonstrated a novel robotic system that responds efficiently to external stimuli.
Animals, including humans, ignore safe or familiar stimuli but respond selectively and sensitively to important ones, thus preventing energy waste while focusing on crucial signals for swift reaction to environmental changes. For example, one soon tunes out the hum of an air conditioner or the feeling of clothes on the skin, yet quickly focuses on hearing one’s name called or sensing a sharp object touching the skin. This is regulated by the sensory nervous system’s functions of “habituation” and “sensitization,” and many have sought to apply these biological features to robots for more efficient, human‑like environmental responses.
However, implementing complex features such as habituation and sensitization in robots has required separate software or intricate circuitry, hindering miniaturization and energy efficiency. In particular, efforts using memristors, neuromorphic semiconductor elements whose resistance depends on the history of current flow, have been limited by conventional memristors’ simple conductance changes, which failed to replicate the sensory system’s complexity.
To overcome these limitations, the team engineered a new memristor in which opposing conductance‑changing layers coexist within a single device. This structure enables the realistic emulation of habituation and sensitization, as seen in biological sensory nerves.

This device gradually reduces its response upon repeated stimuli and, when a danger signal is detected, becomes sensitized again, faithfully reproducing the complex synaptic response patterns of real nervous systems.
Using these memristors, the researchers built a memristor‑based artificial sensory nervous system for touch and pain detection, and attached it to a robotic hand to test its efficiency. When safe tactile stimuli were repeatedly applied, the robotic hand initially sensitive to the novel touch began to ignore it, demonstrating habituation. Later, when an electric shock accompanied the touch (a danger signal), the system recognized it as such and regained sensitivity, confirming the sensitization function.
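The qualitative behavior described above can be pictured with a toy model: a single "synaptic weight" that decays under repeated safe stimuli and rebounds after a danger signal. The sketch below is a hand-written illustration of those dynamics, not the authors' device equations; all parameters are invented.

```python
# Toy habituation/sensitization model, loosely inspired by the behavior
# described above; not the authors' device physics.
def run(stimuli, w=1.0, w_min=0.2, w_max=1.5,
        habituate=0.85, sensitize=1.6):
    """Each stimulus is 'touch' (safe) or 'touch+shock' (danger).
    Response = current weight w; w decays on safe repeats (habituation)
    and jumps back up after a danger signal (sensitization)."""
    responses = []
    for s in stimuli:
        responses.append(w)                    # response to unit stimulus
        if s == "touch":
            w = max(w_min, w * habituate)      # habituate to safe stimuli
        else:                                  # "touch+shock"
            w = min(w_max, w * sensitize)      # sensitize after danger
    return responses

seq = ["touch"] * 6 + ["touch+shock"] + ["touch"] * 3
print([round(r, 2) for r in run(seq)])
# Response shrinks over safe repeats, then rebounds after the shock.
```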

These experiments show that robots can respond to stimuli as efficiently as humans do, without complex software or processors, validating the feasibility of energy-efficient, neuro-inspired robots.
See‑On Park, first author of the study, stated, “By emulating the human sensory nervous system with next‑generation semiconductors, we’ve opened the door to a new class of robots that respond more intelligently and with greater energy efficiency to their environments. We expect applications in ultra‑miniature robots, military robots, and medical prostheses, where the convergence of advanced semiconductors and robotics is critical.”
This research was published online on July 1, 2025, in the international journal Nature Communications.
Paper title: Experimental demonstration of third‑order memristor‑based artificial sensory nervous system for neuro‑inspired robotics
DOI: https://doi.org/10.1038/s41467-025-60818-x
This research was supported by the National Research Foundation of Korea’s Next‑Generation Intelligent Semiconductor Technology Development Project, Mid‑Career Research Program, PIM AI Semiconductor Core Technology Development Project, Outstanding Young Researcher Program, and the Nano Comprehensive Technology Institute’s Nanomedical Devices Project.



However, the device’s electrical properties degrade easily in the presence of moisture or water, which limited their use as implantable bioelectronics. Furthermore, optimizing the high-resolution integration process on thin, flexible probes remained a challenge.
To address this, the team enhanced the operational reliability of OLEDs in moist, oxygen-rich environments and minimized tissue damage during implantation. They patterned an ultrathin, flexible encapsulation layer* composed of aluminum oxide and parylene-C (Al₂O₃/parylene-C) at widths of 260–600 micrometers (μm) to maintain biocompatibility.
*Encapsulation layer: A barrier that completely blocks oxygen and water molecules from the external environment, ensuring the longevity and reliability of the device.
When integrating the high-resolution micro OLEDs, the researchers also used parylene-C, the same biocompatible material as the encapsulation layer, to maintain flexibility and safety. To eliminate electrical interference between adjacent OLED pixels and spatially separate them, they introduced a pixel definition layer (PDL), enabling the independent operation of eight micro OLEDs.
Furthermore, they precisely controlled the residual stress and thickness of the device's multilayer film structure, ensuring its flexibility even in biological environments. This optimization allowed the probe to be inserted without buckling and without external shuttles or needles, minimizing mechanical stress during implantation.



As generative AI technology advances, so do concerns about its potential misuse in manipulating online public opinion. Although detection tools for AI-generated text have been developed previously, most are built on long, formal English texts and therefore perform poorly on short (average 51 characters), colloquial Korean news comments. A research team from KAIST has made headlines by developing the first technology to detect AI-generated comments in Korean.
A research team led by Professor Yongdae Kim from KAIST’s School of Electrical Engineering, in collaboration with the National Security Research Institute, has developed XDAC, the world’s first system for detecting AI-generated comments in Korean.
Recent generative AI can adjust sentiment and tone to match the context of a news article and can automatically produce hundreds of thousands of comments within hours, enabling large-scale manipulation of public discourse. Based on the pricing of OpenAI's GPT-4o API, generating a single comment costs approximately 1 KRW; at that rate, producing the roughly 200,000 comments posted daily on major news platforms would cost only about 200,000 KRW (approx. USD 150) per day. Publicly available LLMs run on one's own GPU infrastructure can generate massive volumes of comments at virtually no cost.
The team conducted a human evaluation to see whether people could distinguish AI-generated comments from human-written ones. Of 210 comments tested, participants mistook 67% of AI-generated comments for human-written, while only 73% of genuine human comments were correctly identified. In other words, even humans find it difficult to accurately tell AI comments apart. Moreover, AI-generated comments scored higher than human comments in relevance to article context (95% vs. 87%), fluency (71% vs. 45%), and exhibited a lower perceived bias rate (33% vs. 50%).
Until now, AI-generated-text detectors have relied on long, formal English prose and perform poorly on brief, informal Korean comments. Such short comments offer few statistical cues and abound in nonstandard colloquial elements, such as emojis, slang, and repeated characters, to which existing models generalize poorly. Additionally, realistic datasets of Korean AI-generated comments have been scarce, and simple prompt-based generation methods produced limited diversity and authenticity.
To overcome these challenges, the team developed an AI comment generation framework that employs four core strategies: 1) leveraging 14 different LLMs, 2) enhancing naturalness, 3) fine-grained emotion control, and 4) reference-based augmented generation, to build a dataset mirroring real user styles. A subset of this dataset has been released as a benchmark. Applying explainable AI (XAI) techniques to a fine-grained linguistic analysis, they uncovered the unique linguistic and stylistic features of AI-generated comments.

For example, AI-generated comments tended to use formal expressions like “것 같다” (“it seems”) and “에 대해” (“about”), along with a high frequency of conjunctions, whereas human commentators favored repeated characters (ㅋㅋㅋㅋ), emotional interjections, line breaks, and special symbols.
In the use of special characters, AI models predominantly employed globally standardized emojis, while real humans incorporated culturally specific characters including Korean consonants (ㅋ, ㅠ, ㅜ) and symbols (ㆍ, ♡, ★, •).
Notably, 26% of human comments included formatting characters (line breaks, multiple spaces), compared to just 1% of AI-generated ones. Similarly, repeated characters (e.g., ㅋㅋㅋㅋ, ㅎㅎㅎㅎ) appeared in 52% of human comments but only 12% of AI comments.
XDAC captures these distinctions to boost detection accuracy. It transforms formatting characters (line breaks, spaces) and normalizes repeated-character patterns into machine-readable features. It also learns each LLM’s unique linguistic fingerprint, enabling it to identify which model generated a given comment.
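As an illustration of that kind of preprocessing, the sketch below exposes formatting characters as explicit tokens and rewrites character repetitions so that their length becomes a learnable feature. The token names and regular expressions here are hypothetical choices for illustration; XDAC's actual pipeline is described in the paper.

```python
import re

# Illustrative preprocessing in the spirit of what is described above:
# make formatting characters visible as tokens and normalize repeated
# characters into an explicit count. Hypothetical sketch, not XDAC code.
def normalize(comment: str) -> str:
    s = comment.replace("\n", " <NL> ")        # expose line breaks
    s = re.sub(r" {2,}", " <SP> ", s)          # expose multiple spaces
    # Collapse 3+ repeats of any character into "<char>x<count>"
    s = re.sub(r"(.)\1{2,}",
               lambda m: f"{m.group(1)}x{len(m.group(0))}", s)
    return s

print(normalize("ㅋㅋㅋㅋ 진짜\n대박!!!!"))
# -> 'ㅋx4 진짜 <NL> 대박!x4'
```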
With these optimizations, XDAC achieves a 98.5% F1 score in detecting AI-generated comments, a 68% improvement over previous methods, and records an 84.3% F1 score in identifying the specific LLM used for generation.

Professor Yongdae Kim emphasized, “This study is the world’s first to detect short comments written by generative AI with high accuracy and to attribute them to their source model. It lays a crucial technical foundation for countering AI-based public opinion manipulation.”
The team also notes that XDAC's detection capability may act as a deterrent, much like sobriety checkpoints, drug testing, or CCTV: its very existence can reduce the incentive to misuse AI.
Platform operators can deploy XDAC to monitor and respond to suspicious accounts or coordinated manipulation attempts, with strong potential for expansion into real-time surveillance systems or automated countermeasures.
The core contribution of this work is its XAI-driven detection framework. The paper has been accepted to the main conference of ACL 2025, the premier venue in natural language processing, which opens on July 27.
※Paper Title:
XDAC: XAI-Driven Detection and Attribution of LLM-Generated News Comments in Korean
※Full Paper:
https://github.com/airobotlab/XDAC/blob/main/paper/250611_XDAC_ACL2025_camera_ready.pdf
This research was conducted under the supervision of Professor Yongdae Kim at KAIST, with Senior Researcher Wooyoung Go (NSR and PhD candidate at KAIST) as the first author, and Professors Hyoungshick Kim (Sungkyunkwan University) and Alice Oh (KAIST) as co-authors.


Acoustic source separation and classification is a key next-generation AI technology for the early detection of anomalies in drone operations, piping faults, and border surveillance, and for spatial audio editing in AR/VR content production.
Professor Jung-Woo Choi’s research team from the School of Electrical Engineering won first place in the “Spatial Semantic Segmentation of Sound Scenes” task of the “IEEE DCASE Challenge 2025.”
This year’s challenge featured 86 teams competing across six tasks. In their first-ever participation, KAIST’s team ranked first in Task 4: Spatial Semantic Segmentation of Sound Scenes—a highly demanding task requiring the analysis of spatial information in multi-channel audio signals with overlapping sound sources. The goal was to separate individual sounds and classify them into 18 predefined categories. The team, composed of Dr. Dongheon Lee, integrated MS-PhD student Younghoo Kwon, and MS student Dohwan Kim, will present their results at the DCASE Workshop in Barcelona this October.
Earlier this year, Dr. Dongheon Lee developed a state-of-the-art sound source separation AI combining Transformer and Mamba architectures. At the challenge, the team, led by Younghoo Kwon, established a chain-of-inference architecture that first separates waveforms and identifies source types, then refines those estimates by using the estimated waveforms and classes as clues for target-signal extraction in the next stage.

This chain-of-inference approach is inspired by the human auditory scene analysis mechanism, which isolates individual sounds by focusing on incomplete clues such as sound type, rhythm, or direction.
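Schematically, such a chain of inference is a two-stage pipeline in which stage-1 estimates become stage-2 conditioning clues. The sketch below uses placeholder `separator`, `classifier`, and `extractor` callables to show the data flow only; it is an assumption-laden illustration, not the team's architecture.

```python
# Schematic chain-of-inference pipeline: stage-1 outputs condition
# stage-2 extraction. All three models are placeholders.
def chain_of_inference(mixture, separator, classifier, extractor):
    # Stage 1: coarse separation and per-source classification
    coarse = separator(mixture)                   # list of waveform estimates
    classes = [classifier(w) for w in coarse]
    # Stage 2: re-extract each target, conditioned on stage-1 clues
    refined = [extractor(mixture, clue_wave=w, clue_class=c)
               for w, c in zip(coarse, classes)]
    return refined, classes

# Dummy stand-ins so the sketch runs end to end
mix = [0.0, 1.0, 0.5]
sep = lambda m: [m, m]                            # pretend two sources found
cls = lambda w: "speech"
ext = lambda m, clue_wave, clue_class: clue_wave  # identity "refinement"
print(chain_of_inference(mix, sep, cls, ext))
```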
In the evaluation metric CA-SDRi (class-aware signal-to-distortion ratio improvement)*, the team was the only participant to achieve a double-digit improvement of 11 dB, demonstrating their technical excellence.
*CA-SDRi measures how much clearer and less distorted the target sound is compared with the original mix.
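For reference, the SDR improvement underlying this metric has the standard definition below, for a reference source s, estimate ŝ, and input mixture x; roughly speaking, the class-aware variant aggregates this improvement per predicted sound class, so classification errors also lower the score.

```latex
\mathrm{SDR}(s,\hat{s}) = 10\log_{10}\frac{\lVert s\rVert^{2}}{\lVert s-\hat{s}\rVert^{2}},
\qquad
\mathrm{SDRi} = \mathrm{SDR}(s,\hat{s}) - \mathrm{SDR}(s,x)
```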
Professor Choi remarked, “I am proud that the world-leading acoustic separation AI models our team has developed over the past three years have now received formal recognition. Despite the greatly increased difficulty of this year’s task and a development window shortened by other conference deadlines and final exams, each member’s focused research led us to first place.”

The “IEEE DCASE Challenge 2025” accepted submissions online from April 1 to June 15, with results announced on June 30. Since its inception in 2013 under the IEEE Signal Processing Society, the challenge has served as a global stage for AI models in the acoustic field.
This research was supported by the National Research Foundation of Korea’s Mid-Career Researcher Program and STEAM Research Project, funded by the Ministry of Education, and the Future Defense Research Center, funded by the Defense Acquisition Program Administration and the Agency for Defense Development.