Highlights


We are pleased to announce that our alumnus, Dr. Ranggi Hwang (advised by Prof. Minsoo Rhu), has been appointed Assistant Professor in the Department of Computer Science and Engineering at the Ulsan National Institute of Science and Technology (UNIST), effective September 1, 2025.
Dr. Hwang earned his Ph.D. from KAIST in August 2025. His research centers on computer architecture, with a particular focus on hardware–software co-design for AI. During his graduate studies, he designed GPU systems and AI accelerators for a range of AI workloads, including recommender systems, graph algorithms, and privacy-preserving AI. He has published papers at top venues such as ISCA, MICRO, HPCA, and ASPLOS.
He also completed research internships at Microsoft Research and NVIDIA Research, where he developed memory- and power-efficient GPU systems for large language model (LLM) inference. His service to the research community includes the MICRO 2025 External Review Committee (ERC) and the HPCA 2026 Program Committee (PC).
Please join us in congratulating Dr. Hwang on his appointment.
Professor Ranggi Hwang’s website: https://sites.google.com/view/unist-cocolab




Professor Sanghyeon Kim of our school received the Merck Young Scientist Award at the 22nd Merck Award Ceremony, held on the 20th at BEXCO in Busan.
The Merck Award is a technical paper award established in 2004 by Merck, a leading German science and technology company, together with the Korean Information Display Society. It is presented to honor outstanding research achievements in the field of display technology and to support the development of the Korean display industry.
Professor Kim has been at the forefront of securing key original intellectual property (IP) in inorganic MicroLED display technology based on independently developed domestic technology. He has continued pioneering research toward ultra-high-resolution, low-power AR/VR displays, developing world-class MicroLED pixels and contributing to innovation in display technology.
In particular, he successfully implemented a technology that integrates MicroLEDs directly onto a complementary metal-oxide semiconductor (CMOS)* backplane in a single process, realizing an ultra-high-resolution red display of 1,700 PPI (pixels per inch), an achievement that was highly recognized.
*CMOS (complementary metal-oxide semiconductor): the circuit substrate that drives the display.
In his acceptance speech, Professor Kim stated, “I am truly grateful and deeply honored to receive the Merck Young Scientist Award, and I feel a great sense of responsibility. I will continue to devote myself as a researcher to ensure that MicroLED display technology translates into tangible industrial competitiveness.”


He presented a dissertation titled “A Study on Deep Model Training Using Knowledge and Data Engineering Methods Considering Sample Ambiguity”. He has also published various papers in journals and conferences on topics related to computer vision, time series analysis, and AI for physics.


Our school’s research team has developed a next-generation image sensor that can autonomously adapt to drastic changes in illumination without any external image-processing pipeline. The technology is expected to be applicable to autonomous vehicles, intelligent robotics, security and surveillance, and other vision-centric systems.
In this joint work, KAIST School of Electrical Engineering Professor Young Min Song and GIST Professor Dong-Ho Kang designed a ferroelectric-based optoelectronic device inspired by the brain’s neural architecture. The device integrates light sensing, memory (recording), and in-sensor processing within a single element, enabling a new class of image sensors.
As demand grows for “Visual AI,” there is an urgent need for high-performance visual sensors that operate robustly across diverse environments. Conventional CMOS-based image sensors* process each pixel’s signal independently; when scene brightness changes abruptly, they are prone to saturation, overexposure, or underexposure, leading to information loss.
*CMOS (complementary metal-oxide semiconductor) image sensors are fabricated using semiconductor processes and are widely used in digital cameras, smartphones, and other consumer electronics.
In particular, they struggle to adapt instantly to extremes such as day/night transitions, strong backlighting, or rapid indoor-outdoor changes, often requiring separate calibration or post-processing of the captured data.

To address these limitations, the team designed a ferroelectric-based image sensor that draws on biological neural structures and learning principles to remain adaptive under extreme environmental variation. By controlling the ferroelectric polarization state, the device can retain sensed optical information for extended periods and selectively amplify or suppress it. As a result, it performs contrast enhancement, illumination compensation, and noise suppression on-sensor, eliminating the need for complex post-processing. The team demonstrated stable face recognition across day/night and indoor/outdoor conditions solely via in-sensor processing, without reconstructing training datasets or performing additional training to handle unstructured environments.

The proposed device is also highly compatible with established AI training algorithms such as convolutional neural networks (CNNs).
CNNs are deep-learning architectures specialized for two-dimensional data such as images and video; they extract features through convolution operations and perform classification. They are widely used in visual tasks including face recognition, autonomous driving, and medical image analysis.
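As a rough illustration of this compatibility, the short Python sketch below (using PyTorch) shows a minimal convolutional network of the kind that such sensor outputs could feed; the layer sizes, 64x64 input resolution, and ten-class output are illustrative assumptions, not details from the study.

import torch
import torch.nn as nn

# Minimal illustrative CNN (not the research team's actual model):
# convolutional layers extract features, a linear head performs classification.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel frame -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # convolution-based feature extraction
        x = torch.flatten(x, 1)    # flatten feature maps
        return self.classifier(x)  # class scores

# Example: a batch of four 64x64 single-channel frames from an image sensor
frames = torch.randn(4, 1, 64, 64)
logits = TinyCNN()(frames)
print(logits.shape)  # torch.Size([4, 10])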

Professor Young Min Song commented, “This study expands ferroelectric devices, traditionally used as electrical memory, into the domains of neuromorphic vision and in-sensor computing. Going forward, we plan to advance this platform into next-generation vision systems capable of precisely sensing and processing wavelength, polarization, and phase of light.”
This research was supported by the Mid-career Researcher Program of the Ministry of Science and ICT and the National Research Foundation of Korea (NRF). The results were published online in the international journal “Advanced Materials” on July 28th.


Electroretinography (ERG) is an ophthalmic diagnostic method used to determine whether the retina is functioning normally. It is widely employed for diagnosing hereditary retinal diseases or assessing retinal function decline.
A team of Korean researchers has developed a next-generation wireless ophthalmic diagnostic technology that replaces the existing stationary, darkroom-based retinal testing method by incorporating an “ultrathin OLED” into a contact lens. This breakthrough is expected to have applications in diverse fields such as myopia treatment, ocular biosignal analysis, augmented-reality (AR) visual information delivery, and light-based neurostimulation.
A research team led by Professor Seunghyup Yoo from the School of Electrical Engineering, in collaboration with Professor Se Joon Woo of Seoul National University Bundang Hospital, Professor Sei Kwang Hahn of POSTECH, the CEO of PHI Biomed Co., and the Electronics and Telecommunications Research Institute under the National Research Council of Science & Technology, has developed the world’s first wireless, contact lens-based wearable retinal diagnostic platform using organic light-emitting diodes (OLEDs).

This technology enables ERG simply by wearing the lens, eliminating the need for large specialized light sources and dramatically simplifying the conventional, complex ophthalmic diagnostic environment.
Traditionally, ERG requires a stationary Ganzfeld device in a dark room, where patients must keep their eyes open and remain still throughout the test. This setup imposes spatial constraints and can lead to patient fatigue and compliance challenges.
To overcome these limitations, the joint research team integrated an ultrathin flexible OLED (approximately 12.5 μm thick, about 6 to 8 times thinner than a human hair) into a contact lens electrode for ERG. They also equipped it with a wireless power-receiving antenna and a control chip, completing a system capable of independent operation.
For power transmission, the team adopted a wireless power transfer method using a 433 MHz resonant frequency suitable for stable wireless communication. The system was also demonstrated in the form of a wireless controller embedded in a sleep mask that can be linked to a smartphone, further enhancing practical usability.

While most smart contact lens-type light sources developed for ocular illumination have used inorganic LEDs, these rigid devices emit light from what is effectively a single point, which can cause excessive heat accumulation and thus limits the usable light intensity. In contrast, OLEDs are areal (surface-emitting) light sources and were shown to induce retinal responses even under low-luminance conditions. In this study, under a relatively low luminance* of 126 nits, the OLED contact lens induced stable ERG signals, producing diagnostic results equivalent to those obtained with existing commercial light sources.
*Luminance: a value indicating how brightly a surface or screen emits light; for reference, a smartphone screen is about 300–600 nits (and can exceed 1,000 nits at maximum brightness).
Animal tests confirmed that the surface temperature of a rabbit’s eye wearing the OLED contact lens remained below 27°C, avoiding corneal heat damage, and that the light-emitting performance was maintained even in humid environments—demonstrating its effectiveness and safety as an ERG diagnostic tool in real clinical settings.
Professor Seunghyup Yoo stated that “integrating the flexibility and diffusive light characteristics of ultrathin OLEDs into a contact lens is a world-first attempt,” and that “this research can help expand smart contact lens technology into on-eye optical diagnostic and phototherapeutic platforms, contributing to the advancement of digital healthcare technology.”

Jee Hoon Sim, Hyeonwook Chae, and Su-Bon Kim, PhD researchers at KAIST, played a key role as co-first authors alongside Dr. Sangbaie Shin of PHI Biomed Co. The corresponding authors are Professor Seunghyup Yoo (School of Electrical Engineering, KAIST), Professor Sei Kwang Hahn (Department of Materials Science and Engineering, POSTECH), and Professor Se Joon Woo (Seoul National University Bundang Hospital). The results were published online in the internationally renowned journal ACS Nano on May 1st.
– Paper title: Wireless Organic Light-Emitting Diode Contact Lenses for On-Eye Wearable Light Sources and Their Application to Personalized Health Monitoring
– DOI: https://doi.org/10.1021/acsnano.4c18563
– Related video clip: http://bit.ly/3UGg6R8


“Team Atlanta,” formed by Samsung Research, POSTECH, and the Georgia Institute of Technology with participation from Professor Insu Yun’s research team at KAIST’s School of Electrical Engineering, won first place in the AI Cyber Challenge (AIxCC) hosted by the U.S. Defense Advanced Research Projects Agency (DARPA) at DEF CON 33, the world’s largest hacking conference, held in Las Vegas on August 8 (local time).
Led by Taesoo Kim of Samsung Research and Georgia Institute of Technology, Team Atlanta earned USD 4 million (approx. KRW 5.5 billion) in prize money, proving the excellence of AI-based autonomous cyber defense technology on the global stage.

The AI Cyber Challenge (AIxCC) is a two-year global competition jointly organized by DARPA and the U.S. Advanced Research Projects Agency for Health (ARPA-H). It challenges teams to use AI-based Cyber Reasoning Systems (CRS) to automatically analyze, detect, and fix software vulnerabilities. The total prize pool is USD 29.5 million, with USD 4 million awarded to the final winner.
In the final round, Team Atlanta scored 392.76 points, beating second-place Trail of Bits by more than 170 points to secure a decisive victory.
The Cyber Reasoning System (CRS) developed by Team Atlanta successfully detected various types of vulnerabilities and patched many of them in real time during the competition. Among the 70 artificially injected vulnerabilities in the final, the seven finalist teams detected an average of 77% and patched 61% of them. In addition, they discovered 18 previously unknown vulnerabilities in real-world software, demonstrating the potential of AI security technology.

All CRS technologies, including that of the winning team, will be made open source and are expected to be used to strengthen the security of critical infrastructure such as hospitals, water systems, and power grids.
Professor Insu Yun said, “I am very pleased with this tremendous achievement. This victory demonstrates that Korea’s cybersecurity research has reached the highest global standards, and it was meaningful to showcase the capabilities of Korean researchers on the world stage. We will continue research that combines AI and security technologies to safeguard the digital safety of both our nation and the global community.”
KAIST President Kwang Hyung Lee stated, “This victory is another proof that KAIST is a global leader in the convergence of future cybersecurity and artificial intelligence. We will continue to provide full support so that our researchers can compete confidently on the world stage and achieve outstanding results.”


For the first time in the world, a Korean research team has devised and experimentally validated a “measurement-protection (MP)” theory that enables stable quantum key distribution (QKD) without any measurement calibration.
Professor Joonwoo Bae’s team from our School, in collaboration with the Quantum Communications Laboratory at the Electronics and Telecommunications Research Institute (ETRI), has developed a new technology that enables stable quantum communication in moving environments such as satellites, ships, and drones.
Quantum communication is a high-precision technology that transmits information via the quantum states of light, but in wireless, moving environments it has suffered from severe instability due to weather and surrounding environmental changes. In particular, in rapidly changing settings like the sky, sea, or air, reliably delivering quantum states has been extremely challenging.
This research is significant in that it overcomes those limitations and opens up the possibility of exchanging quantum information stably even while in motion. It is expected that quantum technology can be applied in the future to secure communications between satellites and ground stations, as well as to drone and maritime communications.
Quantum key distribution (QKD) is a technology that uses the principles of quantum mechanics to distribute cryptographic keys that are fundamentally immune to eavesdropping. Existing QKD protocols required repeated recalibration of the receiver’s measurement devices whenever the channel conditions changed.
However, in this work the team proved that, with only simple local operations, stable key distribution is possible regardless of channel conditions. The theory was developed by Professor Bae’s group, and the experiments were carried out by ETRI researchers.

To generate single-photon pulses, the researchers used a 100 MHz light source: a vertical-cavity surface-emitting laser (VCSEL). A VCSEL is a type of semiconductor laser whose beam is emitted vertically from the top surface of the chip.
They emulated a long-distance free-space link with up to 30 dB of loss over a 10 m path and introduced various types of polarization noise to simulate a wireless environment. Even under these harsh conditions, they confirmed that quantum transmission and measurement remained reliable. Both the transmitter and the receiver were equipped with three waveplates each to implement the required local operations.
As a result, they demonstrated that an MP-based QKD system can raise the system’s maximum tolerable quantum bit error rate (QBER), the fraction of transmitted qubits received in error, to as high as 20.7%, an improvement over conventional approaches.
In other words, as long as the measured QBER stays below 20.7%, stable quantum key distribution is possible without any measurement calibration. This establishes a foundation for implementing reliable quantum communication across a variety of noisy channel environments. The team believes the achievement can be applied to scenarios similar to satellite-to-ground links.
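As a purely illustrative note on the metric itself, the short Python sketch below shows how a QBER is computed from sifted-key counts and compared against the 20.7% threshold reported above; the counts and the helper function are hypothetical, not experimental data.

# Illustrative QBER check (toy numbers, not data from the experiment).
# QBER = fraction of sifted-key bits received in error.
MP_QKD_QBER_THRESHOLD = 0.207  # maximum tolerable QBER reported for the MP-based system

def qber(error_bits: int, total_bits: int) -> float:
    """Quantum bit error rate: errors / total sifted bits."""
    return error_bits / total_bits

# Hypothetical sifted-key statistics for one transmission block
total_sifted_bits = 100_000
bits_in_error = 8_200

q = qber(bits_in_error, total_sifted_bits)
print(f"QBER = {q:.2%}")                                                # QBER = 8.20%
print("stable key distribution possible:", q < MP_QKD_QBER_THRESHOLD)   # True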
The study was published on June 25 in the IEEE Journal on Selected Areas in Communications, a prestigious communications journal, with ETRI’s Heasin Ko and KAIST’s Spiros Kechrimparis serving as co-first authors.
Professor Bae commented, “This result will be a decisive turning point in bringing reliable quantum-secure communication into practical reality, even under complex environments.”
This research was supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP) through the “Core Technology Development for Quantum Internet,” “ETRI R&D Support Project,” “Quantum Cryptography Communication Industry Expansion and Next-Generation Technology Development Project,” “Quantum Cryptography Communication Integration and Transmission Technology Advancement Project,” and “SW Computing Industrial Core Technology Development Project”; by the National Research Foundation of Korea through the “Quantum Common-Base Technology Development Project” and “Mid-Career Researcher Program”; and as part of the Future Space Education Center initiative of the Korea Aerospace Agency.