{"id":194414,"date":"2025-05-16T13:26:59","date_gmt":"2025-05-16T04:26:59","guid":{"rendered":"http:\/\/ee.presscat.kr\/?post_type=research-achieve&#038;p=194414"},"modified":"2026-04-13T05:15:56","modified_gmt":"2026-04-12T20:15:56","slug":"ee-prof-ian-oakleys-research-team-develops-next-generation-wearable-interaction-technologies-using-around-device-sensing","status":"publish","type":"research-achieve","link":"http:\/\/ee.presscat.kr\/en\/research-achieve\/ee-prof-ian-oakleys-research-team-develops-next-generation-wearable-interaction-technologies-using-around-device-sensing\/","title":{"rendered":"EE Prof. Ian Oakley&#8217;s Research Team Develops Next-Generation Wearable Interaction Technologies Using Around-Device Sensing"},"content":{"rendered":"<div style=\"text-align: center\">\n<figure id=\"attachment_194407\" aria-describedby=\"caption-attachment-194407\" style=\"width: 900px\" class=\"wp-caption aligncenter\"><img fetchpriority=\"high\" decoding=\"async\" class=\"wp-image-194407 size-full\" src=\"http:\/\/ee.presscat.kr\/wp-content\/uploads\/2025\/05\/\uc774\uc548\uc624\ud074\ub9ac-\uad50\uc218\ub2d8-900.jpg\" alt=\"\" width=\"900\" height=\"500\" title=\"\"><figcaption id=\"caption-attachment-194407\" class=\"wp-caption-text\">\u3008(from left) Ph.D. candidate Jiwan Kim, Professor Ian Oakley and Ph.D. candidate Mingyu Han\u00a0\u3009<\/figcaption><\/figure>\n<\/div>\n<div style=\"text-align: left\"><span style=\"color: #000000;font-size: 14pt\">When will the futuristic vision of natural gesture-based interaction with computers, as seen in sci-fi films like Iron Man, become a reality? Researchers from the KAIST School of Electrical Engineering have developed AI technologies that enable natural and expressive input for wearable devices.<\/span><\/div>\n<div>\u00a0<\/div>\n<div style=\"text-align: left\"><span style=\"color: #000000;font-size: 14pt\">Professor Ian Oakley\u2019s research team at KAIST\u2019s School of Electrical Engineering has developed two systems: BudsID, a finger-identification system for wireless earbuds, and SonarSelect, which enables mid-air gesture input on commercial smartwatches. These two studies were presented at the ACM Conference on Human Factors in Computing Systems (CHI)\u2014the world\u2019s premier conference in the field of human-computer interaction\u2014held in Yokohama, Japan from April 26 to May 1. The presentations were part of the \u201cEarables and Hearable\u201d and \u201cInteraction Techniques\u201d sessions, respectively.<\/span><\/div>\n<p style=\"text-align: left\">\u00a0<\/p>\n<p style=\"text-align: left\"><span style=\"color: #000000;font-size: 14pt\">BudsID uses magnetic sensing to distinguish between fingers based on magnetic field changes that occur when a user wearing a magnetic ring touches an earbud. 
This magnetic sensing system allows wireless earbud users to go beyond traditional interactions such as play, pause, or call handling. By mapping different functions or input commands to individual fingers, the interaction capabilities can extend to augmented reality device control and beyond.

SonarSelect leverages active sonar sensing using only the built-in microphone, speaker, and motion sensors of a commercial smartwatch. It recognizes mid-air gestures around the device, enabling precise pointer manipulation and target selection.

<Figure 2: Three target selection methods using SonarSelect’s around-device movement sensing on commercial smartwatches: A) Double-Crossing, B) Dwelling, C) Pinching.>

This gesture interaction technology, which detects finger movements via active sonar, addresses the usability problems of small smartwatch screens and touch occlusion. It enables delicate 3D spatial interactions around the device.
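As a rough illustration of the active-sonar principle (not the paper's actual signal-processing pipeline), the sketch below synthesizes an echo from a finger approaching at 0.2 m/s and recovers its velocity from the Doppler shift of the spectral peak; the carrier frequency, sample rate, frame length, and the dwell parameters at the end are all assumptions.

```python
# Minimal sketch of active sonar on a watch, under assumed parameters: the
# speaker emits a near-inaudible tone, the microphone records the reflection,
# and a finger moving near the device Doppler-shifts the echo. The shift of
# the spectral peak gives the finger's radial velocity.
import numpy as np

FS = 48_000   # sample rate (Hz); typical for mobile audio hardware
F0 = 20_000   # assumed near-inaudible carrier tone (Hz)
C = 343.0     # speed of sound in air (m/s)
N = 8_192     # samples per analysis frame (~0.17 s at 48 kHz)

def doppler_velocity(frame: np.ndarray) -> float:
    """Estimate radial finger velocity (m/s) from one microphone frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1 / FS)
    # Look for the echo's spectral peak in a narrow band around the carrier.
    band = (freqs > F0 - 500) & (freqs < F0 + 500)
    peak = freqs[band][np.argmax(spectrum[band])]
    # Two-way Doppler for a reflector: frequency shift ~= 2 * v / C * F0.
    return C * (peak - F0) / (2 * F0)

# Simulated echo from a finger approaching at 0.2 m/s (FFT bin quantization
# limits the estimate to roughly +/- 0.03 m/s at this frame length).
t = np.arange(N) / FS
v_true = 0.2
echo = np.sin(2 * np.pi * F0 * (1 + 2 * v_true / C) * t)
print(f"estimated velocity: {doppler_velocity(echo):.2f} m/s (true {v_true})")

# A dwell selection (method B in Figure 2) might then fire once the finger
# has hovered almost motionless over a target for several frames in a row.
STILL = 0.05       # assumed stillness bound (m/s)
DWELL_FRAMES = 3   # assumed dwell duration, in analysis frames
streak = 0
for _ in range(DWELL_FRAMES):
    v = doppler_velocity(np.sin(2 * np.pi * F0 * t))  # stationary finger echo
    streak = streak + 1 if abs(v) < STILL else 0
print("target selected" if streak >= DWELL_FRAMES else "still dwelling")
```

In practice the echo would be mixed with the direct speaker-to-microphone path and ambient noise, so a real pipeline would also need background subtraction and smoothing; this sketch only conveys the core Doppler idea.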
Jiwan Kim, first author of both papers, said, "We hope our research into around-device sensing for wearable interaction technologies will help shape the future of how people interact with wearable computing devices."

Professor Ian Oakley’s research team has released both systems as open source, allowing researchers and industry professionals to use the technology freely.

[BudsID]
Video: https://www.youtube.com/embed/l2dofn4LQAs
- Title: BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
- Authors: Jiwan Kim, Mingyu Han, and Ian Oakley
- DOI: https://dl.acm.org/doi/10.1145/3706598.3714133
- Open Source: https://github.com/witlab-kaist/BudsID

[SonarSelect]
Video: https://www.youtube.com/embed/7rct9jDkupA
- Title: Cross, Dwell, or Pinch: Designing and Evaluating Around-Device Selection Methods for Unmodified Smartwatches
- Authors: Jiwan Kim, Jiwan Son, and Ian Oakley
- DOI: https://dl.acm.org/doi/10.1145/3706598.3714308
- Open Source: https://github.com/witlab-kaist/SonarSelect

This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (grants 2023R1A2C1004046 and RS-2024-00407732) and by the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the University ICT Research Center (ITRC) support program
(IITP-2024-RS-2024-00436398).