AI in EE

AI IN DIVISIONS

AI in Signal Division

Hee Seung Wang, Seong Kwang Hong, Jae Hyun Han, Young Hoon Jung, Hyun Kyu Jeong, Tae Hong Im, Chang Kyu Jeong, Bo-Yeon Lee, Gwangsu Kim, Chang D. Yoo, Keon Jae Lee, "Biomimetic and flexible piezoelectric mobile acoustic sensors with multiresonant ultrathin structures for machine learning biometrics," Science Advances 7(7), eabe5683 (2021).

Flexible resonant acoustic sensors have attracted substantial attention as an essential component for intuitive human-machine interaction (HMI) in the future voice user interface (VUI). Several studies have mimicked the basilar membrane but still suffer from dimensional drawbacks owing to the difficulty of controlling a multifrequency band and broadening the resonant spectrum to cover the full phonetic frequency range. Here, a highly sensitive piezoelectric mobile acoustic sensor (PMAS) is demonstrated by exploiting an ultrathin membrane for biomimetic frequency band control. Simulation results prove that the resonant bandwidth of a piezoelectric film can be broadened by adopting a lead-zirconate-titanate (PZT) membrane on the ultrathin polymer to cover the entire voice spectrum. Machine learning–based biometric authentication is demonstrated by integrating the acoustic sensor module with an algorithm processor and a customized Android app. Finally, an exceptional reduction in the speaker identification error rate is achieved by the PMAS module with a small amount of training data, compared to a conventional microelectromechanical system (MEMS) microphone.
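As a hedged illustration of the speaker-identification flow summarized above (and detailed in Fig. 4, C and D), the sketch below shows a conventional GMM pipeline: frame-level feature extraction, one Gaussian mixture model per enrolled speaker, and identification by maximum average log-likelihood. The feature choice (log-magnitude STFT frames), the library calls, and all hyperparameters are illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of GMM-based speaker identification, in the spirit of the
# training/testing flow in Fig. 4, C and D. Feature choice (log-magnitude
# STFT frames) and all hyperparameters are illustrative assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.mixture import GaussianMixture


def extract_features(waveform, fs=16000, nperseg=400, noverlap=240):
    """Frame-level features: log-magnitude STFT frames (frames x bins)."""
    _, _, Z = stft(waveform, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(np.abs(Z).T + 1e-8)


def train_speaker_models(train_utterances, n_mixtures=7):
    """Fit one GMM per enrolled speaker (seven mixtures, as in Fig. 4D)."""
    models = {}
    for speaker, utterances in train_utterances.items():
        feats = np.vstack([extract_features(u) for u in utterances])
        gmm = GaussianMixture(n_components=n_mixtures,
                              covariance_type="diag", random_state=0)
        models[speaker] = gmm.fit(feats)
    return models


def identify_speaker(models, test_waveform):
    """Pick the speaker whose GMM gives the highest average log-likelihood."""
    feats = extract_features(test_waveform)
    scores = {spk: gmm.score(feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)
```

Under this framing, the speaker identification error rate corresponds to the fraction of test utterances for which the top-scoring model is not the true speaker, which is how a comparison against a conventional MEMS microphone (Fig. 4D) could be evaluated.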

 


FIG. 4 Machine learning–based mobile biometric authentication of PMAS module.

(A) Schematic diagram of machine learning (ML)–based mobile biometric authentication using the PMAS module. The multichannel signals of the PMAS were wirelessly transferred to an algorithm database for access control of a smartphone. (B) Comparison of voice features between the original sound and the PMAS module signal. The graphs include the time-domain voltage signal, the FFT response, and the STFT spectrogram. (C) Flowchart of the GMM algorithm for the speaker training and testing procedures, comprising signal averaging, feature extraction, and layer formation. The speaker decision is made by comparing the input voice information with the pretrained dataset. (D) Speaker identification error rate of the PMAS module, outperforming a commercial MEMS microphone under conditions of 150 training data, 150 testing data, and seven mixtures. (E) Real-time mobile biometric authentication demonstrated by the PMAS module and a customized smartphone app for access permission and prohibition, with five training words and one testing word. Photo credit: Hee Seung Wang, Korea Advanced Institute of Science and Technology.
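For readers who want to reproduce the three signal views referenced in panel (B), the following is a minimal sketch that plots a time-domain waveform, its FFT magnitude response, and an STFT spectrogram. The sampling rate and window parameters are assumptions; the input is any 1-D voltage waveform, not the authors' recorded data.

```python
# Minimal sketch of the three views in Fig. 4B: time-domain signal, FFT
# magnitude response, and STFT spectrogram. Sampling rate and window
# parameters are illustrative assumptions.
import numpy as np
from scipy.signal import stft
import matplotlib.pyplot as plt


def plot_voice_views(signal, fs=16000):
    t = np.arange(len(signal)) / fs

    # FFT magnitude response of the whole recording
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(signal))

    # STFT spectrogram (short-time Fourier transform)
    f, tau, Z = stft(signal, fs=fs, nperseg=400, noverlap=240)

    fig, axes = plt.subplots(3, 1, figsize=(6, 8))
    axes[0].plot(t, signal)
    axes[0].set(title="Time domain", xlabel="Time (s)", ylabel="Amplitude")
    axes[1].plot(freqs, mag)
    axes[1].set(title="FFT response", xlabel="Frequency (Hz)", ylabel="|X(f)|")
    axes[2].pcolormesh(tau, f, 20 * np.log10(np.abs(Z) + 1e-8), shading="auto")
    axes[2].set(title="STFT spectrogram", xlabel="Time (s)",
                ylabel="Frequency (Hz)")
    fig.tight_layout()
    return fig
```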