
MindSpeech: Continuous Imagined Speech Decoding Using High-Density fNIRS and Prompt Tuning for Advanced Human-AI Interaction

Suyi Zhang, Ekram Alam, Jack Baber, Francesca Bianco, Edward Turner, Maysam Chamanzar, Hamid Dehghani. arXiv 2024

[Paper]    
Tags: Agentic Applications, BERT, Language Modeling, Model Architecture, Prompting

In the coming decade, artificial intelligence systems will continue to improve and to revolutionise every industry and facet of human life, making the design of effective, seamless and symbiotic communication paradigms between humans and AI agents increasingly important. This paper reports a new method for human-AI interaction based on a direct brain-AI interface. We present MindSpeech, an AI model that enables open-vocabulary, continuous decoding of imagined speech, using high-density functional near-infrared spectroscopy (fNIRS) data to decode imagined speech non-invasively. We introduce a word-cloud paradigm for data collection that improves the quality and variety of imagined sentences generated by participants and covers a broad semantic space. Using a prompt tuning-based approach, we employed the Llama 2 large language model (LLM) for text generation guided by brain signals. Our results show significant improvements in key metrics, such as BLEU-1 and BERTScore precision (BERT P), for three out of four participants, demonstrating the method's effectiveness. We also show that combining data from multiple participants enhances decoder performance, with statistically significant improvements in BERT scores for two participants. Furthermore, we demonstrate significantly above-chance decoding accuracy for imagined speech versus resting conditions, and the brain regions activated during the imagined speech tasks in our study are consistent with previous findings on speech encoding. This study underscores the feasibility of continuous imagined speech decoding. By integrating high-density fNIRS with advanced AI techniques, we highlight the potential for accurate, non-invasive communication systems with AI in the near future.
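The abstract does not spell out the decoder architecture, but a common way to realise prompt tuning guided by brain signals is to train a small encoder that maps fNIRS features to "soft prompt" embeddings prepended to a frozen LLM's token embeddings. The sketch below illustrates that pattern with PyTorch and a Hugging Face-style causal LM; the encoder design and all names and dimensions (`BrainPromptEncoder`, `fnirs_dim`, `n_prompt_tokens`) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class BrainPromptEncoder(nn.Module):
    """Hypothetical encoder: maps a window of fNIRS features to a sequence
    of soft prompt embeddings in the LLM's embedding space. The MLP shape
    and prompt length are illustrative, not from the paper."""

    def __init__(self, fnirs_dim: int, n_prompt_tokens: int, llm_dim: int):
        super().__init__()
        self.n_prompt_tokens = n_prompt_tokens
        self.llm_dim = llm_dim
        self.proj = nn.Sequential(
            nn.Linear(fnirs_dim, 512),
            nn.GELU(),
            nn.Linear(512, n_prompt_tokens * llm_dim),
        )

    def forward(self, fnirs_features: torch.Tensor) -> torch.Tensor:
        # fnirs_features: (batch, fnirs_dim) -> (batch, n_prompt_tokens, llm_dim)
        batch = fnirs_features.shape[0]
        return self.proj(fnirs_features).view(
            batch, self.n_prompt_tokens, self.llm_dim
        )


def forward_with_soft_prompt(llm, encoder, fnirs_features, input_ids):
    """Run a frozen causal LM (e.g. a Hugging Face Llama 2 model) with
    brain-derived soft prompts prepended; only `encoder` is trained."""
    token_embeds = llm.get_input_embeddings()(input_ids)   # (B, T, D)
    prompt_embeds = encoder(fnirs_features)                # (B, P, D)
    inputs_embeds = torch.cat([prompt_embeds, token_embeds], dim=1)
    # During training, target labels for the P prompt positions would be
    # masked out (set to -100) so the loss covers only the text tokens.
    return llm(inputs_embeds=inputs_embeds)
```

Freezing the LLM and training only the prompt encoder keeps the number of trainable parameters small, which matters when each participant contributes only a limited amount of fNIRS data.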

Similar Work