Human-computer Interaction for Brain-inspired Computing Based on Machine Learning And Deep Learning: A Review
This is the repository of HCI for brain-inspired-computing which offers a thorough review of the current state of research concerning the application of brain-to-text(speech) models.
Feel free to contact us or open a pull request if you find any related papers that are not included here.
The continuous development of artificial intelligence has a profound impact on biomedical research and other fields. Brain-inspired computing is an important intersection of multimodal technology and the biomedical field. This paper presents a comprehensive review of machine learning (ML) and deep learning (DL) models applied in human-computer interaction for brain-inspired computing, tracing their evolution, application value, challenges, and potential research directions. First, the basic concepts and development history are reviewed, and the evolution is divided into two stages: earlier machine learning and current deep learning, emphasizing the importance of each stage to the state of research on human-computer interaction for brain-inspired computing. In addition, the latest progress and key techniques of deep learning across different tasks in human-computer interaction for brain-inspired computing are introduced from six perspectives. Despite significant progress, challenges remain in making full use of these models' capabilities. This paper aims to provide a comprehensive review of machine learning and deep learning based models for human-computer interaction in brain-inspired computing, highlighting their potential in various applications and providing a valuable reference for future academic research.
If you find our work useful in your research, please consider citing:
@misc{yu2024humancomputer,
      title={Human-computer Interaction for Brain-inspired Computing Based on Machine Learning And Deep Learning: A Review},
      author={Bihui Yu and Sibo Zhang and Lili Zhou and Jingxuan Wei and Linzhuang Sun and Liping Bu},
      year={2024},
      eprint={2312.07213},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
| Paper | Published in |
|---|---|
| ZuCo, A Simultaneous EEG and Eye-tracking Resource for Natural Sentence Reading | Scientific Data 2018 |
| ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation | LREC 2020 |
| Predicting Human Brain Activity Associated with The Meanings of Nouns | Science 2008 |
| Toward A Universal Decoder of Linguistic Meaning from Brain Activation | Nature Communications 2018 |
| Paper | Published in |
|---|---|
| Automatic Speech Activity Recognition from MEG Signals Using Seq2seq Learning | IEEE NER 2019 |
| Decoding Speech from Single Trial MEG Signals Using Convolutional Neural Networks and Transfer Learning | EMBC 2019 |
| Decoding Imagined and Spoken Phrases from Non-invasive Neural (MEG) Signals | Frontiers in Neuroscience 2020 |
| MEG Sensor Selection for Neural Speech Decoding | IEEE Access 2020 |
| Decoding Speech Perception from Non-invasive Brain Recordings | Nature Machine Intelligence 2023 |
| Paper | Published in |
|---|---|
| Machine Translation of Cortical Activity to Text with An Encoder-decoder Framework | Nature Neuroscience 2020 |
| Brain2Char: A Deep Architecture for Decoding Text from Brain Recordings | Journal of Neural Engineering 2020 |
| A Neural Speech Decoding Framework Leveraging Deep Learning and Speech Synthesis | bioRxiv 2023 |
| Direct Speech Reconstruction from Sensorimotor Brain Activity with Optimized Deep Learning Models | Journal of Neural Engineering 2023 |
| Synthesizing Speech from ECoG with A Combination of Transformer-based Encoder and Neural Vocoder | ICASSP 2023 |