Language reconstruction from non-invasive brain recordings has been a long-standing challenge. Existing research has addressed this challenge with a classification setup, where a set of language candidates are pre-constructed and then matched with the representation decoded from brain recordings. Here, we propose a method that addresses language reconstruction through auto-regressive generation, which directly uses the representation decoded from functional magnetic resonance imaging (fMRI) as the input for a large language model (LLM), eliminating the need for pre-constructed candidates.
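As a rough sketch of the contrast described above (all names, dimensions, and vectors here are hypothetical toys, not the paper's actual pipeline), the classification setup matches a decoded vector against a pre-constructed candidate set, whereas the generative setup would hand the decoded vector to an LLM directly:

```python
import math
import random

random.seed(0)

def rand_vec(n=8):
    # Toy stand-in for a decoded or candidate representation.
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical representation decoded from fMRI (toy dimensionality).
decoded = rand_vec()

# Classification setup: match the decoded vector against a
# pre-constructed candidate set and pick the closest sentence.
candidates = {f"sentence_{i}": rand_vec() for i in range(5)}
best = max(candidates, key=lambda k: cosine(decoded, candidates[k]))

# Generative setup (sketch only): instead of matching, the decoded
# vector would be projected into the LLM's embedding space and used
# as a generation prefix (e.g. via an `inputs_embeds`-style argument),
# so no candidate set has to be built in advance.
```

The key difference the abstract highlights is that the candidate dictionary above disappears entirely in the generative formulation.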
IEEE Trans Neural Netw Learn Syst, July 2023
Hierarchical context modeling plays an important role in the response generation for multi-turn conversational systems. Previous methods mainly model context as multiple independent utterances and rely on attention mechanisms to obtain the context representation. They tend to ignore the explicit responds-to relationships between adjacent utterances and the special role that the user's latest utterance (the query) plays in determining the success of a conversation.
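A minimal illustration of the attention-based context modeling the abstract refers to (this is a generic attention pooling sketch with invented toy vectors, not the paper's model): each context utterance is scored against the user's latest utterance, which acts as the attention query, and the context representation is their weighted sum.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(utterances, query):
    # Score each context utterance against the query (dot product),
    # then build the context representation as the weighted sum.
    scores = [sum(q * u for q, u in zip(query, utt)) for utt in utterances]
    weights = softmax(scores)
    dim = len(query)
    return [sum(w * utt[d] for w, utt in zip(weights, utterances))
            for d in range(dim)]

# Toy 3-turn context plus the user's latest utterance (the query).
context = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
query = [1.0, 0.0]
ctx_repr = attention_pool(context, query)
```

Note that this baseline treats the utterances as independent, which is exactly the limitation the abstract criticizes: nothing in the scoring encodes the responds-to relationship between adjacent turns.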
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high-dimensional non-linear space. However, most of the basis function-based algorithms are time-consuming.
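To make the basis-function formulation concrete, here is a hedged toy sketch (RBF centers, weights, and thresholds are all invented for illustration; the ordered-thresholds variant shown is one common OR formulation, not necessarily the one this abstract studies):

```python
import math

CENTERS = [-1.0, 0.0, 1.0]

def basis(x):
    # Map a scalar feature into a non-linear space via RBF basis functions.
    return [math.exp(-(x - c) ** 2) for c in CENTERS]

def score(x, weights):
    # Ordinal score as a linear combination of the basis functions.
    return sum(w * p for w, p in zip(weights, basis(x)))

def predict(x, weights, thresholds):
    # Ordered thresholds carve the score axis into ordinal categories,
    # so the ordering between categories is preserved by construction.
    s = score(x, weights)
    return sum(1 for t in thresholds if s > t)

weights = [0.2, 0.5, 1.0]
thresholds = [0.4, 0.8]  # three ordinal categories: 0 < 1 < 2
label = predict(1.0, weights, thresholds)
```

The cost the abstract points to comes from evaluating every basis function per sample; with many centers or kernels this is the expensive step such algorithms try to speed up.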