Recent advances in deep learning have allowed artificial intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, and cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. Global Workspace Theory (GWT) posits a large-scale system that integrates and distributes information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep-learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a single, amodal Global Latent Workspace (GLW). Potential functional advantages of the GLW are reviewed, along with its neuroscientific implications.
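The central technical proposal is to link pre-trained, modality-specific latent spaces through learned translations into a shared workspace, trained without paired supervision, for example with cycle-consistency objectives in the spirit of unsupervised machine translation. The sketch below is purely illustrative of that idea under those assumptions, not code from the paper: the `LatentTranslator` module, the `cycle_losses` objective, and all dimensions are hypothetical choices, and the random latents stand in for outputs of frozen pre-trained networks.

```python
import torch
import torch.nn as nn

class LatentTranslator(nn.Module):
    """Maps one module's latent space into the shared workspace and back.

    Hypothetical component: each pre-trained module (vision, language, ...)
    gets its own translator; only the translators are trained.
    """
    def __init__(self, latent_dim: int, workspace_dim: int):
        super().__init__()
        self.to_workspace = nn.Sequential(
            nn.Linear(latent_dim, workspace_dim), nn.ReLU(),
            nn.Linear(workspace_dim, workspace_dim),
        )
        self.from_workspace = nn.Sequential(
            nn.Linear(workspace_dim, workspace_dim), nn.ReLU(),
            nn.Linear(workspace_dim, latent_dim),
        )

def cycle_losses(trans_a, trans_b, z_a, z_b):
    """Unsupervised objectives linking two latent spaces (assumed setup):

    - within-domain cycle: z_a -> workspace -> z_a (likewise for b)
    - across-domain cycle: z_a -> workspace -> z_b' -> workspace -> z_a,
      requiring no paired (a, b) training samples.
    """
    mse = nn.functional.mse_loss
    w_a = trans_a.to_workspace(z_a)
    w_b = trans_b.to_workspace(z_b)
    # Decoding back from the workspace should recover the original latents.
    within = (mse(trans_a.from_workspace(w_a), z_a)
              + mse(trans_b.from_workspace(w_b), z_b))
    # Translate a -> b -> a through the workspace and demand consistency.
    z_b_fake = trans_b.from_workspace(w_a)
    w_back = trans_b.to_workspace(z_b_fake)
    across = mse(trans_a.from_workspace(w_back), z_a)
    return within + across

if __name__ == "__main__":
    # Toy usage: random vectors stand in for frozen-module latents.
    vision_dim, language_dim, workspace_dim = 128, 64, 256
    trans_v = LatentTranslator(vision_dim, workspace_dim)
    trans_l = LatentTranslator(language_dim, workspace_dim)
    opt = torch.optim.Adam(
        list(trans_v.parameters()) + list(trans_l.parameters()), lr=1e-3)
    z_v = torch.randn(32, vision_dim)    # e.g., frozen vision-model latents
    z_l = torch.randn(32, language_dim)  # e.g., frozen language-model latents
    loss = cycle_losses(trans_v, trans_l, z_v, z_l)
    loss.backward()
    opt.step()
    print(f"cycle loss: {loss.item():.4f}")
```

A fuller implementation along these lines would also need some gating or attentional mechanism to decide which module's latent vector gains access to the workspace at a given moment, echoing the competition-and-broadcast dynamics that GWT describes; the sketch above covers only the translation objective.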
DOI: http://dx.doi.org/10.1016/j.tins.2021.04.005