Publications by authors named "Ilya Kuzovkin"

The human brain has developed mechanisms to efficiently decode sensory information according to perceptual categories of high prevalence in the environment, such as faces, symbols, and objects. Neural activity produced within localized brain networks has been associated with the process that integrates both sensory bottom-up and cognitive top-down information processing. Yet it remains unknown how specifically the different types and components of neural responses reflect the local networks' selectivity for categorical information processing.

Objective: Numerous studies in the area of BCI are focused on the search for a better experimental paradigm: a set of mental actions that a user can evoke consistently and that a machine can discriminate reliably. Examples of such mental activities are motor imagery, mental computations, etc. We propose a technique that instead allows the user to try different mental actions and find the ones that work best.
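A minimal sketch of how such candidate mental actions could be ranked offline is given below: each action is scored by how reliably a simple classifier separates it from a rest condition. The feature arrays, the LDA classifier, and the ranking criterion are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: score each candidate mental action by how reliably
# a simple classifier separates it from a rest condition (all names are assumptions).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def rank_mental_actions(features_by_action, rest_features, n_folds=5):
    """features_by_action: dict mapping action name -> (n_trials, n_features) array;
    rest_features: (n_trials, n_features) array recorded during rest."""
    scores = {}
    for action, X_action in features_by_action.items():
        X = np.vstack([X_action, rest_features])
        y = np.concatenate([np.ones(len(X_action)), np.zeros(len(rest_features))])
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=n_folds).mean()
        scores[action] = acc
    # Actions the machine discriminates most reliably come first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```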

Objective: In this work, a classification method for a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) is proposed. The method is based on maximisation of the information transfer rate (ITR).

Approach: The proposed classification method uses features extracted by traditional SSVEP-based BCI methods and finds optimal discrimination thresholds for each feature to classify the targets.
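A hedged sketch of such a threshold search is shown below, using the standard Wolpaw formula for ITR. The single-feature setup, the trial-rejection scheme, and the timing model are assumptions made for illustration rather than the paper's exact procedure.

```python
# Hedged sketch: pick the decision threshold on one feature that maximises ITR.
# The feature stands in for e.g. a CCA correlation with one SSVEP target.
import numpy as np

def wolpaw_itr(accuracy, n_targets, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_targets
    if p <= 1.0 / n:
        return 0.0
    bits = np.log2(n)
    if p < 1.0:
        bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

def best_threshold(feature_values, is_correct, n_targets, trial_seconds):
    """feature_values, is_correct: numpy arrays over trials. Trials below the
    threshold are discarded, which raises accuracy but stretches the average
    time per accepted selection."""
    best = (0.0, None)
    for thr in np.unique(feature_values):
        accepted = feature_values >= thr
        if accepted.sum() == 0:
            continue
        acc = is_correct[accepted].mean()
        avg_time = trial_seconds / accepted.mean()  # time wasted on rejected trials
        itr = wolpaw_itr(acc, n_targets, avg_time)
        if itr > best[0]:
            best = (itr, thr)
    return best  # (ITR in bits/min, threshold)
```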

Recent advances in the field of artificial intelligence have revealed principles about neural processing, in particular about vision. Previous work demonstrated a direct correspondence between the hierarchy of the human visual areas and the layers of deep convolutional neural networks (DCNNs) trained on visual object recognition. We use a DCNN to investigate which frequency bands correlate with feature transformations of increasing complexity along the ventral visual pathway.
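One common way to relate network layers to band-limited neural responses is representational similarity analysis; the sketch below assumes precomputed DCNN layer activations and band-power responses to the same set of images, and its function names and RDM construction are illustrative, not the paper's exact analysis.

```python
# Hedged sketch: correlate DCNN layer representations with band-limited neural
# responses via representational similarity analysis (RSA). Inputs are assumed
# to be responses of each layer / each frequency band to the same images.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed representational dissimilarity matrix; responses: (n_images, n_units)."""
    return pdist(responses, metric='correlation')

def layer_band_correlations(layer_activations, band_power):
    """layer_activations: dict layer -> (n_images, n_units) array;
    band_power: dict band -> (n_images, n_channels) array; same images in both."""
    table = {}
    for layer, acts in layer_activations.items():
        for band, power in band_power.items():
            rho, _ = spearmanr(rdm(acts), rdm(power))
            table[(layer, band)] = rho
    # Higher rho: that frequency band tracks that stage of the processing hierarchy.
    return table
```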

Cooperation and competition can evolve when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multi-agent environments to investigate the interaction between two learning agents in the well-known video game Pong.
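A compact sketch of an independent-DQN setup is given below: two agents, each with its own Q-network and replay memory, act in a shared two-player environment and learn only from pixels. The network shape, hyperparameters, and the environment interface are assumptions, and the target network and frame preprocessing of the full DQN pipeline are omitted for brevity.

```python
# Hedged sketch: two independent DQN agents sharing one environment.
# Frames are assumed to be preprocessed to 4x84x84 tensors; the target network
# used by the original DQN is omitted to keep the sketch short.
import random
from collections import deque
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions))

    def forward(self, x):
        return self.body(x)

class Agent:
    def __init__(self, n_actions, lr=1e-4, gamma=0.99):
        self.q, self.gamma, self.n_actions = QNet(n_actions), gamma, n_actions
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.memory = deque(maxlen=100_000)

    def act(self, frame, epsilon):
        # Epsilon-greedy action selection from the shared screen observation.
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(frame.unsqueeze(0)).argmax())

    def store(self, s, a, r, s2, done):
        self.memory.append((s, torch.tensor(a), torch.tensor(float(r)),
                            s2, torch.tensor(float(done))))

    def learn(self, batch_size=32):
        # One-step TD update on a random minibatch from this agent's own replay memory.
        if len(self.memory) < batch_size:
            return
        s, a, r, s2, done = map(torch.stack, zip(*random.sample(self.memory, batch_size)))
        target = r + self.gamma * self.q(s2).max(1).values * (1 - done)
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        loss = nn.functional.smooth_l1_loss(q_sa, target.detach())
        self.opt.zero_grad(); loss.backward(); self.opt.step()

# Each agent stores only its own transitions: with zero-sum rewards the pair drifts
# towards competition; with a shared penalty for dropped balls, towards cooperation.
```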
