Publications by authors named "Tobias S Andersen"

Article Synopsis
  • Light-based gamma entrainment using sensory stimuli (GENUS) shows promise as a treatment for Alzheimer's disease (AD), but the discomfort from flickering lights may hinder patient adherence to the therapy.
  • A study investigates invisible spectral flicker (ISF), a less detectable type of flicker, and finds it significantly more comfortable and less distracting for users than traditional luminance flicker (LF).
  • While reducing brightness didn’t impact SSVEP responses, it improved comfort, suggesting that combining ISF with less direct stimulation could enhance the overall treatment experience for AD patients.

Background And Purpose: Perioperative 5-FU, leucovorin, oxaliplatin, and docetaxel (FLOT) is recommended in resectable esophagogastric adenocarcinoma based on randomised trials. However, the effectiveness of FLOT in routine clinical practice remains unknown, as randomised trials are subject to selection bias that limits their generalisability. The aim of this study was to evaluate the implementation of FLOT in real-world patients.


Background: Out-of-hospital seizure detection aims to provide clinicians and patients with objective seizure documentation in efforts to improve the clinical management of epilepsy. In-patient studies have found that combining different modalities improves seizure detection accuracy. In this study, the objective was to evaluate the viability of out-of-hospital seizure detection using wearable ECG, accelerometry and behind-the-ear electroencephalography (EEG).


There is broad interest in discovering quantifiable physiological biomarkers for psychiatric disorders to aid diagnostic assessment. However, finding biomarkers for autism spectrum disorder (ASD) has proven particularly difficult, partly due to high heterogeneity. Here, we recorded five minutes of eyes-closed resting-state electroencephalography (EEG) from 186 adults (51% with ASD and 49% without ASD) and investigated the potential of EEG biomarkers to classify ASD using three conventional machine learning models with two-layer cross-validation.
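A two-layer (nested) cross-validation scheme of the kind mentioned above can be sketched as follows. The features, labels, model, and hyperparameter grid below are all synthetic placeholders, not the study's actual pipeline; the point is only the structure of the two loops.

```python
# Hypothetical nested cross-validation sketch: the inner loop tunes
# hyperparameters, the outer loop estimates generalisation performance.
# Data, classifier choice, and grid are illustrative, not from the study.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(186, 20))        # 186 subjects x 20 EEG-derived features
y = rng.integers(0, 2, size=186)      # 1 = ASD, 0 = comparison group

inner = KFold(n_splits=5, shuffle=True, random_state=0)   # model selection
outer = KFold(n_splits=5, shuffle=True, random_state=1)   # performance estimate

model = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=inner)
scores = cross_val_score(model, X, y, cv=outer)
print(scores.mean())   # near chance on these random features
```

Because hyperparameters are chosen only on inner-loop folds, the outer-loop score is not optimistically biased by the tuning step.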


Post-traumatic stress disorder (PTSD) is highly heterogeneous, and identification of quantifiable biomarkers that could pave the way for targeted treatment remains a challenge. Most previous electroencephalography (EEG) studies on PTSD have been limited to specific handpicked features, and their findings have been highly variable and inconsistent.


Objective: To explore the possibilities of wearable multi-modal monitoring in epilepsy and to identify effective strategies for seizure-detection.

Methods: Thirty patients with suspected epilepsy admitted to video electroencephalography (EEG) monitoring were equipped with a wearable multi-modal setup capable of continuous recording of electrocardiography (ECG), accelerometry (ACM) and behind-the-ear EEG. A support vector machine (SVM) algorithm was trained for cross-modal automated seizure detection.
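A minimal sketch of such cross-modal SVM detection, assuming one row of summary features per analysis window; the heart-rate, movement, and band-power features and their synthetic seizure shifts below are invented stand-ins, not the study's feature set.

```python
# Sketch of multi-modal seizure detection with an SVM. One row per analysis
# window: [mean heart rate, accelerometry energy, EEG theta power]. The
# synthetic "seizure" windows get large shifts in all three modalities.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 200
ecg = rng.normal(70.0, 5.0, n)       # mean heart rate (bpm)
acm = rng.normal(0.1, 0.05, n)       # movement energy (a.u.)
eeg = rng.normal(1.0, 0.3, n)        # theta-band power (a.u.)
seizure = rng.random(n) < 0.1        # ~10% of windows contain a seizure
ecg[seizure] += 30; acm[seizure] += 0.5; eeg[seizure] += 1.5

X = np.column_stack([ecg, acm, eeg])
y = seizure.astype(int)

# Balanced class weights compensate for the rarity of seizure windows.
clf = make_pipeline(StandardScaler(), SVC(class_weight="balanced"))
clf.fit(X[:150], y[:150])
acc = (clf.predict(X[150:]) == y[150:]).mean()
print(acc)   # accuracy on held-out windows
```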


Gaze patterns during face perception have been shown to relate to psychiatric symptoms. Standard analysis of gaze behavior includes calculating fixations within arbitrarily predetermined areas of interest. In contrast to this approach, we present an objective, data-driven method for the analysis of gaze patterns and their relation to diagnostic test scores.


Speech is perceived with both the ears and the eyes. Adding congruent visual speech improves the perception of a faint auditory speech stimulus, whereas adding incongruent visual speech can alter the perception of the utterance. The latter phenomenon is exemplified by the McGurk illusion, where an auditory stimulus such as e.


Speech perception is influenced by vision through a process of audiovisual integration. This is demonstrated by the McGurk illusion where visual speech (for example /ga/) dubbed with incongruent auditory speech (such as /ba/) leads to a modified auditory percept (/da/). Recent studies have indicated that perception of the incongruent speech stimuli used in McGurk paradigms involves mechanisms of both general and audiovisual speech specific mismatch processing and that general mismatch processing modulates induced theta-band (4-8 Hz) oscillations.


Background: We propose rigorously optimised supervised feature extraction methods for multilinear data based on Multilinear Discriminant Analysis (MDA) and demonstrate their usage on electroencephalography (EEG) and simulated data. While existing MDA methods use heuristic optimisation procedures based on an ambiguous Tucker structure, we propose a rigorous approach via optimisation on the cross-product of Stiefel manifolds. We also introduce MDA methods with the PARAFAC structure.
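As a rough illustration of what optimisation on a Stiefel manifold involves (not the paper's MDA objective), here is Riemannian gradient descent with a QR retraction on a toy trace criterion; the objective, sizes, and step size are all invented for the sketch.

```python
# Toy Riemannian optimisation of trace(X^T A X) over matrices with
# orthonormal columns (a Stiefel manifold). The objective is illustrative;
# the MDA objectives in the paper are more involved.
import numpy as np

def retract_qr(X):
    """Map a full-rank matrix back onto the Stiefel manifold via QR."""
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))   # fix column signs for uniqueness

def stiefel_step(X, grad, lr=0.05):
    """Project the Euclidean gradient onto the tangent space, step, retract."""
    sym = (X.T @ grad + grad.T @ X) / 2.0
    tangent = grad - X @ sym
    return retract_qr(X - lr * tangent)

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); A = A @ A.T         # toy symmetric PSD matrix
X = retract_qr(rng.normal(size=(6, 2)))          # random start on the manifold
for _ in range(200):
    X = stiefel_step(X, -2.0 * A @ X)            # gradient of -trace(X^T A X)
print(np.trace(X.T @ A @ X))                     # bounded by the top eigenvalues of A
```

The retraction guarantees that the iterate keeps orthonormal columns after every step, which is the constraint the heuristic Tucker-based procedures only handle implicitly.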


Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring event-related potentials (ERPs). We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger.


Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc, and difficult to interpret.
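For reference, the FLMP's response rule for two alternatives is multiplicative: with auditory support a and visual support v for one alternative (each in [0, 1]), the predicted response probability is av / (av + (1-a)(1-v)). A one-line sketch with made-up support values:

```python
# The two-alternative FLMP response rule: auditory and visual "truth values"
# are multiplied and renormalised. The numbers below are illustrative.
def flmp_p(a, v):
    """Predicted response probability given auditory support a and visual support v."""
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

print(flmp_p(0.9, 0.5))   # 0.9: a neutral visual cue leaves auditory evidence unchanged
print(flmp_p(0.8, 0.8))   # two moderately supportive cues reinforce each other
```

The flexibility criticism stems partly from this form: with freely estimated support values per stimulus, the rule can accommodate almost any response pattern.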


Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception.

Article Synopsis
  • Facial configuration is crucial for perceiving identity and expression from faces, and it influences visual speech perception, particularly when faces are upright, as seen in the Thatcher effect.
  • The McThatcher effect illustrates how the Thatcherization of faces disrupts the McGurk illusion, which shows how visual speech can affect auditory speech perception.
  • The study found that Thatcherization impacts the strength of the McGurk illusion and its corresponding auditory response (McGurk-MMN) primarily for upright faces, suggesting that a stronger visual cue is necessary for it to influence auditory perception.

In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural components and five types of nonneural components.
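The combination of forward selection with a multinomial classifier can be sketched as below; the synthetic ten-feature data and six-class labels are illustrative stand-ins for the 65 spatial, spectral, and temporal component features, and the greedy criterion (cross-validated accuracy) is an assumption of the sketch.

```python
# Illustrative forward feature selection wrapped around multinomial logistic
# regression, in the spirit of the component classifier described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                 # 300 components x 10 features
y = (X[:, 0] * 3 + X[:, 3]).astype(int) % 6    # synthetic six-class labels

def forward_select(X, y, k):
    """Greedily add the feature that most improves cross-validated accuracy."""
    chosen = []
    for _ in range(k):
        best_j, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, chosen + [j]], y, cv=3).mean()
            if score > best_score:
                best_j, best_score = j, score
        chosen.append(best_j)
    return chosen

print(forward_select(X, y, 3))   # indices of the three selected features
```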


Mobile brain imaging solutions, such as the Smartphone Brain Scanner, which combines low-cost wireless EEG sensors with open source software for real-time neuroimaging, may transform neuroscience experimental paradigms. Normally subject to the physical constraints of labs, neuroscience experimental paradigms can be transformed into dynamic environments allowing for the capture of brain signals in everyday contexts. Using smartphones or tablets to access text or images may enable experimental designs capable of tracing emotional responses when shopping or consuming media, incorporating sensorimotor responses reflecting our actions into brain-machine interfaces, and facilitating neurofeedback training over extended periods.


Pure alexia is a selective deficit in reading, following lesions to the posterior left hemisphere. Writing and other language functions remain intact in these patients. Whether pure alexia is caused by a primary problem in visual perception is highly debated.


The psychometric function of single-letter identification is typically described as a function of stimulus intensity. However, the effect of stimulus exposure duration on letter identification remains poorly described. This is surprising because the effect of exposure duration has played a central role in modeling performance in whole and partial report (Shibuya & Bundesen, 1988).
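One common way to model this duration dependence, for instance in Bundesen's Theory of Visual Attention tradition, is an exponential approach to ceiling above a threshold duration; the rate and threshold values below are made up for illustration.

```python
# Exponential model of letter-identification probability as a function of
# exposure duration t (ms): p(t) = 1 - exp(-v * (t - t0)) for t > t0.
# v (processing rate, 1/ms) and t0 (threshold, ms) are illustrative values.
import math

def p_identify(t, v=0.05, t0=20.0):
    """Probability of identifying a letter after t ms of exposure."""
    if t <= t0:
        return 0.0
    return 1.0 - math.exp(-v * (t - t0))

print(round(p_identify(80), 3))   # 0.95: near-ceiling after 80 ms
```

Under this model, intensity and duration play distinct roles: intensity would shift the rate parameter v, while t0 captures the minimum effective exposure.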


Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects.


A change in sound intensity can facilitate luminance change detection. We found that this effect did not depend on whether sound intensity and luminance increased or decreased. In contrast, luminance identification was strongly influenced by the congruence of luminance and sound intensity change leaving only unsigned stimulus transients as the basis for audiovisual integration.


Maximum likelihood models of multisensory integration are theoretically attractive because the goals and assumptions of sensory information processing are explicitly stated in such optimal models. When subjects perceive stimuli categorically, as opposed to on a continuous scale, Maximum Likelihood Integration (MLI) can occur either before categorization (early) or after it (late). We introduce early MLI and apply it to the audiovisual perception of rapid beeps and flashes.
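The optimal fusion rule at the heart of MLI is inverse-variance weighting of the unimodal estimates: the combined estimate is a precision-weighted average whose variance is lower than either cue alone. The beep/flash numbers below are invented for illustration.

```python
# Maximum-likelihood fusion of two Gaussian cues: the fused estimate is a
# precision-weighted average, and the fused variance is reduced below both
# unimodal variances. Stimulus values and variances are illustrative.
def mli_fuse(x_a, var_a, x_v, var_v):
    """Fuse auditory (x_a, var_a) and visual (x_v, var_v) estimates."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    fused = w_a * x_a + (1.0 - w_a) * x_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return fused, var

# Audition reports 4 beeps with low variance; vision reports 2 flashes noisily:
est, var = mli_fuse(4.0, 0.25, 2.0, 1.0)
print(round(est, 3), round(var, 3))   # pulled toward the more reliable auditory cue
```

Early MLI applies this fusion to the continuous internal representations before categorization; late MLI applies an analogous combination after each modality has been categorized.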


In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli.


Information processing in auditory and visual modalities interacts in many circumstances. Spatially and temporally coincident acoustic and visual information are often bound together to form multisensory percepts.
