Publications by authors named "Keyan Bi"

Motivation: Cell membrane segmentation in electron microscopy (EM) images is a crucial step in EM image processing. However, while popular approaches have achieved performance comparable to that of humans on low-resolution EM datasets, they have shown limited success when applied to high-resolution EM datasets. The human visual system, on the other hand, performs consistently well at both low and high resolutions.


The belief that learning can be modulated by social context is mainly supported by studies of high-level, value-based learning. However, whether social context can also modulate low-level learning, such as visual perceptual learning (VPL), is still unknown. Unlike traditional VPL studies, in which participants were trained individually, here we developed a novel dyadic VPL paradigm in which paired participants were trained on the same orientation discrimination task and could monitor each other's performance.


Recently, a theory (Zhaoping, Vision Research, 136, 32-49, 2017) proposed that top-down feedback from higher to lower visual cortical areas, which aids visual recognition, is stronger in the central than in the peripheral visual field. Since top-down feedback supports feature binding, a critical process in visual recognition, this theory predicts that insufficient feedback in the periphery should make feature misbinding more likely. To test this prediction, this study assessed binding between color and motion features, or between luminance and motion features, at different visual field eccentricities.


Background: Transcranial alternating current stimulation (tACS) has been widely used to alter ongoing brain rhythms in a frequency-specific manner and thereby modulate relevant cognitive functions, including visual functions. It is therefore a useful tool for exploring the causal role of neural oscillations in cognition. Visual functions can also be improved substantially by training, a process known as visual perceptual learning (VPL).
