College students searched for the letter "a" in prose passages typed normally, with an asterisk (Experiments 1 and 2) or the letter "x" (Experiment 3) replacing every interword space, or with asterisks replacing only some of the interword spaces (Experiment 2). Contrary to predictions based on masking through lateral interference but consistent with predictions based on studies of eye movement monitoring and unitization, asterisks or instances of the letter "x" surrounding the word "a" actually made the letter "a" easier to detect in that word, but generally not in other words in the text. It is concluded that for very common words, reading units may extend beyond the word boundary to include the surrounding interword spaces.


Source
DOI: http://dx.doi.org/10.3758/bf03195847


Similar Publications

A neural machine translation method based on split graph convolutional self-attention encoding.

PeerJ Comput Sci

February 2024

School of Information Engineering, Fuyang Normal University, Fuyang, Anhui, China.

With the continuous advancement of deep learning technologies, neural machine translation (NMT) has emerged as a powerful tool for enhancing communication efficiency among members of cross-language collaborative teams. Among the various available approaches, leveraging syntactic dependency relations to improve translation performance has become a pivotal research direction. However, current studies often lack in-depth consideration of non-Euclidean spaces when exploring interword correlations, and fail to effectively address the model complexity arising from dependency relation encoding.


Quantum-inspired neural network with hierarchical entanglement embedding for matching.

Neural Netw

February 2025

School of Information Technology, Halmstad University, Halmstad, Sweden. Electronic address:

Quantum-inspired neural networks (QNNs) have shown potential in capturing various non-classical phenomena in language understanding, e.g., the emergent meaning of concept combinations, and represent a leap beyond conventional models in cognitive science.


The Effect of Visual Word Segmentation Cues in Tibetan Reading.

Brain Sci

September 2024

Plateau Brain Science Research Center, Tibet University, Lhasa 850000, China.

Background/objectives: In languages with within-word segmentation cues, the removal or replacement of these cues in a text hinders reading and lexical recognition, and adversely affects saccade target selection during reading. However, the outcome of artificially introducing visual word segmentation cues into a language that lacks them is unknown. Tibetan exemplifies a language that does not provide visual cues for word segmentation, relying solely on visual cues for morpheme segmentation.


It is well known that the Chinese writing system lacks visual cues for word boundaries, such as interword spaces. However, characters must be grouped into words or phrases for understanding, and the lack of interword spaces can cause certain ambiguity. In the current study, young and older Chinese adults' eye movements were recorded during their reading of naturally unspaced sentences, where consecutive words or nonwords were printed using alternating colors.


One difference among writing systems is how orthographic cues are used to demarcate words; although most alphabetic scripts use inter-word spaces, some Asian scripts do not explicitly mark word boundaries (e.g., Chinese).

