Publications by authors named "Richard A Charter"

This 'proof of concept' study was implemented in anticipation of identifying and testing a novel antigen of human origin as a potential immunogen, in a paradigm that emphasizes immunomodulation and immune system reconstitution as requisites to the development of an effective human immunodeficiency virus (HIV)-acquired immune deficiency syndrome vaccine. Fifteen HIV-infected, highly active antiretroviral therapy (HAART)-naive, otherwise healthy male seropositive patients were stratified by CD4+ count into 3 groups of 5 patients: group 1, >500/mm³; group 2, >250/mm³ but <500/mm³; and group 3, <250/mm³. Five healthy male subjects served as controls.

The present article extends work on Ponterotto and Ruckdeschel's Reliability Matrix for estimating the adequacy of internal consistency measures. Specifically, it uses statistical tests to determine whether a calculated coefficient alpha is equal to or greater than the hypothesized population coefficient alpha identified in the Reliability Matrix. The Feldt, Woodruff, and Salih (1987) confidence interval test and Bonett's (2002) approximate z-test and N formula are applied.
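
Bonett's approximation treats ln(1 - alpha) as roughly normal with variance 2k/((k - 1)(n - 2)) for a k-item test and n examinees. A minimal sketch of the confidence-interval and z-test logic under that assumption (function names are illustrative, not the article's):

```python
import math

def bonett_ci(alpha_hat, n, k, z=1.96):
    """Approximate confidence interval for coefficient alpha.

    Assumes ln(1 - alpha_hat) is approximately normal with
    variance 2k / ((k - 1) * (n - 2)).
    """
    se = math.sqrt(2 * k / ((k - 1) * (n - 2)))
    log_lo = math.log(1 - alpha_hat) - z * se
    log_hi = math.log(1 - alpha_hat) + z * se
    # A larger ln(1 - alpha) means a *smaller* alpha, so the bounds swap.
    return 1 - math.exp(log_hi), 1 - math.exp(log_lo)

def bonett_z(alpha_hat, alpha0, n, k):
    """z statistic for H0: population alpha equals alpha0."""
    se = math.sqrt(2 * k / ((k - 1) * (n - 2)))
    return (math.log(1 - alpha0) - math.log(1 - alpha_hat)) / se

lo, hi = bonett_ci(0.85, n=100, k=10)
z = bonett_z(0.85, 0.70, n=100, k=10)
```

If the hypothesized population alpha falls below the lower confidence bound (equivalently, z exceeds the critical value), the obtained alpha is judged adequate for that criterion.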

Over 50 years ago Payne and Jones (1957) developed what has been labeled the traditional reliable difference formula that continues to be useful as a significance test for the difference between two test scores. The traditional reliable difference is based on the standard error of measurement (SEM) and has been updated to a confidence interval approach. As an alternative to the traditional reliable difference, this article presents the regression-based reliable difference that is based on the standard error of estimate (SEE) and estimated true scores.
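
The two approaches differ only in the error term and in whether observed or estimated true scores are compared. A sketch using the standard classical-test-theory forms, SEM = SD·sqrt(1 - r) and SEE = SD·sqrt(r(1 - r)), for two scores on a common scale (the article's exact formulation may differ in detail):

```python
import math

def sem(sd, rel):
    """Standard error of measurement."""
    return sd * math.sqrt(1 - rel)

def see(sd, rel):
    """Standard error of estimate for estimated true scores."""
    return sd * math.sqrt(rel * (1 - rel))

def estimated_true(x, mean, rel):
    """Regress an observed score toward the mean."""
    return mean + rel * (x - mean)

def reliable_difference(x1, x2, sd, rel1, rel2, z=1.96):
    """Traditional test: observed difference vs. SEM-based error band."""
    sed = math.sqrt(sem(sd, rel1) ** 2 + sem(sd, rel2) ** 2)
    return abs(x1 - x2) > z * sed

def regression_reliable_difference(x1, x2, mean, sd, rel1, rel2, z=1.96):
    """Regression-based test on estimated true scores (SEE-based)."""
    d = estimated_true(x1, mean, rel1) - estimated_true(x2, mean, rel2)
    sed = math.sqrt(see(sd, rel1) ** 2 + see(sd, rel2) ** 2)
    return abs(d) > z * sed

trad = reliable_difference(110, 95, 15, 0.90, 0.90)
regr = regression_reliable_difference(110, 95, 100, 15, 0.90, 0.90)
```

Because SEE is always smaller than SEM (for r > .5), the regression-based band is narrower, though the compared difference also shrinks with regression toward the mean.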

A comprehensive approach to the interpretation of difference scores is presented. Formulas for the test of statistical significance between two test scores, computed by a confidence interval, and for the calculation of the probabilities for the power of the statistical test, underinterpretation, overinterpretation, and misinterpretation are provided. Definitions and examples of their use in score interpretation are provided.

The author provides statistical approaches to aid investigators in assuring that sufficiently high test score reliabilities are achieved for specific research purposes. The statistical approaches use tests of statistical significance between the obtained reliability and lowest population reliability that an investigator will tolerate. The statistical approaches work for coefficient alpha and related coefficients and for alternate-forms, split-half (2-part alpha), and retest reliabilities.

Suppose one has a battery of K subtests and a composite for the battery is defined as the mean of the K standardized subtest scores. An individual's single-subtest deviation score is the difference between the individual's score on any single subtest and his composite score. A cluster deviation score is the difference between an examinee's average for a small set (cluster) of subtests and his composite.
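
The definitions above translate directly into code. A small sketch with illustrative numbers (scores assumed already standardized):

```python
# One examinee's standardized scores on K = 4 subtests
z_scores = [1.2, 0.4, -0.2, 0.6]

# Composite: mean of the K standardized subtest scores
composite = sum(z_scores) / len(z_scores)

# Single-subtest deviation score: one subtest minus the composite
single_deviations = [z - composite for z in z_scores]

# Cluster deviation score: mean of a small set of subtests minus the composite
cluster = [z_scores[0], z_scores[1]]          # a two-subtest cluster
cluster_deviation = sum(cluster) / len(cluster) - composite
```

Note that the single-subtest deviations always sum to zero, since each is a deviation from the mean of the same K scores.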

KR-21 provides a lower limit for the computed value of KR-20. KR-20 is equivalent to coefficient alpha when a test is composed of dichotomous items scored 0 or 1. Therefore, KR-21 coefficients, computed from simple summary statistics, can be used in cases in which journal authors do not provide the test score reliability.
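
KR-21 needs only the number of items, the mean, and the variance of total scores, which is what makes it usable when authors report summary statistics but no reliability:

```python
def kr21(k, mean, variance):
    """Kuder-Richardson formula 21 from summary statistics only.

    k: number of dichotomous (0/1) items; mean and variance are
    for total scores. KR-21 is a lower bound for KR-20, and hence
    for coefficient alpha, on dichotomous items.
    """
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * variance))

r = kr21(k=30, mean=20.0, variance=25.0)
```

The illustrative values (a 30-item test with mean 20 and variance 25) are hypothetical, not drawn from the article.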

Test score reliabilities and sample sizes (N) used to establish the reliabilities are described for a variety of tests constructed for African-American populations. The sample size was 341. The average internal consistency reliability was .

Confidence intervals are provided for the validity coefficients calculated by Veazey, et al. for the M-FAST. Two coefficients alpha are also presented along with suggestions for different approaches to calculating the M-FAST internal consistency reliability.
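
The standard way to place a confidence interval around a validity (correlation) coefficient is the Fisher z transform; whether the article used exactly this method is not stated here, but a sketch of the usual computation is:

```python
import math

def fisher_ci(r, n, z=1.96):
    """Confidence interval for a correlation via the Fisher z transform."""
    zr = math.atanh(r)              # 0.5 * ln((1 + r) / (1 - r))
    se = 1 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

lo, hi = fisher_ci(0.60, n=50)      # illustrative values
```

The interval is asymmetric around r because the transform stretches the scale near ±1.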

Criterion-referenced (Livingston) and norm-referenced (Gilmer-Feldt) techniques were used to measure the internal consistency reliability of Folstein's Mini-Mental State Examination (MMSE) on a large sample (N = 418) of elderly medical patients. Two administration and scoring variants of the MMSE Attention and Calculation section (Serial 7s only and WORLD only) were investigated. Livingston reliability coefficients (rs) were calculated for a wide range of cutoff scores.
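
Livingston's coefficient folds the distance between the mean and the cutoff into both numerator and denominator, so it depends on where the cutoff sits. A sketch of the commonly cited form (the MMSE-like numbers are illustrative, not the study's):

```python
def livingston(rel, sd, mean, cutoff):
    """Livingston's criterion-referenced reliability at a given cutoff.

    Equals the norm-referenced coefficient when the cutoff sits at the
    mean, and rises as the cutoff moves away from it.
    """
    d2 = (mean - cutoff) ** 2
    var = sd ** 2
    return (rel * var + d2) / (var + d2)

k2_at_mean = livingston(rel=0.80, sd=4.0, mean=27.0, cutoff=27.0)
k2_low_cut = livingston(rel=0.80, sd=4.0, mean=27.0, cutoff=23.0)
```

This is why criterion-referenced coefficients were calculated "for a wide range of cutoff scores": each cutoff yields its own coefficient.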

Coefficient alpha and an item analysis were calculated for the 16-item Benton Visual Form Discrimination Test (VFDT) using a heterogeneous sample (N = 293) of mostly elderly medical patients who were suspected of having cognitive impairment. The total score reliability was .74.
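
Coefficient alpha for an item-level data set like the 16-item VFDT reduces to comparing summed item variances against total-score variance. A self-contained sketch (the tiny data set is illustrative only):

```python
def coefficient_alpha(items):
    """Coefficient alpha from a list of item-score columns.

    items: list of k sequences, each holding one item's scores for
    all examinees. Uses sample variances (n - 1 denominator).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

a = coefficient_alpha([[1, 0, 1, 1], [1, 0, 1, 0], [1, 1, 1, 0]])
```

An item analysis would extend this by dropping each item in turn and recomputing alpha on the remaining columns.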

Part 1 presents the results of a meta-analytic study on the effects of aging on intelligence. Analysis of a total of 20 longitudinal samples shows that most of the intelligence scores rose before the age of 50 and fell at a progressively increasing rate after the age of 50. An equation describing this rise and fall in intelligence was derived.

Formulas requiring the computation of only three standard deviations are presented for computing the interjudge reliability coefficient for any number of judges. These formulas yield coefficients identical to those obtained from a one-way repeated-measures analysis of variance. Even researchers with small handheld calculators can use this simple approach.

Knight's 2003 analysis of the effect of the WAIS-III instructions on the Matrix Reasoning subtest was based on multiple t tests, in violation of conventional statistical procedures. With this procedure, significant differences were found between the group that knew the subtest was untimed and the group that did not know whether it was timed. Reanalysis of the data used three statistical alternatives: (a) a Bonferroni correction for all possible t tests, (b) one-way analysis of variance, and (c) selected t tests with the Bonferroni correction.

Formulae for combining reliability coefficients from any number of samples are provided. These formulae produce the exact reliability one would compute if one had the raw data from the samples. Needed are the sample means, standard deviations, sample sizes, and reliability coefficients.
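
One way to combine reliabilities exactly from summary statistics is to pool error variance and total variance across samples: between-sample mean differences inflate total (and true-score) variance but not error variance. A sketch under that logic, using n-weighted (biased) variances; the article's published formulas may differ in weighting details:

```python
def combine_reliabilities(ns, means, sds, rels):
    """Reliability of the pooled sample from per-sample summaries.

    Treats each sample's error variance as sd**2 * (1 - rel), pools
    error and total variance across samples, and returns
    1 - pooled error variance / pooled total variance.
    """
    big_n = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / big_n
    total_var = sum(
        n * (sd ** 2 + (m - grand_mean) ** 2)
        for n, m, sd in zip(ns, means, sds)
    ) / big_n
    error_var = sum(
        n * sd ** 2 * (1 - r) for n, sd, r in zip(ns, sds, rels)
    ) / big_n
    return 1 - error_var / total_var

same = combine_reliabilities([50, 50], [100, 100], [15, 15], [0.9, 0.9])
spread = combine_reliabilities([50, 50], [90, 110], [15, 15], [0.9, 0.9])
```

Two identical samples reproduce the common coefficient, while samples with different means yield a higher combined reliability, because the mean separation adds true-score variance.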

Several studies have investigated random responding to the F, F Back, and VRIN scales. Only one study attempted to provide practical cutoff scores for these scales, but was unable to reach definitive cutoffs. This study uses the normal approximation to the binomial distribution and provides confidence interval bounds for random responding at the 95, 90, and 85% levels for the F, F Back, and VRIN scales.
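
Under pure random responding, a raw score on an n-item scale is binomial, and the normal approximation gives interval bounds directly. A generic sketch (the 60-item scale and p = 0.5 endorsement rate are illustrative; VRIN-type pair scoring would use a different p):

```python
import math

def random_response_bounds(n_items, p=0.5, z=1.96):
    """Interval covering a random responder's raw score with the
    stated probability, via the normal approximation to the binomial."""
    mean = n_items * p
    sd = math.sqrt(n_items * p * (1 - p))
    return mean - z * sd, mean + z * sd

lo, hi = random_response_bounds(60)           # ~95% level
lo90, hi90 = random_response_bounds(60, z=1.645)   # ~90% level
```

The 95, 90, and 85% levels correspond to different z multipliers, which is how one table of cutoffs per confidence level arises.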

The author presented descriptive statistics for 937 reliability coefficients for various reliability methods (e.g., alpha) and test types (e.

This article highlights some dangers inherent in interpreting individual examinee strengths and weaknesses in WAIS-III and WISC-III profiles. In both manuals, there are tables providing point estimates for determining if a significant difference exists when comparing one subtest to the average of several subtests. However, these point estimates may lead to interpretation errors.

The poorly written administration and scoring instructions for the Boston Naming Test allow too wide a range of interpretations. Three different, seemingly correct interpretations of the scoring methods were compared. The results show that these methods can produce large differences in the total score.

In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively.

When the reliability of test scores must be estimated by an internal consistency method, partition of the test into just 2 parts may be the only way to maintain content equivalence of the parts. If the parts are classically parallel, the Spearman-Brown formula may be validly used to estimate the reliability of total scores. If the parts differ in their standard deviations but are tau equivalent, Cronbach's alpha is appropriate.
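
The two estimators are easy to compare side by side. With k = 2, coefficient alpha reduces to 4·cov / total variance, while Spearman-Brown steps up the half-test correlation; a sketch:

```python
def spearman_brown(r_half):
    """Full-test reliability from the half-test correlation
    (valid when the halves are classically parallel)."""
    return 2 * r_half / (1 + r_half)

def two_part_alpha(var1, var2, cov12):
    """Coefficient alpha for a two-part split (tau-equivalent parts).

    With k = 2, alpha reduces to 4 * cov12 / total variance,
    where the total variance is var1 + var2 + 2 * cov12.
    """
    return 4 * cov12 / (var1 + var2 + 2 * cov12)

equal = two_part_alpha(10.0, 10.0, 6.0)   # equal-variance halves
unequal = two_part_alpha(8.0, 12.0, 6.0)  # unequal-variance halves
```

When the halves have equal variances the two estimates agree (here the half-test correlation is 6/10 = .60, so Spearman-Brown also gives .75); with unequal variances alpha is the smaller, more defensible estimate.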

A reanalysis of the retest reliabilities for the Colored Progressive Matrices indicates that Kazlauskaite and Lynn's (2002) conclusions were not accurate.

The effectiveness of the MCMI-III Validity scale, Scale X, and the Clinical Personality Pattern scales to detect random responding is put to the test. The binomial expansion and Monte Carlo techniques were used. If the examiner is willing to interpret tests of questionable validity, then 50% of the random responders will not be detected.

Internal consistency reliabilities for the WMS-III Primary Indexes, Primary Index subtests, and Ability-Memory discrepancy scores are provided. The reliabilities ranged from .00 to .
