Publications by authors named "Peter Bentler"

Climate change is a critical issue of our time, and its causes, pathways, and forecasts remain a topic of broad discussion. In this paper, we present a novel data-driven pathway analysis framework to identify the key processes behind mean global temperature and sea level rise, and to forecast the magnitude of their increase from the present to 2100. Based on historical data and dynamic statistical modeling alone, we have established the causal pathways that connect increasing greenhouse gas emissions to increasing global mean temperature and sea level, with intermediate links encompassing humidity, sea ice coverage, and glacier mass, but not sunspot numbers.

Alpha, FACTT, and Beyond.

Psychometrika

December 2021

Sijtsma and Pfadt (Psychometrika, 2021) provide a wide-ranging defense for the use of coefficient alpha. Alpha is practical and useful when its limitations are acceptable. This paper discusses several methodologies for reliability, some new here, that go beyond alpha and were not emphasized by Sijtsma and Pfadt.
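Coefficient alpha itself is simple to compute from an item-score matrix. The sketch below is illustrative only (the data and function name are hypothetical, not code from the paper); it implements the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the sum score):

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an n-by-k matrix of item scores."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Three positively correlated items sharing a common "true score" t.
rng = np.random.default_rng(0)
t = rng.normal(size=500)
X = np.column_stack([t + rng.normal(size=500) for _ in range(3)])
print(round(cronbach_alpha(X), 2))
```

With these simulated items (true-score and error variance both 1), the population value of alpha is 0.75, so the printed estimate should fall near that.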

In real data analysis with structural equation modeling, data are unlikely to be exactly normally distributed. If we ignore this non-normality, the parameter estimates, standard error estimates, and model fit statistics from normal-theory-based methods such as maximum likelihood (ML) and normal-theory generalized least squares (GLS) estimation are unreliable. On the other hand, the asymptotically distribution free (ADF) estimator does not rely on any distributional assumption but cannot demonstrate its efficiency advantage with small and modest sample sizes.
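The normal-theory ML methods mentioned here minimize the discrepancy F(S, Σ) = log|Σ| + tr(S Σ⁻¹) − log|S| − p between the sample covariance S and the model-implied Σ(θ). A minimal numpy sketch, with hypothetical one-factor loadings chosen only for illustration:

```python
import numpy as np

def f_ml(S, Sigma):
    """Normal-theory ML discrepancy: log|Sigma| + tr(S Sigma^-1) - log|S| - p."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

# One-factor model: Sigma = lambda lambda' + diag(psi), illustrative values.
lam = np.array([0.8, 0.7, 0.6])
psi = 1.0 - lam**2
Sigma = np.outer(lam, lam) + np.diag(psi)

# When S equals the model-implied Sigma, the discrepancy is exactly zero.
print(f_ml(Sigma, Sigma))
```

The discrepancy is nonnegative and vanishes only when S = Σ, which is what makes it a fitting criterion; its behavior under non-normal data is precisely what the ADF estimator avoids assuming.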

Purpose: Black dialysis patients report better health-related quality of life (HRQOL) than White patients, which may be explained if Black and White patients respond systematically differently to HRQOL survey items.

Methods: We examined differential item functioning (DIF) of the Kidney Disease Quality of Life 36-item (KDQOL-36) Burden of Kidney Disease, Symptoms and Problems with Kidney Disease, and Effects of Kidney Disease scales between Black (n = 18,404) and White (n = 21,439) dialysis patients. We fit multiple group confirmatory factor analysis models with increasing invariance: a Configural model (invariant factor structure), a Metric model (invariant factor loadings), and a Scalar model (invariant intercepts).

Background: The Centers for Medicare & Medicaid Services require that dialysis patients' health-related quality of life be assessed annually. The primary instrument used for this purpose is the Kidney Disease Quality of Life 36-Item Short-Form Survey (KDQOL-36), which includes the SF-12 as its generic core and 3 kidney disease-targeted scales: Burden of Kidney Disease, Symptoms and Problems of Kidney Disease, and Effects of Kidney Disease. Despite its broad use, there has been limited evaluation of KDQOL-36's psychometric properties.

Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics.

Internal consistency reliability coefficients based on classical test theory, such as α, ω, λ₄, model-based ρ, and the greatest lower bound, are computed as ratios of estimated common variance to total variance. They omit specific variance. As a result, they are downward biased and may fail to predict external criteria (McCrae et al.
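The "ratio of common to total variance" structure, and the downward bias from omitting specific variance, can be seen in a small numerical sketch (all variance components below are hypothetical illustrations, not the paper's estimators):

```python
import numpy as np

# Illustrative one-factor decomposition: total = common + specific + error.
lam = np.array([0.7, 0.7, 0.7, 0.7])    # common-factor loadings
spec = np.array([0.2, 0.2, 0.2, 0.2])   # specific variances (reliable, not common)
err = 1.0 - lam**2 - spec               # pure measurement-error variances

# Coefficient omega counts only common variance in the numerator.
omega = lam.sum()**2 / (lam.sum()**2 + spec.sum() + err.sum())

# A coefficient crediting specific variance as reliable would be larger.
omega_plus_specific = (lam.sum()**2 + spec.sum()) / (lam.sum()**2 + spec.sum() + err.sum())

print(round(omega, 3), round(omega_plus_specific, 3))
```

The gap between the two printed values is exactly the specific variance that classical common-variance coefficients leave out of the numerator.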

Asymptotically optimal correlation structure methods with binary data can break down in small samples. A new correlation structure methodology based on a recently developed odds-ratio (OR) approximation to the tetrachoric correlation coefficient is proposed as an alternative to the LPB approach proposed by Lee et al. (1995).
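One classical odds-ratio approximation to the tetrachoric correlation is Digby's (1983) formula r ≈ (OR^{3/4} − 1)/(OR^{3/4} + 1). Whether this is the exact approximation developed in the paper is not stated in the abstract, so treat the sketch as illustrative:

```python
import numpy as np

def digby_tetrachoric(table):
    """Digby's (1983) odds-ratio approximation to the tetrachoric correlation.
    `table` is a 2x2 array of cell counts [[a, b], [c, d]]."""
    (a, b), (c, d) = np.asarray(table, dtype=float)
    odds_ratio = (a * d) / (b * c)
    w = odds_ratio ** 0.75
    return (w - 1.0) / (w + 1.0)

# Independence (OR = 1) gives r = 0; strong association gives r near 1.
print(digby_tetrachoric([[25, 25], [25, 25]]))   # 0.0
print(round(digby_tetrachoric([[45, 5], [5, 45]]), 2))   # 0.93
```

Unlike the full tetrachoric coefficient, this closed form needs no bivariate-normal integration, which is what makes OR-based approximations attractive in small samples.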

Rigdon (2012) suggests that partial least squares (PLS) can be improved by killing it, that is, by making it into a different methodology based on components. We provide some history on problems with component-type methods and develop some implications of Rigdon's suggestion. It seems more appropriate to maintain and improve PLS as far as possible, but also to freely utilize alternative models and methods when those are more relevant in certain data analytic situations.

Extending the theory of lower bounds to reliability based on splits given by Guttman (in Psychometrika 53, 63-70, 1945), this paper introduces quantile lower bound coefficients λ4(Q) that refer to cumulative proportions of potential locally optimal "split-half" coefficients that fall below a particular point Q in the distribution of split-halves based on different partitions of variables into two sets. Interesting quantile values are Q = 0.05, 0.
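The distribution of split-half coefficients can be sketched by brute force: compute Guttman's λ4 for every equal split of the items and read off a low quantile. The data below are synthetic, and enumeration stands in for the paper's locally optimal search:

```python
import numpy as np
from itertools import combinations

def lambda4(X, half):
    """Guttman's split-half lambda_4 for one partition of the items."""
    idx = np.zeros(X.shape[1], dtype=bool)
    idx[list(half)] = True
    a = X[:, idx].sum(axis=1)    # sum score of one half
    b = X[:, ~idx].sum(axis=1)   # sum score of the other half
    total = a + b
    return 2.0 * (1.0 - (a.var(ddof=1) + b.var(ddof=1)) / total.var(ddof=1))

# lambda_4 over all equal splits of 6 synthetic items; a quantile lower bound
# such as lambda_4(Q = 0.05) is a low percentile of this distribution.
rng = np.random.default_rng(1)
t = rng.normal(size=(400, 1))
X = t + rng.normal(size=(400, 6))
vals = sorted(lambda4(X, half) for half in combinations(range(6), 3))
q05 = np.quantile(vals, 0.05)
print(round(q05, 2), round(max(vals), 2))
```

The maximum over splits is the classical greatest split-half lower bound; a low quantile such as Q = 0.05 is less prone to the upward capitalization on chance that maximizing invites.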

The current study focuses on the relationships among trauma history, substance use history, chronic homelessness, and the mediating role of recent emotional distress in predicting drug treatment participation among homeless adults. We explored predictors of participation in substance abuse treatment because enrolling and retaining clients in substance abuse treatment programs is a persistent challenge, particularly among homeless people. Participants were 853 homeless adults from Los Angeles, California.

Both the family and school environments influence adolescents' violence, but there is little research focusing simultaneously on the two contexts. This study analyzed the role of positive family and classroom environments as protective factors for adolescents' violence against authority (parent abuse and teacher abuse) and the relations between antisocial behavior and child-to-parent violence or student-to-teacher violence. The sample comprised 687 Spanish students aged 12-16 years, who responded to the Family Environment Scale (FES) and the Classroom Environment Scale (CES).

High-dimensional longitudinal data involving latent variables such as depression and anxiety that cannot be quantified directly are often encountered in biomedical and social sciences. Multiple responses are used to characterize these latent quantities, and repeated measures are collected to capture their trends over time. Furthermore, substantive research questions may concern issues such as interrelated trends among latent variables that can only be addressed by modeling them jointly.

Recently, a new mean-scaled and skewness-adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal-theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom.
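The mean-scaled family of statistics divides the ML test statistic by an estimated scaling constant c = tr(ÛΓ̂)/df. A minimal sketch with hypothetical quantities (the matrix and values below are illustrative, not estimates from real data):

```python
import numpy as np

def sb_scaled_statistic(T_ml, UGamma, df):
    """Mean-scaled statistic: divide T_ml by c = tr(UGamma) / df."""
    c = np.trace(UGamma) / df
    return T_ml / c

# Hypothetical values: an ML statistic of 30.0 on df = 10, with a correction
# matrix whose trace implies c = 1.25 (kurtosis inflates T under nonnormality).
UGamma = np.diag([1.25] * 10)
T_scaled = sb_scaled_statistic(30.0, UGamma, df=10)
print(T_scaled)   # 24.0
```

Estimating tr(ÛΓ̂) requires a stable estimate of the fourth-order moment matrix Γ, which is exactly what becomes difficult when the sample size is smaller than the degrees of freedom.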

The item factor analysis model for investigating multidimensional latent spaces has proved to be useful. Parameter estimation in this model requires computationally demanding high-dimensional integrations. While several approaches to approximate such integrations have been proposed, they suffer various computational difficulties.
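The demanding integrations marginalize each response pattern over the latent trait. In one dimension the standard tool is Gauss-Hermite quadrature; the sketch below handles a single 2PL-type item (an illustration of the integration problem, not the paper's high-dimensional method):

```python
import numpy as np

def marginal_prob_correct(a, b, n_points=21):
    """Marginal P(y = 1) for a 2PL item: integrate the logistic curve over a
    standard normal latent trait using Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)  # weight e^{-x^2/2}
    weights = weights / np.sqrt(2 * np.pi)                         # normalize to N(0,1)
    p = 1.0 / (1.0 + np.exp(-a * (nodes - b)))
    return np.sum(weights * p)

# An item with difficulty b = 0 has marginal probability exactly 0.5.
print(round(marginal_prob_correct(a=1.2, b=0.0), 3))   # 0.5
```

With d latent dimensions a product quadrature needs n_points^d evaluations, which is the exponential cost that motivates the approximation schemes the paper discusses.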

Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory.

Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean-scaled statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application for these methods in the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators.

Normal-distribution-based maximum likelihood (ML) and multiple imputation (MI) are the two major procedures for missing data analysis. This article compares the two procedures with respect to bias and efficiency of parameter estimates. It also compares formula-based standard errors (SEs) for each procedure against the corresponding empirical SEs.
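On the MI side, formula-based SEs come from Rubin's rules, which combine within- and between-imputation variance. A minimal sketch with hypothetical imputation results (the numbers are invented for illustration):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Rubin's rules for m imputations: pooled estimate, and SE from
    total variance = within + (1 + 1/m) * between."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()          # pooled point estimate
    within = variances.mean()        # average within-imputation variance
    between = estimates.var(ddof=1)  # between-imputation variance
    total = within + (1 + 1 / m) * between
    return qbar, np.sqrt(total)

# Hypothetical results from m = 5 imputations of the same regression slope.
est, se = pool_rubin([0.50, 0.52, 0.48, 0.51, 0.49],
                     [0.010, 0.011, 0.009, 0.010, 0.010])
print(round(est, 3), round(se, 3))   # 0.5 0.101
```

The (1 + 1/m) factor is the finite-m correction; comparing such formula-based SEs against empirical SEs across replications is the kind of check the article performs.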

Based on the Bayes modal estimate of factor scores in binary latent variable models, this paper proposes two new limited-information estimators for the factor analysis model with a logistic link function for binary data. These estimators are based on Bernoulli distributions up to the second and third order, using maximum likelihood estimation and Laplace approximations to the required integrals. The new estimators and two existing limited-information weighted least squares estimators are studied empirically. The limited-information estimators compare favorably to full-information estimators based on marginal maximum likelihood, MCMC, and a multinomial distribution with a Laplace approximation methodology.

Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (Psychometrika 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (Psychometrika 76:537-549, 2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori.

This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g.
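A minimal Gaussian-copula sketch of the general idea (numpy only; the margins and correlation values are hypothetical, and without the adjustment the article develops, the product-moment covariance of the transformed data only approximates the target):

```python
import numpy as np
from math import erf

def gaussian_copula_sample(R, marginal_ppfs, n, seed=0):
    """Draw n rows from a Gaussian copula with correlation matrix R, then map
    each column through an inverse marginal CDF (ppf) to get nonnormal data."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(np.zeros(len(R)), R, size=n)
    U = 0.5 * (1.0 + np.vectorize(erf)(Z / np.sqrt(2)))   # normal CDF -> uniforms
    return np.column_stack([ppf(U[:, j]) for j, ppf in enumerate(marginal_ppfs)])

# Hypothetical target: a correlated exponential margin and uniform margin.
R = np.array([[1.0, 0.6], [0.6, 1.0]])
ppfs = [lambda u: -np.log(1.0 - u),   # exponential(1) inverse CDF
        lambda u: u]                  # uniform(0, 1) inverse CDF
X = gaussian_copula_sample(R, ppfs, n=2000)
print(X.shape)
```

The monotone margin transforms preserve rank correlation but attenuate the Pearson correlation, which is why matching a prespecified variance-covariance matrix exactly requires a correction to the latent correlation.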

Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger. The bi-factor model has a general factor and a number of group factors. The purpose of this paper is to introduce an exploratory form of bi-factor analysis.

Finite mixture factor analysis provides a parsimonious model to explore latent group structures of high-dimensional data. In this modeling framework, we can explore latent structures for continuous responses. However, dichotomous items are often used to define latent domains in practice.

Indefinite symmetric matrices that are estimates of positive definite population matrices occur in a variety of contexts such as correlation matrices computed from pairwise present missing data and multinormal based theory for discretized variables. This note describes a methodology for scaling selected off-diagonal rows and columns of such a matrix to achieve positive definiteness. As a contrast to recently developed ridge procedures, the proposed method does not need variables to contain measurement errors.
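A hedged sketch of the general idea: shrink the off-diagonal entries of selected rows and columns by a common factor until the smallest eigenvalue is positive (an illustrative loop, not the note's exact scaling rule):

```python
import numpy as np

def scale_to_pd(A, idx, step=0.95, tol=1e-8):
    """Shrink the off-diagonal entries in rows/columns `idx` of a symmetric
    indefinite matrix A by a common factor until A is positive definite."""
    A = np.array(A, dtype=float)
    mask = np.zeros_like(A, dtype=bool)
    mask[idx, :] = mask[:, idx] = True
    np.fill_diagonal(mask, False)             # leave the unit diagonal alone
    while np.linalg.eigvalsh(A).min() <= tol:
        A[mask] *= step
    return A

# An indefinite "correlation" matrix: r12 and r13 of 0.9 are inconsistent
# with r23 = -0.9, so one eigenvalue is negative.
A = np.array([[1.0,  0.9,  0.9],
              [0.9,  1.0, -0.9],
              [0.9, -0.9,  1.0]])
B = scale_to_pd(A, idx=[0])                   # rescale variable 1's correlations
print(np.linalg.eigvalsh(B).min() > 0, B[1, 2])   # True -0.9
```

Only the selected rows and columns are shrunk, so the untouched entry r23 = -0.9 survives exactly, unlike ridge adjustments that perturb the whole matrix.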

Maximum likelihood is commonly used for estimating model parameters in the analysis of two-level structural equation models. Constraints on model parameters may be encountered in some situations, such as equal factor loadings for different factors. Linear constraints are the most common and are relatively easy to handle in maximum likelihood analysis.
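Linear equality constraints are easy to handle because they can be absorbed by reparameterization: write the full parameter vector as θ = Tγ for a reduced free vector γ and optimize over γ. A toy numpy sketch (the parameter layout is hypothetical):

```python
import numpy as np

# Full parameters theta = T @ gamma: two factor loadings constrained equal
# are both generated from the single free parameter gamma_1.
T = np.array([[1.0, 0.0],    # loading 1 = gamma_1
              [1.0, 0.0],    # loading 2 = gamma_1  (equality constraint)
              [0.0, 1.0]])   # error variance = gamma_2

def expand(gamma):
    """Map the reduced free parameters to the full constrained vector."""
    return T @ gamma

# Any gamma automatically satisfies the constraint in theta, so an ML
# optimizer can run unconstrained over the reduced space.
theta = expand(np.array([0.7, 0.3]))
print(theta)   # [0.7 0.7 0.3]
```

Nonlinear constraints admit no such global linear map T, which is one reason they are harder to handle in maximum likelihood analysis.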
