Intensive longitudinal data analysis, commonly used in psychological studies, often concerns outcomes with strong floor effects, that is, a large percentage of observations at the lowest possible value. Ignoring a strong floor effect by applying a standard analysis with modeling assumptions suited to a continuous, normally distributed outcome is likely to give misleading results. This article suggests that two-part modeling may provide a solution.
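As a rough sketch of the two-part idea (the notation below is illustrative, not taken from the article), the outcome is split into a binary indicator of being above the floor and a continuous part that is defined only when the outcome is positive:

\[
u_{it} =
\begin{cases}
1, & y_{it} > 0 \\
0, & y_{it} = 0
\end{cases}
\qquad
m_{it} =
\begin{cases}
\log y_{it}, & y_{it} > 0 \\
\text{missing}, & y_{it} = 0
\end{cases}
\]

The binary part u can then be modeled with, say, a logistic or probit regression and the continuous part m with a normal model, with random effects that may correlate across the two parts so that the probability of being at the floor and the level above the floor are allowed to be related.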
To date, cross-lagged panel modeling has been studied only for continuous outcomes. This article presents methods that are also suitable when outcomes are binary or ordinal. Modeling, testing, identification, and estimation are discussed.
This article considers identification, estimation, and model fit issues for models with contemporaneous and reciprocal effects. It explores how well the models work in practice using Monte Carlo studies as well as real-data examples. Furthermore, by using models that allow contemporaneous and reciprocal effects, the article raises a fundamental question about current practice for cross-lagged panel modeling with models such as the cross-lagged panel model (CLPM) or the random intercept cross-lagged panel model (RI-CLPM): Can cross-lagged panel modeling be relied on to establish cross-lagged effects? The article concludes that the answer is no, a finding that has important ramifications for current practice.
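For orientation, a schematic bivariate CLPM (notation assumed here) relates two variables x and y across occasions through autoregressive and cross-lagged paths, and the extended models add contemporaneous reciprocal paths between the two variables at the same occasion:

\[
\begin{aligned}
x_{t} &= a_x\, x_{t-1} + b_x\, y_{t-1} + c_x\, y_{t} + e_{xt},\\
y_{t} &= a_y\, y_{t-1} + b_y\, x_{t-1} + c_y\, x_{t} + e_{yt},
\end{aligned}
\]

with c_x = c_y = 0 giving the standard CLPM. Once the contemporaneous paths c_x and c_y are allowed, additional identifying restrictions are needed and the estimated cross-lagged effects b_x and b_y can change substantially, which motivates the article's question about whether cross-lagged effects can be trusted.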
Alcohol use has been shown to increase stress, and there is some evidence that stress predicts subsequent alcohol use during treatment for alcohol use disorder (AUD), particularly among females, who are more likely to report coping-motivated drinking. A better understanding of the processes by which stress and alcohol use are linked during treatment could inform AUD treatment planning. The current study aimed to characterize the association between stress and drinking over the course of AUD treatment and to examine whether there were sex differences in these associations.
This review summarizes the current state of the art of statistical and (survey) methodological research on measurement (non)invariance, which is considered a core challenge for the comparative social sciences. After outlining the historical roots, conceptual details, and standard procedures for measurement invariance testing, the paper focuses in particular on the statistical developments that have been achieved in the last 10 years. These include Bayesian approximate measurement invariance, the alignment method, measurement invariance testing within the multilevel modeling framework, mixture multigroup factor analysis, the measurement invariance explorer, and the response shift-true change decomposition approach.
This article demonstrates that the regular latent transition analysis (LTA) model is unnecessarily restrictive and that an alternative model is readily available that typically fits the data much better, leads to better estimates of the transition probabilities, and extracts new information from the data. By allowing random intercept variation in the model, between-subject variation is separated from the within-subject latent class transitions over time, allowing a clearer interpretation of the data. Analysis of two examples from the literature demonstrates the advantages of random intercept LTA.
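A minimal sketch of the random intercept idea, with notation assumed here rather than taken from the article: for a binary item u_j measured at time t, the regular LTA measurement model conditions only on the latent class at that time, whereas the random intercept version adds a subject-level factor f_i that shifts all item thresholds for person i,

\[
\operatorname{logit} P\left(u_{ijt} = 1 \mid c_{it} = k,\ f_i\right) = \nu_{jk} + \lambda_j f_i, \qquad f_i \sim N(0, \psi),
\]

so that stable between-person differences in item endorsement are absorbed by f_i instead of being forced into the latent class transitions.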
In many disciplines researchers use longitudinal panel data to investigate the potentially causal relationship between two variables. However, the conventions and concerns vary widely across disciplines. Here we focus on two concerns: (a) the concern about random effects versus fixed effects, which is central in the (micro)econometrics/sociology literature; and (b) the concern about grand-mean versus group (or person) mean centering, which is central in the multilevel literature associated with disciplines like psychology and educational sciences.
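To make the centering distinction concrete (illustrative notation, not taken from the article): with repeated measures x_it on person i, grand-mean centering subtracts the overall mean, while group (person) mean centering subtracts each person's own mean,

\[
\tilde{x}^{\mathrm{grand}}_{it} = x_{it} - \bar{x}_{\cdot\cdot}, \qquad
\tilde{x}^{\mathrm{person}}_{it} = x_{it} - \bar{x}_{i\cdot},
\]

and a multilevel model with the person-mean-centered predictor (optionally adding the person mean as a level-2 covariate) isolates the within-person effect, which is the quantity the fixed-effects estimator in the econometrics tradition targets.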
Scalar invariance is an unachievable ideal that in practice can only be approximated, often using potentially questionable approaches such as partial invariance based on a stepwise selection of parameter estimates with large modification indices. Study 1 demonstrates an extension of the power and flexibility of the alignment approach for comparing latent factor means in large-scale studies (30 OECD countries, 8 factors, 44 items, N = 249,840), for which scalar invariance is typically not supported in the traditional confirmatory factor analysis approach to measurement invariance (CFA-MI). Importantly, we introduce an alignment-within-CFA (AwC) approach, transforming alignment from a largely exploratory tool into a confirmatory tool and enabling analyses that previously have not been possible with alignment (testing the invariance of uniquenesses and factor variances/covariances, multiple-group MIMIC models, contrasts on latent means), as well as structural equation models more generally.
A limiting feature of previous work on growth mixture modeling is the assumption of normally distributed variables within each latent class. With strongly non-normal outcomes, this means that several latent classes are required to capture the observed variable distributions. Being able to relax the assumption of within-class normality has the advantage that a non-normal observed distribution does not necessitate using more than one class to fit the distribution.
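As a hedged sketch (notation assumed here), a growth mixture model specifies a within-class growth curve,

\[
y_{it} \mid (c_i = k) = \eta_{0i} + \eta_{1i}\, t + \varepsilon_{it}, \qquad (\eta_{0i}, \eta_{1i}) \mid (c_i = k) \sim D_k,
\]

where conventionally D_k and the residuals are normal within each class. Relaxing this, for example by letting D_k be a skewed distribution such as a skew-normal or skew-t, lets a single class absorb a skewed outcome distribution instead of adding extra classes whose only purpose is to mimic the non-normality.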
Asparouhov and Muthén (2014) presented a new method for multiple-group confirmatory factor analysis (CFA), referred to as the alignment method. The alignment method can be used to estimate group-specific factor means and variances without requiring exact measurement invariance. A strength of the method is the ability to conveniently estimate models for many groups, such as with comparisons of countries.
The factor mixture model (FMM) uses a hybrid of both categorical and continuous latent variables. The FMM is a good model for the underlying structure of psychopathology because the use of both categorical and continuous latent variables allows the structure to be simultaneously categorical and dimensional. This is useful because both diagnostic class membership and the range of severity within and across diagnostic classes can be modeled concurrently.
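A minimal form of the FMM (illustrative notation) combines a latent class variable c with a continuous factor in one measurement model,

\[
y_{ij} \mid (c_i = k) = \nu_{jk} + \lambda_{jk}\, \eta_i + \varepsilon_{ij}, \qquad \eta_i \mid (c_i = k) \sim N(\alpha_k, \psi_k),
\]

so that class membership captures diagnostic categories while the factor captures a severity continuum within and across classes.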
Measurement invariance (MI) is a prerequisite for comparing latent variable scores across groups. The current paper introduces the concept of approximate MI, building on the work of Muthén and Asparouhov and their application of Bayesian Structural Equation Modeling (BSEM) in the software Mplus. They showed that with BSEM, exact zero constraints can be replaced with approximate zeros to allow for minimal steps away from strict MI while still yielding a well-fitting model.
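In rough terms (notation assumed here), approximate MI replaces the exact equality constraint on a measurement parameter across groups g and g' with a small-variance prior on the group difference, for example

\[
\lambda_{jg} - \lambda_{jg'} \sim N\!\left(0, \sigma^2_{\mathrm{small}}\right), \qquad \nu_{jg} - \nu_{jg'} \sim N\!\left(0, \sigma^2_{\mathrm{small}}\right),
\]

so that loadings and intercepts are allowed to differ slightly across groups rather than being forced to be identical, with the prior variance controlling how much non-invariance is tolerated.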
Genome-wide association studies (GWAS) have failed to replicate common genetic variants associated with antidepressant response, as defined using a single endpoint. Genetic influences may be discernible by examining individual variation between sustained versus unsustained patterns of response, which may distinguish medication effects from non-specific, or placebo responses to active medication. We conducted a GWAS among 1116 subjects with Major Depressive Disorder from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial who were characterized using Growth Mixture Modeling as showing a sustained versus unsustained pattern of clinical response over 12 weeks of treatment with citalopram.
Randomized experiments are the gold standard for evaluating proposed treatments. The intent-to-treat estimand measures the effect of treatment assignment, but not the effect of treatment if subjects take treatments to which they are not assigned. The desire to estimate the efficacy of the treatment in this case has been the impetus for a substantial literature on compliance over the last 15 years.
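As one standard way to frame this (not specific to this article): under randomization with noncompliance, the complier average causal effect (CACE) can, under the usual instrumental-variable assumptions such as the exclusion restriction and monotonicity, be written as a ratio of intent-to-treat effects on the outcome and on treatment receipt,

\[
\mathrm{CACE} = \frac{E[Y \mid Z = 1] - E[Y \mid Z = 0]}{E[D \mid Z = 1] - E[D \mid Z = 0]},
\]

where Z is treatment assignment, D is the treatment actually received, and Y is the outcome; this is the estimand that compliance-based analyses target instead of the intent-to-treat effect.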
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories.
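A concrete instance of the idea (the specific prior variance is illustrative): instead of fixing all cross-loadings in a CFA to exactly zero, BSEM gives each cross-loading an informative prior centered at zero with a small variance,

\[
\lambda_{jm} \sim N(0,\ 0.01) \quad \text{for items } j \text{ not designed to measure factor } m,
\]

which keeps these loadings close to zero while letting the data pull some of them slightly away from it; the same device can be applied to residual covariances and, as in the measurement invariance work above, to group differences in parameters.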
Most comparisons of the efficacy of antidepressants have relied on the assumption that missing data are randomly distributed. Dropout rates differ between drugs, suggesting this assumption may not hold true. This paper examines the effect of non-random dropout on a comparison of two antidepressant drugs, escitalopram and nortriptyline, in the treatment of major depressive disorder.
Background: Bipolar disorder is a severe psychiatric disorder with high heritability. Co-morbid conditions are common and might define latent subgroups of patients that are more homogeneous with respect to genetic risk factors.
Methodology: In the Caucasian GAIN bipolar disorder sample of 1000 cases and 1034 controls, we tested the association of single nucleotide polymorphisms with patient subgroups defined by co-morbidity.
What progress prevention research has made comes through strategic partnerships with communities and institutions that host this research, as well as professional and practice networks that facilitate the diffusion of knowledge about prevention. We discuss partnership issues related to the design, analysis, and implementation of prevention research and especially how rigorous designs, including random assignment, get resolved through a partnership between community stakeholders, institutions, and researchers. These partnerships shape not only the study design but also determine what data can be collected and how results and new methods are disseminated.
Objective: We examined the effects of non-steroidal anti-inflammatory drugs on cognitive decline as a function of phase of pre-clinical Alzheimer disease.
Methods: Given recent findings that cognitive decline accelerates as clinical diagnosis is approached, we used rate of decline as a proxy for phase of pre-clinical Alzheimer disease. We fit growth mixture models of Modified Mini-Mental State (3MS) Examination trajectories with data from 2388 participants in the Alzheimer's Disease Anti-inflammatory Prevention Trial and included class-specific effects of naproxen and celecoxib.
This article uses a general latent variable framework to study a series of models for nonignorable missingness due to dropout. Nonignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points as with the standard missing at random assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework with the Mplus program.
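One simple member of this family (illustrative notation, not the article's full taxonomy) lets the dropout indicator at time t depend on observed covariates, the growth factors, and the outcome that would have been observed,

\[
\operatorname{logit} P(d_{it} = 1) = \gamma_0 + \boldsymbol{\gamma}_x^{\prime} \mathbf{x}_i + \gamma_1 \eta_{0i} + \gamma_2 \eta_{1i} + \gamma_3 y_{it},
\]

where the etas are the growth intercept and slope and y_it may be missing. When the last three coefficients are zero, dropout depends only on observed information, consistent with missing at random; nonzero values make the missingness nonignorable because it depends on latent growth factors or on the value that would have been observed.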
This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis.
Latent Class Analysis (LCA) is a statistical method used to identify subtypes of related cases using a set of categorical and/or continuous observed variables. Traditional LCA assumes that observations are independent. However, multilevel data structures are common in social and behavioral research and alternative strategies are needed.
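A sketch of one common multilevel extension (notation assumed here, with one class serving as the reference category for identification): with individuals i nested in clusters j, the latent class probabilities are allowed to vary across clusters through cluster-level random effects in the class membership model,

\[
P(c_{ij} = k \mid \mathbf{u}_{j}) = \frac{\exp(\alpha_k + u_{jk})}{\sum_{m} \exp(\alpha_m + u_{jm})}, \qquad \mathbf{u}_{j} \sim N(\mathbf{0}, \Sigma),
\]

so that the prevalence of the latent subtypes can differ from cluster to cluster while the within-class measurement model stays common; an alternative strategy is to add a second, cluster-level latent class variable.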
NEO instruments are widely used to assess Big Five personality factors, but confirmatory factor analyses (CFAs) conducted at the item level do not support their a priori structure due, in part, to the overly restrictive CFA assumptions. We demonstrate that exploratory structural equation modeling (ESEM), an integration of CFA and exploratory factor analysis (EFA), overcomes these problems with responses (N = 3,390) to the 60-item NEO-Five-Factor Inventory: (a) ESEM fits the data better and results in substantially more differentiated (less correlated) factors than does CFA; (b) tests of gender invariance with the 13-model ESEM taxonomy of full measurement invariance of factor loadings, factor variances-covariances, item uniquenesses, correlated uniquenesses, item intercepts, differential item functioning, and latent means show that women score higher on all NEO Big Five factors; (c) longitudinal analyses support measurement invariance over time and the maturity principle (decreases in Neuroticism and increases in Agreeableness, Openness, and Conscientiousness). Using ESEM, we addressed substantively important questions with broad applicability to personality research that could not be appropriately addressed with the traditional approaches of either EFA or CFA.
This study introduces a two-part factor mixture model as an alternative analysis approach to modeling data where strong floor effects and unobserved population heterogeneity exist in the measured items. As the names suggests, a two-part factor mixture model combines a two-part model, which addresses the problem of strong floor effects by decomposing the data into dichotomous and continuous response components, with a factor mixture model, which explores unobserved heterogeneity in a population by establishing latent classes. Two-part factor mixture modeling can be an important tool for situations in which ordinary factor analysis produces distorted results and can allow researchers to better understand population heterogeneity within groups.
Obesity has become an epidemic in many countries and is one of the major risk conditions for diseases including type 2 diabetes, coronary heart disease, stroke, dyslipidemia, and hypertension. Recent genome-wide association studies have identified two loci (FTO and near MC4R) that were unequivocally associated with body mass index (BMI) and obesity. For the Genetic Analysis Workshop 16, data from the Framingham Heart Study were made available, including longitudinal anthropometric and metabolic traits for 7130 Caucasian individuals over three generations, each with follow-up data at up to four time points.