Machine learning explainability techniques have been proposed as a means for psychologists to "explain" or interrogate a model in order to gain an understanding of a phenomenon of interest. Researchers concerned about imposing an overly restrictive functional form (as a linear regression would, for example) may be motivated to use machine learning algorithms together with explainability techniques, as part of exploratory research, with the goal of identifying important variables that are associated with, or predictive of, an outcome of interest. However, as we demonstrate, machine learning algorithms are highly sensitive to the underlying causal structure of the data. One consequence is that predictors deemed unrelated, unimportant, or unpredictive by an explainability technique may in fact be highly associated with the outcome. We show that this is not a limitation of explainability techniques per se, but rather a consequence of the mathematical implications of regression and of how those implications interact with the conditional independencies entailed by the underlying causal structure. We conclude with alternative recommendations for psychologists who want to explore their data for important variables.
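To make the abstract's central claim concrete, here is a minimal sketch (not from the paper itself) of one causal structure under which this occurs. It assumes a hypothetical chain X → M → Y: X is strongly associated with Y marginally, but conditionally independent of Y given the mediator M, so a permutation-importance analysis of a model fit on both predictors assigns X near-zero importance. The variable names, coefficients, and model choice are illustrative assumptions, not the authors' simulation.

```python
# Hypothetical chain structure X -> M -> Y (illustrative; not the paper's code).
# X is highly correlated with Y, yet Y is independent of X given M,
# so permutation importance for X is close to zero once M is included.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)            # upstream cause
m = 2.0 * x + rng.normal(size=n)  # mediator: X -> M
y = 3.0 * m + rng.normal(size=n)  # outcome:  M -> Y

# Strong marginal association between X and Y.
print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")

features = np.column_stack([x, m])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, y)
result = permutation_importance(model, features, y, n_repeats=10, random_state=0)

# X's importance is near zero because Y is independent of X given M,
# even though X alone is highly predictive of Y.
for name, imp in zip(["X", "M"], result.importances_mean):
    print(f"permutation importance of {name}: {imp:.3f}")
```

Under these assumptions, the explainability output would lead an exploratory analyst to discard X, despite its strong marginal association with the outcome.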
DOI: http://dx.doi.org/10.1037/met0000699