Relative Weights Analysis Is a "Grey Box" Relative Importance Algorithm
Abstract:
Relative weights analysis and dominance analysis are two promising relative importance methods for multiple regression. Whereas dominance analysis offers more statistically interpretable solutions, calculating those solutions is computationally burdensome (every possible subset regression model must be fit). Conversely, although relative weights are computationally simpler to obtain, they are more difficult to interpret statistically. Trading statistical interpretability for computational simplicity in this way is sometimes described as black box prediction, or as using black box machine learning algorithms, and such approaches are often viewed skeptically in applied psychology. The purpose of this talk is to highlight that, despite this machine learning skepticism, many applied psychologists are comfortable using relative weights analysis, which, as I argue, is itself a relatively opaque or "grey box" statistical method. In other words, applied psychological researchers seem comfortable with the computational simplicity/statistical interpretability trade-off when conducting relative importance analysis, while rejecting that same trade-off for a wider range of machine learning methods. I argue here that making meaningful progress as a science requires applied psychological researchers to reconsider their perspectives on relative importance analysis and on machine learning in general.
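
To make the computational side of this trade-off concrete, the sketch below (a minimal illustration, not part of the talk itself) computes relative weights under Johnson's (2000) formulation, which is the version of relative weights analysis most commonly used in applied psychology: a single eigendecomposition of the predictor correlation matrix yields the weights, whereas all-subsets dominance analysis must fit a regression for every predictor subset. The function name johnson_relative_weights and the NumPy-based implementation details are illustrative assumptions, not the author's code.

```python
import numpy as np

def johnson_relative_weights(X, y):
    """Illustrative sketch of Johnson's (2000) relative weights.

    X: (n, p) predictor matrix; y: (n,) criterion.
    Returns the raw relative weights, which sum to the full-model R^2.
    """
    # Standardize so that cross-products become correlations.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    n = X.shape[0]

    Rxx = (Xz.T @ Xz) / (n - 1)   # predictor intercorrelation matrix
    rxy = (Xz.T @ yz) / (n - 1)   # predictor-criterion correlations

    # One symmetric square root of Rxx is essentially the entire computational
    # cost, in contrast to the 2**p - 1 subset regressions dominance analysis needs.
    evals, evecs = np.linalg.eigh(Rxx)
    lam = evecs @ np.diag(np.sqrt(evals)) @ evecs.T          # Rxx^(1/2)
    lam_inv = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

    beta = lam_inv @ rxy             # betas for the orthogonalized predictors
    return (lam ** 2) @ (beta ** 2)  # raw weights; their sum equals R^2
```

For example, with p = 20 predictors, exact general dominance weights require fitting 2^20 - 1 = 1,048,575 subset models, while the sketch above performs one p-by-p eigendecomposition regardless of p; this asymmetry is the computational simplicity the abstract refers to.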