Showing that there is a relationship between variables is an important function of nearly every statistic discussed throughout this text. We have conceptualized that general idea in several different ways: we have looked at effect sizes like Cohen’s d, we have looked at correlations, and we have considered measures like R². When it comes to using factors in social research, there is no point in using a factor to explain something that a single variable explains just as well (after all, every good social scientist aims for parsimony). When we want to assess how well a factor explains variance compared to a single variable, we can examine a statistic known as an eigenvalue.
The eigenvalue is a measure of how much of the variance in the observed variables a factor explains. Any factor with an eigenvalue greater than one (1.00) explains more variance than a single observed variable does. Note, however, that an eigenvalue cannot be interpreted the way R² is. The eigenvalue is one possible method of deciding which factors to retain and which to “throw out” because they do not contain enough information to be useful. Interpreting a factor analysis can be difficult because there are technically as many factors as there are variables entered into the analysis. Factors are always presented (by software and in the professional literature) in order of how much variation they explain, and factors that explain very little of the variation are usually discarded.
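As a minimal sketch of the eigenvalue-greater-than-one rule (often called the Kaiser criterion), consider the following Python example. The data here are simulated purely for illustration; with real data you would substitute your own matrix of standardized observed variables.

```python
import numpy as np

# Hypothetical data: 200 respondents answering 6 survey items
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 1] += X[:, 0]   # make items 0 and 1 correlated (one underlying factor)
X[:, 3] += X[:, 2]   # make items 2 and 3 correlated (a second factor)

# Eigenvalues of the correlation matrix, sorted largest first
corr = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

for i, ev in enumerate(eigenvalues, start=1):
    decision = "retain" if ev > 1.0 else "discard"
    print(f"Factor {i}: eigenvalue = {ev:.3f} ({decision})")

# With standardized variables, the total variance equals the number
# of items, so each eigenvalue divided by that total gives the
# proportion of variance the factor explains.
print("Proportion explained:", (eigenvalues / len(eigenvalues)).round(3))
```

Because the items are standardized, an eigenvalue of exactly 1.00 corresponds to the variance of a single observed variable, which is why the rule keeps only factors that exceed it.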
Statistical software usually offers the researcher some flexibility in determining the number of factors to extract. Exploratory methods use statistical cut points (such as a specified eigenvalue threshold), while confirmatory methods extract the number of factors the researcher specifies. The correct strategy depends largely on the researcher’s purpose rather than on any objectively “best” method.
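To illustrate specifying the number of factors in advance, here is a sketch using scikit-learn’s FactorAnalysis, reusing the simulated X from the example above. This is only one of many implementations; applied researchers often use dedicated factor-analysis packages that also offer rotation options, but the idea of fixing the number of factors is the same.

```python
from sklearn.decomposition import FactorAnalysis

# Extract a researcher-specified number of factors (here, 2)
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)

# Loadings: how strongly each observed variable relates to each factor
print("Loadings (variables x factors):")
print(fa.components_.T.round(3))
```

Reading the loadings down each column shows which observed variables “belong” to each extracted factor, which is how the researcher checks whether the specified structure is plausible.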