Methodological Notes

In Brief Note

[Figure: latent variable scheme]

In the social sciences, researchers often study hidden or underlying variables that represent complex ideas, such as personality traits or characteristics of organizations. Traditionally, simple sums or averages have been used to estimate these variables, but a class of methods known as congeneric approaches offers a more accurate and reliable alternative.

In survey research, the CLC Estimator helps calculate these underlying variables, often called latent constructs. It does so by taking into account each item's unique loading on the latent construct and its own measurement error.

In congeneric approaches, the higher the correlation between an item and the latent construct, the higher the assigned loading. For example, if X1 is more closely related to the latent construct than X2, X1 will have a greater weight than X2 in the latent construct.

Extended Note

Introduction

Latent constructs are essential in social sciences for measuring abstract entities like personality traits, leadership attitudes, and organizational characteristics (Graham, 2006; McNeish & Wolf, 2020). In psychological research, variables of interest like motivation, mathematics ability, or anxiety are often not directly measurable (Jöreskog & Sörbom, 1979). Therefore, latent constructs are conceptualized through multiple items, and their estimation follows two main approaches: parallel and congeneric (Jöreskog, 1971). This discussion explores the limitations of parallel approaches and the advantages of congeneric approaches in estimating unidimensional latent constructs.

Parallel vs Congeneric Approaches

Parallel approaches assume that items have the same loading and error variance, and each item contributes equally to the latent construct (Jöreskog, 1971). The most popular parallel approaches include sum and average scores. However, McNeish and Wolf (2020) argue that parallel approaches often lack theoretical justification and supporting evidence, leading to potential replicability issues (Flake & Fried, 2019; Fried & Flake, 2018). On the other hand, congeneric approaches allow for unique loadings and error variances for each item, making each item's contribution to the latent construct different (Millsap & Everson, 1991; Graham, 2006; McNeish & Wolf, 2020).

Let ξ represent the latent construct, and X1, ..., Xp be p items. A factor model is typically defined as:

Xi = λiξ + εi, i = 1,..., p

E(εi) = 0, Var(εi) = θi

E(ξ) = 0, Var(ξ) = 1

In this definition, λi, εi, and θi represent the (unstandardized) loading, the measurement error, and the error variance of item Xi, respectively. Each item is expressed as a linear function of the latent construct plus a measurement error, meaning the latent construct is conceived as a common causal structure underlying the items. Consequently, changes in the construct are reflected by changes in the items, and the items are expected to be highly correlated and interchangeable: dropping an item should not alter the conceptual meaning of the construct.
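To make this definition concrete, the following minimal Python sketch (not part of the CLC Estimator) simulates item scores from a one-factor congeneric model; the loadings and error variances are illustrative values chosen for the example, not estimates from any real scale.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 500                                    # respondents
lambdas = np.array([0.9, 0.7, 0.5])        # illustrative loadings (assumed)
thetas = np.array([0.2, 0.4, 0.6])         # illustrative error variances (assumed)

xi = rng.normal(0.0, 1.0, size=n)          # latent construct: E(xi) = 0, Var(xi) = 1
errors = rng.normal(0.0, np.sqrt(thetas), size=(n, 3))  # Var(eps_i) = theta_i

X = xi[:, None] * lambdas + errors         # X_i = lambda_i * xi + eps_i
```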

Estimating latent constructs based on multiple items follows two main approaches: parallel and congeneric (Jöreskog, 1971). In parallel approaches, items have the same loading and error variance (i.e., λi = λ and θi = θ, the parallel assumption), meaning each item contributes equally to the latent construct. The most popular parallel approaches include sum and average scores, which define the latent construct as the raw sum or average of the item scores. In contrast, congeneric approaches assign unique loadings and error variances to each item (Millsap & Everson, 1991; Graham, 2006; McNeish & Wolf, 2020). Thus, the higher the correlation between an item and the latent construct, the higher the loading. For example, if X1 is more closely related to the latent construct than X2, then X1 will have a greater weight than X2 in the estimate of the latent construct.

This implies that, unlike in parallel approaches, each item contributes differently to the latent construct. This makes congeneric approaches more appropriate for estimating latent constructs in survey research, especially when researchers adopt previously validated measures (McNeish & Wolf, 2020).
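The difference between the two approaches can be illustrated with a short Python sketch. A parallel score weights all items equally (here, a simple average), whereas a congeneric score weights items according to their loadings and error variances; the regression (Thurstone) factor-score weighting used below is one standard congeneric choice, and all numerical values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
lambdas = np.array([0.9, 0.7, 0.5])       # illustrative loadings
thetas = np.array([0.2, 0.4, 0.6])        # illustrative error variances
xi = rng.normal(size=n)                   # true latent construct
X = xi[:, None] * lambdas + rng.normal(0.0, np.sqrt(thetas), size=(n, 3))

# Parallel score: every item gets the same weight (simple average).
parallel_score = X.mean(axis=1)

# Congeneric score (regression/Thurstone method): weights follow from the
# model-implied covariance Sigma = lambda lambda' + diag(theta), so items
# with higher loadings and lower error variances count more.
Sigma = np.outer(lambdas, lambdas) + np.diag(thetas)
weights = np.linalg.solve(Sigma, lambdas)
congeneric_score = X @ weights

print(np.corrcoef(parallel_score, xi)[0, 1],
      np.corrcoef(congeneric_score, xi)[0, 1])
```

In this simulation, the loading-weighted score typically correlates more strongly with the true construct than the simple average, which reflects the rationale for preferring congeneric estimation when items differ in quality.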

Importance of Validity and Reliability

Scale validity and reliability are crucial aspects of the measurement process. However, several studies have reported that the rigor accompanying scales is often scant (Barry et al., 2014; Crutzen & Peters, 2017; Flake, Pek, & Hehman, 2017). As highlighted by McNeish and Wolf (2020), Crutzen and Peters (2017) found that less than 3% of studies reported information about the validity of their scales, and only 13% of studies provided evidence of validity based on the internal structure (Flake et al., 2017). This highlights the importance of proper validation and the need for better reporting practices in psychological research.

Addressing the Limitations: A User-friendly Shiny App

To address this problem, we developed a user-friendly Shiny app, the CLC Estimator, that allows social scientists to estimate latent constructs using congeneric approaches (Marzi et al., 2023). Given a user-provided data file containing the item measurements, the app can estimate a unidimensional latent construct with several different congeneric methods. The CLC Estimator is particularly helpful when performing a congeneric estimation in a statistical package would require programming expertise (Marzi et al., 2023). For example, congeneric-based estimates are typically not directly available in software for regression analysis, QCA, ANOVA, MANOVA, and other types of analysis, making it difficult for users unfamiliar with statistical programming environments to obtain a proper estimate of congeneric latent constructs.
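As an illustration of the kind of workflow the app automates, the sketch below shows one possible way to obtain a congeneric estimate outside the app, using Python and scikit-learn's maximum-likelihood factor analysis; the file names and column layout are hypothetical, and this is only one of several congeneric methods, not the CLC Estimator's own procedure.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical input: a CSV file with one column per item of the scale.
items = pd.read_csv("items.csv")          # assumed file name and layout

# Fit a one-factor model by maximum likelihood; fit_transform() returns the
# posterior means of the latent factor, i.e. regression-method factor scores,
# which are a congeneric estimate of the construct.
fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(items.values)

# Append the estimated construct and export it for use in other software
# (regression, ANOVA, MANOVA, QCA, ...).
items["construct"] = scores[:, 0]
items.to_csv("items_with_construct.csv", index=False)
```

The resulting file can then be imported into any analysis software; this is the step the CLC Estimator streamlines for users without programming experience.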

Conclusion

In conclusion, congeneric approaches may offer a more reliable and valid estimation of unidimensional latent constructs than parallel approaches (Marzi et al., 2023). Researchers should be aware of the assumptions and limitations associated with different scoring methods and ensure that their chosen approach aligns with the theoretical and methodological assumptions of the original validation process. A user-friendly tool like the CLC Estimator Shiny app can facilitate the adoption of congeneric approaches, improving the rigor and replicability of research involving latent constructs.


References

Barry, A. E., Chaney, B., Piazza-Gardner, A. K., & Chavarria, E. A. (2014). Validity and reliability reporting practices in the field of health education and behavior. Health Education & Behavior, 41, 12–18.

Crutzen, R., & Peters, G.-J. Y. (2017). Scale quality: Alpha is an inadequate estimate and factor-analytic evidence is needed first of all. Health Psychology Review, 11(3), 242–247. https://doi.org/10.1080/17437199.2015.1124240

Flake, J. K., & Fried, E. I. (2019). Measurement Schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 2(4), 433–456. https://doi.org/10.1177/2515245919874805

Flake, J. K., Pek, J., & Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8(4), 370–378. https://doi.org/10.1177/1948550617693063

Fried, E. I., & Flake, J. K. (2018). Measurement matters. APS Observer, 31.

Graham, J. M. (2006). Congeneric and (essentially) tau-equivalent estimates of score reliability: What they are and how to use them. Educational and Psychological Measurement, 66(6), 930–944. https://doi.org/10.1177/0013164406288165

Jöreskog, K. G. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36(4), 409–426. https://doi.org/10.1007/BF02291366

Jöreskog, K. G., & Sörbom, D. (1979). Advances in factor analysis and structural equation models. Cambridge, MA: Abt Books.

Marzi, G., Balzano, M., Egidi, L., & Magrini, A. (2023). CLC estimator: A tool for latent construct estimation via congeneric approaches in survey research. Multivariate Behavioral Research, (in press). https://doi.org/10.1080/00273171.2023.2193718

McNeish, D., & Wolf, M. G. (2020). Thinking twice about sum scores. Behavior Research Methods, 52(6), 2286–2305. https://doi.org/10.3758/s13428-019-01362-4

Millsap, R. E., & Everson, H. (1991). Confirmatory measurement model comparisons using latent means. Multivariate Behavioral Research, 26(3), 479–497. https://doi.org/10.1207/s15327906mbr2603_6
