Psychometrics is the field of study concerned with the theory and technique of psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits. The field is primarily concerned with the study of differences between individuals. It involves two major research tasks, namely: (i) the construction of instruments and procedures for measurement; and (ii) the development and refinement of theoretical approaches to measurement.
Origins and background
Much of the early theoretical and applied work in psychometrics was undertaken in an attempt to measure intelligence. The origin of psychometrics has connections to the related field of psychophysics. Charles Spearman, a pioneer in psychometrics who developed approaches to the measurement of intelligence, studied under Wilhelm Wundt and was trained in psychophysics. The psychometrician L. L. Thurstone later developed and applied a theoretical approach to measurement referred to as the law of comparative judgment, an approach which has close connections to the psychophysical theory developed by Ernst Heinrich Weber and Gustav Fechner. In addition, Spearman and Thurstone both made important contributions to the theory and application of factor analysis, a statistical method that has been used extensively in psychometrics.
More recently, psychometric theory has been applied in the measurement of personality, attitudes and beliefs, academic achievement, and in health-related fields. Measurement of these unobservable phenomena is difficult, and much of the research and accumulated art in this discipline has been developed in an attempt to properly define and quantify such phenomena. Critics, including practitioners in the physical sciences and social activists, have argued that such definition and quantification is impossibly difficult, and that such measurements are often misused. Proponents of psychometric techniques can reply, though, that their critics often misuse data by not applying psychometric criteria, and also that various quantitative phenomena in the physical sciences, such as heat and forces, cannot be observed directly but must be inferred from their manifestations.
Definition of measurement in the social sciences
The definition of measurement in the social sciences has been a controversial issue. A currently widespread definition, proposed by Stanley Smith Stevens (1946), is that measurement is "the assignment of numerals to objects or events according to some rule". This definition was introduced in the paper in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted throughout the physical sciences, which is that measurement is the numerical estimation and expression of the magnitude of one quantity relative to another (Michell, 1997). Indeed, Stevens' definition of measurement was put forward in response to the British Ferguson Committee, whose chair A. Ferguson was a physicist. The committee was appointed in 1932 by the British Association for the Advancement of Science to investigate the possibility of quantitatively estimating sensory events. Although its chair and other members were physicists, the committee also comprised several psychologists. The committee's report highlighted the importance of the definition of measurement. While Stevens' response was to propose a new definition, which has had considerable influence in the field, this was by no means the only response to the report. Another, notably different, response was to accept the classical definition, as reflected in the following statement:
- Measurement in psychology and physics are in no sense different. Physicists can measure when they can find the operations by which they may meet the necessary criteria; psychologists have but to do the same. They need not worry about the mysterious differences between the meaning of measurement in the two sciences (Reese, 1943, p. 49).
These divergent responses are reflected to a large extent within alternative approaches to measurement. For example, methods based on covariance matrices are typically employed on the premise that numbers, such as raw scores derived from assessments, are measurements. Such approaches implicitly entail Stevens' definition of measurement, which requires only that numbers are assigned according to some rule. The main research task, then, is generally considered to be the discovery of associations between scores, and of factors posited to underlie such associations. On the other hand, when measurement models such as the Rasch model are employed, numbers are not assigned based on a rule. Instead, in keeping with Reese's statement above, specific criteria for measurement are stated, and the objective is to construct procedures or operations that provide data which meet the relevant criteria. Measurements are estimated based on the models, and tests are conducted to ascertain whether it has been possible to meet the relevant criteria.
Instruments and procedures
The first psychometric instruments were designed to measure the concept of intelligence. The best known historical approach involves the Stanford-Binet IQ test, which originated with the French psychologist Alfred Binet. Contrary to a fairly widespread misconception, there is no compelling evidence that it is possible to measure innate intelligence through such instruments, in the sense of an innate learning capacity unaffected by experience, nor was this the original intention when they were developed. Nevertheless, IQ tests are useful tools for various purposes. An alternative conception of intelligence is that cognitive abilities within individuals are a manifestation of a general component, or general intelligence factor, as well as cognitive capacity specific to a given domain.
Psychometrics is applied widely in educational assessment to measure abilities in domains such as reading, writing, and mathematics. The main approaches in applying tests in these domains have been Classical Test Theory and the more modern Item Response Theory and Rasch measurement models. These modern approaches permit joint scaling of persons and assessment items, which provides a basis for mapping of developmental continua by allowing descriptions of the skills displayed at various points along a continuum. Such approaches provide powerful information regarding the nature of developmental growth within various domains.
Another major focus in psychometrics has been personality testing. There has been a range of theoretical approaches to conceptualising and measuring personality. Some of the better known instruments include the Minnesota Multiphasic Personality Inventory and the Myers-Briggs Type Indicator. Attitudes have also been studied extensively in psychometrics. A common approach to the measurement of attitudes is the use of the Likert scale. An alternative approach involves the application of unfolding measurement models, the most general being the Hyperbolic Cosine Model (Andrich & Luo, 1993).
Psychometric theory involves several distinct areas of study. First, psychometricians have developed a large body of theory used in the development of mental tests and analysis of data collected from these tests. This work can be roughly divided into classical test theory (CTT) and the more recent item response theory (IRT). An approach which is similar to IRT but also quite distinctive, in terms of its origins and features, is represented by the Rasch model for measurement. The development of the Rasch model, and the broader class of models to which it belongs, was explicitly founded on requirements of measurement in the physical sciences (Rasch, 1960).
Second, psychometricians have developed methods for working with large matrices of correlations and covariances. Techniques in this general tradition include factor analysis (finding important underlying dimensions in the data), multidimensional scaling (finding a simple representation for high-dimensional data) and data clustering (finding objects which are like each other). In these multivariate descriptive methods, users try to simplify large amounts of data. More recently, structural equation modeling and path analysis represent more sophisticated approaches to solving this problem of large covariance matrices. These methods allow statistically sophisticated models to be fitted to data and tested to determine if they are adequate fits.
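As a rough illustration of this tradition, the first step of an exploratory factor analysis can be sketched as an eigendecomposition of the inter-item correlation matrix. The sketch below is a minimal, unrotated principal-axis-style example in Python; the function name and the simulated one-factor data are assumptions for illustration, not a standard API or method from the sources cited here.

```python
import numpy as np

def leading_loadings(data, n_factors=1):
    """Rough factor-analytic sketch: loadings on the leading
    components of the inter-item correlation matrix (no rotation)."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]    # largest first
    # Scale eigenvectors by sqrt(eigenvalue) to obtain loadings.
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Simulated item scores driven by a single underlying dimension
# (one latent trait plus item-specific noise).
rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))
data = trait + rng.normal(scale=0.8, size=(300, 5))
loadings = leading_loadings(data)
```

With a single dominant dimension in the data, every item should load strongly, and in the same direction, on the first component; real factor-analytic practice adds rotation, communality estimation, and fit assessment on top of this basic step.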
The key traditional concepts in classical test theory are reliability and validity. A reliable measure measures something consistently, while a valid measure measures what it is supposed to measure. A reliable measure may be consistent without being valid: for example, a broken ruler may under-measure a quantity by the same amount each time (consistently), yet the resulting measurement is still wrong, that is, invalid. As another example, a reliable rifle will produce a tight cluster of bullet holes in the target, while a valid one will center that cluster on the bullseye.
Both reliability and validity may be assessed mathematically. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability); the value of the Pearson product-moment correlation coefficient is adjusted with the Spearman-Brown prediction formula to correspond to the correlation between two full-length tests. Other approaches include the intra-class correlation (the proportion of total variance in the measurements that is attributable to differences between targets). A commonly used measure is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. Stability over repeated measures is assessed with the Pearson coefficient, as is the equivalence of different versions of the same measure (different forms of an intelligence test, for example). Other measures are also used.
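These internal-consistency calculations are simple enough to show directly. The sketch below is a hypothetical Python illustration on simulated data (the function names are my own, not a standard library API): it computes an odd/even split-half coefficient stepped up with the Spearman-Brown formula, and Cronbach's α from the item and total-score variances.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_reliability(scores):
    """Correlate odd- and even-numbered item halves, then apply the
    Spearman-Brown prediction formula for a full-length test."""
    scores = np.asarray(scores, dtype=float)
    half_a = scores[:, 0::2].sum(axis=1)   # odd-numbered items
    half_b = scores[:, 1::2].sum(axis=1)   # even-numbered items
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)                 # Spearman-Brown step-up

# Simulated six-item test: one latent trait plus item noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(scale=0.8, size=(200, 6))
alpha = cronbach_alpha(scores)
r_sh = split_half_reliability(scores)
```

On data like these, where all items reflect a single trait, the two coefficients should be high and close to one another; α can be read as the average of the split-half coefficients over all possible splits.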
Validity may be assessed by correlating measures with a criterion measure known to be valid. When the criterion measure is collected at the same time as the measure being validated, the goal is to establish concurrent validity; when the criterion is collected later, the goal is to establish predictive validity. A measure has construct validity if it is related to other variables as required by theory. Content validity is simply a demonstration that the items of a test are drawn from the domain being measured; it does not guarantee that the test actually measures phenomena in that domain.
Predictive or concurrent validity cannot exceed the square root of the correlation between two parallel versions of the same measure, since unreliability in a measure attenuates its correlation with any criterion.
Item response theory models the relationship between latent traits and responses to test items. Among other advantages, IRT provides a basis for obtaining an estimate of the location of a test-taker on a given latent trait, as well as the standard error of measurement of that location. For example, a university student's knowledge of history can be deduced from his or her score on a university test and then be compared reliably with a high school student's knowledge deduced from a less difficult test. Scores derived by classical test theory do not have this characteristic; instead, actual ability (rather than ability relative to other test-takers) must be assessed by comparing scores to those of a norm group randomly selected from the population. In fact, all measures derived from classical test theory are dependent on the sample tested, while, in principle, those derived from item response theory are not.
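To make this concrete, the sketch below implements the simplest IRT case, the Rasch (one-parameter logistic) item response function, together with a Newton-Raphson maximum-likelihood estimate of a test-taker's location and its standard error. The function names, item difficulties, and response pattern are assumptions for illustration only.

```python
import math

def p_correct(theta, b):
    """Rasch model probability of a correct response for a person at
    location theta on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_location(responses, difficulties, n_iter=50):
    """Maximum-likelihood estimate of theta via Newton-Raphson, plus
    its standard error (1 / sqrt of the Fisher information)."""
    theta = 0.0
    for _ in range(n_iter):
        p = [p_correct(theta, b) for b in difficulties]
        score = sum(x - pi for x, pi in zip(responses, p))   # gradient
        info = sum(pi * (1.0 - pi) for pi in p)              # information
        theta += score / info
    return theta, 1.0 / math.sqrt(info)

# A test-taker who answers the two easier items correctly and the two
# harder items incorrectly (difficulties are on the same latent scale).
theta, se = estimate_location([1, 1, 0, 0], [-1.0, 0.0, 1.0, 2.0])
```

The standard error shrinks as more items near the test-taker's location are administered, which is what allows locations estimated from tests of different difficulty to be compared on a common scale.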
For some, the field of psychometrics has controversial aspects relating to the human implications of applied measurement. In part, the controversy involves the very notion of standardized tests. For others, the problematic aspects of psychometrics involve the history of the field, which includes connections to eugenics.
References
- Andrich, D. & Luo, G. (1993) A hyperbolic cosine model for unfolding dichotomous single-stimulus responses. Applied Psychological Measurement, 17, 253-276.
- Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88, 355-383.
- Michell, J. (1999). Measurement in Psychology. Cambridge: Cambridge University Press.
- Reese, T.W. (1943). The application of the theory of physical measurement to the measurement of psychological magnitudes, with three experimental examples. Psychological Monographs, 55, 1-89.
- Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research. Expanded edition (1980) with foreword and afterword by B.D. Wright. Chicago: The University of Chicago Press.
- Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680.
- Thurstone, L.L. (1927). A law of comparative judgment. Psychological Review, 34, 278-286.
- Thurstone, L.L. (1929). The Measurement of Psychological Value. In T.V. Smith and W.K. Wright (Eds.), Essays in Philosophy by Seventeen Doctors of Philosophy of the University of Chicago. Chicago: Open Court.
- Thurstone, L.L. (1959). The Measurement of Values. Chicago: The University of Chicago Press.