May I insert one or two conceptual points into the debate on intelligence testing (THES, July 7, 14) before it takes off again on predictable lines?
Establishing what intelligence is is a conceptual, not an empirical, question; it involves deciding what "intelligence" means. "Intelligence" denotes not merely understanding, but the ability to understand, ie it is the rate at which one grasps relationships.
While one can learn the items between which the relationships obtain, so increasing the contexts in which one's intelligence can be realised, and while learning background features of the context may facilitate performance, intelligence cannot be just one more thing to be learnt.
Increasing intelligence would be increasing the rate at which one learnt, and it is not clear that it is something that can itself be learnt, or is even possible.
If one's measured rate increased (because, say, one had been coached in test procedures), that would not be an increase in intelligence; indeed it would be difficult to find anything that would be accounted an increase in intelligence, rather than a more accurate measurement of it. Conceptually, intelligence appears to be a basic datum about someone.
Despite much political insistence to the contrary, I know of no reason, and certainly no a priori reason, why differences in this datum cannot be explained innately, ie genetically.
Reduction in intelligence can be explained physically by, eg brain damage or drugs. Why, in principle, cannot differences be explained physically, eg by genes that, say, result in differences in the intricacy of neural circuitry in the brain, and why in principle should genetically homogeneous groups (if there are any such) not share that characteristic?
Again, there seems no reason in principle why this rate cannot be measured, ie quantified. As with all measurement, one would need a scale of units of logical work, and that would involve establishing a convention. Any measure would, of course, be relative to some finite set of specified relationships, as it is in intelligence tests, and "intelligence" would denote the rate of learning of that set.
Since logical relationships (consistency, contradiction, inclusion, independence) are not culture dependent, such measures are not inevitably culturally biased.
Predictions are in principle possible from the rate, which is a ratio of amount of logical work done in unit time, but such predictions must be confined to the class of items in the set originally measured, and it needs to be remembered that other variables (attention, motivation, anxiety level) also affect the score.
Of course, the more diverse the set of items, the greater its predictive value.
Most trouble has arisen when there have been extrapolations to a rate of learning of items not in the original test set, or to some completely general trait. This is nonsense, as are inferences, as in the 11-plus, from amount of knowledge to intelligence.
Having said all of this, such a measure is consistent with total ignorance and lack of understanding of any extra-test situation, and with complete absence of any context-specific competence or motivation.
While a measure of this specific ability is a coherent possibility, then, and perhaps has its uses for those who have to make future educational provision, especially as one necessary condition of educability, it must be of limited value to employers, or anyone else who needs to be able to predict performance in some specific role.
University of London