IQ is a type of standard score that indicates how far above or below their peer group an individual stands in mental ability. The peer-group score is an IQ of 100; this is obtained by applying the same test to large numbers of people from all socio-economic strata of society and taking the average.

The term “IQ” was coined by the psychologist William Stern for the German term Intelligenzquotient in 1912. At that time, IQ was calculated as the ratio of mental age to chronological age, multiplied by 100. So, if an individual of 10 years of age has a mental age of 10, the IQ will be 100. However, if the mental age is greater than the chronological age, eg, 12 rather than 10, the IQ will be 120. Similarly, if the mental age is lower than the chronological age, the IQ will be lower than 100.
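
For readers who like the arithmetic spelled out, here is a minimal Python sketch of the ratio definition (the function name and sample values are ours, purely for illustration):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# The examples from the text:
print(ratio_iq(10, 10))  # 100.0 -- mental age equals chronological age
print(ratio_iq(12, 10))  # 120.0 -- mental age ahead of chronological age
print(ratio_iq(8, 10))   # 80.0  -- mental age behind chronological age
```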

When current IQ tests were developed, the average score of the norming sample was defined as IQ 100, and each standard deviation (SD) above or below the average was defined as, for example, 16 or 24 IQ points more or less than 100, depending on the test. Mensa accepts individuals who score in the top 2%, ie, two SDs or more above the average. This corresponds to a score at or above 132 on the Stanford-Binet (SD 16) or 148 on the Cattell (SD 24). Mensa accepts many different tests, as long as they have been standardised and normed and are accepted by professional psychologists’ associations.
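
To see how the same two-SD cut-off maps onto differently scaled tests, here is a short Python sketch (the function name is ours; it assumes a mean of 100 and the SDs quoted above):

```python
from statistics import NormalDist

def iq_at_z(z: float, sd: float, mean: float = 100) -> float:
    """IQ score z standard deviations above the mean on a test with the given SD."""
    return mean + z * sd

print(iq_at_z(2, sd=16))  # 132.0 on a Stanford-Binet-style scale
print(iq_at_z(2, sd=24))  # 148.0 on a Cattell-style scale

# Scoring two SDs above the mean puts one in roughly the top 2%:
print(round(100 * (1 - NormalDist().cdf(2)), 1))  # 2.3 (percent above +2 SD)
```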

The measurement of intelligence

Sir Francis Galton was the first scientist to attempt to devise a modern test of intelligence, in 1884. In his open laboratory, people could have the acuity of their vision and hearing measured, as well as their reaction times to different stimuli. The world’s first mental test, created by James McKeen Cattell in 1890, consisted of similar tasks, almost all of them measuring the speed and accuracy of perception. It soon turned out, however, that such tasks could not predict academic achievement, and were therefore probably imperfect measures of anything we would call intelligence.

The first modern-day IQ test was created by Alfred Binet in 1905. Unlike Galton, he was not driven by scientific inquiry alone. Rather, he had a very practical application in mind: identifying children who could not keep up with their peers in an educational system that had recently been made compulsory for all. Binet’s test consisted of knowledge questions as well as items requiring simple reasoning. Besides test items, Binet also needed an external criterion of validity, which he found in age. Indeed, even though there is substantial variation in the pace of development, older children are by and large more cognitively advanced than younger ones.

Binet therefore identified the age at which children, on average, become capable of solving each item, and categorized the items accordingly. This way he could estimate a child’s position relative to their peers: if a child, for instance, is capable of solving items that are, on average, only solved by children who are two years older, then that child is two years ahead in mental development.
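
A toy Python sketch of this idea follows; the item labels and the scoring rule (credit the highest age level fully passed) are simplified assumptions for illustration, not Binet’s actual procedure:

```python
# Each item is tagged with the age at which the average child first solves it.
items_by_age = {
    6: ["copy a diamond", "count 13 pennies"],
    7: ["name the days of the week"],
    8: ["count backwards from 20"],
    9: ["give change", "define abstract words"],
}

def mental_age(passed: set[str]) -> int:
    """Highest age level at which the child passes every item (simplified rule)."""
    age = 0
    for level in sorted(items_by_age):
        if all(item in passed for item in items_by_age[level]):
            age = level
        else:
            break
    return age

child = {"copy a diamond", "count 13 pennies", "name the days of the week",
         "count backwards from 20"}
print(mental_age(child))  # 8 -- this child performs like an average 8-year-old
```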

Subsequently, a more accurate approach was proposed by William Stern, who suggested that instead of subtracting the real age from the age estimated from test performance, the latter (termed ‘mental age’) should be divided by the former. Hence the famous ‘intelligence quotient’, or IQ, was born, defined as (mental age) / (chronological age). Such a calculation indeed turned out to be more in line with other estimates of mental performance. For instance, an 8-year-old performing at the level of a 6-year-old gets the same estimate under Binet’s system as a 6-year-old performing at the level of a 4-year-old: both are two years behind. Yet in Stern’s system the 6-year-old gets a lower score, as 4/6 < 6/8. Experience shows that when each of them reaches the age of 10, the child who is now 8 is more likely to outperform the one who is now 6 in cognitive tasks, so Stern’s method proved the more valid.
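
A quick check of the two scoring rules on the children from the example (the variable names are ours):

```python
# Child A: 8 years old, performs like a 6-year-old.
# Child B: 6 years old, performs like a 4-year-old.
for mental, chrono in [(6, 8), (4, 6)]:
    difference = mental - chrono  # Binet-style score: years behind
    quotient = mental / chrono    # Stern-style score: ratio of the two ages
    print(f"age {chrono}: difference = {difference}, quotient = {quotient:.2f}")

# age 8: difference = -2, quotient = 0.75
# age 6: difference = -2, quotient = 0.67
# The difference method rates the two children equally; Stern's quotient
# flags the younger child as relatively further behind.
```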

It was in the US that IQ testing became a real success story, after Lewis Terman revised Binet’s test, created a much more appropriate norm than the original, and published it as the Stanford-Binet test. He was also keen to multiply the result by 100, so the final formula for IQ is (mental age) / (chronological age) × 100. Indeed, an IQ of 130 sounds much sexier than 1.3.

This method, however, only works well in children. If a child’s parents are told that their 6-year-old already has the mental capabilities of an average 9-year-old, and therefore her IQ is 150, they will be over the moon. But if the child’s grandfather is told that even though he is only 60, his cognitive abilities are on a par with those of an average 90-year-old, he might not take it as a compliment. Obviously, the quotient only works as long as Binet’s original criterion is functional, i.e. as long as older age in general means better abilities. In other words, the method is inappropriate once mental development has ceased.

David Wechsler solved the problem of calculating adult IQ by simply comparing performance to the distribution of test scores, which closely follows a normal distribution. In his system, the IQ of those whose score equals the mean of their age group is 100. This way the IQ of the average adult is 100, just like the IQ of the average child in the original system. He used the statistical properties of the normal distribution to assign IQ scores based on the proportion of contemporaries one outscores. For instance, someone whose score is one standard deviation (a statistical measure of dispersion) above the mean, and who thus outperforms about 84% of contemporaries, has an IQ of 115. And so on; see the figure below.
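
Converting a raw test score into a deviation IQ requires nothing beyond the score’s position in the norming distribution; here is a sketch using Python’s standard library, assuming the now-common scale of mean 100 and SD 15, with an invented norm group purely for illustration:

```python
from statistics import NormalDist

def deviation_iq(score: float, norm_mean: float, norm_sd: float) -> float:
    """Map a raw score onto the IQ scale via its z-score in the norm group."""
    z = (score - norm_mean) / norm_sd
    return 100 + 15 * z  # Wechsler-style scale: mean 100, SD 15

def share_outscored(iq: float) -> float:
    """Share of contemporaries outscored by a given deviation IQ."""
    return NormalDist(100, 15).cdf(iq)

# A raw score of 65 in a (hypothetical) norm group with mean 50 and SD 15:
print(deviation_iq(65, norm_mean=50, norm_sd=15))  # 115.0
print(round(share_outscored(115) * 100, 1))        # 84.1 -- one SD above the mean
```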

So why is it called IQ, a quotient, if nothing gets divided? Well, the reason is simply that the concept of IQ had become too popular by then to be discarded. In adults the intelligence quotient is not really a quotient at all; it is an indication of how well one performs on mental tests compared with one’s contemporaries.

Besides extending the concept of IQ to adults, another major step in the development of intelligence testing was the creation of ‘group tests’, i.e. tests that can be scored in an algorithmic fashion with an answer key, and which can therefore be administered to groups rather than individually by qualified psychologists. The first such test was created for the US Army, but IQ tests eventually spread from educational and military settings to the workplace and beyond. IQ tests soon became one of psychology’s greatest popular successes, and remain so to this day.