History of Intelligence Testing
Among the first to investigate individual differences in mental ability was the British scientist Sir Francis Galton, who compared people based on their awards and accomplishments. This research convinced him that intelligence was inherited and led to further studies evaluating individual differences in reaction time and in the range and specificity of the senses, measures that were later shown to correlate poorly with academic success.
A French psychologist, Alfred Binet, developed a test to predict academic success when the French government asked him to help determine which children in the public schools would have difficulty with formal education. He and his colleague Theodore Simon found that tests of practical knowledge, memory, reasoning, vocabulary, and problem solving were better predictors of school success than the sensory tests used by Galton. Subjects were asked to perform simple commands and gestures, repeat spoken numbers, name objects in pictures, define common words, tell how two objects are different, and define abstract terms. Similar items are used in today's intelligence tests.
Assuming that children all follow the same pattern of development but develop at different rates, Binet and Simon created the concept of mental age, whereby, for example, a child of any age who scored as well as an average twelve-year-old was said to have a mental age of twelve.
Binet's test was not widely used in France, but Henry Goddard, director of a school for mentally challenged students, brought it to the United States, translated it into English, and used it to test people for mental retardation. Lewis Terman, another American psychologist, adapted the test for use with adults, established new standards for average ability at each age, and called it the Stanford-Binet Intelligence Scale because of his affiliation with Stanford University.
Terman is also responsible for the term intelligence quotient, or IQ. He changed the way the results of the test were stated from a simple mental age to a quotient, a number which is the result of dividing one number by another. In this case, the mental age is divided by the chronological age, and the result is multiplied by 100, simply to get rid of the decimal point. So a child who is eight years old and answers the test questions as well as an average twelve-year-old scores an intelligence quotient of 12/8 x 100, or 150. A twelve-year-old who answers the test questions as well as an average eight-year-old would have an IQ of 8/12 x 100, or about 67.
This formula works well for comparing children, but since intelligence levels off in adulthood, it is not appropriate for adults. A thirty-year-old who answers questions as well as an average twenty-year-old would have an IQ of only 20/30 x 100, or about 67, even though the two perform identically.
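The ratio calculation described above is simple enough to state directly. The following sketch reproduces the article's examples; the function name and rounding to the nearest whole number are choices made here for illustration, not part of Terman's original procedure.

```python
def ratio_iq(mental_age, chronological_age):
    """Terman's ratio IQ: mental age divided by chronological age,
    multiplied by 100 to remove the decimal point.
    Rounded to the nearest whole number for readability."""
    return round(mental_age / chronological_age * 100)

# The eight-year-old who performs like an average twelve-year-old:
print(ratio_iq(12, 8))    # 150

# The twelve-year-old who performs like an average eight-year-old:
print(ratio_iq(8, 12))    # 67

# The adult case that breaks the formula: a thirty-year-old
# performing like an average twenty-year-old.
print(ratio_iq(20, 30))   # 67
```

Note how the last example illustrates the article's point: the formula penalizes adults for aging even when their ability has merely leveled off.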
So intelligence tests today no longer use the IQ formula. Instead, the score on a modern intelligence test compares a person's performance with that of others the same age, with the average score arbitrarily defined as 100. By convention, most people still use the term IQ to refer to a score on an intelligence test.
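The modern approach described above is often called a deviation score: a raw test score is compared to the distribution of scores for the test-taker's own age group and rescaled so the average is 100. The sketch below assumes a standard deviation of 15 points, a common convention on modern tests, though the article does not specify one; the function name and the sample numbers are illustrative.

```python
def deviation_iq(raw_score, age_group_mean, age_group_sd):
    """Deviation-style score: how far a raw score falls from the
    age-group average, rescaled so the mean is 100 and one standard
    deviation is 15 points (a common convention, assumed here)."""
    return round(100 + 15 * (raw_score - age_group_mean) / age_group_sd)

# A raw score exactly at the age-group average maps to 100:
print(deviation_iq(52, 52, 8))   # 100

# A raw score one standard deviation above average maps to 115:
print(deviation_iq(60, 52, 8))   # 115
```

Because the comparison is always against people of the same age, the adult problem with the old ratio formula disappears: an average thirty-year-old and an average twelve-year-old both score 100.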
Group Intelligence Tests
Before World War I, all intelligence tests were administered on a one-to-one basis. During the war, a group of psychologists led by Robert M. Yerkes developed two tests, one for English speakers and one for non-English speakers or illiterates, which could be administered to groups of recruits to help the army determine the most effective placement of individuals. The highest-scoring recruits were considered for officer training, and the lowest-scoring recruits were rejected from service.
Following the war, group tests grew in popularity. The National Intelligence Test, developed by Terman and Yerkes, was first used around 1920 to test school children. The Scholastic Aptitude Test (SAT) was introduced in 1926 to help colleges and universities screen prospective students.
Today individual and group intelligence tests are widely used in education, the military, and business.
© 2005-2021 Lyric Duveyoung. All rights reserved.