Are A levels the best indicator of student potential in higher education or should the government consider introducing US-style SATs? Mandy Garner reports
As this year's A-level results are published, questions are likely to arise about whether such exams are the fairest way to increase access and reach the magic 50 per cent participation rate in higher education that the government is so keen to achieve.
Earlier this year, the Sutton Trust published research showing that American-style aptitude tests (SATs) - IQ-type tests designed to measure academic potential - could be used alongside A levels to broaden university intake, offering a second chance to students who do not perform well in traditional A-level exams.
Head teachers supported calls for the government to look into the matter and Oxford is already piloting such tests.
Ironically, at the same time as the United Kingdom is considering SATs, they have been at the centre of a row in the US after the University of California said that it wanted to abandon them in favour of a more content-based test. Its president believes the existing test limits the high-school curriculum. There are also concerns that the tests in English discriminate against immigrant children who have English as a second language, and that they are culturally biased. Afro-American and Latino children traditionally do least well at SATs, although they do better at SATs than they do in the classroom.
The SAT is taken by about 2 million Americans a year and is administered by the College Board, which was founded in 1900 to help students make the transition to higher education. The Educational Testing Service develops and administers the test, and members and outside experts devise the questions, which are reviewed by the SAT committee, made up of College Board members.
The most popular aptitude test is the SAT I, which measures maths and verbal reasoning abilities. The SAT II measures aptitude in subjects such as English, maths, science, history and foreign languages. In the South and Midwest, the ACT, a test published by American College Testing that measures a student's aptitude in English, maths, reading and science reasoning, is more popular.
The THES decided to contribute to the debate by subjecting a range of academics and students to an SAT I test. Surprisingly, volunteers were thin on the ground. Many found it hard to find a spare three hours at all - Laurie Taylor asked to do the test in 15-minute bursts. In the end, three brave volunteers put their heads above the parapet: Peter Knight, vice-chancellor of the University of Central England; Dave Barlow, a senior lecturer in the pharmacy department at King's College London; and Corinne Spivack, a student in year 12 at Beal High School in Redbridge, Essex.
The exam is divided into six sections, which alternate between English and maths and between easy and hard questions - all with multiple-choice answers.
The THES award for top swot went to Peter Knight, who got a startling combined score of 1,490, easily more than the 1,200 he would need to get into an Ivy League university. His scores for English and maths were more or less evenly matched.
Next came Dave Barlow who scored 1,330. Ironically, given his background in sciences, he did best in English, although he complained that the English questions were the hardest. Corinne Spivack, who hopes to go on to study languages, got 940 and admitted she struggled at the end of the exam because of the chopping and changing between maths and English. Maths was her weak point. Below they give their views on the test.
Peter Knight
The THES instructions were explicit. I had to perform under exam conditions with only a calculator for company. I opted for shutting myself in the study after having excluded all the family from the house. This enforced isolation was nearly successful except that our giant ginger cat managed to push open the door and helpfully sat on the answer sheet. He then showed his displeasure by noisily honking up a furball. That is my appeal for extenuating circumstances.
University entry is a competitive process, with selection based largely on the applicant's achievement at A level. The disadvantages of this system include conditional offers and the associated bear garden that is the clearing system. A-level results are not generally a good predictor of the degree that will ultimately be achieved. The system is likely to deteriorate further as a result of the ludicrous new Universities and Colleges Admissions Service scoring system, which seeks to pretend that an "A" at AS level is in some magical way equal to a "C" at A level.
So would SATs be better? They are certainly well designed, sophisticated and easy to administer. They provide a post-qualification admission system and may also be a better predictor of success than A levels. I wonder how good a predictor of success they are in the United States. As an idea it is worth thinking about. Meanwhile, I do not intend to sit an SAT ever again!
Dave Barlow
The test seemed well designed and the times allocated to the different sections sensible. I am intrigued to know, however, why those compiling them thought it necessary to alternate between blocks of English and maths questions. Personally, I would have preferred to have been presented with all English questions together and all maths questions together.
Also, the English tests seemed to require a higher level of specific vocabulary - words such as "evanescent" and "circumlocution" - than the maths, which required only an elementary knowledge of terms such as "integers".
If the pharmacy department at King's were to demand that all prospective students sat such tests and made offers only to those who achieved scores on a par with those demanded for top US colleges, our first-year intake (currently at 100 plus) would almost certainly fall to single figures. The literacy and numeracy skills of our incoming students would prove woefully inadequate.
But introducing such an admissions policy would be entirely inappropriate. GCSE and A-level courses do not equip students for this kind of test. We find that school-leavers with respectable A-level maths grades are generally competent at calculus but are much less confident at simple arithmetic, for example. Equally, we find that students have very limited abilities in English comprehension and writing.
This is why we decided some years ago that we needed to provide students with formative exercises in essay writing and a compulsory (elementary) maths course to cover these deficiencies.
Corinne Spivack
The test lasts three hours - a long time for any exam - and the mix of maths and English content added to the difficulty. Towards the end, it became so tiresome that my mind was unable to focus effectively. The temptation to guess many of the answers was great. A student taking this exam in "real" circumstances, knowing that success would help them get into college, would hopefully be better prepared and less tempted to guess, but the fact that the exam is so varied could mean their answers might not reflect their true capabilities. However, I believe the test might be a good way of assessing people's ability if it were better designed. Either it could contain a broader cross-section of subject matter or it could be split into two separate shorter tests - one for English and one for maths. This would help students to concentrate fully on one subject or the other for its 90-minute duration.
1) If the points P (-2, 6), Q (-2, 1), and R (2, 1) are vertices of a triangle, what is the area of the triangle?
2) If a pound of grass seed covers an area of 500 square feet and costs $3.25, what is the cost, in dollars, of the seed needed to cover a level rectangular area that measures 200ft by 300ft?
3) Billboard is to advertisement as
a) sculpture: museum
b) store: window
c) library: book
d) canvas: painting
e) theatre: intermission
4) Tunnel is to mine as
a) conduit: fluid
b) corner: intersection
c) sign: detour
d) aisle: seat
e) corridor: building
© College Board
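For readers who want to check their working on the two maths questions, the arithmetic can be verified directly. A minimal sketch in Python (the helper name is ours, not the College Board's):

```python
# SAT maths questions 1 and 2, checked by direct calculation.

def triangle_area(p, q, r):
    """Area of a triangle from its vertices, via the shoelace formula."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Q1: P(-2, 6), Q(-2, 1), R(2, 1) form a right triangle with legs 5 and 4.
print(triangle_area((-2, 6), (-2, 1), (2, 1)))  # 10.0

# Q2: one pound of seed covers 500 sq ft and costs $3.25;
# the plot measures 200ft by 300ft.
pounds_needed = (200 * 300) / 500   # 120 pounds of seed
print(pounds_needed * 3.25)         # 390.0 dollars
```

The shoelace formula is one of several ways to get the answer; here the triangle is right-angled at Q, so half of base times height gives the same result.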