How to structure a multiple choice question exam
Anthony Evans explains how to arrange a multiple choice question exam to provide a fair evaluation of students’ understanding, aid their learning progress and minimise cheating
Multiple-choice question (MCQ) assessments are often viewed with scepticism in higher education. I myself had always questioned their value until recently, when I began to see how useful they could be in certain learning contexts for checking students’ progress. Students who are non-native speakers often find that essay-based exams are more a test of their English skills than their comprehension of the content, and they appreciate using exam time to think rather than to write.
Companies use MCQs for important internal compliance training. If we accept the validity of MCQs for critical governance matters, I see no reason to dismiss them out of hand for higher education. I believe that MCQs have a place, provided that they are done well.
The first step is understanding how to construct effective MCQs. I shared extensive advice on this in a previous advice resource, “Creating worthwhile multiple choice questions for higher education assessment”.
Once you have a bank of suitable questions, the issue becomes how to arrange them into an effective exam. When using MCQs, I distinguish between four main uses:
A form — this is something that students fill in either online or on paper. It is something they submit for me to monitor their progress. For example, if I provide a structured assignment, I will require them to complete a form as they go. This allows me to compare the progress of different groups and to see how much more time is needed. It is ungraded and helps to manage the flow of the session. I typically use Google Forms for MCQ forms. Given that this is ungraded, I also include open-ended questions, where students write in their answer.
A quiz — this is intended to help students check their learning. They can be used at the end of a session to cover recent concepts, or at the start of a session to permit time for reflection. They are ungraded, low risk and can be fun. I typically use Kahoot! for MCQ quizzes.
A test — this is primarily to allow students to practise for an exam. It has a similar format and uses similar types of questions. I typically use Google Forms, set as a “Quiz”, for MCQ tests.
An exam — this is a formal assessment and constitutes part of a student’s grade. To integrate with the grade book, I use whatever learning platform is used by the programme, such as Blackboard or Canvas.
This article focuses on the latter two. Here are the key considerations for constructing a test or exam using MCQs:
Provide a range of difficulty — there’s no reason to assume that questions should be of equal difficulty, even if they count for the same number of points. In an essay exam, students can obtain the first 50 per cent of the grade fairly easily, but every percentage point above 90 per cent is progressively harder to achieve. This helps to create a normal distribution.
A risk with an MCQ exam is that instructors have no discretion to deliver a curve by retrospectively modifying the mark scheme, so great care needs to be given to the mix of questions. When instructors grade essays, they typically restrict themselves to a narrow range of marks, which ensures the right distribution. In an MCQ exam, you bring both tails into play. In an MCQ exam with 10 questions, ensuring that five are relatively easy reduces the risk of a large number of fails. And ensuring that at least one question is very difficult prevents blanket scores of 100 per cent, which would otherwise deny stronger students the ability to distinguish themselves from the rest of the cohort.
Assigning different points for different difficulty levels will also help with this. For example, have lots of simpler questions worth one point each and several harder questions worth two or more.
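The weighted mark scheme described above can be sketched in a few lines of Python. The question mix, point values and `score` helper here are hypothetical illustrations, not a prescription:

```python
# Hypothetical question bank: five easy one-point items,
# two medium two-point items, one hard three-point item.
questions = [
    {"id": f"Q{i}", "difficulty": "easy", "points": 1} for i in range(1, 6)
] + [
    {"id": "Q6", "difficulty": "medium", "points": 2},
    {"id": "Q7", "difficulty": "medium", "points": 2},
    {"id": "Q8", "difficulty": "hard", "points": 3},
]

def score(answers_correct):
    """Sum the points for every question the student got right.

    answers_correct: dict mapping question id -> bool.
    """
    return sum(q["points"] for q in questions if answers_correct.get(q["id"]))

total = sum(q["points"] for q in questions)

# A student who answers only the five easy questions correctly
# still clears a respectable floor of the available marks:
easy_only = {f"Q{i}": True for i in range(1, 6)}
print(score(easy_only), "/", total)  # 5 / 12
```

Note how the weighting builds the distribution in: the easy half of the paper provides a floor, while the hard item at the top stretches the strongest students.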
Keep questions independent — questions that build on one another create higher risk for students, because missing one question severely impacts the next. They can also give savvy students an advantage that reduces the need to apply course content, since information provided in one question can be used as an input to solving another.
If multiple questions relate to the same exhibit, duplicate the exhibit in each question so that every question remains self-contained.
Take steps to reduce cheating — the main cheating risk is communication between students. Presenting the questions in a different order for each student, and shuffling the answer options within each question, reduces this.
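Most exam platforms can randomise question and option order for you, but the idea can be sketched directly. The `shuffled_exam` helper and the sample questions below are hypothetical:

```python
import random

def shuffled_exam(questions, student_id):
    """Return a per-student ordering of questions and answer options.

    Seeding the generator with the student id makes each student's
    shuffle reproducible, so the same paper can be regenerated for
    marking. (A sketch; real exam platforms handle this for you.)
    """
    rng = random.Random(student_id)
    exam = []
    for q in questions:
        options = list(q["options"])  # copy so the question bank stays untouched
        rng.shuffle(options)
        exam.append({"stem": q["stem"], "options": options})
    rng.shuffle(exam)  # mix the question order as well as the options
    return exam

# Hypothetical two-question bank:
questions = [
    {"stem": "2 + 2 = ?", "options": ["3", "4", "5", "6"]},
    {"stem": "A binding price ceiling causes...", "options": ["a shortage", "a surplus", "no change", "higher prices"]},
]

# The same student id always regenerates the same paper:
assert shuffled_exam(questions, "s-001") == shuffled_exam(questions, "s-001")
```

Because each student's paper is a deterministic function of their id, a neighbour's "the answer to question 3 is B" is useless, yet marking stays straightforward.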
Don’t reveal the answers — when using MCQs as a practice test, it is important that students see their score, and I think it is helpful to let them see which questions they answered correctly and which they got wrong. In my experience, however, revealing the correct answers themselves gives students too easy a shortcut and short-circuits the learning process.
Don’t deny them the opportunity to pass the test by giving away the solutions!
Don’t reveal the grades — make sure that you have an opportunity to review the grade distribution, correct any errors in the mark scheme, and finalise any partial credit decisions, before students see their scores.
The points above have convinced me that MCQ exams are a useful assessment method to use. However, I remain cautious and adopt the following rules:
1. Give students an opportunity to do a practice test before any graded exam.
2. Provide clear instructions and communicate them effectively.
3. Monitor the results. It is important that instructors test their intuition about what constitutes a good or bad MCQ exam against the data. If a significant number of students misunderstand a question, it is probably phrased poorly. If every student gets a question right (or wrong), consider whether it is serving its purpose.
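Rule 3 can be partly automated with a simple item-analysis pass. The response data below is made up for illustration:

```python
# Hypothetical results: question id -> one True/False per student.
responses = {
    "Q1": [True, True, True, True],      # everyone correct
    "Q2": [True, False, True, False],    # mixed -- this item discriminates
    "Q3": [False, False, False, False],  # everyone incorrect
}

flagged = []
for qid, results in responses.items():
    rate = sum(results) / len(results)
    # A facility of 0% or 100% means the item told us nothing
    # about differences between students -- review it.
    if rate in (0.0, 1.0):
        flagged.append(qid)
        print(f"{qid}: all students {'correct' if rate else 'incorrect'} -- review this item")

# flagged == ["Q1", "Q3"]
```

A question flagged here is not automatically bad (a universally answered warm-up may be there by design), but it should prompt the review the rule describes.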
The first few times I used MCQs, I got them badly wrong. But I try to be a quick learner.
Anthony J. Evans is a professor of economics at ESCP Business School.
For further insight on the pedagogical effectiveness of MCQs read:
“Writing multiple-choice questions for higher-level thinking” by Mike Dickinson
“Writing good multiple choice test questions” by Cynthia J. Brame.