pros of MCQ
objective and easy to mark, no need for inter-rater reliability
can remove easy or odd questions easily
cons of MCQ
how can you measure partial knowledge
confidence marking - assigning confidence to answer chosen
elimination testing - eliminate all options believed to be incorrect
complete ordering - rank options from most to least likely to be correct
partial ordering - eliminate options you are confident are wrong and rank the remaining ones
probability testing - distribute 100 points across options
probability testing
final score is sum of probabilities assigned to correct answer
precision in measuring partial knowledge
examinees tend to like it
in the standard format you get 0 marks if you select the wrong answer
cons with multiple correct answers
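The probability-testing scoring rule above can be sketched in a few lines. A minimal sketch, assuming one common variant of the scheme: each examinee distributes 100 points across the options per question, and the final score is the sum of the points placed on the correct answer(s); the function name and data layout are mine, not from any specific marking scheme.

```python
def probability_test_score(responses, answer_key):
    """responses: one dict per question mapping option -> points (summing to 100).
    answer_key: one set of correct options per question (hypothetical layout)."""
    total = 0
    for points, correct in zip(responses, answer_key):
        # each question's points must be fully distributed
        assert sum(points.values()) == 100, "points must sum to 100"
        # credit = points assigned to the correct answer(s)
        total += sum(points.get(opt, 0) for opt in correct)
    return total

# Example: partial knowledge on Q1 (hedged between A and B), certainty on Q2.
responses = [
    {"A": 60, "B": 30, "C": 10, "D": 0},
    {"A": 0, "B": 100, "C": 0, "D": 0},
]
key = [{"A"}, {"B"}]
print(probability_test_score(responses, key))  # 160 out of a possible 200
```

Note how the hedged Q1 response still earns 60 of the 100 available points, which is the "precision in measuring partial knowledge" the notes describe: a standard-format score would have been all-or-nothing.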
major problem in MCQ
correct guessing
how to counter effect of correct guessing
formula scoring
penalty of 1/(n-1) marks per wrong answer, where n = number of options
can omit answers to avoid penalty
problems with formula scoring
students don’t like being penalized
test score is influenced by metacognition - eg omitting an answer that they would have got correct had they attempted it
test score is influenced by risk taking abilities
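The formula-scoring rule can be sketched as follows. A minimal sketch, assuming the standard correction-for-guessing formula (score = rights − wrongs/(n−1), omissions scoring 0, so a pure guesser's expected score is 0); the function name and data layout are mine.

```python
def formula_score(answers, key, n_options):
    """answers: one chosen option per item, or None for an omission.
    key: the correct option per item. n_options: options per item."""
    right = sum(1 for a, k in zip(answers, key) if a == k)
    wrong = sum(1 for a, k in zip(answers, key) if a is not None and a != k)
    # each wrong answer costs 1/(n-1); omissions are neither rewarded nor penalised
    return right - wrong / (n_options - 1)

# 10 four-option items: 6 right, 2 wrong, 2 omitted.
answers = ["A"] * 6 + ["B"] * 2 + [None] * 2
key = ["A"] * 10
print(formula_score(answers, key, 4))  # 6 - 2/3, roughly 5.33
```

This makes the cons above concrete: whether to answer or omit becomes a metacognitive gamble, since omitting scores 0 but answering risks the 1/(n−1) penalty, so two students with identical knowledge can earn different scores depending on their risk-taking.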
McDaniel et al 2007 method
McDaniel et al 2007 results
short answer (cued recall) is best
negative testing effect
chose the wrong answer during retrieval practice, then on the final test choose the wrong answer again because it's the most familiar
marsh et al 2009 method
marsh et al 2009 results
negative testing effect present
high school juniors suffered more than undergrads and omitted less, choosing a wrong answer rather than passing
CORRECTIVE FEEDBACK NEEDED!!
How do related questions lead to correct choosing
competition between lures from related questions leads to elaborative retrieval
deeper retrieval of facts to eliminate least likely option
little et al 2019 method
participants completed online MCQ practice first with general knowledge questions
- elimination testing to encourage processing of all lures
- after distractor task, completed cued-recall test
- previously tested items repeated
- related questions and new questions aka control
whether practice with related items facilitates later test performance depends on???
whether the final test is in cued-recall or MC format
what did alamri and higham in press study try to find
Does false recognition of practice questions also lead to impaired multiple-choice performance in a genuine educational context?
Can sequencing of related pairs be used to boost student performance?
alamri and higham in press METHOD
reason for the separation is that if the related questions were put back to back, participants would notice the questions aren't the same
what did alamri and higham in press think would happen
alamri and higham in press results
facilitation was observed even though participants answered all questions in MC format; bigger boost in the back-to-back condition
compare negative testing effect and related questions effect
- NTE = lures selected on T1 are falsely recalled on T2, but in the related-questions effect, the feedback given on T1 is erroneously selected again on T2
what did Kelley et al 2019 find
PeerWise involves 2 effective learning techniques
generation - generating info leads to better memory than reading
retrieval practice leads to better long term retention
Kelley et al 2019 method
at the end of the course, researchers examined the questions generated and answered by each student and determined whether any exam questions overlapped with either a generated or an answered question
compared exam performance on these overlapping questions to control exam questions, ie questions on topics with no overlap with the questions the student generated or answered on PeerWise