Assembling a single year of the Quarterly Questions assessment program takes at least 530 test questions—enough to cover every subspecialty in ophthalmology, from cataract to uveitis—plus additional questions based on more than 40 articles. So where does all of this content come from? And how does it make its way onto your screen as part of the assessment?
This post explains how a Quarterly Questions item is written and how it moves through development, subject matter expert review, use on the assessment, and, finally, psychometric review to determine whether it is fair to use in calculating a participant’s score.
Item Development
The content of all American Board of Ophthalmology (ABO) examinations, including the Written Qualifying Examination (WQE), Oral Examination, and Quarterly Questions assessment, is developed by practicing ophthalmologists who volunteer for this role (we call them subject matter experts or “SMEs”) with the support of ABO examination development and psychometric staff. Test questions (items) are written and reviewed by SMEs, and the examinations are assembled by both SMEs and staff.
At least annually, ABO staff determines which items are needed to support the assembly of all of the following year’s examinations. Item-writing assignments (e.g., items related to management of strabismus for the Pediatrics module of Quarterly Questions) are distributed among a group of approximately 80-100 volunteers in March of each year. These volunteers are a combination of academic and private practice ophthalmologists who represent diversity in subspecialty focus, geographical location, ethnicity, and time since initial certification.
Between March and July of each calendar year, each SME volunteer receives remote item-writing training based on best-practice guidelines. These volunteers go on to develop approximately 15 items each (for all of the ABO’s examinations, not just Quarterly Questions) and are then asked to edit those items following an editorial review. They also remotely assess the relevance of their peers’ drafted items in a blind process, answering each item in a quiz-like environment. A draft item answered correctly by only a few volunteers is a signal that it likely needs further edits.
Item Review
At an in-person meeting in late summer each year, the 80-100 volunteer SMEs gather to review the drafted items. Working in groups of 8 to 10, they review all of the items drafted earlier in the year and must unanimously agree on each item’s relevance and the accuracy of the correct answer. Many items at this stage are edited, or even deleted (retired), if they are not found to meet these conditions. All items that make it past this stage are added to the item bank as “new” items.
Item Appearance on Quarterly Questions
When the ABO prepares to administer an examination, ABO staff assembles a test form using the test blueprint as a guide. Once the form has been drafted, another team of SMEs reviews the items again to ensure they are accurate and relevant. Finally, the items are released to diplomates in the Quarterly Questions platform.
Final Item Review
After an item has been in the field, a psychometric performance review takes place. This includes reviewing how many diplomates answered the question correctly versus incorrectly and reviewing the comments diplomates left on the item. Items that appear to have more than one correct answer, have been mis-keyed, or are otherwise flawed are sent to an SME for a final review. The SME reviews the item in the context of its performance data and participant comments and decides whether the item is fair to use in scoring. Because items in Quarterly Questions are never used more than once, an item that has gone through this process does not appear on any other ABO examinations or in future versions of Quarterly Questions.
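The post does not spell out the statistics behind this performance review, but classical item analysis gives a feel for how such flags might be raised: each item’s difficulty (the proportion of participants answering it correctly) and discrimination (how strongly success on the item correlates with performance on the rest of the test) are computed, and outliers are routed to an SME. The sketch below is a hypothetical illustration only; the flag_items helper, the thresholds, and the flagging rules are assumptions, not the ABO’s actual procedure.

```python
# Hypothetical classical item-analysis sketch. Assumes scored responses
# are a 0/1 matrix (rows = participants, columns = items). The statistics
# and thresholds are illustrative, not the ABO's actual criteria.
from statistics import mean, stdev

def pearson(x, y):
    """Sample Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    sx, sy = stdev(x), stdev(y)
    return cov / (sx * sy) if sx and sy else 0.0

def flag_items(responses, min_difficulty=0.30, min_discrimination=0.0):
    """Return (item index, difficulty, discrimination) for suspect items."""
    totals = [sum(row) for row in responses]
    flagged = []
    for i in range(len(responses[0])):
        scores = [row[i] for row in responses]
        difficulty = mean(scores)  # proportion answering correctly
        # Discrimination: correlate the item score with the score on the
        # rest of the test (total minus this item).
        rest = [t - s for t, s in zip(totals, scores)]
        discrimination = pearson(scores, rest)
        # Flag very hard items and items that stronger participants miss
        # more often than weaker ones (a possible mis-keyed answer).
        if difficulty < min_difficulty or discrimination < min_discrimination:
            flagged.append((i, round(difficulty, 2), round(discrimination, 2)))
    return flagged

# Toy data: item 2 is answered "correctly" only by the lowest scorers,
# so it is flagged as a possible mis-key and routed to an SME.
responses = [
    [1, 1, 0, 1],  # stronger participants
    [1, 1, 0, 1],
    [1, 0, 1, 0],  # weaker participants
    [1, 0, 1, 0],
]
print(flag_items(responses))  # -> [(2, 0.5, -1.0)]
```

A strongly negative discrimination, as for item 2 in the toy data, is the classic signature of a mis-keyed answer: the participants who do best on the rest of the test are the ones getting the item “wrong.”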
What would you like to know about ABO assessments? Submit your question to communications@abop.org.