Creating Test Items with Automated Item Generation


A lot of research goes into ACT tests. Every question begins with psychometric grounding: items are field tested and answers are checked for validity, since assessments must measure ability rather than reward random guessing by the test-taker. In addition to accuracy, test items are also extensively reviewed for fairness. Subject matter experts (SMEs) develop questions in math, reading comprehension, science, and graphic literacy. But that process takes time, so test developers have created ways to streamline the assessment pipeline.

In this episode of the ACTNext Navigator podcast, we discuss the history of automated item generation (AIG) at ACT with Rick Meisner. He has been with ACT for over 30 years and developed some of the first AIG content for math using the BASIC programming language. Meisner also holds several patents related to AIG and automated scoring.

Later in the show, we hear from Ian MacMillan and Brad Bolender, who developed AIG software for the WorkKeys graphic literacy assessment (AIGL) and for passage organizing and extraction (POE), respectively. Each presented a poster at the 2019 Education Technology and Computational Psychometrics Symposium research poster and tech demo reception on October 9, 2019.

The views and opinions expressed in this podcast are those of the authors only and do not necessarily reflect the official policy or position of ACT, Inc. Read a transcript of the show at https://actnext.org/research-and-projects/podcast-ep7-aig-aigl-poe/