Guidelines for Technology-Based Assessment
Drew and Deborah were delighted to collaborate with industry colleagues on a document recently released by the International Test Commission (ITC) and the Association of Test Publishers (ATP) titled “Guidelines for Technology-Based Assessment”. The document is a comprehensive set of guidelines tailored specifically to technology-based assessment. It provides guidance and best practices for the design, delivery, scoring, and use of digital assessments, while ensuring validity, fairness, accessibility, security, and privacy. It is intended to be a living document, continuously revised and updated as technology evolves.
Drew provided the initial draft for the document chapter focused on Testing Disruptions. Drawing on his recent work in the state of Florida, as well as with other testing programs that experienced significant candidate testing disruptions during their test administration windows, Drew reviewed some of the critical roles and responsibilities that testing programs need to activate when they encounter these types of challenges. In addition to discussing how to respond when a testing disruption occurs, Drew’s chapter also reviewed critical steps that every program can take proactively to reduce the likelihood of testing disruptions. These activities include steps in beta testing, requirements when partnering with test delivery vendors, and data collection activities that need to be built into every testing program.
Deborah’s chapter, co-authored by Dylan Molenaar, discusses using item response times in scoring when processing speed is an important attribute of performance. Item response times can be used either as the sole factor in determining a test score or in conjunction with item scores (i.e., right/wrong/partially right). For example, a fast correct response could earn more points than a slow correct response, while a fast incorrect response could result in losing points. Guidelines for using item response times in scoring include disclosing to test takers and those who interpret scores that timing data are used, recording time with a high degree of precision (e.g., milliseconds rather than seconds), eliminating construct-irrelevant differences in response times (e.g., those arising from different computer technology), and achieving acceptable model fit before using response times in scoring.
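To make the idea concrete, a speed-weighted scoring rule of the kind described above might look like the following minimal sketch. The function name, point values, and time limit here are purely illustrative assumptions, not a method prescribed by the Guidelines; note that it records time in milliseconds, consistent with the precision recommendation.

```python
def score_item(correct: bool, response_time_ms: int,
               time_limit_ms: int = 10_000) -> float:
    """Illustrative speed-weighted item score (hypothetical weights).

    Faster correct responses earn a larger bonus on top of the base point;
    faster incorrect responses incur a larger penalty, discouraging rapid
    guessing. Responses at or beyond the time limit get no speed adjustment.
    """
    # Speed factor: 1.0 for an instantaneous response, 0.0 at the time limit.
    speed = max(0.0, 1.0 - response_time_ms / time_limit_ms)
    if correct:
        return 1.0 + speed      # base point plus speed bonus
    return -0.5 * speed         # fast wrong answers lose points

# A fast correct answer outscores a slow correct one,
# and a fast incorrect answer loses points:
print(score_item(True, 2_000))    # fast and correct
print(score_item(True, 9_000))    # slow and correct
print(score_item(False, 2_000))   # fast and incorrect
```

In practice, the Guidelines also call for checking model fit before any such rule is used operationally; this sketch only illustrates the scoring intuition.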
ACS is extremely proud of Deborah and Drew’s contributions to this document. If you would like to review the “Guidelines for Technology-Based Assessment”, it is available electronically at no charge on the ATP and ITC websites.