MI participated in the National Assessment of Educational Progress (NAEP) Automated Scoring Challenge sponsored by the National Center for Education Statistics (NCES). NAEP is a gold-standard, nationally representative assessment known as “The Nation’s Report Card.” To date, all NAEP constructed-response items have been hand-scored. The purpose of the Challenge was to determine whether automated scoring models could perform well on a representative subset of grade 4 and grade 8 NAEP Reading constructed-response items.
The History of Educational Measurement is the latest publication of MI Senior Advisor Dr. Michael Bunch, who, along with Dr. Brian Clauser, has written and collected essays on the key events and ideas that have shaped educational measurement as we know it today.
The COVID-19 crisis, a reminder of how precious instructional time is for students and teachers, has renewed concerns about the value of all types of tests. In a new piece for the Learning Agency, Corey Palermo, Executive Vice President and Chief Strategy Officer, shares his recommendations for making better use of tests.
The National Council on Measurement in Education (NCME) has announced that they have a new digital Instructional Topics in Educational Measurement Series (ITEMS) module in their professional development portal.
Researchers at Measurement Incorporated conducted the first comprehensive examination of the longitudinal scoring stability of a large-scale assessment program. Corey Palermo, Michael B. Bunch, and Kirk Ridge analyzed scoring data collected from 2016 to 2018, spanning three consecutive administrations of a large-scale, multi-state summative assessment program.
Dr. Corey Palermo, Vice President of Performance Assessment Scoring at Measurement Incorporated, teamed with Dr. Margareta Maria Thomson at NC State University to investigate teacher motivation associated with professional development (PD) in the context of a large-scale assessment program. From 2009–2012, over 200 teachers received PD in item writing, item review, and anchor setting (i.e., selecting responses that exemplify each rubric score point).
PEG Writing can be a reliable tool to help educators make important intervention decisions.
In a recent study published in the Journal of School Psychology, Joshua Wilson (2018) evaluated the use of PEG (Project Essay Grade) automated essay scoring as a screener to identify struggling writers as part of a universal screening system. Findings indicated that students scoring in the lower range of PEG (scores lower than 12 on a first draft of a 30-minute essay) had a higher likelihood of subsequently failing the state test. Similarly, students scoring in the upper range of PEG (scores above 18) had a high likelihood of subsequently passing the summative test.
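As a rough illustration only, the score bands reported above could be applied as a simple screening rule. This sketch is not part of Wilson's study; the function name and the middle "monitor" label for scores between the two reported bands are assumptions:

```python
def screen_writer(peg_score: float) -> str:
    """Illustrative screening rule based on the PEG score bands reported
    in Wilson (2018) for a first draft of a 30-minute essay.

    The thresholds (12 and 18) come from the study; the category labels
    are hypothetical.
    """
    if peg_score < 12:
        # Lower range: higher likelihood of failing the state test
        return "at risk"
    elif peg_score > 18:
        # Upper range: high likelihood of passing the summative test
        return "likely on track"
    else:
        # Between the reported bands: the study's findings do not
        # directly classify these students
        return "monitor"


print(screen_writer(10))   # at risk
print(screen_writer(15))   # monitor
print(screen_writer(20))   # likely on track
```

In a universal screening workflow, a rule like this might flag "at risk" students for closer review or intervention, while "monitor" students receive routine progress checks.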
Dr. Corey Palermo studied over 800 middle school students to understand how PEG Writing affects students' argumentative writing performance. He found that students who used PEG Writing produced higher quality essays.
…there are even more reasons to consider the potential of PEG Writing to make a positive impact on student achievement in your schools.
Students are more likely to increase their knowledge when they approach learning tasks strategically and actively manage their learning. Even young students are capable of regulating their learning to some extent.