Automated Scoring of Mathematics Tasks in the Common Core Era: Enhancements to M-Rater in Support of Cbal™ Mathematics and the Common Core Assessments. Fife, J. H. ETS Research Report Series, 2013(2):i–35, December 2013.
@article{fife_automated_2013,
	title = {Automated {Scoring} of {Mathematics} {Tasks} in the {Common} {Core} {Era}: {Enhancements} to {M}-{Rater} in {Support} of {Cbal}™ {Mathematics} and the {Common} {Core} {Assessments}},
	volume = {2013},
	copyright = {© 2013 Educational Testing Service},
	issn = {2330-8516},
	shorttitle = {Automated {Scoring} of {Mathematics} {Tasks} in the {Common} {Core} {Era}},
	url = {http://onlinelibrary.wiley.com/doi/10.1002/j.2333-8504.2013.tb02333.x/abstract},
	doi = {10.1002/j.2333-8504.2013.tb02333.x},
	abstract = {The m-rater scoring engine has been used successfully for the past several years to score CBAL™ mathematics tasks, for the most part without the need for human scoring. During this time, various improvements to m-rater and its scoring keys have been implemented in response to specific CBAL needs. In 2012, with the general move toward creating innovative tasks for the Common Core assessment initiatives, in traditional testing programs, and with potential outside clients, and to further support CBAL, m-rater was enhanced in ways that move ETS's automated scoring capabilities forward and that provide needed functionality for CBAL: (a) the numeric equivalence scoring engine was augmented with an open-source computer algebra system; (b) a design flaw in the graph editor, affecting the way the editor graphs smooth functions, was corrected; (c) the graph editor was modified to give assessment specialists the option of requiring examinees to set the viewing window; and (d) m-rater advisories were implemented in situations in which m-rater either cannot score a response or may provide the wrong score. In addition, 2 m-rater scoring models were built that presented some new challenges.},
	language = {en},
	number = {2},
	urldate = {2016-05-05},
	journal = {ETS Research Report Series},
	author = {Fife, James H.},
	month = dec,
	year = {2013},
	keywords = {MathML, automated scoring, computer algebra system, cubic spline, education, equation editor, graph response, local extremum, mathematics, mathematics test item, uses sympy},
	pages = {i--35},
}
