Paper SAS364-2014: Item Response Theory: What It Is and How You Can Use the IRT Procedure to Apply It. An, X. & Yung, Y.-F. In Proceedings of the SAS® Global Forum 2014 Conference, Cary, NC, 2014. SAS Institute Inc. 14 pages.
Item response theory (IRT) is concerned with accurate test scoring and development of test items. You design test items to measure various types of abilities (such as math ability), traits (such as extroversion), or behavioral characteristics (such as purchasing tendency). Responses to test items can be binary (such as correct or incorrect responses in ability tests) or ordinal (such as degree of agreement on Likert scales). Traditionally, IRT models have been used to analyze these types of data in psychological assessments and educational testing. With the use of IRT models, you can not only improve scoring accuracy but also economize test administrations by adaptively using only the discriminative items. These features might explain why in recent years IRT models have become increasingly popular in many other fields, such as medical research, health sciences, quality-of-life research, and even marketing research. This paper describes a variety of IRT models, such as the Rasch model, two-parameter model, and graded response model, and demonstrates their application by using real-data examples. It also shows how to use the IRT procedure, which is new in SAS/STAT® 13.1, to calibrate items, interpret item characteristics, and score respondents. Finally, the paper explains how the application of IRT models can help improve test scoring and develop better tests. You will see the value in applying item response theory, possibly in your own organization!
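Note: the two-parameter (2PL) model named in the abstract is not defined on this page. As a point of reference, a standard textbook formulation of its item response function (not quoted from the paper itself) is

P(Y_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}},

where \theta_i is the latent trait level of respondent i, a_j is the discrimination parameter of item j, and b_j is its difficulty parameter. The Rasch model discussed in the paper corresponds to the special case in which all a_j are constrained to be equal (commonly fixed at 1).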
@inproceedings{an_paper_2014,
	address = {Cary, NC},
	title = {Paper {SAS364}-2014: {Item} {Response} {Theory}: {What} {It} {Is} and {How} {You} {Can} {Use} the {IRT} {Procedure} to {Apply} {It}},
	shorttitle = {Item {Response} {Theory}: {What} {It} {Is} and {How} {You} {Can} {Use} the {IRT} {Procedure} to {Apply} {It}},
	url = {http://support.sas.com/resources/papers/proceedings14/},
	abstract = {Item response theory (IRT) is concerned with accurate test scoring and development of test items. You design test items to measure various types of abilities (such as math ability), traits (such as extroversion), or behavioral characteristics (such as purchasing tendency). Responses to test items can be binary (such as correct or incorrect responses in ability tests) or ordinal (such as degree of agreement on Likert scales). Traditionally, IRT models have been used to analyze these types of data in psychological assessments and educational testing. With the use of IRT models, you can not only improve scoring accuracy but also economize test administrations by adaptively using only the discriminative items. These features might explain why in recent years IRT models have become increasingly popular in many other fields, such as medical research, health sciences, quality-of-life research, and even marketing research. This paper describes a variety of IRT models, such as the Rasch model, two-parameter model, and graded response model, and demonstrates their application by using real-data examples. It also shows how to use the IRT procedure, which is new in SAS/STAT® 13.1, to calibrate items, interpret item characteristics, and score respondents. Finally, the paper explains how the application of IRT models can help improve test scoring and develop better tests. You will see the value in applying item response theory, possibly in your own organization!},
	urldate = {2021-04-22},
	booktitle = {Proceedings of the {SAS}® {Global} {Forum} 2014 {Conference}},
	publisher = {SAS Institute Inc.},
	author = {An, Xinming and Yung, Yiu-Fai},
	year = {2014},
	keywords = {EAP, expected a posteriori, IRT, Item Response Theory, MAP, maximum a posteriori, ML, Maximum Likelihood, SAS},
	pages = {14},
}
