The theory and practice of item response theory pdf


R.J. de Ayala (), The Theory and Practice of Item Response Theory. SpringerLink.

Item response theory (IRT) is concerned with accurate test scoring and the development of test items. Test items are designed to measure various kinds of abilities (such as math ability), traits (such as extroversion), or behavioral characteristics (such as purchasing tendency). Responses to test items can be binary (such as correct or incorrect responses on ability tests) or ordinal (such as degrees of agreement on Likert scales).
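For binary responses, one of the most common IRT models is the two-parameter logistic (2PL). As a minimal sketch (the function name and parameter values here are illustrative, not taken from any particular package), the probability of a correct response depends on the examinee's ability and on the item's difficulty and discrimination:

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals the item's difficulty, the probability of a
# correct response is exactly 0.5, whatever the discrimination.
print(irf_2pl(0.0, 1.2, 0.0))  # → 0.5
```

Higher discrimination `a` makes the curve steeper around the difficulty point, so the item separates examinees near that ability level more sharply.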

What is Item Response Theory? by Nick Shryane

Item Response Theory: What It Is and How You Can Use the IRT Procedure to Apply It

Advancing Human Assessment. Few would doubt that researchers at ETS have contributed more to the general topic of item response theory (IRT) than individuals from any other institution. In this chapter, we review most of those contributions, dividing them into sections by decade of publication. The chapter traces a wide range of contributions through the decades, ending with recent work that produced models involving complex latent variable structures and multiple dimensions. IRT models, in their many forms, are undoubtedly the most widely used models in large-scale operational assessment programs. They have grown from negligible early usage to almost universal usage in large-scale assessment programs, not only in the United States but in many other countries with active and up-to-date programs of research in psychometrics and educational measurement.


In various assessment contexts, including entrance examinations, educational assessments, and personnel appraisal, performance assessment by raters has attracted much attention as a way to measure higher-order abilities of examinees. However, a persistent difficulty is that the accuracy of ability measurement depends strongly on rater and task characteristics. To resolve this shortcoming, various item response theory (IRT) models that incorporate rater and task characteristic parameters have been proposed. However, because various models with different rater and task parameters exist, it is difficult to understand each model's features. Therefore, this study presents empirical comparisons of IRT models. Specifically, after reviewing and summarizing the features of existing models, we compare their performance through simulation and experiments on actual data.
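The common idea behind such rater-parameter models can be sketched in the style of a many-facet Rasch model, where the log-odds of a positive rating subtract both a task difficulty and a rater severity from the examinee's ability. This is an illustrative simplification (parameterizations differ across the models the study compares, and the names below are assumptions for the sketch):

```python
import math

def p_positive(theta, difficulty, severity):
    """Many-facet Rasch-style sketch: the logit of a positive rating is
    examinee ability minus task difficulty minus rater severity."""
    logit = theta - difficulty - severity
    return 1.0 / (1.0 + math.exp(-logit))

# A harsher rater (higher severity) lowers the expected rating for the
# same examinee on the same task.
lenient = p_positive(1.0, 0.0, -0.5)
harsh = p_positive(1.0, 0.0, 0.5)
print(lenient > harsh)  # → True
```

Separating rater severity from examinee ability is exactly what lets these models correct scores for the accident of which rater an examinee happened to draw.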

In psychometrics, item response theory (IRT), also known as latent trait theory, strong true score theory, or modern mental test theory, is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability that item was designed to measure. Several different statistical models are used to represent both item and test taker characteristics. This distinguishes IRT from, for instance, Likert scaling, in which "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments" [2] p. By contrast, item response theory treats the difficulty of each item (expressed through the item characteristic curves, or ICCs) as information to be incorporated in scaling items.
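The role of difficulty as a location parameter can be seen in the simplest (Rasch, or 1PL) item characteristic curve: raising an item's difficulty shifts its ICC to the right, so the same examinee has a lower chance of success on the harder item. A minimal sketch (names are illustrative):

```python
import math

def icc(theta, b):
    """Rasch (1PL) item characteristic curve: probability of a correct
    response at ability theta for an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Two items, same examinee (theta = 0): the harder item (b = 1.0)
# yields a lower success probability than the easier one (b = -1.0).
easy, hard = icc(0.0, -1.0), icc(0.0, 1.0)
print(easy > hard)  # → True
```

Because each item carries its own curve, items of different difficulties contribute different amounts of information at different ability levels, which is precisely what Likert-style parallel-item scaling ignores.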
