Tuesday, January 10, 2012

VALIDITY AND RELIABILITY OF THE TEST

A. Methods of determining validity
Validity is concerned with the extent to which test results serve their intended use. The concept of validity, as used in testing, can be clarified further by noting the following general points:
Validity refers to the interpretation of the results (not to the test itself).
Validity is inferred from available evidence (not measured).
Validity is specific to a particular use (selection, placement, evaluation of learning, and so forth).
Validity is expressed by degree (for example, high, moderate, or low).
Basic types of validity:
Content Validity
How adequately does the test content sample the larger domain of situations it represents? Content validity is a matter of determining whether the test sample is representative of the larger domain it is supposed to represent. The procedure for obtaining high content validity:
Identifying the subject-matter topics and the learning outcomes to be measured.
Preparing a set of specifications.
Constructing a test.
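The steps above amount to building a table of specifications and checking the test against it. A minimal sketch, with invented topics, blueprint weights, and item counts:

```python
# Hypothetical table of specifications: each topic's target share of the test
blueprint = {"grammar": 0.40, "vocabulary": 0.35, "reading": 0.25}

# Topic classification of each item on a hypothetical 20-item test
items = ["grammar"] * 8 + ["vocabulary"] * 7 + ["reading"] * 5

# Compare the actual topic coverage with the blueprint
for topic, target in blueprint.items():
    actual = items.count(topic) / len(items)
    print(f"{topic}: target {target:.0%}, actual {actual:.0%}")
```

The closer the actual proportions track the blueprint, the stronger the case that the test content representatively samples the intended domain.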
Criterion-Related Validity
How well does test performance predict future performance (predictive validity) or estimate present standing (concurrent validity) on some other valued measure called a criterion? There are two types of criterion-related validity:
Predictive validity: concerned with the use of test performance to predict future performance on some other valued measure called a criterion.
Concurrent validity: concerned with the use of test performance to estimate current performance on some criterion.
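Both types of criterion-related validity come down to correlating test scores with scores on the criterion measure. A minimal sketch of computing such a validity coefficient, using invented entrance-test scores and later course grades (all names and numbers are hypothetical):

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical data: entrance-test scores and later course grades
test_scores = [55, 60, 65, 70, 75, 80, 85]
grades      = [58, 62, 61, 72, 74, 79, 88]

# A high positive r suggests the test predicts the criterion well
print(round(pearson_r(test_scores, grades), 2))
```

The same computation serves both types: for predictive validity the criterion scores are collected after a time interval; for concurrent validity they are collected at about the same time as the test.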
Construct Validity
The aim is to identify all the factors that influence test performance and to determine the degree of influence of each. The process includes the following steps:
Identifying the constructs that might account for test performance.
Formulating testable hypotheses from the theory surrounding each construct.
Gathering data to test these hypotheses.
B. Methods of determining reliability
Reliability refers to the consistency of test scores. A reliability coefficient is also a correlation coefficient. There are four methods of estimating reliability, each providing a different type of information:
Test-Retest Method
The stability of test scores over some given period of time. The test-retest method requires administering the same form of the test to the same group after a time interval.
Equivalent-forms method
The consistency of test scores over different forms of the test (different samples of items).
Test-retest with equivalent forms
The consistency of test scores over both a time interval and different forms of the test. This is a combination of the test-retest and equivalent-forms methods: two different forms of the same test are administered with time intervening.
Internal-consistency methods
The consistency of test scores over different parts of the test. These methods require only a single administration of the test. One procedure, the split-half method, involves scoring the odd items and the even items separately and correlating the two sets of scores. The reliability of the whole test is then estimated with the Spearman-Brown formula:
Reliability of total test = (2 × reliability for half test) / (1 + reliability for half test)
Another internal-consistency estimate, the Kuder-Richardson formula (KR-21), requires only three pieces of information: (1) the number of items in the test, (2) the mean, and (3) the standard deviation.
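The split-half procedure and the correction formula above can be sketched as follows; the item responses (1 = correct, 0 = wrong) for six students on a six-item test are invented for illustration:

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical item responses (1 = correct, 0 = wrong): 6 students x 6 items
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
]

odd_scores  = [sum(r[0::2]) for r in responses]  # items 1, 3, 5
even_scores = [sum(r[1::2]) for r in responses]  # items 2, 4, 6

r_half  = pearson_r(odd_scores, even_scores)     # reliability for half test
r_total = (2 * r_half) / (1 + r_half)            # Spearman-Brown correction
print(round(r_half, 2), round(r_total, 2))
```

Note that the corrected coefficient is always higher than the half-test correlation, reflecting the fact that a longer test is more reliable than a shorter one.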
Standard error of measurement
The standard error of measurement indicates the amount of error to allow for when interpreting individual test scores. Formula:
Standard error of measurement = s√(1 − r), where s is the standard deviation of the test scores and r is the reliability coefficient.
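A quick numeric illustration of this formula, with assumed values (a standard deviation of 5 and a reliability of 0.91):

```python
import math

# Hypothetical values for a classroom test
s = 5.0    # standard deviation of the test scores
r = 0.91   # reliability coefficient

sem = s * math.sqrt(1 - r)
print(round(sem, 1))
```

With these numbers the standard error is 1.5 score points, so a student with an observed score of 75 would be interpreted as falling roughly in the band 73.5 to 76.5 (plus or minus one standard error).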
