EDUCATIONAL ASSESSMENT AND EVALUATION (8602)

 

UNIT-06

VALIDITY OF THE ASSESSMENT TOOLS

Validity

The validity of an assessment tool is the degree to which it measures what it is designed to measure. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores.

According to Messick, validity is a matter of degree rather than an all-or-none property: a test is not absolutely valid or absolutely invalid. He argues that validity evidence continues to accumulate over time, either strengthening or contradicting previous findings.

Need for Test Validity

·         Test validity, or the validation of a test, explicitly means validating the use of a test in a specific context.

·         To make sure that a test measures the skill, trait, or attribute it is supposed to measure.

·         To yield reasonably consistent results for individuals.

·         To measure with a reasonable degree of accuracy.

Methods of Measuring Validity

 


1.      Content Validity

Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain. Lawshe (1975) proposed that, to gauge content validity, each rater should answer the following question for each item:

Is the skill or knowledge measured by this item?

·         Essential

·         Useful but not essential

·         Not necessary
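
The ratings for each item can be summarized with Lawshe's content validity ratio, CVR = (n_e − N/2) / (N/2), where n_e is the number of raters who mark the item "essential" and N is the total number of raters. A minimal sketch of the computation in Python follows; the function name and the example panel size are invented for illustration, while the formula itself is Lawshe's:

```python
def content_validity_ratio(n_essential, n_raters):
    """Lawshe's (1975) content validity ratio for a single item.

    CVR = (n_e - N/2) / (N/2), where n_e is the number of raters who
    judge the item "essential" and N is the total number of raters.
    CVR ranges from -1 (no rater says essential) to +1 (every rater
    does); 0 means exactly half the panel rated the item essential.
    """
    return (n_essential - n_raters / 2) / (n_raters / 2)

# Example: 9 of 12 raters judge an item essential.
print(content_validity_ratio(9, 12))  # prints 0.5
```

Items with a CVR near +1 are retained, while items that few raters consider essential are revised or dropped.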

1.1 Face Validity

Face validity is an estimate of whether a test appears to measure a certain criterion; that is, whether the test looks, on the surface, like a good measure of what it claims to measure.

For example, suppose you were taking an instrument that reportedly measures attractiveness, but the questions asked you to identify the correctly spelled word in each list. There is little apparent link between what the instrument claims to do and what it actually does, so its face validity is low.

1.2 Curricular Validity

Curricular validity is the extent to which the content of the test matches the objectives of a specific curriculum as it is formally described. It is evaluated by panels of curriculum/content experts, and a table of specifications may help to improve the validity of the test. Curricular validity takes on particular importance where tests are used for high-stakes decisions, such as the Punjab Examination Commission exams for fifth- and eighth-grade students and the Boards of Intermediate and Secondary Education examinations.
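
For illustration, a table of specifications is a two-way grid that allocates test items across content areas and cognitive levels in the proportions the curriculum prescribes; the subject areas and item counts below are purely hypothetical:

Content area     Knowledge   Comprehension   Application   Total
Fractions            2             2              1           5
Geometry             1             2              2           5
Measurement          2             1              2           5
Total                5             5              5          15

A test whose items fill this grid in the planned proportions has a stronger claim to curricular validity than one whose items cluster in a single cell.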

2. Construct Validity

A construct is the concept or characteristic that a test is designed to measure. According to Howell (1992), construct validity is a test's ability to measure factors that are relevant to the field of study. Construct validity is thus an assessment of the quality of an instrument or experimental design; it asks, 'Does it measure the construct it is supposed to measure?'

For example, to what extent is an IQ questionnaire actually measuring "intelligence"?

2.1 Convergent Validity

Convergent validity refers to the degree to which a measure is correlated with other measures that it is theoretically predicted to correlate with. This is similar to concurrent validity.

For example, if scores on a specific mathematics test are similar to students' scores on other mathematics tests, then convergent validity is high (there is a positive correlation between scores from similar tests of mathematics).

2.2 Discriminant Validity

Discriminant validity is demonstrated when measures of constructs that are expected not to relate to each other do, in fact, show little or no correlation, making it possible to discriminate between these constructs.

For example, if discriminant validity is high, scores on a test designed to assess students' skills in mathematics should not be positively correlated with scores from tests designed to assess intelligence.
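
To make both ideas concrete, the sketch below computes Pearson correlations between hypothetical score sets; the variable names and all data are invented for illustration, and in practice the correlations would come from real administrations of the tests:

```python
import numpy as np

# Hypothetical scores for ten students (all data invented for illustration).
math_test_a       = np.array([55, 62, 70, 48, 90, 75, 66, 81, 59, 72])
math_test_b       = np.array([58, 60, 74, 50, 88, 78, 63, 85, 61, 70])
intelligence_test = np.array([101, 95, 110, 108, 99, 104, 97, 100, 112, 96])

# Pearson correlation coefficients between pairs of measures.
r_convergent   = np.corrcoef(math_test_a, math_test_b)[0, 1]
r_discriminant = np.corrcoef(math_test_a, intelligence_test)[0, 1]

print(f"convergent r   = {r_convergent:.2f}")    # expected to be high
print(f"discriminant r = {r_discriminant:.2f}")  # expected to be near zero
```

A high r between the two mathematics tests supports convergent validity, while a near-zero r between the mathematics test and the intelligence test supports discriminant validity.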

3. Criterion Validity

Criterion validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. It compares the test with other measures or outcomes (the criteria) already held to be valid.

For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).

4. Concurrent Validity

According to Howell (1992), concurrent validity is determined by using other existing, similar tests that are already known to be valid as comparisons for the test being developed. Concurrent validity refers to the degree to which scores on a test correlate with other measures (tests, observations, or interviews) of the same construct taken at the same time. It measures the relationship between the new test and measures made with existing tests. For example, a measure of creativity should correlate with existing measures of creativity.

5. Predictive Validity

Predictive validity indicates how well the test predicts some future behaviour of the examinee. In predictive validity, a test is correlated with a criterion measure that becomes available at some time in the future. In other words, test scores are obtained first; a gap of months or years is allowed to elapse, after which the criterion scores are obtained. This is particularly useful and important for aptitude tests. If higher scores on the Board examinations are positively correlated with higher GPAs at university, and vice versa, then the Board exams are said to have predictive validity.
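
As a sketch of how this is quantified, the earlier test scores can be correlated with the criterion once it becomes available; the data below are hypothetical, and SciPy's linregress is only one of several ways to obtain the correlation and a prediction line:

```python
from scipy.stats import linregress

# Hypothetical data: Board exam scores and the same students'
# university GPAs obtained two years later (invented for illustration).
board_scores = [720, 650, 810, 590, 760, 700, 830, 640]
later_gpa    = [3.1, 2.7, 3.6, 2.4, 3.3, 3.0, 3.8, 2.6]

result = linregress(board_scores, later_gpa)

# The correlation between test scores and the later criterion is the
# predictive validity coefficient; the fitted line lets us forecast
# the criterion from a new examinee's test score.
print(f"predictive validity coefficient r = {result.rvalue:.2f}")
print(f"predicted GPA for a score of 700: "
      f"{result.intercept + result.slope * 700:.2f}")
```

The closer the coefficient is to +1, the more confidently the test can be used for selection or placement decisions.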
