
UNIT 6: EDUCATIONAL ASSESSMENT AND EVALUATION (8602)

 

UNIT 06

VALIDITY OF THE ASSESSMENT TOOLS

 Validity

The validity of an assessment tool is the degree to which it measures what it is designed to measure. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores.

According to Messick, validity is a matter of degree rather than an all-or-nothing property; a test is not absolutely valid or absolutely invalid. He argues that validity evidence continues to accumulate over time, either strengthening or contradicting previous findings.

Need for Test Validity

·         Test validity, or the validation of a test, explicitly means validating the use of a test in a specific context.

·         To make sure that a test measures the skill, trait, or attribute it is supposed to measure.

·         To yield reasonably consistent results for individuals.

·         To measure with a reasonable degree of accuracy.

Methods of Measuring Validity

 


1.      Content Validity

Content validity evidence involves the degree to which the content of the test matches the content domain associated with the construct. A test has content validity built into it through careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification, which is drawn up through a thorough examination of the subject domain. Lawshe (1975) proposed that each rater should answer the following question for every item when judging content validity (a way of combining these ratings is sketched after the list below):

Is the skill or knowledge measured by this item?

·         Essential

·         Useful but not essential

·         Not necessary
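Lawshe combined these ratings into a content validity ratio (CVR) for each item, CVR = (n_e − N/2)/(N/2), where n_e is the number of raters who marked the item "Essential" and N is the total number of raters. The short Python sketch below computes the ratio; the panel size and rating counts are made up purely for illustration.

```python
# A minimal sketch of Lawshe's content validity ratio (CVR) for single items.
# CVR = (n_e - N/2) / (N/2), where n_e is the number of raters who marked the
# item "Essential" and N is the total number of raters.
# The panel size and counts below are hypothetical.

def content_validity_ratio(n_essential: int, n_raters: int) -> float:
    """Return Lawshe's CVR for one item; values range from -1 to +1."""
    half = n_raters / 2
    return (n_essential - half) / half

# Hypothetical panel of 10 subject-matter experts rating three items.
essential_counts = {"item_1": 9, "item_2": 6, "item_3": 3}

for item, n_essential in essential_counts.items():
    print(item, round(content_validity_ratio(n_essential, 10), 2))
# item_1 0.8   -> strong agreement that the item is essential
# item_2 0.2   -> weak agreement
# item_3 -0.4  -> most raters did not rate the item as essential
```

Items with low or negative CVR values are candidates for revision or removal from the test.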

1.1 Face Validity

Face validity is an estimate of whether a test appears, on the surface, to measure a certain criterion; it concerns whether a test looks like a good measure, not whether it actually is one.

For example, suppose you were taking an instrument that reportedly measures your attractiveness, but the questions asked you to identify the correctly spelled word in each list. There is little apparent link between what the instrument claims to do and what it actually does, so its face validity is low.

1.2 Curricular Validity

Curricular validity is the extent to which the content of the test matches the objectives of a specific curriculum as it is formally described. It is evaluated by groups of curriculum and content experts, and a table of specifications may help to improve it. Curricular validity takes on particular importance where tests are used for high-stakes decisions, such as the Punjab Examination Commission examinations for fifth- and eighth-grade students and the examinations of the Boards of Intermediate and Secondary Education.

2. Construct Validity

A construct is the concept or characteristic that a test is designed to measure. According to Howell (1992), construct validity is a test's ability to measure factors that are relevant to the field of study. Construct validity is thus an assessment of the quality of an instrument or experimental design; it asks, "Does the test measure the construct it is supposed to measure?"

For example, to what extent is an IQ questionnaire actually measuring "intelligence"?

2.1 Convergent Validity

Convergent validity refers to the degree to which a measure is correlated with other measures that it is theoretically predicted to correlate with. This is similar to concurrent validity.

For example, if scores on a specific mathematics test are similar to students' scores on other mathematics tests, then convergent validity is high (there is a positive correlation between the scores from similar tests of mathematics).
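The "positive correlation" referred to here is usually quantified with the Pearson correlation coefficient (the formula is not given in the original unit, but it is the standard coefficient used for this kind of validity evidence). For n students with paired scores x_i and y_i on the two tests:

```latex
% Pearson correlation coefficient between paired scores x_i and y_i
r_{xy} \;=\; \frac{\sum_{i=1}^{n} (x_i - \bar{x})\,(y_i - \bar{y})}
                  {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}}
```

Values close to +1 indicate strong convergent evidence; values near zero are what one hopes to see in the discriminant check described next.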

2.2 Discriminant Validity

Discriminant validity is demonstrated when measures of constructs that are not expected to relate to each other are, in fact, unrelated, so that it is possible to discriminate between these constructs.

For example, if discriminant validity is high, scores on a test designed to assess students' skills in mathematics should not be positively correlated with scores from tests designed to assess intelligence.
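As a rough illustration (with entirely made-up scores), both checks can be carried out with the Pearson coefficient defined above. The Python sketch below correlates a hypothetical new mathematics test with a second mathematics test (convergent evidence, expected to be high) and with a measure of an unrelated trait (discriminant evidence, expected to be near zero).

```python
# A minimal sketch (hypothetical scores) of convergent and discriminant
# validity checks using the Pearson correlation coefficient.
# Requires numpy and scipy.
import numpy as np
from scipy.stats import pearsonr

# Scores of the same eight students on three instruments (made-up data).
new_maths_test   = np.array([55, 62, 70, 48, 81, 66, 59, 74])
other_maths_test = np.array([52, 65, 72, 45, 78, 63, 61, 70])  # similar construct
unrelated_trait  = np.array([70, 40, 66, 58, 49, 75, 52, 63])  # unrelated construct

r_convergent, _ = pearsonr(new_maths_test, other_maths_test)
r_discriminant, _ = pearsonr(new_maths_test, unrelated_trait)

print(f"convergent (two maths tests):      r = {r_convergent:.2f}")    # expected to be high
print(f"discriminant (maths vs unrelated): r = {r_discriminant:.2f}")  # expected to be near zero
```

A high first coefficient together with a near-zero second coefficient supports the claim that the new test measures mathematics skill rather than something else.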

3. Criterion Validity

Criterion validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. It compares the test with other measures or outcomes (the criteria) already held to be valid.

For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).

4. Concurrent Validity

According to Howell (1992), "concurrent validity is determined using other existing and similar tests which have been known to be valid as comparisons to a test being developed." Concurrent validity refers to the degree to which scores taken at one point in time correlate with other measures (tests, observations, or interviews) of the same construct obtained at the same time. It thus describes the relationship between a new measure and existing, already-validated measures. For example, a new measure of creativity should correlate with existing measures of creativity.

5. Predictive Validity

Predictive validity indicates how well the test predicts some future behaviour of the examinee. In predictive validity, a test is correlated against a criterion that becomes available at some time in the future. In other words, test scores are obtained first; a gap of months or years is allowed to elapse, after which the criterion scores are obtained. This form of validity is particularly useful and important for aptitude tests. If higher scores on Board examinations are positively correlated with higher GPAs at university (and vice versa), then the Board examinations are said to have predictive validity.
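As a rough illustration (the numbers are entirely made up), predictive validity is commonly summarized by correlating the earlier test scores with the criterion collected later. The Python sketch below correlates hypothetical Board examination scores with university GPAs obtained some years afterwards, and also fits a simple regression line.

```python
# A minimal sketch (hypothetical data): predictive validity as the correlation
# between test scores obtained now and a criterion measured later.
# Requires numpy and scipy.
import numpy as np
from scipy.stats import pearsonr, linregress

board_exam_scores = np.array([720, 810, 655, 900, 760, 840, 690, 875])  # obtained now
university_gpa    = np.array([2.9, 3.3, 2.6, 3.8, 3.0, 3.5, 2.7, 3.6])  # obtained years later

# Validity coefficient: how strongly the earlier test relates to the later criterion.
r, p_value = pearsonr(board_exam_scores, university_gpa)
print(f"predictive validity coefficient: r = {r:.2f} (p = {p_value:.3f})")

# A simple regression line shows how the test could be used to forecast the criterion.
fit = linregress(board_exam_scores, university_gpa)
predicted_gpa = fit.slope * 800 + fit.intercept  # forecast for a board score of 800
print(f"predicted GPA for a board score of 800: {predicted_gpa:.2f}")
```

The validity coefficient r is the usual summary statistic; the regression line simply shows how the test might be used operationally to forecast the future criterion.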
