ASSESSMENT IN TEACHING COMMERCE

 


Achievement test:

Teaching and testing are integral parts of the educational system. Testing is implicit in teaching, and certain stages may be properly marked out for testing procedures:

1.       During teaching.

2.       At the end of teaching a daily lesson.

3.       At the end of teaching a unit.

4.       At the end of the term.

5.       At the end of the year/curriculum.

   A test at the end of a teaching unit is known as a unit test.

        Usually, tests and examinations are held on the entire syllabus. A unit test is not a random assortment of questions; it is a preplanned, systematic and scientific test.

       A unit test is a test which is constructed, administered and assessed by a teacher after teaching a particular unit to the students.

Characteristics of Achievement test:

1. Reliability:

2. Validity:

3. Objectivity:

4. Usability (Practicability):

1. RELIABILITY:

     The dictionary meaning of reliability is consistency, dependence or trust. So, in measurement, reliability is the consistency with which a test yields the same result in measuring whatever it does measure. A test score is called reliable when we have reason to believe the score to be stable and trustworthy. Stability and trustworthiness depend upon the degree to which the score is an index of true ability and is free from chance error. Therefore, reliability can be defined as the degree of consistency between two measurements of the same thing.

For example, suppose we administered an achievement test to Group A and found a mean score of 55. Three days later we administered the same test to Group A and again found a mean score of 55. This indicates that the measuring instrument (the achievement test) is providing a stable or dependable result. On the other hand, if on the second measurement the test had yielded a mean score around 77, we would say that the test scores are not consistent.

1.       In the words of Gronlund and Linn (1995), "reliability refers to the consistency of measurement—that is, how consistent test scores or other evaluation results are from one measurement to another."

2.       C.V. Good (1973) has defined reliability as the "worthiness with which a measuring device measures something; the degree to which a test or other instrument of evaluation measures consistently whatever it does in fact measure."

3.       According to Ebel and Frisbie (1991), "the term reliability means the consistency with which a set of test scores measure whatever they do measure."

4.       According to Davis (1946), "the degree of relative precision of measurement of a set of test scores is defined as reliability."

         Theoretically, reliability is defined as the ratio of true-score variance to observed-score variance.

Nature of Reliability:

1. Reliability refers to the consistency of the results obtained with an instrument, not to the instrument itself.

2. Reliability refers to a particular interpretation of test scores. For example, test scores which are reliable over a period of time may not be reliable from one test to another equivalent test. Therefore, reliability cannot be treated as a general characteristic.

3. Reliability is a statistical concept. To determine reliability we administer a test to a group once or more than once. Consistency is then determined in terms of shifts in the relative position of a person in the group, or in terms of the amount of variation expected in an individual's score. Shifting of the relative position of an individual is reported by means of a coefficient of correlation called the 'reliability coefficient', and the amount of variation is reported by the 'standard error of measurement'. Both these processes are statistical.

4. Reliability is a necessary but not a sufficient condition for validity. A test which is not reliable cannot be valid, but it does not follow that a test with high reliability will possess high validity, because a highly consistent test may measure something other than what we intend to measure.

Methods of Determining Reliability:

For most educational tests the reliability coefficient provides the most revealing statistical index of quality that is ordinarily available. Estimates of the reliability of a test provide essential information for judging its technical quality and motivating efforts to improve it. The consistency of a test score is expressed either in terms of shifts of an individual's relative position in the group or in terms of the amount of variation in an individual's score.

On this basis, estimates of reliability fall into two general classifications:

(i) Relative Reliability or Reliability Coefficient:

    In this method the reliability is stated in terms of a coefficient of correlation known as the reliability coefficient. Hence, we determine the shifting of the relative position of an individual's score by a coefficient of correlation.
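As a concrete sketch of relative reliability, the correlation between two administrations of the same test (the test-retest approach) can be computed with the Pearson formula. The scores below are purely illustrative, not taken from the text:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))
    return num / den

# Hypothetical scores of the same five pupils on two administrations of one test
first = [55, 60, 45, 70, 50]
second = [54, 62, 44, 69, 52]
r = pearson_r(first, second)  # close to 1.0, so the scores are highly consistent
```

A reliability coefficient near 1.0 indicates that pupils keep nearly the same relative positions on both administrations, which is exactly what this method measures.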

(ii) Absolute Reliability or Standard error of Measure­ment:

       In this method, the reliability is stated in terms of the standard error of measurement. It indicates the amount of variation of an individual's score.
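The standard error of measurement is conventionally obtained from the reliability coefficient by the formula SEM = SD × √(1 − r). A minimal sketch with illustrative figures:

```python
from math import sqrt

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): the expected spread of an individual's
    obtained scores around his true score."""
    return sd * sqrt(1 - reliability)

# Illustrative figures: a test with standard deviation 10 and
# reliability coefficient 0.84 gives SEM = 10 * sqrt(0.16) = 4 marks.
sem = standard_error_of_measurement(10, 0.84)
```

So an individual's obtained score on this hypothetical test would be expected to vary by about 4 marks around his true score.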

 2. VALIDITY:

        Validity is the most important characteristic of an evaluation programme, for unless a test is valid it serves no useful function. Psychologists, educators and guidance counselors use test results for a variety of purposes. Obviously, no purpose can be fulfilled, even partially, if the tests do not have a sufficiently high degree of validity.

       Validity means the truthfulness of a test, that is, the extent to which the test measures what the test maker intends to measure.

1.       "In selecting or constructing an evaluation instrument, the most important question is: To what extent will the results serve the particular uses for which they are intended? This is the essence of validity." —GRONLUND

2.       Gronlund and Linn (1995)—"Validity refers to the appropriateness of the interpretation made from test scores and other evaluation results with regard to a particular use."

3.       Ebel and Frisbie (1991)—” The term validity, when applied to a set of test scores, refers to the consistency (accuracy) with which the scores measure a particular cognitive ability of interest.”

4.       C.V. Good (1973)—In the dictionary of education defines validity as the “extent to which a test or other measuring instru­ment fulfils the purpose for which it is used.”

5.       Anne Anastasi (1969) writes “the validity of a test concerns what the test measures and how well it does so.”

6.       According to Davis (1964), "validity is the extent to which the rank order of the scores of examinees for whom a test is appropriate is the same as the rank order of the same examinees in the property or characteristic that the test is being used to measure. This property or characteristic is called the criterion. Since any test may be used for many different purposes, it follows that it may have many validities, one corresponding to each criterion."

7.       Freeman (1962) defines, “an index of validity shows the degree to which a test measures what it purports to measure, when compared with accepted criteria.”

8.       Lindquist (1942) has said, “validity of a test may be defined as the accuracy with which it measures that which it is intended to measure, or as the degree to which it approaches infallibility in measuring what it purports to measure.”

         From the above definitions it is clear that validity of an evaluation device is the degree to which it measures what it is intended to measure. Validity is always concerned with the specific use of the results and the soundness of our proposed interpretation.

         It is also not necessary that a test which is reliable be valid. For example, suppose a clock is set forward ten minutes. If the clock is a good timepiece, the time it tells us will be reliable, because it gives a consistent result. But it will not be valid as judged by standard time. This illustrates the principle that reliability is a necessary but not a sufficient condition for validity.

Nature of Validity:

1. Validity refers to the appropriateness of the test results but not to the instrument itself.

2. Validity does not exist on an all-or-none basis but it is a matter of degree.

3. Tests are not valid for all purposes. Validity is always specific to a particular interpretation. For example, the results of a vocabulary test may be highly valid for testing vocabulary but much less valid for testing the composition ability of the students.

4. Validity is not of different types. It is a unitary concept. It is based on various types of evidence.

Factors Affecting Validity:

         Like reliability, there are several factors which affect the validity of test scores. Some of these factors we are alert to and can avoid easily; others we are ignorant of, and they make the test results invalid for their intended use.

Some of these factors are as follows:

1. Factors in the test:

(i) Unclear directions to the students on how to respond to the test.

(ii) Difficulty of the reading vocabulary and sentence structure.

(iii) Too easy or too difficult test items.

(iv) Ambiguous statements in the test items.

(v) Inappropriate test items for measuring a particular outcome.

(vi) Inadequate time provided to take the test.

(vii) Length of the test is too short.

(viii) Test items not arranged in order of difficulty.

(ix) Identifiable pattern of answers.

2. Factors in Test Administration and Scoring:

(i) Unfair aid to individual students who ask for help.

(ii) Cheating by the pupils during testing.

(iii) Unreliable scoring of essay type answers.

(iv) Insufficient time to complete the test.

(v) Adverse physical and psychological condition at the time of testing.

3. Factors related to the Testee:

(i) Test anxiety of the students.

(ii) Physical and psychological state of the pupil.

(iii) Response set—a consistent tendency to follow a certain pattern in responding to the items.

3. OBJECTIVITY:

      Objectivity is an important characteristic of a good test. It affects both the validity and reliability of test scores. Objectivity of a measuring instrument means the degree to which different persons scoring the answer script arrive at the same result.

1.       C.V. Good (1973) defines objectivity in testing as "the extent to which the instrument is free from personal error (personal bias), that is, subjectivity on the part of the scorer".

2.       Gronlund and Linn (1995) state that "objectivity of a test refers to the degree to which equally competent scorers obtain the same results". So, a test is considered objective when it provides for the elimination of the scorer's personal opinion and biased judgement. In this context there are two aspects of objectivity which should be kept in mind while constructing a test:

(i) Objectivity in scoring.

(ii) Objectivity in interpretation of test items by the testee.

(i) Objectivity of Scoring:

     Objectivity of scoring means that the same person or different persons scoring the test at any time arrive at the same result without any chance error. A test, to be objective, must necessarily be so worded that only one correct answer can be given to it. In other words, the personal judgement of the individual who scores the answer script should not be a factor affecting the test scores. The result of a test can then be obtained in a simple and precise manner, because the scoring procedure is objective. The scoring procedure should be such that there is no doubt as to whether an item is right or wrong, or partly right or partly wrong.

(ii) Objectivity of Test Items:

By item objectivity we mean that the item must call for a definite single answer. Well-constructed test items should lend themselves to one and only one interpretation by students who know the material involved. This means the test items should be free from ambiguity: a given test item should mean the same thing to all the students, and exactly what the test maker intends to ask. Sentences with dual meanings and items having more than one correct answer should not be included in the test, as they make the test subjective.

4. USABILITY:

      Usability is another important characteristic of a measuring instrument, because the practical considerations of evaluation instruments cannot be neglected. The test must have practical value from the points of view of time, economy and administration. This is termed usability.

So, while constructing or selecting a test the following practical aspects must be taken into account:

(i) Ease of Administration:

        It means the test should be easy to administer so that general classroom teachers can use it. Therefore, simple and clear directions should be given. The test should possess very few subtests, and the timing of the test should not be too complicated.

(ii) Time required for administration:

      An appropriate time limit to take the test should be provided. If, in order to provide ample time, we make the test shorter, the reliability of the test will be reduced. Gronlund and Linn (1995) are of the opinion that "somewhere between 20 and 60 minutes of testing time for each individual score yielded by a published test is probably a fairly good guide".

(iii) Ease of Interpretation and Application:

      Other important aspects are the interpretation of test scores and the application of test results. If the results are misinterpreted, they are harmful; on the other hand, if they are not applied, they are useless.

(iv) Availability of Equivalent Forms:

      Equivalent forms of a test help to verify questionable test scores. They also help to eliminate the factor of memory while retesting pupils on the same domain of learning. Therefore, equivalent forms of the same test, in terms of content, level of difficulty and other characteristics, should be available.

(v) Cost of Testing:

       A test should be economical from the points of view of preparation, administration and scoring.

Steps for setting up an Achievement test:

      The steps for setting up a good and meaningful unit test are:

 A) Planning (Design) of the test:

1.       Unit Analysis

2.       Content Analysis

3.       Weightage to content.

4.       Weightage to type of questions.       

5.       Weightage to objectives.

6.       Weightage to difficulty level.

7.       Preparation of Blue print.

B) Editing the Achievement test:

1.       Construction of items

2.       Selection of items

3.       Grouping of test items.

4.       Instructions to the Examinee.

5.       Sections in the question paper.

6.       Preparing a marking scheme and scoring key.

C)  Reviewing the Question Paper:

1.       Question-wise analysis.

2.       Critical evaluation of the test.

D)  Administering the Test:

E) Interpret the test results:

1.       Score the answer scripts.

2.       Item analysis (After the test)

F) Statistical Treatment:

1.       Based on measures of central tendency values.

2.       Based on quartile points.

3.       Based on frequency polygon & histogram.

A) Planning (Design) of a unit test:

To begin with, the test must be planned carefully, and its design prepared so that it may be used as an effective instrument of evaluation. A proper design increases the validity, reliability, objectivity and usability of the test.

           The following aspects have to be looked into while planning a unit test:

1.  Unit analysis:

       Here the teacher must analyze the whole unit into its sub-units. The sub-units may be listed under sub-headings and must be organized logically.

2. Content analysis:

      The content analysis must be done for each sub-unit separately, listing the important facts, concepts, principles, generalizations, etc.

3. Weightage to objectives:

     The relative importance of each objective is to be considered. For an informative subject, the objectives are knowledge, understanding, application and skill. The main task here is to decide the weightage to be given to the different objectives included in the unit plan. This weightage should be decided by a committee of experts, including the classroom teacher.

Sl. No | Objective     | Marks | Percentage
1      | Remembering   | 5     | 10%
2      | Understanding | 15    | 30%
3      | Applying      | 20    | 40%
4      | Skill         | 10    | 20%
       | Total         | 50    | 100%

4. Weightage to content:

The content of a unit is taught in the classroom by providing suitable learning experiences. All the subject matter will not have equal importance. Therefore, in order to test the understanding of the content, proper weightage must be given, looking into the nature, scope and importance of the content. The weightage must be assigned sub-unit wise, and the teacher must see that no content or sub-unit is left out.

 

Sl. No | Sub Units                              | Marks | Percentage
1      | Meaning and definition of Business     | 5     | 10%
2      | Characteristics of Business            | 5     | 10%
3      | Scope of Business                      | 15    | 30%
4      | Concept of Industry and its categories | 5     | 10%
5      | Concept of Commerce                    | 5     | 10%
6      | Concept of Banking                     | 5     | 10%
7      | Forms of business enterprises          | 10    | 20%
       | Total                                  | 50    | 100%

5.  Weightage to type of questions:

For testing different abilities and sub-units, different forms of questions have to be used, from objective items to the traditional essay questions.

              In order to test various learning outcomes we have to use objective type, very short answer type, short answer type and essay type questions. Weightage must be given to these types of questions on the basis of their adequacy, so that they can achieve our instructional objectives.

Sl. No | Type of Questions | No. of Questions | Marks | Percentage of marks
01     | Objective         | 25               | 9     | 18%
02     | Short answer type | 8                | 16    | 32%
03     | Essay type        | 3                | 25    | 50%
       | Total             | 36               | 50    | 100%

6.  Weightage to difficulty level:

                 It is an accepted fact that in a classroom there are three types of pupils: average, above average and below average. Accordingly, the test should be neither too difficult nor too easy. The test should provide suitable opportunity to the bright, medium and weak students in the class. The teacher is expected to classify the items into three levels: difficult, average and easy.

Difficulty level | Marks | Percentage
Easy             | 15    | 30%
Average          | 25    | 50%
Difficult        | 10    | 20%
Total            | 50    | 100%
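The percentage column in each of the weightage tables above is simply each entry's share of the total marks. A small helper, using the difficulty-level figures as an example, makes the arithmetic explicit:

```python
def percentages(weightage):
    """Convert a marks weightage into percentages of the total marks."""
    total = sum(weightage.values())
    return {k: round(100 * v / total) for k, v in weightage.items()}

# Difficulty-level weightage for a 50-mark test, as in the table above
difficulty = {"Easy": 15, "Average": 25, "Difficult": 10}
shares = percentages(difficulty)  # {"Easy": 30, "Average": 50, "Difficult": 20}
```

The same helper applies unchanged to the objective, content and question-type weightage tables.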

 

 7. The Blue-Print:

         The blue print is a 3-dimensional chart showing the weightage given to objectives, content and types of questions in terms of marks.

   The blue print serves many useful purposes.

1.       It helps to improve the content validity of teacher made tests.

2.       It defines as clearly as possible the scope and emphasis of the test.

3.       It relates objectives to content.

4.       It acts as a guide to construct the unit test.
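A blueprint's internal consistency can be checked mechanically: its row totals must reproduce the content weightage and its column totals the objective weightage. A sketch with hypothetical cell entries (the column totals match the objective weightage table earlier in this section; the sub-unit split is invented for illustration):

```python
# Marks per (sub-unit, objective) cell -- hypothetical figures
blueprint = {
    "Sub Unit I":   {"Remembering": 2, "Understanding": 6, "Applying": 8, "Skill": 4},
    "Sub Unit II":  {"Remembering": 2, "Understanding": 5, "Applying": 7, "Skill": 3},
    "Sub Unit III": {"Remembering": 1, "Understanding": 4, "Applying": 5, "Skill": 3},
}

# Row totals: marks per sub-unit (should match the content weightage)
row_totals = {unit: sum(cells.values()) for unit, cells in blueprint.items()}

# Column totals: marks per objective (should match the objective weightage)
col_totals = {}
for cells in blueprint.values():
    for objective, marks in cells.items():
        col_totals[objective] = col_totals.get(objective, 0) + marks

grand_total = sum(row_totals.values())  # should equal the 50 marks of the test
```

A check like this is a quick way to confirm that the blueprint agrees with the weightage decisions made earlier in the design.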

    BLUE PRINT

Content      | Remembering | Understanding | Applying | Skill | Total
Sub Unit I   |             |               |          |       |
Sub Unit II  |             |               |          |       |
Sub Unit III |             |               |          |       |
Total        |             |               |          |       |

(The cells are to be filled in with the marks allotted to each objective within each sub-unit, consistent with the weightage tables above.)

B) Editing the unit test:

Once the design is thoroughly prepared, the next step is to edit the test in the form of a question paper. In editing the test, the following points are to be kept in mind.

1.  Construction of items:

       A teacher must construct or prepare a number of questions on the unit. The items must be of a variety of forms, such as essay type, very short answer type, short answer type and objective type. In constructing the test items it is necessary to identify the objective, and its specification, that each item intends to measure. The items should cover all sub-units.

2.  Selection of test items:

       The teacher must select the relevant items according to the blueprint, based on the objectives, content coverage and type of question required. A scoring key and marking scheme should also be prepared for better clarity.

3.  Grouping of Test items:

We have to group the selected test items into different categories depending on the type of items.

4.  Instructions to Examinee:

      There are two types of instructions in the question paper:

1.       General instruction

2.       Specific instruction

            The general instruction must be given in the beginning of question paper.

a)       This paper has two/three sections (A, B, C)

b)       All questions in each section are compulsory.

c)       About time, medium of answering.

              The specific instructions enable the examinee to understand how to respond to a question.

5.  Sections in the Question Paper:

   Generally, the objective type items are grouped under Section A, short answer type in Section B, and essay type in Section C.

6. Preparing marking scheme and scoring key:

  The marking scheme should be prepared for the essay and short answer type questions. Only the important points of the expected answer are to be written in the scheme, and each expected answer must be allotted a certain number of marks.

       The scoring key must be prepared for the objective type items.

C) Reviewing the question paper:

   1) Question wise analysis:

    Each question must be considered separately and analyzed in terms of its sub-unit, objective and specification, type of question, marks allotted, and time limit for answering.

                The purpose of question-wise analysis is also to know the strengths and weaknesses of the question paper, to tally the question paper with the blueprint, to determine the content validity, and to provide satisfaction to the paper setter.

 

Sl. No. | Sub Unit | Objectives | Specification | Type of Q. | Marks | Time Limit | Difficulty Level
        |          |            |               |            |       |            |

2)  Critical Evaluation of the Test:

It is done to ensure the correctness, relevance, wording and distractors of the items. All the questions must be free from grammatical errors, relevant to the unit taught and to the age level of the examinees, and the distractors in objective questions must be homogeneous.

         The question paper must cover the whole content. The test paper must be graded according to their difficulty level.

  No guess work should be encouraged.

D)  Administer the test:  

The revised question paper is then administered to the students. The teacher gives instructions to the students and should supervise the unit test.

E) Interpret the test results:

Score the answer scripts: each student's answers are numerically quantified and a list of individual students' scores is prepared.

Item-wise analysis is done to know the validity of each test item separately, and then the item difficulty index is calculated.
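The difficulty index of an item is conventionally the proportion of examinees who answer it correctly (sometimes expressed as a percentage); the helper below assumes that standard definition:

```python
def difficulty_index(correct, attempted):
    """Proportion of examinees who answered the item correctly.
    Higher values mean an easier item."""
    return correct / attempted

# Illustrative case: 30 of 40 pupils got the item right,
# so p = 0.75, indicating a fairly easy item.
p = difficulty_index(30, 40)
```

Computing this index for every item shows at a glance which items turned out too easy or too difficult relative to the planned difficulty weightage.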

 

F) Statistical Analysis:

The raw scores are obtained from the scoring of the test papers. A table of frequency distribution is then constructed: the teacher takes appropriate class intervals and the corresponding frequencies. After preparing the table, the teacher calculates the measures of central tendency, i.e., mean, median and mode. From these we interpret: if mean < median, the test is negatively skewed and easy; if mean > median, the test is positively skewed and difficult; if mean = median, the test is average.

        Next the teacher calculates the quartiles, i.e., Q1, Q2 and Q3. On the basis of these we can interpret: if Q3 - Q2 < Q2 - Q1, the test is negatively skewed and easy; if Q3 - Q2 > Q2 - Q1, the test is positively skewed and difficult; if Q3 - Q2 = Q2 - Q1, the given test is average.

On the basis of these calculations, the teacher can draw the graphs of frequency polygon and histogram.                             
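The two interpretations above can be sketched as small routines. The quartiles here are computed by the simple halves method (Q1 and Q3 as medians of the lower and upper halves), which is one of several conventions; the scores are illustrative:

```python
from statistics import mean, median

def classify_by_central_tendency(scores):
    """Mean < median: negatively skewed (easy); mean > median: positively
    skewed (difficult); mean = median: average."""
    m, md = mean(scores), median(scores)
    if m < md:
        return "negatively skewed (easy)"
    if m > md:
        return "positively skewed (difficult)"
    return "average"

def classify_by_quartiles(scores):
    """Same interpretation via Q3 - Q2 compared with Q2 - Q1."""
    s = sorted(scores)
    half = len(s) // 2
    q1 = median(s[:half])               # median of the lower half
    q3 = median(s[half + len(s) % 2:])  # median of the upper half
    q2 = median(s)
    if q3 - q2 < q2 - q1:
        return "negatively skewed (easy)"
    if q3 - q2 > q2 - q1:
        return "positively skewed (difficult)"
    return "average"

# Most pupils scored high on this hypothetical 50-mark test,
# so both methods should report a negatively skewed (easy) test.
scores = [48, 47, 46, 45, 45, 44, 40, 35, 30, 20]
```

Here mean = 40 while median = 44.5, and Q3 - Q2 = 1.5 is much smaller than Q2 - Q1 = 9.5, so both rules agree that the test was easy.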

Use of an Achievement Test:

1.       They help in knowing the learner’s achievement.

2.       They are useful to know the weaknesses and strengths of students.

3.       They are helpful in classifying the students.

4.       They help in deciding the effectiveness of teaching.

5.       They help in knowing whether the objectives are achieved or not.

6.       They become the part of continuous evaluation.

7.       They help the teacher to improve his teaching.

8.       They help in development of self-confidence in facing the examinations.

 
