Composition in Convergence: The Impact of the New Media on Writing Assessment

VALIDITY AND RELIABILITY      91

        from one observation to the next and also because the error is different
        each time it is measured" (Lauer & Asher, 1988, p. 140). Rarely will a
        writing assessment tool give the consistent, stable, equivalent result
        that evaluators look for in a reliable measurement instrument, because
        too many variable errors can arise each time an evaluation opportunity
        occurs. Composition researchers Janice Lauer and William Asher (1988)
        noted that the precision of the criteria, the amount and quality of the
        procedures used in evaluator training, the continual monitoring of
        readers during an evaluation session, the speed of rating, and the
        readers' backgrounds and attitudes all affect reliability. To that list,
        in the age of convergence, writing teachers can now add the medium used
        to produce the text.
           Writing programs that depend on stability in their assessment
        instrument scores may not be accurately evaluating their student
        writers. Usually, group stability in a writing assessment is virtually
        nil, as I have tried to show, and this is what writing instructors
        should expect. After all, if stability occurs in a student's assessment
        over time, then growth has not occurred. For this reason, writing
        specialists should tread carefully if they are basing their assessment
        instrument's reliability on its stability in an exam environment. There
        is even more cause for caution if the student is working with new
        technological media to write the exam. Student writers' abilities can
        and do change over time, especially their facility in using computer
        programs. Assess students' skills too soon or too long after
        introducing new material or software, and false results can occur. If
        faculty expect stability from the test, a student's higher score on a
        second round does not necessarily indicate a flaw in the testing tool.
        Other variables, such as greater or lesser comfort with composing on
        screen or the student's familiarity with the software program, can make
        a difference in student scores.
           Given the difficulties regarding variable errors, consistency of
        results is also difficult to maintain in a writing assessment. In the
        psychometric model, consistency in assessment should not turn up any
        conflicting elements. For instance, a consistent reader is expected to
        read in accordance with other readers. Or a writer is expected to make
        consistent errors or possess a consistent style on the task. Of course,
        writing instructors know that consistency in assessment is also subject
        to error. Any change in genre can expose different writing errors or a
        shift in voice, tone, or word choice.