BizEd

July/August 2003

by Peter Ewell

I was once a professor of political science, and I've spent hours grading papers. After reading a paper, I would sometimes recognize that the student had a good argument but that he had gotten many of his facts wrong. I'd write my feedback on the paper and sign it off with a B-minus. Then, I would pick up the next paper, read it over, and give it a B-minus for a completely different set of reasons. But what data went into my grade book? Two B-minuses. After collecting and recording a wealth of data on how the students had responded to the assignment, I threw it all away when I handed the papers back. I had no record of where, in general, learning was going right and where it wasn't—or what I could do about it.

The new emphasis on the assessment of learning outcomes in higher education is meant to address this condition. Assessment practices offer educators and their institutions the opportunity to gather systematic evidence about their students' learning and evaluate the effectiveness of their educational offerings. In fact, many accrediting organizations are now asking institutions to provide evidence of student learning and achievement.

Even so, I have found that many institutions—business schools included—are still at a fairly early stage of grappling with assessment and assurance of learning. Although a majority of schools have begun to specify learning goals for their students and gather information on student performance, many share the misconception that gathering the data is the most important part of the process. They often neglect to use that data to improve student learning and experience. They fail to ask, "What can we learn from these results to make our courses better?"

Setting up a workable assessment approach is no easy task, as it often involves exchanging long-held attitudes and habits for new approaches to teaching. But once the need for learning assessment is recognized, the next step is actually using that information for improvement. By creating what I call a "community of practice," continuous assessment and ongoing improvement can become an integral and seamless part of the educational process.

Making the Grade

Creating a community of practice based on the consistent evaluation of student work is somewhat new to American higher education institutions. In Europe, on the other hand, educators are much more accustomed to objective systems of assessment, because many European schools have external examiner systems; in addition to the professor, an external reviewer also reads examples of student exams and projects. As a result, European educators have developed consistent consensual judgments based on standards that are implied in their communities of practice.

In the American classroom, there has long been an atmosphere of exclusion, where an educator's classroom is his or her own private domain—others are rarely invited inside. As a result, we've created an environment that makes it extremely difficult to align standards. It is in this area that learning assessment initiatives are trying to make headway.

In the early days of learning assessment, it was most often seen as something external to the learning process. Educators tended to approach it in one of two ways: They either added a test or survey of students that was implemented outside the standard curriculum, or they used an existing standardized examination to measure students' knowledge of the material. Although these evaluation-based methods can be valid, they bring with them several problems.

First, faculty often dismiss such methods as disconnected from what they are doing in the classroom. After all, faculty often have little to no input into a standardized exam, and the results of the exam usually have no influence on student performance in the course. Second, because these "extra" tests are given outside the standard curriculum, students often don't take them very seriously. Finally, using such exams often adds extra expense to a school's budget.

The main problem with traditional faculty-generated assignments and grades, on the other hand, is exactly what my initial example illustrated. Faculty members mark students individually, but gather no information about what aspects of course content a class as a whole has mastered and what aspects a class has generally failed to grasp. Not only that, but the faculty also grade subjectively and idiosyncratically. They each use their own standards and create their own assignments.

The alternative to "add-on" assessment methods and inconsistently awarded grades involves the use of "course-embedded assessments"—questions and assignments worked
