Posts Tagged ‘assessment’

Peer Review for Trainees

Tuesday, September 13th, 2011

In one week in 2010, educators nominated the best articles about rethinking higher education. Organizers at the Roy Rosenzweig Center for History and New Media at George Mason University compiled the submissions into an e-book called Hacking the Academy.

One of the provocative ideas there for shaking up traditional academia is from Cathy Davidson, former Vice Provost for Interdisciplinary Studies at Duke. She describes an experiment in “crowdsourcing” student grades. Rather than having the faculty member alone evaluate student performance, she had fellow class members determine whether a student’s work was satisfactory.

This approach eliminated some of the usual jockeying to perform for the teacher and gave students a wider audience. The method could easily be applied to clinical teaching settings where peers observe each other’s performance.

Assessing Assessment

Friday, June 17th, 2011

Last week BU hosted John McCahan Medical Education Day, a symposium dedicated to innovation, research, and technology in teaching. Martha Stassen, Director of Assessment at U Mass Amherst, delivered the keynote address in defense of assessment. Borrowing from Atul Gawande’s work on checklists, she argued that assessment can not only prevent errors but also enhance teaching.

One example of her argument is the biomedical engineer George Plopper, featured in a profile on Inside Higher Ed. After encountering Bloom’s taxonomy, Plopper restructured his undergraduate classes on cancer biology to integrate assessment into the syllabus. Instead of lecture, memorization, and test, students in Plopper’s classes now analyze the subject matter themselves, teach it to each other, and apply it in realistic scenarios.

Plopper evaluates students on each of these tasks, pegging his assessment to specific terms in Bloom’s taxonomy. With the new focus on project-based learning, he measured quantitative gains in instances of higher-order thinking. Far from being a burdensome task, being explicit about assessment helped improve student outcomes.

Faculty Accounting

Thursday, May 5th, 2011

All universities and teaching hospitals are feeling the need to assess their effectiveness. The move to accountability is partly motivated by increased competition for limited funds. A department that can demonstrate its positive impact stands a better chance at attracting faculty, trainees, and public support.

One extreme example of this move to measure productivity is a faculty “balance sheet” issued by Texas A&M University. As reported in the Wall Street Journal, the Chancellor’s office tallied up all the income a faculty member generates through tuition and grants and then subtracted his or her expenses in salary and benefits. The results are not so predictable. The history department ends up with a $4.6 million surplus, but the Aerospace Engineering department reports a loss of $1.4 million.

Texas also mandates further transparency by requiring all academic departments to post their budgets and student evaluations just three clicks away from the university’s home page. Having access to raw data, though, may not help answer whether public funding is achieving desirable goals. The faculty accounting does not take into account teaching that takes place outside the classroom or allow for periods where faculty are coming up with innovative ideas that will garner future funding. Holding higher education accountable is not objectionable; what is short-sighted is measuring faculty contributions solely in dollars and cents.

Assessment

Thursday, February 24th, 2011

The Teagle Foundation has released a book called Literary Study, Measurement, and the Sublime: Disciplinary Assessment. The project brings together essays by experts in literature and assessment to suggest ways that teachers can measure student learning when it comes to less tangible outcomes.

Medical educators tend to think of their content as concrete. Learners, after all, must demonstrate their knowledge through national exams. But, in other ways, training medical students and residents resembles the teaching of literary studies. We hope learners gain empathy, professionalism, and the ability to “read” a patient. The suggestions in the book can help academic medical centers gauge their success in conveying these abstract qualities.

One model that may apply to the medical setting is the verified resume. Originally designed by the Department of Labor to emphasize skills training for the workforce, the six-item score card resonates with the goals of medical training. The verified resume includes measures of:

  • responsibility
  • team player
  • listening
  • creativity
  • acquiring and evaluating information
  • working with cultural diversity

Evaluating Evaluations

Wednesday, December 15th, 2010

A study of students at the University of Northern Iowa and Southeastern Oklahoma State University found that a third of respondents on course evaluations lie. The Des Moines Register interviewed one of the authors, who said that the mistruths are more likely to be motivated by animus toward the faculty member than appreciation.

The findings confirm other research that questions the validity of student evaluations. In one study, good-looking professors outscored their more homely counterparts on year-end evaluations. The anonymous nature of the forms leads to some disparaging or simply bizarre comments. A colleague of mine received the feedback that, “Dr. X creates a wholesome, Christian environment.” She wasn’t sure if the remark was meant as satire or flattery.

Despite the shortcomings of student evaluations, trainees are in the best position to offer opinions about how teaching can improve. One solution might be to make the forms identifiable so respondents have to own their words. Another idea is for a neutral outsider to conduct focus groups or interviews with students about the course and summarize the suggestions for the faculty member.

Rankled Rankings

Tuesday, August 17th, 2010

It’s as reliable an indicator of the start of the academic year as sales at Office Depot: U.S. News and World Report has released its rankings of the best undergraduate colleges. (If it matters, Harvard topped the list and Boston University came in 56th.) Most experts put little faith in the magazine’s methodology. Even if the rankings could measure schools accurately, a single score says nothing about whether an individual student will succeed there.

Still, I admit I check the list each year. Apparently a lot of other readers do, too. U.S. News now covers very little news and has come to be known for its ever-expanding franchise of rankings. They now rank U.S. hospitals by specialty. This seems even more absurd than colleges since a woman in need of a hysterectomy is unlikely to fly to Baltimore just to be seen by the nation’s top gynecology department.

If any good comes of these publicity stunts, it is to make universities and academic health centers accountable for outcomes. We may disagree on the criteria used to measure excellence, but the rankings encourage institutions to consider what the right criteria are. Assessment helps us make sure we’re meeting the appropriate goals.