When we grade, do we mean what we say?

The aim of the “Journal Club” is to present a summary of a journal article and discuss it in the comments below or on social meeja. The emphasis is not on discussing the paper itself (e.g. methodology) but more on what the observations or outcomes reported can tell us about our own practice. Get involved! It’s friendly. Be nice. And if you wish to submit your own summary of an article you like, please do. If you can’t access the paper in question, emailing the corresponding author usually works. (This article is available online at the author’s homepage: PDF.)

Comments on this article are open until Friday 4th October.
#3: H. L. Petcovic, H. Fynewever, C. Henderson, J. M. Mutambuki and J. A. Barney, Faculty Grading of Quantitative Problems: A Mismatch Between Values and Practice, Research in Science Education, 2013, 43, 437–455.
One of my own lecturers used to remark that he didn’t care how good our workings were: if the final answer was wrong, we’d get no marks. What use is a bridge, he’d ask, if the engineers calculated its span to be too short? We weren’t engineers.

After two weeks of giving out about students, this week I am looking at a paper that probes what lecturers say they expect in student answers, and whether there is a mismatch between these expectations and how they grade. To examine this, the authors constructed a question with two example answers. The first (ANS A) is detailed and well explained, but contains some errors; these happen to cancel out, giving the right answer. The second (ANS B) is brief and shows no workings, but gives the correct answer. Ten chemistry academics were asked about their grading practices, given these answers, and asked to mark them.

In terms of scoring the solutions, eight of the ten academics scored the incorrect-workings answer (ANS A) higher than the correct-no-workings answer (ANS B); the remaining two scored them equally. The average scores were 7.8 versus 5.1. This preference for ANS A was much stronger than among academics in physics and earth sciences, who were evenly split on whether ANS A scored higher than ANS B.

What do we say we want?

In the interviews, the authors drew up a list of values attributed to instructors, capturing what they wished to see in an answer. Value 1 was that instructors wished to see reasoning in answers, so as to know whether the student understands (and to offer specific feedback). All of the chemistry academics expressed this value.

Value 2 was that instructors wished to find evidence before deducting points for incorrect answers. This was interesting: nine of the ten chemists used it as a reason to deduct points from ANS A, since the student had shown their work, whereas five chemists were reluctant to deduct marks from ANS B because, with no workings shown, they could not be sure whether the student had made the same mistakes.

Seven chemists were attributed Value 3, a tendency to project correct thinking onto ambiguous solutions: assuming that the student writing ANS B must have had the correct thought process, since there was no evidence of a mistake.

Finally, the chemists expressed a fourth value, one found less often among the earth scientists and not at all among the physicists: a desire to see organisation, units and significant figures; in short, a general methodological approach.

There is evidently a mismatch between the values expressed. Value 1 (want reasoning) and Value 4 (want a methodological approach) would appear to conflict with Value 2 (need evidence to deduct) and Value 3 (projecting correct thought). Most chemists expressed several values, and where these conflicted, the authors deduced a burden of proof: which set of values the academics (implicitly) rated higher. Six chemists placed the burden of proof on the student: “I can’t see if the student knew how to do this or just copied it.” The remainder placed the burden on themselves: “I don’t want to take any credit off but will tell him directly that he should give more detail.”

Message to students

Students, of course, are likely to take their messages from how we grade rather than from how we say we will grade. If students are graded with the burden of proof on the instructor, they are more likely to do well if they do not expose much reasoning in their answers. If, instead, they are required to show reasoning and demonstrate understanding, those who do not are likely to score poorly. Therefore, while we often say that we want to see workings, reasoning and scientific argument, unless we follow through on that, we are rewarding the students who call our bluff in this regard!

Discussion

I think this is an interesting paper, and it has made me think back on how I mark student work. I imagine I would be in the burden-of-proof-on-the-instructor camp, seeing that as implicitly fair to students, but perhaps I need to get a bit tougher and demand fully detailed reasoning in student answers.

  1. Can you identify with any of the four values the authors outline in this paper?
  2. For introductory students, do you have an “induction” on how to illustrate reasoning when answering questions and problems?

Interested to hear what you think.

4 thoughts on “When we grade, do we mean what we say?”

  1. Very interesting, especially from my side of the fence. When marking for public examinations, by far the highest value is placed on the right answer: for example, in a thermodynamics question the right answer gains 3 marks, the ‘right’ answer with the wrong sign gains 1 mark, and a relevant expression with utter twaddle would also gain one mark. Arguably this speeds up marking, but it does not encourage deeper thought on the part of students, who merely learn the process for getting the right answer rather than the reasoning behind it. As marking becomes even more pressured (in these days of more non-specialist markers), the value of the right answer is increased to the detriment of students’ understanding of the fundamentals.

    I definitely recognise the 4th value of organisation, generation of units, etc. Anyone who has examined a page of badly organised mathematical scrawl, desperately searching for something creditworthy, does!

    This links in nicely to work I presented at ViCE 2012 in Edinburgh on providing students with exemplar, rather than model, answers. I must get that reignited!

    1. Hi Kristy,
      Thanks for commenting! I think the first point you make is really important – as we standardise and regularise more and more, the focus moves more to the answer, and especially so with increasing time constraints. This contrasts with what we want in reality – evidence of a sound logical approach, drawing on relevant chemical knowledge, and of course, hopefully coming to the right answer! I guess that’s what we seek in that “fourth value”.

      I’d be interested in hearing more about these exemplars – because that sounds like a really effective way to showcase structuring and approaching an answer while not losing focus on getting to the answer. Are these worked examples or examples of structural approach or what? Basically I am looking to rob more good ideas from you!

      Michael

    1. Hey Trent,

      Thanks for the comment; your piece on this on your website is very useful – some very simple, effective tips for avoiding the scenarios, highlighted in the paper, that may cause marker conflict. I recommend that anyone interested read it!
      http://www.trentwallis.com/chemblog/marking-philosophy/

      Thanks again
      Michael
