My ten favourite #chemed articles of 2015

This post is a sure-fire way to lose friends… but I’m going to pick 10 papers that were published this year that I found interesting and/or useful. This is not to say they are ten of the best; everyone will have their own 10 “best” based on their own contexts.

Caveats done, here are 10 papers on chemistry education research that stood out for me this year:

0. Text messages to explore students’ study habits (Ye, Oueini, Dickerson, and Lewis, CERP)

I was excited to see Scott Lewis speak at the Conference That Shall Not Be Named during the summer as I really love his work. This paper outlines an interesting way to find out about student study habits, using text-message prompts: students received periodic text messages asking whether they had studied in the past 48 hours. The method is ingenious. Results are discussed in terms of a cluster analysis, grouping students by their reported habits (didn't study as much, used the textbook and practised problems, did the online homework and reviewed notes). There is lots of good stuff here for those interested in students' study habits and in supporting independent study time. Lewis often publishes with Jennifer Lewis, and together their papers are master-classes in quantitative data analysis. (Note: this candidate for my top ten was so obvious I left it out of the original draft, so now it is a top 11…)
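For anyone unfamiliar with the technique, here is a minimal sketch of what a cluster analysis of this kind of data involves; the features and responses are invented for illustration and are not the authors' data or code.

```python
# A toy illustration of clustering self-reported study habits.
# This is NOT the analysis from the paper; the features and data are invented
# purely to show how cluster analysis groups similar response profiles.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one (hypothetical) student; columns are yes/no responses:
# [studied in last 48 h, used textbook/practice problems, did online homework/reviewed notes]
responses = np.array([
    [0, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
])

# Group students into three clusters of similar study-habit profiles
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(responses)
print(kmeans.labels_)           # cluster assignment for each student
print(kmeans.cluster_centers_)  # average response profile of each cluster
```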

1. What do students learn in the laboratory? (Galloway and Lowery-Bretz, CERP)

This paper reports on an investigation using video cameras mounted on the students to record their work in a chemistry lab. Students were interviewed soon after the lab. While we can see what students physically do while they are in the lab (psychomotor learning), it is harder to measure their cognitive and affective experiences. This study set about trying to measure these, in the context of what the student considered to be meaningful learning. The paper is important for understanding the learning that is going on in the laboratory (or not, in the case of recipe labs), but I liked it most for its use of video in the collection of data.

2. Johnstone’s triangle in physical chemistry (Becker, Stanford, Towns, and Cole, CERP).

We are familiar with the importance of Johnstone's triangle, but much of the research points to introductory chemistry, or the US "Gen Chem". In this paper, consideration is given to understanding whether and how students relate the macro, micro, and symbolic levels in thermodynamics, a subject that relies heavily on the symbolic (mathematical) level. That reliance is probably due in no small part to the emphasis most textbooks place on it. The research looked at ways that classroom interactions can develop translation across all levels and, most interestingly, identified a sequence of instructor interactions that showed an improvement in coordination of the three levels of the triplet. There is a lot of good stuff for teachers of introductory thermodynamics here.

3. The all-seeing eye of prior knowledge (Boddey and de Berg, CERP).

My own interest in prior knowledge as a gauge for future learning means I greedily pick up anything that discusses it in further detail, and this paper does that well. It looked at the impact of completing a bridging course on students who had no previous chemistry, comparing them with those who had school chemistry. However, this study takes that typical analysis further by interviewing students. The interviews are used to tease out different levels of prior knowledge, with the ability to apply knowledge having the greatest effect on improving exam performance.

4. Flipped classes compared to active classes (Flynn, CERP).

I read a lot of papers on flipped lectures this year in preparing a review on the topic. This was by far the most comprehensive. Flipping is examined in small and large classes, and crucially any impact or improvement is discussed by comparing with an already active classroom. A detailed model for implementation of flipped lectures linking before, during, and after class activities is presented, and the whole piece is set in the context of curriculum design. This is dissemination of good practice at its best.

5. Defining problem solving strategies (Randles and Overton, CERP).

This paper gained a lot of attention at the time of publication, as it compares the problem-solving strategies of different groups in chemistry: undergraduates, academics, and industrialists. Beyond the headline, though, I liked it particularly for its method – it is based on grounded theory, and the introductory sections give a very good overview of how this was achieved, which I think will be informative to many. Table 2 in particular demonstrates the coding alongside example quotes, which is very useful.

6. How do students experience labs? (Kable and more, IJSE)

This is a large-scale project with a long gestation – the ultimate aim is to develop a laboratory experience survey, and in particular a survey for individual laboratory experiments, with a view to their iterative improvement. Three factors – motivation (interest and responsibility), assessment, and resources – are related to students' positive experience of laboratory work. The survey probes students' responses to these (some, such as the quality of resources, give surprising results). It is useful for anyone thinking about tweaking laboratory instruction and looking for somewhere to start.

7. Approaches to learning and success in chemistry (Sinapuelas and Stacy, JRST)

Set in the context of transition from school to university, this work describes the categorisation of four levels of learning approaches (gathering facts, learning procedures, confirming understanding, applying ideas). I like these categories as they are a bit more nuanced, and perhaps less judgemental, than surface vs deep learning. The approach level correlates with exam performance. The paper discusses the use of learning resources to encourage students to move from learning procedures (level 2) to confirming understanding (level 3). There are in-depth descriptions characterising each level, and these will be informative to anyone thinking about how to support students’ independent study.

8. Exploring retention (Shedlosky-Shoemaker and Fautch, JCE).

This article categorises some psychological factors that aim to explain why some students do not complete their degree. Students switching degrees tend to have higher self-doubt (in general, rather than just for chemistry) and performance anxiety. Motivation did not appear to distinguish between those switching or leaving a course and those staying. The study is useful for those interested in transition, as it challenges some common conceptions about student experiences and motivations, and suggests that much more personal factors are at play.

9. Rethinking central ideas in chemistry (Talanquer, JCE).

Talanquer publishes regularly and operates on a different intellectual plane to most of us. While I can’t say I understand every argument he makes, he always provokes thought. In this commentary, he discusses the central ideas of introductory chemistry (atoms, elements, bonds, etc), and proposes alternative central ideas (chemical identity, mechanisms, etc). It’s one of a series of articles by several authors (including Talanquer himself) that continually challenge the approach we currently take to chemistry. It’s difficult to say whether this will ever become more than a thought experiment though…

10. Newcomers to education literature (Seethaler, JCE).

If you have ever wished to explain to a scientist colleague how education research "works", this paper might be of use. It considers five things scientists should know about education research: what papers can tell you (and their limitations), the theoretical bases of education research, a little on misconceptions and concept inventories, describing learning, and the tools of the trade. It's a short article at three pages, so it necessarily leaves a lot of information out, but it is a nice primer.

Finally

The craziest graphical abstract of the year must go to Fung's camera set-up. And believe me, the competition was intense.


The feedback dilemma

Read the opening gambit of any educational literature on feedback. It will likely report that while feedback is desired by students, considered important by academics and, in the era of rankings, prioritised by universities, it largely goes unread and unused. Many reports state that students only look at the number grade, ignoring the comments unless the grade is substantially different from what they expected. Often students don't realise that the feedback comments on one assignment can help with the next.

Why is this? Looking through the literature on this topic, the crux of the problem is a dilemma about what academics think feedback actually is.

Duncan (2007) reported a project in which the previous feedback received by students was assimilated and synthesised into an individual feedback statement that students could apply to the next assignment. The observations of the previous tutor feedback highlighted some interesting points. Tutor comments were often written for more than just the students, directed more at a justification of marks for other examiners or for external examiners. Many tutor comments had no specific criticism, only vague praise, and there was a significant lack of clear and practical advice on how to improve. Feedback often required an understanding implicit to the tutor, but not to the student (e.g. "use a more academic style").

Similar findings from an analysis of tutor feedback were reported by Orsmond and Merry (2011). They reported that praise was the most common form of feedback, with tutors explaining misunderstandings and correcting errors. While there was an assumption on the part of tutors that students would know how to apply feedback to future assignments, none of the tutors in their study suggested approaches for doing so. Orrell (2006) argues that while tutors expressed particular intentions about feedback (appropriateness of content and developing self-evaluation for improvement), in reality the feedback was defensive and summative, justifying the mark assigned.

So what exactly is feedback?

A theme emerging from much of the literature surveyed is that there are different components to feedback. Orsmond and Merry coded eight different forms of feedback. Orrell outlines a teaching-editing-feedback code for distinguishing between different aspects of feedback. I liked the scheme used by Donovan (2014), classifying feedback as either mastery or developmental (based on work by Petty). I've attempted to mesh these different feedback classifications together and relate them to what is described elsewhere as feedback and feed forward. In many of the studies, it was clear that tutors handled the feedback comments well, but gave little or no feed forward.

Assigning various codings to general categories of feedback and feed forward
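As a rough textual stand-in for the figure, here is a minimal sketch in code of the general idea: comment types like those discussed above are grouped under either feedback (about the current piece of work) or feed forward (advice usable in the next assignment). The grouping is my own illustration, not a reproduction of any of the published coding schemes.

```python
# A toy grouping of tutor-comment types into "feedback" (about the current
# piece of work) and "feed forward" (advice usable in the next assignment).
# The categories are illustrative only, drawn loosely from the comment types
# discussed above, not from any one published coding scheme.
COMMENT_CATEGORIES = {
    "feedback": [
        "praise",
        "correction of errors",
        "explanation of misunderstandings",
        "justification of the mark awarded",
    ],
    "feed forward": [
        "specific advice on how to improve",
        "suggestions to apply in the next assignment",
    ],
}

def classify(comment_type: str) -> str:
    """Return which broad category a given comment type falls under."""
    for category, comment_types in COMMENT_CATEGORIES.items():
        if comment_type in comment_types:
            return category
    return "uncategorised"

print(classify("praise"))                             # -> feedback
print(classify("specific advice on how to improve"))  # -> feed forward
```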

While some of these categorisations are contextual, I think it is helpful to develop a system for correcting student work, particularly work that is meant to be formative, that distinguishes clearly between correcting the work and assigning a mark for it, with a separate, distinct section on what needs to be considered in future assignments. Ideally, future assignments would then take into account whether students have acted on this feedback. In chemistry, there must be potential to do this in the way we correct lab reports.

A final note: Orsmond and Merry describe the student perspective of feedback in terms of matching up the assignment with what the tutor wants, and of using feedback as part of their own intellectual development, part of a greater discourse between student and lecturer. Feedback that emphasises the former effectively results in students mimicking their discipline – trying to match what they are observing – whereas emphasis on the latter results in students becoming their discipline, growing in the intellectual capacity of the discipline.

I'm interested in a discussion on how we physically present feedback to students: how should we highlight what they need to focus on, and how do we monitor their progression, so that the feedback we provide is shown to be of real value in their learning?

References:

Pam Donovan (2014) Closing the feedback loop: physics undergraduates’ use of feedback comments on laboratory coursework, Assessment & Evaluation in Higher Education, 39:8, 1017-1029, DOI: 10.1080/02602938.2014.881979

Neil Duncan (2007) ‘Feed‐forward’: improving students’ use of tutors’ comments, Assessment & Evaluation in Higher Education, 32:3, 271-283, DOI: 10.1080/02602930600896498

Janice Orrell (2006) Feedback on learning achievement: rhetoric and reality, Teaching in Higher Education, 11:4, 441-456, DOI: 10.1080/13562510600874235

Paul Orsmond & Stephen Merry (2011) Feedback alignment: effective and ineffective links between tutors’ and students’ understanding of coursework feedback, Assessment & Evaluation in Higher Education, 36:2, 125-136, DOI: 10.1080/02602930903201651

This week I’m reading… Changing STEM education

Summer is a great time for Good Intentions and Forward Planning… with that in mind, I've been reading about the way we teach chemistry, how we know it's not the best approach, and what might be done to change it.

Is changing the curriculum enough?

Bodner (1992) opens his discussion on reform in chemistry education by writing that the "recent concern", way back in 1992, was not unique. He states that there were repeated cycles of concern about science education over the 20th century, each followed by long periods of complacency. Scientists and educators usually respond in three ways:

  1. restructure the curriculum,
  2. attract more young people to science,
  3. try to change science teaching at primary and secondary level.

However, Bodner proposes that the problem is not in attracting people to science at the early stages, but in keeping them on when they reach university, and that we at third level have much to learn from our colleagues at primary and secondary level. Instead of changing the curriculum (the topics taught), his focus is on changing the way the curriculum is taught. In an era when textbooks (and, one presumes now, the internet) have all the information one wants, the information-dissemination component of a lecture is redundant. Bodner makes the case that students can perform quite well on a question involving equilibrium without understanding its relationship to other concepts taught in the same course, and instead advocates an active learning classroom centred around discussion and explanation: dialogue between lecturer and students. He even offers a PhD thesis to back up his argument (a paper, with a great title, derived from this is here: PDF).

Are we there yet?

One of the frustrations I'm sure many who have been around the block a few times feel is that the pace of change is so slow (read: glacial). Eighteen years after Bodner's paper, Talanquer and Pollard (2010) criticise the chemistry curriculum at universities as "fact-based and encyclopedic, built upon a collection of isolated topics… detached from the practices, ways of thinking, and applications of both chemistry research and chemistry education research in the 21st century." Their paper in CERP presents an argument for teaching "how we think instead of what we know".

They describe their Chemistry XXI curriculum, which presents an introductory chemistry curriculum in eight units, each titled by a question. For example, Unit 1 is "How do we distinguish substances?", consisting of four modules (1 to 2 weeks of work): "searching for differences, modelling matter, comparing masses, determining composition." The chemical concepts mapped onto these include the particulate model of matter, mole and molar mass, and elemental composition.

Assessment of this approach is by a variety of means, including small-group in-class activities. An example is provided for a component on physical and electronic properties of metals and non-metals: students are asked to design an LED, justifying their choices. I think this fits nicely with the discursive ideas Bodner mentions. Summative assessment is based on answering questions in a context-based scenario, an example of which is shown in the figure from the paper.

In what is a very valuable addition to this discussion, learning progression levels are included, allowing student understanding of concepts and ideas to be tracked so that progressive development can be monitored. It's a paper that's worth serious consideration and deserves more widespread awareness.

Keep on Truckin’

Finally in our trio is Martin Goedhart's chapter in the recently published book Chemistry Education. Echoing the basis provided by Talanquer and Pollard, he argues that the traditional disciplines of analytical, organic, inorganic, physical, and biochemistry were reflective of what chemists were doing in research and practice. However, the interdisciplinary nature of our subject demands new divisions; Goedhart proposes three competency areas: synthesis, analysis, and modelling. For example, in analysis the overall aim is "acquiring information about the composition and structure of substances and mixtures". The key competencies are "sampling, using instruments, data interpretation", with knowledge areas including instruments, methods and techniques, sample preparation, etc. As an example of how the approach differs, he states that students should be able to select appropriate techniques for their analysis; our current emphasis is on the catalogue of facts about how each technique works. I think this echoes Talanquer's point about shifting the emphasis from what we know to how we think.

Plagiarism: Detection, Prevention, Monitoring

I attended the National Forum for Enhancement of Teaching and Learning seminar on plagiarism organised by Kevin O’Rourke at DIT’s Learning Teaching and Technology Centre. The meeting was interesting as it covered three aspects of plagiarism (in my opinion):

  1. Plagiarism detection
  2. Designing out plagiarism through various L&T methods
  3. Institutional and national profiling of the extent of plagiarism

Plagiarism detection is probably the area most academics are familiar with in terms of the plagiarism debate. The pros and cons of SafeAssign and Turnitin were discussed by Kevin O'Rourke and Claire McAvinia of DIT, and the core message seemed to be that this kind of software is at best a tool to help identify plagiarism. Care should be taken in using the plagiarism score, which really needs to be read in the context of the document itself. In addition, the score itself is subject to limitations: it isn't transparent what academic material is available to the software. Also, while it can be constructive to allow students to submit drafts so they can gauge the level of plagiarism in their writing, there can be a tendency for students to rewrite small sections with the aim of reducing the numerical score, rather than reconsidering the document as a whole. Kevin pointed us to this video if you are interested in looking at this topic further.
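To illustrate why a single numerical score is a blunt instrument, here is a minimal sketch of a naive overlap measure based on shared word 3-grams. It is emphatically not how SafeAssign or Turnitin work internally; it simply shows how rewriting a few phrases can lower a score without the substance of the text changing.

```python
# A naive text-overlap score: the fraction of word 3-grams in a submission
# that also appear in a source document. An illustration of the idea of a
# similarity score only, not the algorithm used by Turnitin or SafeAssign.
def ngrams(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 3) -> float:
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

source = "the rate of reaction increases with temperature because more collisions occur"
copied = "the rate of reaction increases with temperature because more collisions occur"
patched = "the reaction rate rises with temperature because more collisions occur"

print(overlap_score(copied, source))   # 1.0 -- verbatim copy
print(overlap_score(patched, source))  # lower, though the substance is unchanged
```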

The second component, on designing out plagiarism, was of most interest to me. Perry Share of IT Sligo gave a very interesting talk on the wide spectrum of plagiarism, ranging from intentional to unintentional, or "prototypical to patch-writing". I think the most important thing coming out of his presentation was the consideration of how to design curricula (and, most importantly, assessment) to teach out plagiarism. A basic example was designing assessment so that it avoids repetitious assignments or assignments that do not vary from year to year. This then developed into considering the process of academic writing. Students writing with a purpose, an overall motivation, will be more likely to consider their own thoughts (and write in their own words), as they have an argument or opinion they wish to present. Students lacking such a purpose will lack motivation, and so revert to rote-learning-style reproduction of existing material. There was an interesting conversation on the lack of formal training for writing in undergraduate programmes. Such training might recognise that "patch-writing" is a part of writing, especially among novices: early drafts include some elements of other people's material or structure, which are iteratively rewritten as the author develops their own argument in their own voice to reach the final draft. Current assessment methods often don't allow the time for this process to develop. Perry referenced Pecorari as a good text to follow up. An earlier webinar by Perry on the contextual element of plagiarism is available here.

Finally, Irene Glendinning (Coventry) spoke about an enormous Europe-wide project on monitoring levels of plagiarism, plagiarism policy, and so on. It was impressive in scale and generated some interesting data, including an emerging "Academic Integrity" index. The work is limited by low response rates in some countries, but it looks to be a useful index for monitoring the extent of plagiarism prevention and policy existing in EU countries. The executive summary for Ireland was circulated, and full details of the project are on the website: http://ippheae.eu/.

A future direction for clickers?


Clickers are routinely used to survey a class on their understanding of topics or to test their knowledge with quizzes, and as technology has developed, there have been clever ways of doing this (see: The Rise and Rise…). One issue that arises is that, as lecturers, we don't have a convenient way of knowing what individual students think, or what their answer is.

Enter this recent paper from BJET, An Augmented Lecture Feedback System to support Learner and Teacher Communication. This paper describes a clicker-based system, but instead of (or as well as) the lecturer viewing a chart of responses, the lecturer sees each response hover over the student's head. I know it's early in the year, so I will let you read that sentence again.

The system works by way of the lecturer wearing glasses that scan the room, so that each response is displayed as it is entered. The technology (while very clever) is still very rudimentary, and no-one in their right mind would want to look like this in their classroom, but as Google Glass or equivalents take off, who knows what possibilities there will be in the coming decade.
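As a thought experiment on the data flow involved (this is my own hypothetical sketch, not the authors' implementation; the clicker IDs and seat map below are invented), the core idea reduces to joining each incoming response to a known position in the room so that a display layer can render it above the right student.

```python
# A toy model of the data flow in an "augmented" clicker system: join each
# incoming response to a seat position so an overlay can be drawn above the
# right student. A hypothetical sketch, not the system described in the paper.
from dataclasses import dataclass

@dataclass
class OverlayLabel:
    x: float      # horizontal position in the lecturer's field of view
    y: float      # vertical position (above the student's head)
    answer: str   # latest response from that student's clicker

# Hypothetical calibration: clicker ID -> seat coordinates seen by the glasses
seat_positions = {"clicker_01": (0.2, 0.8), "clicker_02": (0.6, 0.7)}

def build_overlay(responses: dict) -> list:
    """Turn {clicker_id: answer} into drawable labels for known seats."""
    labels = []
    for clicker_id, answer in responses.items():
        if clicker_id in seat_positions:
            x, y = seat_positions[clicker_id]
            labels.append(OverlayLabel(x, y, answer))
    return labels

# Example: two students answer a multiple-choice question
print(build_overlay({"clicker_01": "B", "clicker_02": "D"}))
```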

I think it’s an interesting paper for showing a different aspect of lecturer-student interaction in the class. Quite what you do when you see that some students are incorrect is up to individual teaching scenarios.

The authors have a video explaining the paper in more detail, shown below.

Exam scheduling: semester or end of year?

Journal Club #6: G. Di Pietro, Bulletin of Economic Research, 2012, 65(1), 65 – 81. [Link]

It is my experience in academic discourse that when a change is proposed, those advocating the change rely on “gut instinct” and “common sense” while those opposing it seek evidence from the literature. My own institution is currently planning a significant change in the academic calendar, and while thinking about this, I came across this paper.

The author examines whether an institution’s reform involving moving semester exams to end of year exams had a negative impact on student performance. The system under study had two semesters, with exams in January and June, and the reform meant that there would be no January exams, just exams for the entire year at the end of the year (the way it used to be, I hear the chorus).

Reasons for the reform included the desire not to overburden first years with exams in January, and to allow students more time to digest their material. The author doesn’t hold back in stating that he believes the reasons for the reform were administrative and financial.

The study involved comparing the mid-term results from modules with the corresponding exam results. Assuming that mid-term performance stayed constant before and after the reform, the difference between the mid-term mark and the exam mark, before the reform versus after it, allows a measure of the impact of the reform on student grades to be determined.
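In effect this is a difference-in-differences comparison. A minimal sketch of the arithmetic, with invented marks rather than the paper's data, is below.

```python
# Difference-in-differences sketch of the study's logic, with invented marks.
# The "gap" is exam mark minus mid-term mark; if mid-term performance is
# assumed stable across the reform, the change in the gap estimates the
# effect of moving exams to the end of the year.
midterm_before, exam_before = 62.0, 60.0   # hypothetical pre-reform averages
midterm_after, exam_after = 62.0, 56.0     # hypothetical post-reform averages

gap_before = exam_before - midterm_before  # -2.0
gap_after = exam_after - midterm_after     # -6.0

effect_of_reform = gap_after - gap_before  # -4.0, i.e. about 4 points lower
print(effect_of_reform)
```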


The results demonstrate that there was a drop in student performance when the exams moved out of the semesters to the end of the year, with students scoring 4 points lower (nearly half a grade).

The author concludes with a statement that sounds a note of caution to those considering changing calendars (DIT colleagues take note!):

These findings may have important policy implications. Changes in examination arrangements should ideally be tested for their impact on student performance before they are introduced. Many changes in higher education are driven not by student learning considerations, but by other reasons such as financial and administrative convenience.

Discussion

Do you have any feelings regarding when modules should be examined?

 

When we grade, do we mean what we say?

The aim of the "Journal Club" is to present a summary of a journal article and discuss it in the comments below or on social meeja. The emphasis is not on discussing the paper itself (e.g. its methodology) but more on what the observations or outcomes reported can tell us about our own practice. Get involved! It's friendly. Be nice. And if you wish to submit your own summary of an article you like, please do. If you can't access the paper in question, emailing the corresponding author usually works (this article is available online at the author's homepage: PDF).

Comments on this article are open until Friday 4th October.

#3: HL Petcovic, H Fynewever, C Henderson, JM Mutambuki, JA Barney, Faculty Grading of Quantitative Problems: A mismatch between values and practice, Research in Science Education, 2013, 43, 437-455.
One of my own lecturers used to remark that he didn't care how good our workings were: if the final answer was wrong, we'd get no marks. What use is a bridge, he'd ask, if the engineers calculated its span to be too short? We weren't engineers.

After two weeks of giving out about students, this week I am looking at a paper that probes what lecturers say they expect in student answers, and whether there is a mismatch between these expectations and how they grade. To examine this, the authors constructed a question with two example answers. The first answer (ANS A) is a detailed, well-explained answer that has some errors in it, but these errors cancel out and give the right answer. The second answer (ANS B) is brief and does not show workings, but gives the correct answer. Ten chemistry academics were asked about their grading practices, given these answers, and asked to mark them.

In terms of scoring the solutions, eight of the ten academics scored the incorrect-workings answer (ANS A) higher than the correct, no-workings answer (ANS B), and the remaining two scored them equally. The average scores were 7.8 versus 5.1. This preference was much stronger than among academics in physics and earth sciences, who were evenly split on whether ANS A scored higher than ANS B.

What do we say we want?

From the interviews, the authors drew up a list of values attributed to instructors in terms of what they wished to see in an answer. Value 1 was that instructors wished to see reasoning in answers, to know whether the student understands (and to offer specific feedback). All of the chemistry academics expressed this value.

Value 2 was that instructors wished to find evidence in order to deduct points for incorrect answers. This was interesting: nine of the ten chemists used this as a reason to deduct points from ANS A, because the student had shown their work, whereas five chemists were reluctant to deduct marks from ANS B because, with no workings shown, they could not be sure whether the student had made the same mistakes.

Seven chemists were attributed Value 3, a tendency to project correct thinking onto ambiguous solutions: assuming that the student writing ANS B must have had the correct thought process, since there was no evidence of a mistake.

Finally, the chemists had a fourth value, which was found less among the earth scientists and not at all among the physicists: a desire to see organisation, units, and significant figures; a general methodological approach.

There is evidently a mismatch between the values expressed. Value 1 (want reasoning) and Value 4 (want a methodological approach) would appear to conflict with Value 2 (need evidence to deduct) and Value 3 (projecting correct thought). Most chemists expressed several values, and where they expressed conflicting values, the authors deduced a burden of proof: which set of values the academics (implicitly) rated higher. Six chemists placed the burden of proof on the student: "I can't see if the student knew how to do this or just copied it." The remainder placed the burden on themselves: "I don't want to take any credit off but will tell him directly that he should give more detail."

Message to students

Students, of course, are likely to take their messages from how we grade rather than from how we say we will grade. If students are graded with the burden of proof on the instructor, they are more likely to do well if they do not expose much reasoning in their answers; if they do show their reasoning and expose a misunderstanding, they are likely to score more poorly. Therefore, while we often say that we want to see workings, reasoning, and scientific argument, unless we follow through on that, we are rewarding students who call our bluff in this regard!

Discussion

I think this is an interesting paper, and it's made me think back on how I mark student work. I imagine I would be in the burden-of-proof-on-the-instructor camp, seeing that as implicitly fair to students, but perhaps I need to get a bit tougher and demand fully detailed reasoning in student answers.

  1. Can you identify with any of the four values the authors outline in this paper?
  2. For introductory students, do you have an "induction" on how to illustrate reasoning when answering questions and problems?

Interested to hear what you think.