Chemistry, Pedagogy

My ten favourite #chemed articles of 2015

This post is a sure-fire way to lose friends… but I’m going to pick 10 papers published this year that I found interesting and/or useful. This is not to say they are the ten best; everyone will have their own “best” ten based on their own contexts.

Caveats done, here are 10 papers on chemistry education research that stood out for me this year:

0. Text messages to explore students’ study habits (Ye, Oueini, Dickerson, and Lewis, CERP)

I was excited to see Scott Lewis speak at the Conference That Shall Not Be Named during the summer, as I really love his work. This paper outlines an interesting way to find out about student study habits, using text-message prompts. Students received periodic text messages asking whether they had studied in the previous 48 hours. The method is ingenious. Results are discussed in terms of cluster analysis, which grouped students by reported behaviour (didn’t study as much; used the textbook and practised problems; did online homework and reviewed notes). There is lots of good stuff here for those interested in students’ study and supporting independent study time. Lewis often publishes with Jennifer Lewis, and together their papers are master-classes in quantitative data analysis. (Note: this candidate for my top ten was so obvious I left it out of the original draft, so now it is a top 11…)
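
For anyone unfamiliar with cluster analysis, the core idea is easy to sketch in a few lines. The code below is purely illustrative – the feature names are hypothetical and this is not the authors’ actual analysis – but it shows how students’ self-reported behaviours can be grouped into a small number of clusters like those described in the paper.

```python
# Purely illustrative sketch of clustering self-reported study behaviours;
# feature names are hypothetical, not taken from the paper.
import numpy as np
from sklearn.cluster import KMeans

# One row per student; binary "reported this behaviour" indicators
# aggregated over the text-message prompts:
# [studied recently, used textbook, practised problems,
#  did online homework, reviewed notes]
responses = np.array([
    [0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
])

# Ask for three clusters, mirroring the three groupings described above.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(responses)
for student, label in enumerate(kmeans.labels_):
    print(f"student {student}: cluster {label}")
```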

1. What do students learn in the laboratory? (Galloway and Lowery-Bretz, CERP)

This paper reports on an investigation that used video cameras worn by students to record their work in a chemistry lab. Students were interviewed soon after the lab. While we can see what students physically do while they are in the lab (psychomotor learning), it is harder to measure their cognitive and affective experiences. This study set about trying to measure these, in the context of what the student considered to be meaningful learning. The paper is important for understanding the learning that is going on in the laboratory (or not, in the case of recipe labs), but I liked it most for the use of video in data collection.

2. Johnstone’s triangle in physical chemistry (Becker, Stanford, Towns, and Cole, CERP).

We are familiar with the importance of Johnstone’s triangle, but much of the research points to introductory chemistry, or the US “Gen Chem”. In this paper, consideration is given to understanding whether and how students relate the macro, micro, and symbolic levels in thermodynamics, a subject that relies heavily on the symbolic (mathematical). That reliance is probably due in no small part to the emphasis most textbooks place on it. The research looked at ways that classroom interactions can develop translation across all levels and, most interestingly, at a sequence of instructor interactions that improved students’ coordination of the three levels of the triplet. There is a lot of good stuff for teachers of introductory thermodynamics here.

3. The all-seeing eye of prior knowledge (Boddey and de Berg, CERP).

My own interest in prior knowledge as a gauge for future learning means I greedily pick up anything that discusses it in further detail. And this paper does that well. It looked at the impact of completing a bridging course on students who had no previous chemistry, comparing them with those who had school chemistry. This study takes that typical analysis further, though, by interviewing students. The interviews were used to tease out different levels of prior knowledge, with the ability to apply knowledge proving the most important for improving exam performance.

4. Flipped classes compared to active classes (Flynn, CERP).

I read a lot of papers on flipped lectures this year in preparing a review on the topic. This was by far the most comprehensive. Flipping is examined in small and large classes, and crucially any impact or improvement is discussed by comparing with an already active classroom. A detailed model for implementation of flipped lectures linking before, during, and after class activities is presented, and the whole piece is set in the context of curriculum design. This is dissemination of good practice at its best.

5. Defining problem solving strategies (Randles and Overton, CERP).

This paper gained a lot of attention at the time of publication, as it compares the problem-solving strategies of different groups in chemistry: undergraduates, academics, and industrialists. Beyond the headline, though, I liked it particularly for its method – it is based on grounded theory, and the introductory sections give a very good overview of how this was achieved, which I think will be informative to many. Table 2, in particular, demonstrates the coding alongside example quotes, which is very useful.

6. How do students experience labs? (Kable and colleagues, IJSE)

This is a large-scale project with a long gestation – the ultimate aim is to develop a laboratory experience survey, and in particular a survey for individual laboratory experiments, with a view to their iterative improvement. Three factors – motivation (interest and responsibility), assessment, and resources – are related to students’ positive experience of laboratory work. The survey probes students’ responses to these (some, such as quality of resources, give surprising results). It is useful for anyone thinking about tweaking laboratory instruction and looking for somewhere to start.

7. Approaches to learning and success in chemistry (Sinapuelas and Stacy, JRST)

Set in the context of the transition from school to university, this work describes the categorisation of four levels of learning approaches (gathering facts, learning procedures, confirming understanding, applying ideas). I like these categories as they are a bit more nuanced, and perhaps less judgemental, than surface versus deep learning. The approach level correlates with exam performance. The paper discusses the use of learning resources to encourage students to move from learning procedures (level 2) to confirming understanding (level 3). There are in-depth descriptions characterising each level, and these will be informative to anyone thinking about how to support students’ independent study.

8. Exploring retention (Shedlosky-Shoemaker and Fautch, JCE).

This article categorises some psychological factors that aim to explain why some students do not complete their degree. Students switching degrees tend to have higher self-doubt (in general, rather than just for chemistry) and performance anxiety. Motivation did not appear to distinguish between those switching or leaving a course and those staying. The study is useful for those interested in transition, as it challenges some common conceptions about student experiences and motivations, and it suggests that much more personal factors are at play.

9. Rethinking central ideas in chemistry (Talanquer, JCE).

Talanquer publishes regularly and operates on a different intellectual plane to most of us. While I can’t say I understand every argument he makes, he always provokes thought. In this commentary, he discusses the central ideas of introductory chemistry (atoms, elements, bonds, etc.) and proposes alternative central ideas (chemical identity, mechanisms, etc.). It’s one of a series of articles by several authors (including Talanquer himself) that continually challenge the approach we currently take to chemistry. It’s difficult to say whether this will ever become more than a thought experiment, though…

10. Newcomers to education literature (Seethaler, JCE).

If you have ever wished to explain to a scientist colleague how education research “works”, this paper might be of use. It considers five things scientists should know about education research: what papers can tell you (and their limitations), theoretical bases in education research, a little on misconceptions and content inventories, describing learning, and tools of the trade. At three pages long it necessarily leaves a lot of information out, but it is a nice primer.

Finally

The craziest graphical abstract of the year must go to Fung’s camera set-up. And believe me, the competition was intense.

[Graphical abstract from Fung’s paper]

Chemistry, Pedagogy

The feedback dilemma

Read the opening gambit of any educational literature on feedback. It will likely report that while feedback is desired by students, considered important by academics, and, in the era of rankings, prioritised by universities, it largely goes unread and unused. Many reports state that students only look at the number grade, ignoring the comments unless the grade is substantially different from what they expected. Often students don’t realise that the feedback comments on one assignment can help with the next.

Why is this? Looking through the literature on this topic, the crux of the problem is a dilemma about what academics think feedback actually is.

Duncan (2007) reported a project where previous feedback received by students was assimilated and synthesised into an individual feedback statement that students could apply to the next assignment. The observations of the previous tutor feedback highlighted some interesting points. Tutor comments were often written for more than just the students, directed as much at justifying marks to other examiners or to external examiners. Many tutor comments had no specific criticism, only vague praise, and there was a significant lack of clear and practical advice on how to improve. Feedback often required an understanding implicit to the tutor, but not to the student (e.g. “use a more academic style”).

Similar findings from analysis of tutor feedback were reported by Orsmond and Merry (2011). They found that praise was the most common form of feedback, with tutors explaining misunderstandings and correcting errors. While tutors assumed that students would know how to apply feedback to future assignments, none of the tutors in their study suggested approaches for doing this. Orrell (2006) argues that while tutors expressed particular intentions about feedback (appropriate content and developing self-evaluation for improvement), in reality the feedback was defensive and summative, justifying the mark assigned.

So what exactly is feedback?

A theme emerging from much of the literature surveyed is that there are different components to feedback. Orsmond and Merry coded eight different forms of feedback. Orrell outlines a teaching-editing-feedback code for distinguishing between different aspects of feedback. I liked the scheme used by Donovan (2014), classifying feedback as either mastery or developmental (based on work by Petty). I’ve attempted to mesh together these different feedback classifications and relate them to what is described elsewhere as feedback and feed forward. In many of the studies, it was clear that tutors handled the feedback comments well, but gave few or no feed-forward comments.

[Figure: Assigning various codings to general categories of feedback and feed forward]

While some of these categorisations are contextual, I think it is helpful to develop a system whereby correction of student work, and in particular work that is meant to be formative, distinguishes clearly between correcting the work and assigning a mark for it, with a separate and distinct section for what needs to be considered in future assignments. Of course, ideally future assignments would take into account whether students have considered this feedback. In chemistry, there must be potential for this in how we correct lab reports.

A final note: Orsmond and Merry describe the student perspective of feedback in terms of matching up the assignment with what the tutor wants, and of using feedback as part of the student’s own intellectual development, part of a greater discourse between student and lecturer. Feedback that emphasises the former effectively results in students mimicking their discipline – trying to match what they are observing – whereas emphasis on the latter results in students becoming their discipline, growing in the intellectual capacity of the discipline.

I’m interested in a discussion on how we physically present feedback to students: how should we highlight what they should focus on, and how should we monitor their progression, so that the feedback we provide is shown to be of real value in their learning?

References:

Pam Donovan (2014) Closing the feedback loop: physics undergraduates’ use of feedback comments on laboratory coursework, Assessment & Evaluation in Higher Education, 39:8, 1017-1029, DOI: 10.1080/02602938.2014.881979

Neil Duncan (2007) ‘Feed‐forward’: improving students’ use of tutors’ comments, Assessment & Evaluation in Higher Education, 32:3, 271-283, DOI: 10.1080/02602930600896498

Janice Orrell (2006) Feedback on learning achievement: rhetoric and reality, Teaching in Higher Education, 11:4, 441-456, DOI: 10.1080/13562510600874235

Paul Orsmond & Stephen Merry (2011) Feedback alignment: effective and ineffective links between tutors’ and students’ understanding of coursework feedback, Assessment & Evaluation in Higher Education, 36:2, 125-136, DOI: 10.1080/02602930903201651

Chemistry, Context, Journal Club, Pedagogy, Weekly Links

This week I’m reading… Changing STEM education

Summer is a great time for Good Intentions and Forward Planning… with that in mind, I’ve been reading about the way we teach chemistry, how we know it’s not the best approach, and what might be done to change it.

Is changing the curriculum enough?

Bodner (1992) opens his discussion on reform in chemistry education by noting that the “recent concern” of the time, way back in 1992, was not unique. He states that there were repeated cycles of concern about science education over the 20th century, each followed by long periods of complacency. Scientists and educators usually respond in three ways:

  1. restructure the curriculum,
  2. attract more young people to science,
  3. try to change science teaching at primary and secondary level.

However, Bodner proposes that the problem is not in attracting people to science at the early stages, but in keeping them on when they reach university, and that we at third level have much to learn from our colleagues at primary and secondary level. Instead of changing the curriculum (the topics taught), his focus is on changing the way the curriculum is taught. In an era when textbooks (and, one presumes now, the internet) have all the information one wants, the information-dissemination component of a lecture is redundant. Bodner makes a case that students can perform quite well on a question involving equilibrium without understanding its relationship to other concepts taught in the same course, and instead advocates an active learning classroom centred around discussion and explanation: dialogue between lecturer and students. He even offers a PhD thesis to back up his argument (a paper, with a great title, derived from this is here: PDF).

Are we there yet?

One of the frustrations I’m sure many who have been around the block a few times feel is that the pace of change is so slow (read: glacial). Eighteen years after Bodner’s paper, Talanquer and Pollard (2010) criticised the chemistry curriculum at universities as “fact-based and encyclopedic, built upon a collection of isolated topics… detached from the practices, ways of thinking, and applications of both chemistry research and chemistry education research in the 21st century.” Their paper in CERP presents an argument for teaching “how we think instead of what we know”.

They describe their Chemistry XXI curriculum, which presents an introductory chemistry curriculum in eight units, each titled by a question. For example, Unit 1 is “How do we distinguish substances?”, consisting of four modules (1 to 2 weeks of work): “searching for differences, modelling matter, comparing masses, determining composition.” The chemical concepts mapping onto these include the particulate model of matter, mole and molar mass, and elemental composition.

[Image: example context-based assessment question from Talanquer and Pollard, CERP 2010]

Assessment of this approach is by a variety of means, including small-group in-class activities. An example is provided for a component on the physical and electronic properties of metals and non-metals; students are asked to design an LED, justifying their choices. I think this fits nicely with the discursive ideas Bodner mentions. Summative assessment is based on answering questions in a context-based scenario, as in the picture shown.

In what is a very valuable addition to this discussion, learning progression levels are included, allowing student understanding of concepts and ideas to be monitored as it progressively develops. It’s a paper that’s worth serious consideration and deserves more widespread awareness.

Keep on Truckin’

Finally in our trio is Martin Goedhart’s chapter in the recently published book Chemistry Education. Echoing the basis provided by Talanquer and Pollard, he argues that the traditional disciplines of analytical, organic, inorganic, physical, and biochemistry were reflective of what chemists were doing in research and practice. However, the interdisciplinary nature of our subject demands new divisions; Goedhart proposes three competency areas: synthesis, analysis, and modelling. For example, in analysis the overall aim is “acquiring information about the composition and structure of substances and mixtures”. The key competencies are “sampling, using instruments, data interpretation”, with knowledge areas including instruments, methods and techniques, sample preparation, etc. As an example of how the approach differs, he states that students should be able to select appropriate techniques for their analysis; our current emphasis is on the catalogue of facts about how each technique works. I think this echoes Talanquer’s point about shifting the emphasis from what we know to how we think.

Pedagogy

Plagiarism: Detection, Prevention, Monitoring

I attended the National Forum for the Enhancement of Teaching and Learning seminar on plagiarism, organised by Kevin O’Rourke at DIT’s Learning, Teaching and Technology Centre. The meeting was interesting as it covered (in my opinion) three aspects of plagiarism:

  1. Plagiarism detection
  2. Designing out plagiarism through various L&T methods
  3. Institutional and national profiling of extents of plagiarism

Plagiarism detection is probably the area most academics are familiar with in terms of the plagiarism debate. The pros and cons of SafeAssign and Turnitin were discussed by Kevin O’Rourke and Claire McAvinia of DIT, and the core message seemed to be that this kind of software is at best a tool to help identify plagiarism. Care should be taken in using the plagiarism score, which really needs to be read in the context of the document itself. In addition, the score itself is subject to limitations: it isn’t transparent what academic material is available to the software. Also, while it can be constructive to allow students to submit drafts so they can gauge the level of plagiarism in their writing, there can be a tendency for students to rewrite small sections with the aim of reducing the numerical score, rather than reconsidering the document as a whole. Kevin pointed us to this video if you are interested in looking at this topic more.
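
Neither vendor publishes its matching algorithm, but a toy overlap measure shows why the score behaves like this. The sketch below is my own illustration (not how Turnitin or SafeAssign actually work): it scores a submission by the fraction of its word trigrams that also appear in a source, so rewording a few phrases lowers the number without changing the substance.

```python
# Toy similarity score: fraction of a submission's word trigrams that also
# appear in a source text. Illustrative only; commercial tools are far more
# sophisticated and search large corpora.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, source, n=3):
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

source = ("the mole is the unit of amount of substance "
          "in the international system of units")
copied = ("the mole is the unit of amount of substance "
          "in the international system of units")
patched = ("the mole is the measure of amount of substance "
           "in the global system of units")

print(similarity(copied, source))   # 1.0 - verbatim copy
print(similarity(patched, source))  # lower score, substance unchanged
```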

The second component, on designing out plagiarism, was of most interest to me. Perry Share of IT Sligo gave a very interesting talk on the wide spectrum of plagiarism, ranging from intentional to unintentional, or “prototypical to patch-writing”. I think the most important thing coming out of his presentation was the consideration of how to design curricula (and, most importantly, assessment) to teach out plagiarism. A basic example was designing assessment to avoid repetitious assignments, or assignments that do not vary from year to year. This then developed into considering the process of academic writing. Students writing with a purpose, an overall motivation, will be more likely to consider their own thoughts (and write in their own words), as they have an argument or opinion they wish to present. Students lacking such a purpose will lack motivation, and thus revert to rote-learning-style reproduction of existing material. There was an interesting conversation on the lack of formal training for writing in undergraduate programmes. Such training might acknowledge that “patch-writing” is a part of writing, especially among novices: early drafts include some elements of other people’s material or structure, but are iteratively rewritten as the author develops their own argument in their own voice to reach the final draft. Current assessment methods often don’t allow the time for this process to develop. Perry referenced Pecorari as a good text to follow up. An earlier webinar by Perry on the contextual element of plagiarism is available here.

Finally, Irene Glendinning (Coventry) spoke about an enormous Europe-wide project on monitoring levels of plagiarism, plagiarism policy, and so on. It was impressive in scale and generated some interesting data, including an emerging “Academic Integrity” index. The work is subject to limited responses in some countries, but it looks to be a useful index for monitoring the extent of plagiarism prevention and policy in EU countries. The executive summary for Ireland was circulated, and full details of the project are on the website: http://ippheae.eu/.

Trends in E-Learning

A future direction for clickers?

Clickers are routinely used to survey a class on their understanding of topics or to test their knowledge with quizzes, and as the technology has developed, there have been clever ways of doing this (see: The Rise and Rise…). One issue that arises is that, as lecturers, we don’t have a convenient way to know what individual students think, or what their answer is.

Enter this recent paper from BJET, An Augmented Lecture Feedback System to Support Learner and Teacher Communication. This paper describes a clicker-based system, but instead of (or as well as) the lecturer viewing a chart of responses, the lecturer sees each response hover over the student’s head. I know it’s early in the year, so I will let you read that sentence again.

The system works by way of the lecturer wearing glasses that scan the room as each response is entered. The technology (while very clever) is still very rudimentary, and no-one in their right mind would want to look like this in their classroom, but as Google Glass or its equivalents take off, who knows what possibilities there will be in the coming decade.
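
In essence, such a system has to join two streams of data: the answers keyed by student, and the positions at which the glasses currently detect each student, so the right response can be drawn above the right head. The sketch below is my own illustration with hypothetical names, not the authors’ implementation.

```python
# Hypothetical sketch of the data join an augmented-feedback display needs;
# names and structure are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Detection:
    student_id: str  # e.g. recognised from a visual marker the glasses scan
    x: float         # position within the lecturer's field of view
    y: float

# Responses arriving from the clickers, keyed by student.
responses = {"s01": "B", "s02": "C", "s03": "B"}

# Students the glasses can currently see (s02 is out of view).
detections = [Detection("s01", 0.2, 0.4), Detection("s03", 0.7, 0.5)]

# The overlay pairs each visible student's position with their answer.
overlay = [(d.x, d.y, responses[d.student_id])
           for d in detections if d.student_id in responses]
print(overlay)  # [(0.2, 0.4, 'B'), (0.7, 0.5, 'B')]
```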

I think it’s an interesting paper for showing a different aspect of lecturer-student interaction in class. Quite what you do when you see that some students are incorrect will depend on the individual teaching scenario.

The authors have a video explaining the paper in more detail, shown below.