My ten favourite #chemed articles of 2015

This post is a sure-fire way to lose friends… but I’m going to pick 10 papers that were published this year that I found interesting and/or useful. This is not to say they are ten of the best; everyone will have their own 10 “best” based on their own contexts.

Caveats done, here are 10 papers on chemistry education research that stood out for me this year:

0. Text messages to explore students’ study habits (Ye, Oueini, Dickerson, and Lewis, CERP)

I was excited to see Scott Lewis speak at the Conference That Shall Not Be Named during the summer as I really love his work. This paper outlines an interesting way to find out about students’ study habits: periodic text-message prompts asking whether they had studied in the previous 48 hours. The method is ingenious. Results are discussed in terms of cluster analysis (didn’t study as much; used the textbook and practised problems; used online homework and reviewed notes). There is lots of good stuff here for those interested in students’ study habits and in supporting independent study time. Lewis often publishes with Jennifer Lewis, and together their papers are master-classes in quantitative data analysis. (Note: this candidate for my top ten was so obvious I left it out of the original draft, so now it is a top 11…)

1. What do students learn in the laboratory? (Galloway and Lowery-Bretz, CERP)

This paper reports on an investigation using video cameras on students to record their work in a chemistry lab, with students interviewed soon after the lab. While we can see what students physically do while they are in the lab (psychomotor learning), it is harder to measure their cognitive and affective experiences. This study set about measuring these, in the context of what each student considered to be meaningful learning. The paper is important for understanding the learning that goes on in the laboratory (or doesn’t, in the case of recipe labs), but I liked it most for its use of video in data collection.

2. Johnstone’s triangle in physical chemistry (Becker, Stanford, Towns, and Cole, CERP).

We are familiar with the importance of Johnstone’s triangle, but much of the research points to introductory chemistry, or the US “Gen Chem”. In this paper, consideration is given to understanding whether and how students relate the macro, micro, and symbolic levels in thermodynamics, a subject that relies heavily on the symbolic (mathematical) level. That reliance is probably due in no small part to the emphasis most textbooks place on it. The research looked at ways classroom interactions can develop translation across all levels and, most interestingly, at a sequence of instructor interactions that produced an improvement in students’ coordination of the three levels of the triplet. There is a lot of good stuff for teachers of introductory thermodynamics here.

3. The all-seeing eye of prior knowledge (Boddey and de Berg, CERP).

My own interest in prior knowledge as a gauge for future learning means I greedily pick up anything that discusses it in further detail, and this paper does that well. It looked at the impact of completing a bridging course on students who had no previous chemistry, comparing them with those who had school chemistry. However, the study takes that typical analysis further by interviewing students. The interviews are used to tease out different levels of prior knowledge, with the ability to apply knowledge proving most powerful in improving exam performance.

4. Flipped classes compared to active classes (Flynn, CERP).

I read a lot of papers on flipped lectures this year in preparing a review on the topic. This was by far the most comprehensive. Flipping is examined in small and large classes, and crucially any impact or improvement is discussed by comparing with an already active classroom. A detailed model for implementation of flipped lectures linking before, during, and after class activities is presented, and the whole piece is set in the context of curriculum design. This is dissemination of good practice at its best.

5. Defining problem solving strategies (Randles and Overton, CERP).

This paper gained a lot of attention at the time of publication, as it compares the problem-solving strategies of different groups in chemistry: undergraduates, academics, and industrialists. Beyond the headline, though, I liked it particularly for its method – it is based on grounded theory, and the introductory sections give a very good overview of how this was achieved, which I think will be informative to many. Table 2, which demonstrates the coding alongside example quotes, is especially useful.

6. How do students experience labs? (Kable and colleagues, IJSE)

This is a large-scale project with a long gestation – the ultimate aim is to develop a laboratory experience survey, and in particular a survey for individual laboratory experiments, with a view to their iterative improvement. Three factors – motivation (interest and responsibility), assessment, and resources – are related to students’ positive experience of laboratory work. The survey probes students’ responses to these (some, like quality of resources, give surprising results). It is useful for anyone thinking about tweaking laboratory instruction and looking for somewhere to start.

7. Approaches to learning and success in chemistry (Sinapuelas and Stacy, JRST)

Set in the context of transition from school to university, this work describes the categorisation of four levels of learning approaches (gathering facts, learning procedures, confirming understanding, applying ideas). I like these categories as they are a bit more nuanced, and perhaps less judgemental, than surface vs deep learning. The approach level correlates with exam performance. The paper discusses the use of learning resources to encourage students to move from learning procedures (level 2) to confirming understanding (level 3). There are in-depth descriptions characterising each level, and these will be informative to anyone thinking about how to support students’ independent study.

8. Exploring retention (Shedlosky-Shoemaker and Fautch, JCE).

This article examines some psychological factors that aim to explain why some students do not complete their degree. Students switching degrees tend to have higher self-doubt (in general, rather than just for chemistry) and performance anxiety. Motivation did not appear to distinguish between those switching or leaving a course and those staying. The study is useful for those interested in transition, as it challenges some common conceptions about student experiences and motivations, suggesting that much more personal factors are at play.

9. Rethinking central ideas in chemistry (Talanquer, JCE).

Talanquer publishes regularly and operates on a different intellectual plane to most of us. While I can’t say I understand every argument he makes, he always provokes thought. In this commentary, he discusses the central ideas of introductory chemistry (atoms, elements, bonds, etc), and proposes alternative central ideas (chemical identity, mechanisms, etc). It’s one of a series of articles by several authors (including Talanquer himself) that continually challenge the approach we currently take to chemistry. It’s difficult to say whether this will ever become more than a thought experiment though…

10. Newcomers to education literature (Seethaler, JCE).

If you have ever wished to explain to a scientist colleague how education research “works”, this paper might be of use. It considers five things scientists should know about education research: what papers can tell you (and their limitations), theoretical bases in education research, a little on misconceptions and concept inventories, describing learning, and tools of the trade. It’s a short article at three pages long, so it necessarily leaves a lot of information out. But it is a nice primer.

Finally

The craziest graphical abstract of the year must go to Fung’s camera set up. And believe me, the competition was intense.


Related Posts:

This week I’m reading… Changing STEM education

Summer is a great time for Good Intentions and Forward Planning… with that in mind I’ve been reading about the way we teach chemistry, how we know it’s not the best approach, and what might be done to change it.

Is changing the curriculum enough?

Bodner (1992) opens his discussion on reform in chemistry education by noting that the “recent concern” of the time, way back in 1992, was not unique. He states that there were repeated cycles of concern about science education over the 20th century, each followed by a long period of complacency. Scientists and educators usually respond in three ways:

  1. restructure the curriculum,
  2. attract more young people to science,
  3. try to change science teaching at primary and secondary level.

However, Bodner proposes that the problem is not in attracting people to science at the early stages, but in keeping them on when they reach university, and that we at third level have much to learn from our colleagues at primary and secondary level. Instead of changing the curriculum (the topics taught), his focus is on changing the way the curriculum is taught. In an era when textbooks (and, one presumes now, the internet) have all the information one wants, the information-dissemination component of a lecture is redundant. Bodner makes the case that students can perform quite well on a question involving equilibrium without understanding its relationship to other concepts taught in the same course, and instead advocates an active learning classroom centred on discussion and explanation; dialogue between lecturer and students. He even offers a PhD thesis to back up his argument (a paper, with a great title, derived from it is here: PDF).

Are we there yet?

One of the frustrations I’m sure many who have been around the block a few times feel is that the pace of change is so slow (read: glacial). Eighteen years after Bodner’s paper, Talanquer and Pollard (2010) criticised the chemistry curriculum at universities as “fact-based and encyclopedic, built upon a collection of isolated topics… detached from the practices, ways of thinking, and applications of both chemistry research and chemistry education research in the 21st century.” Their paper in CERP presents an argument for teaching “how we think instead of what we know”.

They describe their Chemistry XXI curriculum, which presents an introductory chemistry curriculum in eight units, each titled by a question. For example, Unit 1 is “How do we distinguish substances?”, consisting of four modules (1 to 2 weeks of work): “searching for differences, modelling matter, comparing masses, determining composition.” The chemical concepts mapping onto these include the particulate model of matter, mole and molar mass, and elemental composition.

[Image: Talanquer, CERP, 2010]

Assessment of this approach is by a variety of means, including small-group in-class activities. An example is provided for a component on the physical and electronic properties of metals and non-metals; students are asked to design an LED, justifying their choices. I think this fits nicely into the discursive ideas Bodner mentions. Summative assessment is based on answering questions in a context-based scenario – picture shown.

In what is a very valuable addition to this discussion, learning progression levels are included, allowing students’ understanding of concepts and ideas to be monitored as it develops. It’s a paper that’s worth serious consideration and deserves more widespread awareness.

Keep on Truckin’

Finally in our trio is Martin Goedhart’s chapter in the recently published book Chemistry Education. Echoing the basis provided by Talanquer and Pollard, he argues that the traditional disciplines of analytical, organic, inorganic, physical, and biochemistry reflected what chemists were doing in research and practice. However, the interdisciplinary nature of our subject demands new divisions; Goedhart proposes three competency areas: synthesis, analysis, and modelling. For example, in analysis the overall aim is “acquiring information about the composition and structure of substances and mixtures”. The key competencies are “sampling, using instruments, data interpretation”, with knowledge areas including instruments, methods and techniques, sample prep, etc. As an example of how the approach differs, he states that students should be able to select appropriate techniques for their analysis; our current emphasis is on the catalogue of facts about how each technique works. I think this echoes Talanquer’s point about shifting the emphasis from what we know to how we think.


Journal Club #2: Approaching Open Ended Problems

The aim of the “Journal Club” is to present a summary of a journal article and discuss it in the comments below or on social meeja. The emphasis is not on discussing the paper itself (e.g. methodology etc) but more what the observations or outcomes reported can tell us about our own practice. Get involved! It’s friendly. Be nice. And if you wish to submit your own summary of an article you like, please do. If you can’t access the paper in question, emailing the corresponding author usually works (CERP is free to access).

Comments on this article are open until Friday 27th September.

#2 T Overton, N Potter, C Lang, A study of approaches to solving open-ended problems in chemistry, Chemistry Education Research and Practice, 2013, doi: 10.1039/C3RP00028A

There is a lot of literature that promotes the use of contextualisation in our teaching, to generate interest and promote engagement in a topic. This is often coupled with more open-ended activities, reflecting real-life situations. There is also a lot of literature advocating that teachers should be cognisant of working memory, by providing structure to student learning and by reducing “noise” as much as possible. I see these as conflicting viewpoints, and have struggled over the last few years with where the balance lies between providing enough of the “carrot” of context and the “stick” of structure.

Tina Overton’s paper on student approaches to open ended problems is useful (and unusual) in this regard; the opening section presents a synopsis of several studies on approaches to problem solving when the problem is structured or algorithmic. But what happens when students are given a problem where the data is incomplete, or the method is unfamiliar, or the outcomes may be open (Johnstone, 1993)? Three examples of such problems are given. One of these is:

 Many commercial hair-restorers claim to stimulate hair growth. If human hair is composed mainly of the protein α-keratin, estimate the rate of incorporation of amino acid per follicle per second.

I have to be honest and say that this question scares the hair out of me, makes my hair stand on end, and other hair-related puns, but as an “expert” in the discipline, I think I would have an idea where to start. The research involved listening to students talk through how they approached the problem, and these approaches were coded. They ranged from approaches that could lead to a feasible solution (making estimations, knowing what approach to take, having a logical reasoning or approach, making sensible assumptions) to approaches that are unlikely to succeed (seeking an algorithmic approach, being distracted by context, lacking knowledge). When participants were grouped by common approaches, three distinct groupings emerged.

  1. The first were those who used predominantly positive (scientific) approaches to the problem. They could clarify the aims and identify the data needed and the assumptions that could be made. (10/27)
  2. The second were those who couldn’t attempt the problem: they didn’t know where to start and didn’t have the basic knowledge required to call upon. They weren’t able to take a scientific approach to solving the problem. (10/27)
  3. Finally were students whose prior knowledge confused them (e.g. writing out rate equations); although they tried to tackle the problem, they were unsuccessful. (7/27)

The study participants were from all years, but the authors state that the groups identified above did not correlate with stage in degree.
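As a flavour of what a “feasible” estimation approach to the hair question might look like, here is one possible back-of-envelope sketch in Python. Every input value (growth rate, hair radius, keratin density, mean residue mass) is a rough assumption of mine, not data from the paper:

```python
# Rough Fermi estimate for the hair-restorer problem quoted above.
# All numerical inputs are order-of-magnitude assumptions, not values from the paper.
import math

growth_rate = 0.01 / (30 * 24 * 3600)  # hair grows ~1 cm/month, converted to m/s
radius = 40e-6                          # hair radius ~40 micrometres, in m
density = 1.3e3                         # keratin density ~1.3 g/cm^3, in kg/m^3
residue_mass = 0.110                    # mean amino acid residue ~110 g/mol, in kg/mol
avogadro = 6.022e23                     # residues per mole

volume_rate = math.pi * radius**2 * growth_rate  # volume of new hair per second (m^3/s)
mass_rate = volume_rate * density                # mass of new keratin per second (kg/s)
residues_per_second = mass_rate / residue_mass * avogadro

print(f"~{residues_per_second:.0e} amino acid residues per follicle per second")
```

The final number could easily be out by an order of magnitude depending on the inputs chosen; the point is the chain of estimates and explicitly stated assumptions, which is what characterised the first group above.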

Discussion questions

This study interests me a lot. The headline is that, of this (small) sample of students, a majority had difficulty with open-ended, highly contextualised problems. I am making a sweeping statement, but I would hazard a guess that students in this institution get exposed to a lot more open-ended problems than average. However, some students displayed significant expertise in approaching the problems, and frankly the responses recorded are very impressive. Questions I think worth teasing out are:

  1. Do you use open-ended problems of this nature in teaching? Why/why not?
  2. There is clearly a gap in approaches. The middle group are caught in between, making good efforts to solve the problem, but calling on irrelevant prior knowledge. If you were planning to incorporate open-ended problem solving, how could this gap be addressed in curriculum design?

In regard to the second question, I’m trying to think of something modelled on the worked-example/fading approach used for simple algorithmic problems. Would something similar have a role, or does that approach necessitate a change in the categorisation of the problem, back to closed, defined…?

Love to hear your thoughts. Remember the “Be nice” part…
