The Likert Debate

David Read blew my cover on Sunday night with a tweet mentioning my averaging of Likert data in our recent work on badges. If there is ever a way to get educational researchers to look up from their sherry on a Sunday evening, this is it.

Averaging Likert scales is fraught with problems. The main issue is that Likert responses are ordinal, meaning that when you reply to a rating by selecting 1, 2, 3, 4, 5, these are labels. Their appearance as numbers doesn’t make them numbers, and Likert evangelists note correctly that, unlike the numbers 1, 2, 3…, the spaces between the labels 1, 2, 3… do not have to be the same. In other words, if I ask you to rate how much you enjoyed the new season of Endeavour and give you options 1, 2, 3, 4, 5, where 1 is not at all and 5 is very much so, you might choose 4. But that might be because, while it was near-perfect TV, it wasn’t quite: there were a few things that bothered you (including that new Fancy chap), so you are holding back from a 5. If you could, you might say 4.9…

But someone else might say: well, it was just better than average, but only just, mind. That new fella wasn’t a great actor but, hell, it is Endeavour, so you put 4, when really, if you could, you would say 3.5.

So both respondents are choosing 4, but the range of thought represented by that 4 is quite broad.

I can’t dispute this argument, but my own feeling on the matter is that this is a problem with Likert scales rather than a problem with their subsequent analysis. Totting up all of the responses in each category, we would still get two responses in the ‘4’ column, and those two responses would still represent quite a broad range of sentiments. Also, while I understand the ordinal argument, I do feel that, on balance, when respondents are asked to select between 1 and 5, there is an implied scale incorporated. One could of course emphasise the scale by adding more points, but how many would be needed before the ordinal issue dissipates? A scale of 1 to 10? 1 to 100? Of course you could be very clever and do what Bretz does with the Meaningful Learning in the Lab questionnaire and ask students to use a sliding scale which returns a number (Qualtrics allows for this more unusual question type). Regardless, it is still a rating influenced by the perception of the respondent.

Our badges paper tried to avoid being led by data by first exploring how the responses shifted in a pre-post questionnaire, so as to get some “sense” of the changes qualitatively. We saw a large decrease in 1s and 2s, and a large increase in 4s and 5s. Perhaps it is enough to say that; we followed the lead of Towns, whose methodology we based our own on, in performing a pre-post comparison with a t-test. But like any statistic, the devil is in the detail; the statistic is just the summary. Conan Doyle wrote that “You can never foretell what any one man will do, but you can say with precision what an average number will be up to. Individuals vary, but percentages remain constant. So says the statistician.”
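To make the two views concrete, here is a minimal sketch in Python contrasting the category-count view with the (contested) averaging view and a Towns-style paired pre-post t statistic. The responses are invented for illustration; they are not the badges data.

```python
# Illustrative only: invented pre/post Likert responses (1-5) from ten
# students, NOT the data from the badges paper.
from collections import Counter
from statistics import mean, stdev

pre  = [1, 2, 2, 3, 3, 3, 4, 2, 1, 3]
post = [3, 4, 4, 5, 4, 3, 5, 4, 3, 4]

# The "totting up" view: how many responses landed in each column.
pre_counts  = Counter(pre)   # five responses of 1 or 2
post_counts = Counter(post)  # seven responses of 4 or 5

# The contested "interval" view: treat the labels as numbers and average.
shift = mean(post) - mean(pre)  # 3.9 - 2.4 = 1.5

# A paired t statistic on per-student differences, computed by hand:
# t = mean(d) / (stdev(d) / sqrt(n))
diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)
```

The counts keep the full distribution; the mean and t statistic compress it into a summary, which is exactly where the ordinal objection bites.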

There is a bigger problem with Likert scales. They are just so darned easy to ask. It’s easy to dream up lots of things we want to know and stick each one in a Likert question. Did you enjoy Endeavour? Average response: 4.2. It’s easy. But what does it tell us? It depends on what the respondent’s perception of enjoy is. Recently I’ve been testing out word associations rather than Likert scales. I want to know how students feel at a particular moment in laboratory work. Rather than asking them how nervous they feel or how confident they feel, I ask them to choose a word from a selection covering a range of possible feelings. It’s not ideal, but it’s a move away from a survey consisting of a long list of Likert questions.

How to do a literature review when studying chemistry education

It’s the time of the year to crank up the new projects. One challenge when aiming to do education research is finding some relevant literature. Often we become familiar with something of interest because we heard someone talk about it or we read about it somewhere. But this may mean that we don’t have many references or further reading that we can use to continue to explore the topic in more detail.

So I am going to show how I generally do literature searches. I hope that my approach will show you how you can source a range of interesting and useful papers relevant to the topic you are studying, as well as identify some of the key papers that have been written about this topic. What I tend to find is that there is never any shortage of literature, regardless of the topic you are interested in, and finding the key papers is a great way to get overviews of that topic.

Where to search?

For literature in education, there are three general places to search. Each has advantages and disadvantages.

For those with access (in university), Web of Science will search databases which, despite the name, include the Social Sciences and Arts and Humanities indexes. Its advantage is that it is easy to narrow searches down to very specific terms, times, and research topics, meaning you can quickly source a list of relevant literature. Its disadvantage is that it doesn’t search a lot of material that may be relevant but doesn’t pass the criteria for inclusion in the database (peer review, a particular process regarding review, etc.). So, for example, Education in Chemistry articles do not appear here (as EiC is a periodical and its articles are not peer reviewed), and CERP articles only began to appear about 10 years ago, thanks to the efforts of the immediate past editors. CERP is there now, but the point is that there are a lot of discipline-based journals (e.g. Australian Journal of Education in Chemistry) that publish good stuff that isn’t in this database.

The second place to look is ERIC (Education Resources Information Center) – a US database that is very comprehensive. It includes a much wider range of materials such as conference abstracts, although you can limit to peer review. I find ERIC very good, although it can link to more obscure material that can be hard to access.

Finally, there is Google Scholar. This is great: everyone knows how to use it, it links to PDFs of documents where they are available, and it is very fast. The downside is that it is really hard to narrow your search, and you get an awful lot of irrelevant hits. But it can be useful if you have very specific search terms. Google Scholar also shows you who cited the work, which is useful, and it is more extensive than Web of Science’s equivalent feature, as Google, like ERIC, looks at everything, not just what is listed in the database. Google is also good at getting into books, which you may be able to view.

A practice search

I am going to do a literature search for something I am currently interested in: how chemistry students approach studying. I’m interested in devising ways to improve how we assist students with study tasks, and so I want to look to the literature to find out how other people have done this. For the purpose of this exercise, we will see that “study” is a really hard thing to look for because of the many meanings of the word. I intend it to mean how students interact with their academic work, but of course “study” is very widely used in scientific discourse and beyond.

It’s important to write down as precisely as you can what it is you are interested in, because the first challenge when you open up the search database is to choose your search terms.

Let’s start with Web of Science. I’ve said I’m interested in studying chemistry. So what if I put in

study AND chem*

where chem* is my default term for the various derivatives that can be used – e.g. chemical.


Well, we can see that’s not much use: we get over 1 million hits! By the time I go through those, my students will have graduated. The problem, of course, is that ‘study’ has a general meaning of investigation as well as the specific one we mean here.

Let’s go back. What am I interested in? I am interested in how students approach study. So how might authors phrase this? Well, they might talk about “study approaches”, or “study methods”, or “study strategies”, or “study habits”, or “study skills”, or… well, there are probably a few more, but that will be enough to get on with.

(“study approach*” or “study method*” or “study strateg*” or “study habit*” or “study skill*”) AND Chem*

So I will enter the search term as shown. Note that I use quotation marks; this filters results to those which mention the two words in sequence. Any pair that matches, AND a mention of chem*, will return in my results. Of course this rules out “approaches to study”, but we have to start somewhere.


How does this look? Over 500. OK, better than a million+, but we can see that some of the hits are not relevant at all.

[Screenshot: search results]

In Web of Science, we can filter by category – a very useful feature. So I will refine my results to only those in the education category.

[Screenshot: refining the results by category]

This returns about 80 hits. Much better. Before we continue, I am going to exclude conference proceedings. The reason for this is that very often you can’t access the text of these and they clutter up the results. So I will exclude these in the same way as I refined for education papers above, except in this case selecting ‘exclude’. We’re now down to 60 hits, which is a nice enough number for an afternoon’s work.


Let’s move on to the next phase: an initial survey of your findings. For this phase, you need to operate some form of meticulous record keeping, or you will end up repeating your efforts at some future date. It’s also worth remembering what we are looking for: approaches people have used to develop chemistry students’ study skills. In my case I am interested in chemistry and in higher education. It is VERY easy to get distracted here and move on to the next phase without completing this one; trust me, that will just mean you have to do the initial trawl all over again at some stage.

This trawl involves scanning titles and then abstracts to see if the articles are of interest. The first one in the list looks irrelevant, but clicking on it suggests that it is indeed about chemistry, so it’s worth logging for now. I use the marked list feature, but you might choose to use a spreadsheet or a notebook. Just make sure it is consistent! Scrolling through the hits, we can very quickly see those that aren’t relevant. You can see here that including “study approach” in our search terms is going to generate quite a few false hits, because it is picked up by articles mentioning the term “case study approach”.
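It is easy to see why “case study approach” slips through if you mimic the query’s matching rules. Here is a rough Python sketch; the article titles are invented, and the regular expressions only approximate how Web of Science treats quoted phrases and wildcards.

```python
# Approximate the query:
#   ("study approach*" OR "study method*" OR "study strateg*"
#    OR "study habit*" OR "study skill*") AND chem*
# using regular expressions over invented example titles.
import re

phrase = re.compile(r'\bstudy (approach|method|strateg|habit|skill)\w*', re.I)
chem = re.compile(r'\bchem\w*', re.I)

titles = [
    "Study strategies of first-year chemistry undergraduates",  # genuine hit
    "A case study approach to teaching chemical kinetics",      # false hit
    "Approaches to study in undergraduate chemistry",           # missed: word order
    "Study habits in physics laboratories",                     # no chem* term
]

# A title is returned only if some quoted phrase AND chem* both match.
hits = [t for t in titles if phrase.search(t) and chem.search(t)]
```

The second title is returned because the phrase match has no way of knowing that “study approach” sits inside “case study approach”, while the third is missed because quoted phrases are order-sensitive.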

I’ve shown some of the hits I marked as of interest below. I have reduced my number down to 21. This really involved scanning quickly through each abstract, seeing if it mentioned anything meaningful about study skills (their measurement or promotion), and if it did, it went in.

[Screenshot: marked list]


You’ll see in the marked list that some papers have been cited by other papers. It’s likely (though not guaranteed) that if someone else found a paper interesting, then you might too. Clicking on the citing articles will therefore bring up other, more recent articles, and you can thin those out in the same way. Another way to generate more sources is to scan through the papers (especially the introductions) to see which papers influenced the authors of the papers you are interested in. You’ll often find there are a common few. Both these processes can “snowball”, so that you generate quite a healthy number of papers to read. Reading will be covered another time… You can see now why about 50 initial hits is optimum: this step is a bit slow. But being methodical is the key!

A point to note: it may be useful to read one or two papers fully – especially those which appear to be cited a lot – before going into the thinning/snowballing phases, as this can help give overall clarity and context to the area, and might mean you are more informed when thinning and snowballing.
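The thinning/snowballing process amounts to a breadth-first walk over the citation graph, following both the papers a source cites (backward, via its introduction) and the papers citing it (forward). A minimal sketch, with an entirely invented citation graph:

```python
# Snowballing as a breadth-first walk over an invented citation graph.
# Paper IDs are made up; in practice each node is a paper you then thin
# by scanning its title and abstract.
from collections import deque

cites = {            # paper -> papers it cites (backward snowballing)
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": [],
}

cited_by = {}        # inverted map (forward snowballing via citing articles)
for paper, refs in cites.items():
    for ref in refs:
        cited_by.setdefault(ref, []).append(paper)

def snowball(start, max_rounds=2):
    """Collect papers reachable within max_rounds of backward+forward steps."""
    found, frontier = {start}, deque([(start, 0)])
    while frontier:
        paper, depth = frontier.popleft()
        if depth == max_rounds:
            continue  # stop expanding once the round limit is reached
        for nxt in cites.get(paper, []) + cited_by.get(paper, []):
            if nxt not in found:
                found.add(nxt)
                frontier.append((nxt, depth + 1))
    return found

papers = snowball("B")
```

Two rounds from paper "B" pick up its reference "D", the paper "A" that cites it, and "C" (a co-citer of "D"), which is exactly why the pool grows so quickly from a modest starting list.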

A practice search – Google Scholar

What about Google Scholar? For those outside universities who don’t have access to Web of Science, this is a good alternative. I enter my search terms using the Advanced search feature, accessed by the drop-down arrow in the search box. You’ll see I am still using quotation marks to return exact matches, but again there are limitations: for example, “strategy” won’t return “strategies”, and Google isn’t as clever here. So depending on the hit rate, you may wish to be more comprehensive.


With Google, over 300 hits are returned, but there isn’t a simple way to filter them. You can sort by relevance, according to how Google scores that, or by date, and you can filter by date. The first one in the list, by Prosser and Trigwell, is quite a famous one on university teachers’ conceptions of teaching, and it is not directly of interest here – although of course one could argue that we should define our own conception of teaching before we think about how we are going to promote particular study approaches to students. But I’m looking for more direct hits here. With so many hits, this is going to involve a pretty quick trawl through the responses. Opening hits in new tabs means I can keep the original list open. Another hit links to a book – one advantage of Google search, although getting a sense of what might be covered usually means going to the table of contents. A problem with books, though, is that only a portion may be accessible. The trawl again involves thinning and snowballing; the latter is, I think, much more important in Google, which, as mentioned, scopes a much broader citing set.

Searching with ERIC

Finally, let’s repeat with ERIC. Putting in the improved Web of Science term returns 363 hits (or 114 if I select peer-reviewed only).

[Screenshot: ERIC search results]

ERIC allows you to filter by journal, and you can see here that it is picking up journals that wouldn’t be shown in Web of Science and would be lost or unlikely to surface in Google. You can also filter by date, by author, and by level (although the latter should be treated with some caution). ProQuest is a thesis database, so it would link to postgraduate theses (subscription required, but you could contact the supervisor).

[Screenshot: ERIC filters]

The same process of thinning and snowballing can be applied. ERIC is a little frustrating, as you have to click into each link to find out anything about it, whereas the others show you, for example, the number of citations up front. Also, for snowballing, ERIC does not show you a link to citing articles, instead linking to the journal webpage, which means a few extra clicks. But for a free database, it is really good.

Which is the best?

It’s interesting to note that in the full search I did using these three platforms, each one threw up some results that the others didn’t. I like Web of Science, but that’s because it is what I am used to. ERIC is impressive in its scope – you can get information on a lot of education-related publications, although getting access to some of the more obscure ones might be difficult. Google is very easy and quick, but for a comprehensive search I think it is a bit of a blunt instrument. Happy searching!

Links to Back Issues of University Chemistry Education

I don’t know if I am missing something, but I have found it hard to locate past issues of University Chemistry Education, the predecessor to CERP. They are not linked on the RSC journal page. CERP arose out of a merger between U Chem Ed and CERAPIE, and it is the CERAPIE articles that are hosted in the CERP back issues. Confused? Yes. (More on all of this here.)

Anyway, in searching and hunting for old U Chem Ed articles, I have cracked the code of the links and compiled links to back issues below. They are full of goodness. (The very last article published in UCE was the very first chemistry education paper I read: David McGarvey’s “Experimenting with Undergraduate Practicals”.)

Links to Back Issues

Contents of all issues: 

1997 – Volume 1:

1 – remains elusive… It contains Johnstone’s “And some fell on good ground” so I know it is out there… Edit: cracked it – they are available by article:

1998 – Volume 2:

1 –

2 –

1999 – Volume 3:

1 –

2 –

2000 – Volume 4:

1 –

2 –

2001 – Volume 5:

1 –

2 –

2002 – Volume 6:

1 –

2 –

2003 – Volume 7:

1 –

2 –

2004 – Volume 8:

1 –

2 –

I’ve downloaded all of these now in case of future URL changes. Yes, I was a librarian in another life.


This week I’m reading… Changing STEM education

Summer is a great time for Good Intentions and Forward Planning… with that in mind, I’ve been reading about the way we teach chemistry, how we know it’s not the best approach, and what might be done to change it.

Is changing the curriculum enough?

Bodner (1992) opens his discussion on reform in chemistry education by noting that the “recent concern” of his day was not unique. He states that there were repeated cycles of concern about science education over the 20th century, followed by long periods of complacency. Scientists and educators usually respond in three ways:

  1. restructure the curriculum,
  2. attract more young people to science,
  3. try to change science teaching at primary and secondary level.

However, Bodner proposes that the problem is not in attracting people to science at the early stages, but in keeping them on when they reach university, and that we at third level have much to learn from our colleagues at primary and secondary level. Instead of changing the curriculum (the topics taught), his focus is on changing the way the curriculum is taught. In an era when textbooks (and, one presumes now, the internet) have all the information one wants, the information-dissemination component of a lecture is redundant. Bodner makes the case that students can perform quite well on a question involving equilibrium without understanding its relationship to other concepts taught in the same course, and instead advocates an active learning classroom centred around discussion and explanation: dialogue between lecturers and students. He even offers a PhD thesis to back up his argument (a paper derived from it, with a great title, is here: PDF).

Are we there yet?

One of the frustrations I’m sure many who have been around the block a few times feel is that the pace of change is so slow (read: glacial). Eighteen years after Bodner’s paper, Talanquer and Pollard (2010) criticize the chemistry curriculum at universities as “fact-based and encyclopedic, built upon a collection of isolated topics… detached from the practices, ways of thinking, and applications of both chemistry research and chemistry education research in the 21st century.” Their paper in CERP presents an argument for teaching “how we think instead of what we know”.

They describe their Chemistry XXI curriculum, which presents an introductory chemistry curriculum in eight units, each titled by a question. For example, Unit 1 is “How do we distinguish substances?”, consisting of four modules (1 to 2 weeks of work): “searching for differences, modelling matter, comparing masses, determining composition.” The chemical concepts mapping onto these include the particulate model of matter, mole and molar mass, and elemental composition.

Assessment of this approach is by a variety of means, including small-group in-class activities. An example is provided for a component on the physical and electronic properties of metals and non-metals; students are asked to design an LED, justifying their choices. I think this fits nicely with the discursive ideas Bodner mentions. Summative assessment is based on answering questions in a context-based scenario (pictured).

In what is a very valuable addition to this discussion, learning progression levels are included, allowing student understanding of concepts and ideas to be monitored as it progressively develops. It’s a paper that’s worth serious consideration and deserves more widespread awareness.

Keep on Truckin’

Finally in our trio is Martin Goedhart’s chapter in the recently published book Chemistry Education. Echoing the basis provided by Talanquer and Pollard, he argues that the traditional disciplines of analytical, organic, inorganic, physical, and biochemistry reflected what chemists were doing in research and practice. However, the interdisciplinary nature of our subject demands new divisions; Goedhart proposes three competency areas: synthesis, analysis, and modelling. For example, in analysis, the overall aim is “acquiring information about the composition and structure of substances and mixtures”. The key competencies are “sampling, using instruments, data interpretation”, with knowledge areas including instruments, methods and techniques, sample preparation, etc. As an example of how the approach differs, he states that students should be able to select appropriate techniques for their analysis; our current emphasis is on the catalogue of facts about how each technique works. I think this echoes Talanquer’s point about shifting the emphasis from what we know to how we think.

The role of prior knowledge

One of the main challenges in teaching first year university students is that they have a great variety of backgrounds. A quick survey of any year one class will likely yield students who have come straight from school, students returning to education, and students who have taken a circuitous route through pre-university courses. Even the main block of students coming directly from school are a diverse group. Different school systems mean students can cover different content, and even that is assuming they take that subject at all. Those challenged with teaching first years know this better than those who take modules only in later years, when the group of students becomes somewhat more homogeneous.

All of this makes it difficult for the first year lecturer to approach their task, facing students who have, in our case, chemistry knowledge ranging from none at one extreme to a significant part of the syllabus they will be taking at the other. As well as trying to navigate a syllabus that doesn’t isolate the novice learners or bore those who have “done it before”, there is a conceptual basis for worrying about this as well. Back in the 1950s, David Ausubel stated that the most important single factor influencing learning is what the learner already knows. Learners overlay new information on existing knowledge, so that an increasingly complex understanding of a topic can develop. Unfortunately for novice learners, substantial unfamiliarity heightens the attraction of rote learning, meaning that new information learned for the purpose of an exam and not integrated with some prior knowledge will likely be quickly forgotten, never mind understood.

In my own work (CERP, 2009, 10, 227–232), I analysed the performance of first year students in their end-of-module exam, and demonstrated that there was a consistent, significant difference between the grades achieved by students who had taken chemistry at school and those who hadn’t. This difference disappeared from Year 2 onwards, suggesting that the group had levelled out somewhat by the end of the year. The response to this was to introduce pre-lecture activities so that students could at least familiarise themselves with some key terminology prior to lectures; a small attempt to generate some prior knowledge. Even this minor intervention led to the disappearance of any significant difference in exam scores (see BJET, 2012, 43(4), 667–677).

I was reminded of all of this work by an excellent recent paper in Chemistry Education Research and Practice from Kevin de Berg and Kerrie Boddey, which advances the idea further. Working with nursing students in Australia, the researchers categorised the students as having completed senior school chemistry (SC), having completed a 3-day bridging course (BC), or having not studied chemistry since the junior years of school (PC, for poor chemistry). Statistical analysis showed some unsurprising results: those students with school chemistry scored highest (mean: 67%), followed by the bridging course students (54%), and lastly those students with poor chemistry (47%). The difference between BC and PC students was not significant, however, although there were more lower-performing students in the latter group.

These results align well with prior work on the value of prior knowledge in chemistry and the limited but positive impact of bridging courses. For me, this paper is valuable for its qualitative analysis. Students were interviewed about their experiences of learning chemistry. Those students who had completed the bridging course spoke about its value. For example, the authors quote Bella (please note, qualitative researchers, the authors’ clever and helpful use of names beginning with B for Bridging Course students):

I think if I had actually gone straight just to class that first day not knowing anything, I don’t think I would have done half as well as what I would having known it.

Additionally, such was the perceived value of the bridging course, those students who had no prior chemistry felt left out. Paula states how she thinks it would have helped:

Familiarity in advance. Just, so you’re prepared. So you get the sort of basic, the basic framework of it all. So then, I’d sort of, got a head start and not be so overwhelmed…

Another interesting feature highlighted by students was the unique language of chemistry. Students spoke of entering their first chemistry class as being “like stepping into another world” and the subject being “absolute Greek”. Additionally, the conceptual domains proved challenging, with students overwhelmed by formulae and by trying to visualise the molecular world. Pam states:

Maybe with anatomy, you can see more, you know, the things we’re cuttin’ up into pieces and looking inside. With chemistry, you can’t see it, so you gotta imagine that in your head and it’s hard tryin’ to imagine it, without actually physically touching it.

In a comprehensive review of prior knowledge, Dochy (1999) described “an overview of research that only lunatics would doubt”: prior knowledge was the most significant element in learning. Indeed, the review goes further when it considers strategies that teachers can use when working with students from differing backgrounds. I like this quote from Glaser and DeCorte, cited elsewhere by Dochy:

Indeed, new learning is exceedingly difficult when prior informal as well as formal knowledge is not used as a springboard for future learning. It has also become more and more obvious that, in contrast to the traditional measures of aptitude, the assessment of prior knowledge and skill is not only a much more precise predictor of learning, but provides in addition a more useful basis for instruction and guidance.

This, for me, points a way forward. Much of the work on prior knowledge has concentrated on assessing differences in performance as a function of prior knowledge, or on surveying the impact of amelioration strategies that aim to bring students up to the same level before teaching commences (such as bridging courses, pre-lecture activities, etc.). But what about a more individualised learning path, whereby a student’s starting point is taken from where their current understanding is? This would be beneficial to all learners – those without chemistry and also those with it, the latter group being challenged on any misconceptions in their prior knowledge. With technology getting cleverer, this is an exciting time to consider such an approach.