A future direction for clickers?

Clickers are routinely used to survey a class on their understanding of topics or to test their knowledge with quizzes, and as the technology has developed, there have been clever ways of doing this (see: The Rise and Rise…). One issue that arises is that, as lecturers, we don’t have a convenient way of knowing what individual students think, or what their individual answers are.

Enter this recent paper from BJET, An Augmented Lecture Feedback System to Support Learner and Teacher Communication. This paper describes a clicker-based system, but instead of (or as well as) viewing a chart of responses, the lecturer sees each response hover over the student’s head. I know it’s early in the year, so I will let you read that sentence again.

The system works by the lecturer wearing glasses that scan the room and register each student’s response as it is entered. The technology (while very clever) is still very rudimentary, and no-one in their right mind would want to look like this in their classroom, but as Google Glass or equivalent devices take off, who knows what possibilities there will be in the coming decade.
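To make the idea concrete, here is a minimal, purely illustrative sketch of the bookkeeping such a system needs (all names and numbers are hypothetical; this is not the authors’ implementation): each registered clicker response is paired with a detected seat position so that an answer label can be drawn above the right student in the lecturer’s view.

```python
# Illustrative sketch only: pair clicker responses with detected seat
# positions so an answer label can be drawn above each student's head.
# Names and numbers are hypothetical, not taken from the paper.

responses = {"clicker_17": "B", "clicker_42": "D"}                    # handset id -> answer
seat_positions = {"clicker_17": (120, 80), "clicker_42": (310, 95)}   # handset id -> (x, y) in the glasses' view

def overlay_labels(responses, seat_positions):
    """Return (x, y, text) tuples for an augmented-reality display."""
    labels = []
    for handset, answer in responses.items():
        if handset in seat_positions:
            x, y = seat_positions[handset]
            labels.append((x, y - 20, answer))   # place the label slightly above the detected position
    return labels

print(overlay_labels(responses, seat_positions))
# [(120, 60, 'B'), (310, 75, 'D')]
```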

I think it’s an interesting paper for showing a different aspect of lecturer-student interaction in the class. Quite what you do when you see that some students are incorrect is up to individual teaching scenarios.

The authors have a video explaining the paper in more detail, shown below.


Class Sizes and Student Learning

A recent discussion on an ALT email list raised the interesting question of whether there is a threshold for class sizes above which the student learning experience diminishes. Unfortunately, what followed was lots of “in my experience” Higginbotham-esque replies (with the exception of details of an interesting internal survey at NUIG), despite the original query specifically requesting evidence-based information.

You up there—in the blue and white jumper—what do you think the answer is?

A clackety-clack into Google Scholar throws up some interesting results on this topic. Unsurprisingly, the general trend is that increasing class size diminishes students’ educational experience, although the effect appears to be modest. There are two issues to consider: what is being measured to reflect something like “educational experience”, and which discipline is being considered.

What students think

In this regard, an interesting paper that caught my eye was one that considered the effect of class sizes in various disciplines (Cheng, 2011). This work sets aside student grades in favour of three evaluation scores derived from students: student learning, instructor recommendations, and course recommendations. Student learning was scored from responses on a 5-point Likert scale to the statement “I learned a great deal from this course”. (Many of you, myself included, may be tempted to run screaming for the hills at this point. What would students know?! Cheng does make the point that she is not saying this measure is superior to student outcomes, just that it is a different measure. She refers to Pike’s (1996) interesting paper on student self-reporting for a discussion of this. Hamermesh’s paper (2005) is also worth a read for the title alone; in short, good-looking lecturers get better ratings.)

Overall Data

Anyway, Cheng has amassed an impressive data set: “In total, the data span 24 departments, 2110 courses, 1914 instructors, and 10,357 observations from Fall 2004 to Spring 2009.” Before considering individual subjects, Cheng found that, at an overall level, each of her three ratings fell as class sizes increased (although the smallest class sizes received both the lowest and the highest marks). Cheng has further used her data to generate a model predicting how the student “learning” rating (measured as outlined above) and the instructor and course recommendations would change: for an increase of 50 in class size, these ratings would decrease by 1.4%, 1.3%, and 1.1% respectively. Of course, some disciplines will have smaller class sizes or may require more class-tutor interaction, so Cheng has drilled down into each discipline and determined whether it is negatively affected, positively affected, or indeterminately affected (i.e. mixed results).
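As a quick illustration of what those figures imply, here is a minimal sketch that simply scales the quoted decreases with enrolment (the linear scaling is my simplification for illustration, not a claim about Cheng’s actual model):

```python
# Back-of-the-envelope application of the decreases quoted above
# (1.4%, 1.3% and 1.1% per 50 extra students). Linear scaling is an
# assumption made here for illustration only.

decrease_per_50 = {
    "student learning": 1.4,
    "instructor recommendation": 1.3,
    "course recommendation": 1.1,
}

def predicted_change(extra_students):
    """Predicted change (in %) in each rating for a given enrolment increase."""
    return {rating: -d * extra_students / 50 for rating, d in decrease_per_50.items()}

print(predicted_change(50))    # roughly -1.4, -1.3 and -1.1
print(predicted_change(100))   # twice the decrease, under the linear assumption
```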

Subject Specific

In the sciences, chemistry, biology, physics and maths were unaffected by increasing class size in this model, as were history, philosophy, and visual arts. Almost half of the disciplines surveyed were inconclusive, while some showed negative effects, including some engineering disciplines, political science, and social science. No discipline benefited from increasing enrollment.

Chemistry

Cheng considers that ratings in theoretical subjects such as the sciences may have a low correlation with class size, depending instead on other factors such as the quality of the instructor or student effort. While I think there are flaws, or at best limitations, to this study (as Cheng acknowledges), it does open up interesting questions. The one I am interested in is the culture of teaching chemistry, which is fiercely traditional. That this data suggests an increasing class size would have little effect on the ratings measured here in a chemistry class would in turn suggest that its teaching is still very much based on a teacher-centred philosophy. Clickers, anyone?

References

  • Cheng, D. A. Effects of class size on alternative educational outcomes across disciplines, Economics of Education Review, 2011, 30, 980–990.
  • Hamermesh, D., & Parker, A. Beauty in the classroom: Instructors’ pulchritude and putative pedagogical productivity. Economics of Education Review, 2005, 24, 369–376.
  • Pike, G. R. Limitations of using students’ self-reports of academic development as proxies for traditional achievement measures, Research in Higher Education, 1996, 37, 89–114.


Implementation of Research Based Teaching Strategies

The traditional, almost folkloric, approach to teaching science stands in stark contrast to the evidence-based approach scientists take in their everyday research. This quote from Joel Michael* highlights the contrast:

As scientists, we would never think of writing a grant proposal without a thorough knowledge of the relevant literature, nor would we go into the laboratory to actually do an experiment without knowing about the most current methodologies being employed in the field. Yet, all too often, when we go into the classroom to teach, we assume that nothing more than our expert knowledge of the discipline and our accumulated experiences as students and teachers are required to be a competent teacher. But this makes no more sense in the classroom than it would in the laboratory!

In discussing the implementation of innovative teaching techniques, this post draws on the work of Charles Henderson, who spoke at a conference earlier this year on his analysis of the impact of physics education research on the physics community in the US. I think there are lessons for chemists from this work. (The underlying assumption here is that moving from traditional teaching methods based on information transmission to student-centred or active teaching improves student learning. This position is, I think, supported by a significant body of research.)

Change Mechanisms

The decision by a lecturer to use what Henderson calls Research-Based Instructional Strategies (RBIS) follows five stages, described by Rogers: (1) knowledge or awareness of the innovation; (2) persuasion about its effectiveness; (3) deciding to use the innovation; (4) implementing the innovation; and (5) confirmation to continue its use.

Awareness of RBIS obviously underlies this process. A 2008 survey by Henderson and Dancy of 722 physics faculty showed that 87% were familiar with at least one of 24 identified RBIS applicable to introductory physics, and 48% reported that they use at least one in their teaching. Time was reported as the most common reason why faculty did not implement more RBIS in their teaching.

A subsequent study by Henderson examined the individual stages of the implementation process in more detail and found that:

  • 12% of faculty had no awareness
  • 16% had knowledge but did not implement (Stages 1–2, above)
  • 23% discontinued after trying (Stages 3–4)
  • 26% continued use at a low level (Stage 5, 1–2 RBIS)
  • 23% continued at a high level (Stage 5, >3 RBIS)

Innovation Bottleneck

Henderson uses his data to argue that, on the whole, the physics education community does a good job of disseminating RBIS to educators. Just 12% of faculty had no awareness, and only about one in six of those who were aware made no attempt to implement any. It can therefore be argued that the fall-off in innovation comes at a later stage in the change process. Hence, efforts to encourage innovation should aim at the roughly one third of those who tried an RBIS but discontinued, and at those continuing at a low level, to build on their success. These groups may be a more suitable focus, both in terms of their share of faculty and because they were willing to give an innovation a go, compared with those who had knowledge but did not try any innovation.
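As a quick back-of-the-envelope check on those fractions, here is a minimal sketch using the survey percentages quoted above (my arithmetic for illustration, not a calculation from the paper):

```python
# Rough arithmetic check using the survey percentages quoted above.
groups = {
    "no awareness": 12,
    "knowledge but no implementation": 16,
    "discontinued after trying": 23,
    "continued at a low level": 26,
    "continued at a high level": 23,
}

aware = 100 - groups["no awareness"]                      # 88% of faculty were aware of at least one RBIS
tried = (groups["discontinued after trying"]
         + groups["continued at a low level"]
         + groups["continued at a high level"])           # 72% tried at least one

print(groups["knowledge but no implementation"] / aware)  # ~0.18, about one in six of the aware group
print(groups["discontinued after trying"] / tried)        # ~0.32, about one third of those who tried
```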

Teasing this out appears to be difficult. The decision to continue seems to come down to personal characteristics, such as the desire to find out more, and gender (females were twice as likely to continue as males, although the paper does dispel some traditional conceptions about who is innovative and who isn’t!).

However, the paper does list some practical measures that can be taken:

  • Practice literature can present an overly rosy picture of implementation. When someone tries an innovation and hits an unexpected hurdle (student resistance and complaints, concerns over breadth of content, outcomes not as expected), there is a sense that it isn’t working, and the innovation is discontinued. It is therefore important that practice literature gives a full and honest account of implementation.
  • Implementations are often modified to suit a person’s own circumstances, and in that modification the effectiveness of the innovation can be lost. Pitfalls and important issues should therefore be highlighted at the dissemination stage (workshops, talks, etc.).
  • There is evidence that an innovation is implemented more successfully if it is supported by its designer during the implementation phase.

Now, who wants to do this analysis for UK/Ireland chemistry?!

References

Henderson, C., Dancy, M. H., & Niewiadomska-Bugaj, M. (2010). Variables that Correlate with Faculty Use of Research-Based Instructional Strategies. In Proceedings of the 2010 Physics Education Research Conference, 169–172.

Henderson, C., & Dancy, M. (2009). The Impact of Physics Education Research on the Teaching of Introductory Quantitative Physics in the United States, Physical Review Special Topics: Physics Education Research, 5(2), 020107.

*Thanks to my colleague Claire Mc Donnell for giving me this quote: Michael, J. (2006). Advances in Physiology Education, 30, 159–167.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.


The rise and rise of clickers in chemistry

As recently as 2008, a review of clickers in Chemistry Education Research and Practice had difficulty finding reports of their use in chemistry lecture rooms. In the intervening years, the increase in usage has been nothing short of meteoric. It’s interesting to survey the recent literature to consider how clickers are used in chemistry.

Simple response

The first category is those who use clickers in a simple check of the class’ understanding of a topic – do they know x? King (JCE, 2011, doi: 10.1021/ed1004799) describes the use of clickers to allow a class to identify the ‘muddiest point’, with the most common cause of difficulty being the subject of a review in the following lecture.

Initiate class/peer discussion

The second type of usage is to use clickers to gauge opinion from the class, often on a misconception, and to use the initial class responses as a basis for discussion, with possible reassessment. Wagner (JCE, 2009, 86(11), 1300) describes the use of clickers in this manner, for example asking students which of the following substances has the lowest LD50 value [aspirin; DDT; nicotine; caffeine; ethanol], and then initiating a class discussion based on the student responses. Mazur’s Peer Instruction is based on this approach.

Considering sequences

Ruder and Straumanis have a very nice paper (JCE, 2009, 86(12), 1392) on using the multi-digit response function of some clicker handsets so that students may input a sequence. Two examples illustrate the concept. In the first, students have to select which two precursors, from two lists of reagents (in this example Michael donors and acceptors), they would choose in order to prepare a desired product. In the second, students are asked to select the reagents they would add, in the correct sequence, to produce a desired product. These questions offer two advantages: they allow for a much larger set of possible answers, which minimises lucky guessing, and they require students not only to know an answer but to consider that answer in the context of a whole problem. Clickers that do not allow numerical answers can still accommodate this approach: several more incorrect options can easily be generated and, if chosen cleverly, common wrong answers can be included. In fact, the authors say that they only show the responses for the few most common incorrect answers.

This approach is also used by these authors to test students on curly arrow mechanisms: carbon atoms in a diagram are numbered, and students describe their understanding of the mechanism by entering multi-digit responses to represent it. It’s clever, but some of the very extensive examples described seem a bit elaborate to expect students to be able to “code” their curly arrow mechanism into numbers. However, it shows how far the technology could be pushed. A similar approach, using numbered carbons on complex organic structures as a basis for numerical entry, is described by Flynn in her work on teaching retrosynthetic analysis (JCE, 2011, doi: 10.1021/ed200143k).
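To illustrate how such multi-digit answers might be handled, here is a minimal hypothetical sketch (my own invented reagents and responses, not an example from either paper): reagents are numbered, students key in the order of addition as a digit string, and only the most common incorrect sequences are then displayed.

```python
from collections import Counter

# Hypothetical illustration of a multi-digit sequence question: each numbered
# reagent corresponds to a digit, and a student keys in the order of addition.
reagents = {1: "NaBH4", 2: "PCC", 3: "MeMgBr", 4: "H3O+ workup"}
correct_sequence = "234"   # invented for illustration: oxidise, add the Grignard reagent, then work up

submitted = ["234", "324", "234", "134", "324", "234"]   # made-up class responses

tally = Counter(submitted)
print("Correct:", tally[correct_sequence], "out of", len(submitted))

# Show only the most common incorrect sequences, as the authors describe doing.
wrong = Counter({seq: n for seq, n in tally.items() if seq != correct_sequence})
print("Most common wrong answers:", wrong.most_common(2))
```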

What and why?

My own use of clickers follows Treagust’s work (e.g. CERP, 2007, 8(3), 293–307), where students are asked two-stage multiple-choice questions. The first stage asks for a simple response, and the second asks students to select why they chose that response. This work by Treagust is very clever, as it allows students who may know or guess the correct answer to really challenge themselves on whether they understand why they know what they know. The wrong answers in Treagust’s work are developed from literature reports of misconceptions, and he has been very generous in sharing examples of these in the past.
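As a sketch of how the two stages can be combined (my own illustration of the general two-tier idea; the question content and labels are invented, not Treagust’s materials), cross-tabulating the answer tier against the reason tier quickly shows who got the right answer for the wrong reason:

```python
from collections import Counter

# Hypothetical two-tier responses: (answer choice, reason choice) per student.
responses = [("A", "2"), ("A", "3"), ("B", "2"), ("A", "2"), ("C", "1"), ("A", "3")]

correct = ("A", "2")   # the correct answer paired with the correct reasoning

pairs = Counter(responses)
fully_correct = pairs[correct]
right_answer_wrong_reason = sum(n for (answer, reason), n in pairs.items()
                                if answer == correct[0] and reason != correct[1])

print("Answer and reason both correct:", fully_correct)          # 2
print("Right answer, wrong reason:", right_answer_wrong_reason)  # 2
```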

In the lab

My colleagues Barry Ryan and Julie Dunne have completed some work using clickers to assess pre- and post-lab activities. You can find out more about that here.

Do you use clickers? If so, I’d be interested to hear how in order to compile a “Chemist’s User’s Guide”.
