I’ve spent the last two weeks in Australia thanks to a trip to the Royal Australian Chemical Institute 100th Annual Congress in Melbourne. I attended the Chemistry Education symposium.
So what is keeping chemistry educators busy in this part of the world? There are a lot of similarities, but also some differences. While we wrestle with the ripples of TEF and the totalitarian threat of learning gains, around here the acronym of fear is TLO: threshold learning outcomes. As I understand it, these are legally binding statements that university courses must ensure students graduate with the stated outcomes. Institutions are required to demonstrate that these learning outcomes are part of their programmes and to identify the level at which they are assessed. This all sounds very good, except that individuals on the ground are now focussing on identifying where these outcomes are being addressed. Given that they are quite granular, this appears to be a huge undertaking and is raising questions like: where, and to what extent, is teamwork assessed in a programme?
This process does appear to have promoted a big interest in broader learning outcomes, with lots of talks on how to incorporate transferable skills into the curriculum, and some very nice research into students’ awareness of their skills. Badges are of interest here and may be a useful way to document these learning outcomes in a way that doesn’t need a specific mark. Labs were often promoted as a way of addressing these learning outcomes, but I do wonder how much we can use labs for learning beyond what is surely their core purpose of teaching practical chemistry.
Speaking of labs, there was some nice work on preparing for laboratory work and on incorporating context into laboratory work. There was (to me) a contentious proposal that there be a certain number of laboratory activities (such as titrations) that are considered core to a chemist’s repertoire, and that graduation should not be allowed until competence in those core activities is demonstrated. Personally I think chemistry is a broader church than that, and it will be interesting to watch that one progress. A round-table discussion spent a good bit of time talking about labs in light of future pressures of funding and space; and it does seem that we are still not quite clear about what the purpose of labs is. Distance education – which Australia has a well-established head start in – was also discussed, and I was really glad to hear someone with a lot of experience in this say that it is possible to generate a community with online learners, but that it takes a substantial personal effort. The lab discussion continued to the end, with a nice talk on incorporating computational thinking into chemistry education, with suggestions on how already reported lab activities might be used to achieve this.
Of course it is the personal dimension that is the real benefit of these meetings, and it was great to meet some faces old and new. Gwen Lawrie wasn’t on the program as the announcement of her award of the Education Division Medal was kept secret for as long as possible. I could listen to Gwen all day, and her talk had the theme “Chasing Rainbows”, which captured so eloquently what it means to be a teacher-researcher in chemistry education, and in a landscape that continues to change. [Gwen’s publications are worth trawling] Gwen’s collaborator Madeline Schultz (a Division Citation Winner) spoke both about TLOs and about reflections from respected practitioners on their approaches to teaching chemistry – an interesting study using a lens of pedagogical content knowledge. From Curtin, I (re-)met Mauro Mocerino (who I heard speak in Europe an age ago on clickers), who spoke here of his long-standing work on training demonstrators. Also from that parish, it was a pleasure to finally meet Dan Southam. I knew Dan only through others, as a man “who gets things done”, so it was lovely to meet him in his capacity as Chair of the Division and this symposium, and to see that his appellation rang true. And it was nice to meet Elizabeth Yuriev, who does lovely work exploring how students approach physical chemistry problems and on helping students with problem-solving strategies.
There were lots of other good conversations and friendly meetings, demonstrating that chemistry educators are a nice bunch regardless of location. I wasn’t the only international interloper; Aishling Flaherty from University of Limerick was there to spread her good work on demonstrator training – an impressive programme she has developed and is now trialling in a different university and a different country. And George Bodner spoke about much of his work studying how students learn organic chemistry, and in particular the case of “What to do about Parker”. The memory of Prof Bodner sitting at the back of my talk looking at my slides through a telescopic eyepiece is a happy one that will stay with me for a long time. Talk of organic chemistry reminds me of a presentation describing the app Chirality-2, which covers lots of aspects of revising organic chemistry and looked really great.
My slightly extended trip was because I had the good fortune to visit the research group of Prof Tina Overton, who moved to Melbourne a few years ago, joining native Chris Thompson in growing the chemistry education group at Monash. It was an amazing experience immersing in a vibrant and active research group, who are working on things ranging from student critical thinking and chemists’ career aspirations to awareness of transferable skills and the process and effect of transforming an entire laboratory curriculum. I learned a lot, as I always do from Tina, and am extremely grateful for her very generous hosting. I leave Australia now, wondering if I can plan a journey in 2018 for ICCE in Sydney.
Anyone involved in e-learning will know of the cognitive theory of multimedia learning, which draws together the information-processing model (dual coding), cognitive load theory (working memory), and the notion of active processing. You can read a little more of this in this (old) post.
Anyway, for most of us who don’t do full-on e-learning, Mayer’s principles have value when we make things like videos or multimedia that we wish the students to interact with outside of their time with us. As such, Mayer’s principles, as reported in The Cambridge Handbook of Multimedia Learning, are well cited. Mayer has just published an update (HT to the wonderful new Twitter feed: https://twitter.com/CogSciLearning), and because I have nothing better to do than twiddle my thumbs for the summer (thank you Adonis), I made a graphic summarising the 12 principles he describes. Many seem obvious but that is probably no bad thing; as well as thinking about videos, there might be some lessons about PowerPointing here too. Click on the image to embiggen.
I’m always a little envious when people tell me they were students of chemistry at Glasgow during Alex Johnstone’s time there. A recent read from the Education in Chemistry back-catalogue has turned me a shade greener. Let me tell you about something wonderful.
The concept of working memory is based on the notion that we can process a finite number of new bits in one instance – originally thought to be about 7, now about 4. What these ‘bits’ are depends on what we know. So a person who only knows a little chemistry will look at a complex organic molecule and see lots of carbons, hydrogens, etc. joined together. Remembering it (or even discussing its structure/reactivity) would be very difficult – there are too many bits. A more advanced learner may be able to identify functional groups, where a group is an assembly of atoms in a particular pattern; ketones, for example, being an assembly of three carbons and an oxygen with particular bonding arrangements. This reduces the number of bits.
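The effect of chunking can be sketched in a few lines of code (a toy illustration with made-up representations, not data from any study): the same molecule held as individual atoms versus as chunked functional groups makes very different demands on a ~4-item working memory.

```python
# Toy illustration of chunking (hypothetical representations).
# A novice sees acetone atom-by-atom; an expert sees functional-group chunks.
novice_view = ["C", "H", "H", "H", "C", "O", "C", "H", "H", "H"]  # 10 bits
expert_view = ["methyl", "ketone", "methyl"]                      # 3 chunks

print(len(novice_view))  # 10 separate items to hold in working memory
print(len(expert_view))  # 3 chunks, within the ~4-item limit
```

The items themselves are arbitrary labels; the point is simply that expertise shrinks the count of things that must be held at once.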
Functional groups are important for organic chemists as they will determine the reactivity of the molecule, and a challenge for novices is first being able to identify the functional groups. To help students practise this, Johnstone developed an innovative approach (this was 1982): an electronic circuit board.
The board was designed so that it was covered with a piece of paper listing all functional groups of interest on either side, and then an array of molecules in the middle, with functional groups circled. Students were asked to connect a lead from the functional group name to a matching functional group, and if they were correct, a lightbulb would flash.
A lightbulb would flash. Can you imagine the joy?!
If not, “back-up cards” were available so that students could review any that they connected incorrectly, and were then directed back to the board.
The board was made available to students in laboratory sessions, and they were just directed to play with it in groups to stimulate discussion (and so as “not to frighten them away with yet another test”). Thus students were able to test out their knowledge, and if incorrect they had resources to review and re-test. Needless to say the board was very popular with students, such that more complex sheets were developed for medical students.
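The logic of Johnstone’s board can be sketched in software (a hypothetical reconstruction – the molecules and answer key below are invented for illustration, not taken from the original sheets): connect a circled group to a name, and the “lightbulb” flashes only on a correct match.

```python
# A software sketch of the matching-board logic (invented examples).
# Each circled group on the paper overlay is keyed to its correct name.
answer_key = {
    "CH3-CO-CH3 (circled C=O)": "ketone",
    "CH3-CHO (circled CHO)": "aldehyde",
    "CH3-COOH (circled COOH)": "carboxylic acid",
}

def connect(molecule, chosen_name):
    """Return True (the bulb flashes) if the lead connects a circled
    group to its matching name; False sends the student to a back-up card."""
    return answer_key.get(molecule) == chosen_name

print(connect("CH3-CHO (circled CHO)", "aldehyde"))  # True - the bulb flashes
print(connect("CH3-CHO (circled CHO)", "ketone"))    # False - back-up card
```

The circuit board implemented exactly this lookup in hardware: a closed circuit between matching contacts lit the bulb.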
Because this is 1982 and pre-… well, everything, Johnstone offers instructions for building the board, developed with the departmental electrician. Circuit instructions for a 50 × 60 cm board were given, along with details of mounting the various plans of functional groups onto the pegboard for assembly. I want one!
A. H. Johnstone, K. M. Letton, J. C. Speakman, Recognising functional groups, Education in Chemistry, 1982, 19, 16-19. RSC members can view archives of Education in Chemistry via the Historical Collection.
This week is All Aboard week in Ireland, themed “Building Confidence in Digital Skills for Learning”. I am speaking today in the gorgeous city of Galway on this topic, and came across this paper in a recent BJET which gives some useful context. It summarises interviews with 33 Australian academics from various disciplines on the topic of why they used technology in assessment. While the particular lens is on assessment, I think there are some useful things to note for those espousing the incorporation of technology generally.
Four themes emerge from the interviews.
The first is that there is a perceived cost-benefit analysis at play; the cost of establishing an assessment process (e.g. quizzes) was perceived to be offset by the benefit that it would offer, such as reducing workload in the long-run. However, some responses suggest that this economic bet didn’t pay off, and that lack of time meant that academics often took quick solutions or those they knew about, such as multiple choice quizzes.
The second theme is that technology was adopted because it is considered contemporary and innovative; this suggests a sense of inevitability of using tools as they are there. A (mildly upsetting) quote from an interview is given:
“It would have been nice if we could have brainstormed what we wanted students to achieve, rather than just saying ‘well, how can ICT be integrated within a subject?’”
The third theme was one around the intention to shape students’ behaviour – providing activities to guide them through learning. There was a sense that this was expected and welcomed by students.
Finally, at the point of implementation, significant support was required, which often wasn’t forthcoming, and because of this, and other factors, intentions had to be compromised.
The authors use these themes to make some points about the process of advocating and supporting those integrating technology. I like their point about “formative development” – rolling out things over multiple iterations and thus lowering the stakes. Certainly my own experience (in hindsight!) reflects the benefit of this.
One other aspect of advocacy that isn’t mentioned but I think could be is to provide a framework upon which you hang your approaches. Giving students quizzes “coz it helps them revise” probably isn’t a sufficient framework, and nor is “lecture capture coz we can”. I try to use the framework of cognitive load theory as a basis for a lot of what I do, so that I have some justification for when things are supported or not, depending on where I expect students to be at in their progression. It’s a tricky balance, but I think such a framework at least prompts consideration of an overall approach rather than a piecemeal one.
There’s a lovely graphic from All Aboard showing lots of technologies, and as an awareness tool it is great. But there is probably a huge amount to be done in terms of digital literacy, regarding both the how, but also the why, of integrating technology into our teaching approaches.
As part of our ongoing development of an electronic laboratory manual at Edinburgh, I decided this year to incorporate discussion boards to support students doing physical chemistry labs. It’s always a shock, and a bit upsetting, to hear students say that they spent very long periods of time on lab reports. The idea behind the discussion board was to support them as they were doing these reports, so that they could use the time they were working on them in a more focussed way.
The core aim is to avoid the horror stories of students spending 18 hours on a report, because if they are spending that time on it, much of it must be figuring out what the hell it is they are meant to be doing. Ultimately, a lab report is a presentation of some data, usually graphically, and some discussion of the calculations based on that data. That shouldn’t take that long.
The system set-up was easy. I had asked around and heard some good suggestions for external sites that did this well (I can’t remember it now, but one was suggested by colleagues in physics, where questions could be up-voted). But I didn’t anticipate so many questions that I would be able to answer only the most pressing, and I didn’t want “another login”, so I just opted for Blackboard’s native discussion board. Each experiment got its own forum, along with a forum for general organisational issues.
A postgrad demonstrator advised me to allow the posts to be made anonymously, and that seemed sensible. Nothing was being graded, and I didn’t want any reticence about asking questions. Even anonymously, some students apologised for asking what they deemed “silly” questions, but as in classroom scenarios, these were often the most insightful. Students were told to use the forum for questions, and initially, any questions by email were politely redirected to the board. In cases close to submission deadlines, I copied the essential part of the question, and pasted it to the board with a response. But once reports began to be due, the boards became actively used. I made sure in the first weekend to check in too, as this was likely going to be the time that students would be working on their reports.
The boards were extensively used. About 60 of our third years do phys chem labs at a time, and they viewed the boards over 5500 times in a 6 week period. Half of these views were on a new kinetics experiment, which tells me as organiser that I need to review that. Second years have just begun labs, and already in a two week period, 140 of them viewed the board 2500 times. The number of posts is of course nowhere near this, suggesting that most viewers are “lurkers”, and probably most queries are common. Since students can post anonymously, I have no data on what proportion of students were viewing the boards. Perhaps it is one person going in lots, but given the widespread viewership across all experiments, my guess is it isn’t. The boards were also accessible to demonstrators (who correct all the reports), but I’ve no idea if they looked at them.
The reception from students has been glowing, so much so that it is the surprise “win” of the semester. (Hey, look over here at all these videos I made… No? Okay then!) Students have reported at school council, staff student liaison committees, anecdotally to me and other staff that they really like and appreciate the boards. Which of course prompts introspection.
Why do they like them? One could say that of course students will like them, I’m telling them the answer. And indeed, in many cases, I am. The boards were set up to provide clear guidance on what is needed and expected in lab reports. So if I am asked questions, of course I provide clear guidance. That mightn’t always be the answer, but it will certainly be a very clear direction to students on what they should do. But in working through questions and answers, I stumbled across an additional aspect.
One more thing
Everyone’s favourite detective was famous for saying: “oh, just one more thing”. I’ve found in the lab that students are very keen and eager to know what purpose their experiment has in the bigger context: where it might be used in research, something of interest in it beyond the satisfaction of proving, once again, some fundamental physical constant. In honesty, it is a failing on our part and in the “traditional” approach that we don’t use this opportunity to inspire. So sometimes in responding to questions, I would add in additional components to think about – one more thing – something to further challenge student thought, or to demonstrate where the associated theory or technique in some experiment we were doing is used in research elsewhere. My high point was when I came across an experiment that used exactly our technique and experiment, published in RSC Advances this year. This then sparked the idea of how we can develop these labs more, the subject of another post.
Again I have no idea if students liked this or followed up these leads. But it did ease my guilt a little that I might not be just offering a silver spoon. It’s a hard balance to strike, but I am certainly going to continue with discussion boards for labs while I work it out.
I’m attending the JISC Learning Analytics network meeting (information), which is giving a good overview of the emerging development of learning analytics and its integration into higher education. Learning analytics aims to harness data about students’ interactions and engagement with a course – whatever can be measured – and use that in an intelligent way to inform and empower students about their own academic journey. Of course, one of the major questions being discussed here is: what data is relevant? This was something I explored when looking at developing a model to help tutors predict student performance and identify at-risk students (see CERP: 2009, 10, 227), but things have moved on now, and the discipline of learning analytics looks to automate a lot of the data gathering and provide sensible data reporting to both staff and individual students.
There was an interesting talk from Gary Tindell at the University of East London, which described the roll-out over time of a learning analytics platform and might be of interest to others considering integrating one into their own institution. He identified five phases:
Phase 1: collecting data on student attendance via a swipe-card system. This data can be broken down by school, module, event, and student. They subsequently developed an attendance reporting app (assuming app here means a web-app). This app identifies students whose attendance falls below a 75% threshold and flags interventions via the student retention team. Unsurprisingly, there was a correlation between student attendance and module performance.
Phase 2: a student engagement app for personal tutors: this pulls together data on student attendance, module activity, use of the library, e-book activity, coursework submission, assessment profile, etc., and aims to provide tutors with a broader profile of student engagement.
Phase 3: development of an app that integrates all this data and calculates a level of student engagement based on a weighting system for identifying at-risk students (those at risk of leaving). The weighting can be changed depending on what is considered most important. It also allows students to see their level of engagement compared with their cohort.
Phase 4: research phase – the intention is to use data to inform the weightings applied in the student engagement app. Initial analysis found the highest correlations for attendance and average module marks. However, more interestingly, multiple regressions suggest all engagement measures are significant. They have developed a quadrant-based model that identifies low engagers through to high engagers and provides an indicator of student performance. One of the key measures is previous student performance – but is that a student engagement measure?
Phase 5: currently in progress – developing three different sets of visualisations of student engagement:
– comparing individual engagement with the UEL school and course;
– providing the student with an indication of where they are located in terms of student engagement;
– providing an indication of the distance a student has to travel to be able to progress to another quadrant.
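The weighting system and quadrant model described in the phases above can be sketched roughly as follows. Everything here – the measures, the weights, the thresholds, and the example student – is invented for illustration; it is not UEL’s actual model.

```python
# Hypothetical sketch of a weighted engagement score and quadrant model.
# Measures are normalised to 0-1; the weights are adjustable, as in Phase 3.
weights = {"attendance": 0.4, "vle_activity": 0.2,
           "library_use": 0.2, "submission": 0.2}

def engagement_score(student):
    """Weighted sum of the engagement measures."""
    return sum(weights[m] * student[m] for m in weights)

def quadrant(student, predicted_mark, threshold=0.5, pass_mark=0.4):
    """Place a student in a quadrant by (engagement, predicted performance)."""
    high_eng = engagement_score(student) >= threshold
    high_perf = predicted_mark >= pass_mark
    return {(True, True): "high engager / on track",
            (True, False): "high engager / at risk",
            (False, True): "low engager / on track",
            (False, False): "low engager / at risk"}[(high_eng, high_perf)]

student = {"attendance": 0.9, "vle_activity": 0.6,
           "library_use": 0.5, "submission": 1.0}
print(round(engagement_score(student), 2))     # 0.78
print(quadrant(student, predicted_mark=0.65))  # high engager / on track
```

Changing the weights dictionary changes who gets flagged, which is exactly why Phase 4’s research into which measures actually predict performance matters.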
The next steps in the project are about aiming to answer the following questions:
– Can we accurately predict student performance based on a metric?
– Can providing students with information on their level of engagement really change study patterns?
It’s the last point that particularly interests me.
I am giving a keynote at the AHEAD conference in March, and the lecture itself will be a flipped lecture on lecture flipping. The audience will be a mixture of academics and support staff from all over Europe and beyond, and the idea is that they will watch the presentation in advance (hmmmm) and we will then use the time during the actual conference presentation to discuss emerging themes. I will be highly caffeinated.
In order to address some of the issues around lecture flipping that face most educators, I would be interested to hear thoughts from lecturers and support staff on the idea of lecture flipping. Any and all of the following… please do comment or tweet me @seerymk:
What do you think the potential of flipping is?
What concerns you about the model?
Is it scalable?
In terms of resources, have you any thoughts on the materials prepared for lecture flipping, in advance of and/or for lectures?
How do you consider/reconsider assessment in light of lecture flipping?
Clickers are routinely used to survey a class on their understanding of topics or to test their knowledge with quizzes, and as technology has developed, there have been clever ways of doing this (See: The Rise and Rise…). One issue that arises is that, as lecturers, we don’t have a convenient way to know what individual students think, or what their answer is.
The system works by way of the lecturer wearing glasses that scan the room as each response is entered. The technology (while very clever) is still very rudimentary, and no-one in their right mind would want to look like this in their classroom, but as Google Glasses or equivalents take off, who knows what possibilities there will be in the coming decade.
I think it’s an interesting paper for showing a different aspect of lecturer-student interaction in the class. Quite what you do when you see that some students are incorrect is up to individual teaching scenarios.
The authors have a video explaining the paper in more detail, shown below.
The Resource Pack aims to show how the WordPress web publishing platform (WordPress.org) can be a useful tool in creating and presenting e-portfolios. It aims to show what can be done technically to integrate the various elements of an e-portfolio: the documentation of learning, conversation with peers and tutors, and presentation of the ‘product’ for assessment and/or feedback. [Jan 2012] You can download a PDF of this guide here: WordPress for E-Portfolios.
E-portfolios are a popular method of documenting and presenting learning that has occurred in a module. They are promoted for this purpose as they provide a means to record both the process of learning—thoughts, learning, and reflections that occurred during a learning experience—as well as the product—the showcasing of that learning for review or assessment. Components of an e-portfolio may include digital media, comments and reflection, statement pages such as a statement of philosophy & prior learning, etc. Often these components may be categorised by themes or modules on a particular programme of study. The portfolio as a whole can lead to a large bank of digital objects, which, during or after the period of study, will need to be presented in some coherent way. This Resource Pack aims to provide suggestions and mechanisms to assist in compiling and presenting these digital objects in an easy and scalable manner using WordPress.
WordPress is web-based software used for publishing websites. Rather confusingly, there are two versions: WordPress.com, which is mainly used as a blogging platform, where all material is hosted by WordPress; and WordPress.org, which has extensive additional features (Plugins) that extend it beyond blogging use. WordPress.org is self-hosted; the user provides their own webspace for putting the material online. This document relates to WordPress.org.
Installing and Set-Up
WordPress.org is self-hosted, so the user needs to arrange their own webspace (I use Blacknight, who are reasonably priced and reliable, and have WordPress pre-installed) or arrange server space with their institution. Once installed, the Administration (Admin) page can be accessed through the page www.example.com/home/wp-admin, where the web address www.example.com/home is the URL established during installation. This prompts for a login, which again was set during installation.
Once logged in, the user is presented with the Dashboard. This can be a little daunting at first sight! However, the key elements that are used are Posts and Pages. These are discussed below.
When starting out, many users do not want their portfolio to be viewed by anyone else, except perhaps their tutor and/or peers. There are various levels of privacy available.
Search Engine Privacy: The built-in privacy feature with WordPress allows you to block search engines finding your portfolio. Therefore, to access your site, someone would have to know the exact URL. This is achieved by selecting Settings > Privacy > “I would like to block search engines, but allow normal visitors”. This will add the phrase “Search Engines Blocked” to your dashboard, as shown in the example above. However, the portfolio is still available to view on the web for anyone who knows (or guesses) the URL.
Total Lockdown: In order to have the portfolio available only to whomever the user decides, it is necessary to download a Plugin. There are several available. A rather blunt but effective one is “WordPress Password”, which requires a site-wide password to be entered before viewing. To obtain this plugin, select Plugins > Add New > and search by Term for the Plugin name. Click Install to download the plugin to the website and then Activate to make the plugin active. You can deactivate the plugin at any time in the Plugins menu. Now any visitor to your site will have to enter this password to access the portfolio. Other methods of restricting access (for example by requiring User Log In) are available.
Selective restrictions: It may be desirable to have certain elements of the portfolio accessible, and others restricted. There are a range of options available here, and they are discussed in the Presentation section below.
Arranging Content: Pages and Posts
Content can be added to WordPress in two ways: on a Page or on a Post. Pages are permanent, generally with content that remains fixed (although we will exploit some useful page plugins, below). Therefore pages can be considered as the main structure of the website. For example, there may be a home page, an About page, and other pages relevant to the portfolio—perhaps sub-home pages for each module in a programme, or pages covering various components such as a Teaching Philosophy, Prior Learning and so on. While pages can be added at any time, it is worthwhile planning out what pages you intend to include in your portfolio. This is discussed below.
Posts are updates to the website that are time stamped, such as in a blog. Usually posts appear on one section—the blog—in reverse chronological order. In a portfolio, posts are usually dynamic in nature—blog posts considering thoughts and reflections in time as the user progresses through their learning. They are usually used to demonstrate evidence of engaging in the process of learning. As posts are usually written in time, they may not form a sequential series of thoughts related to one module or one concept, rather they reflect what the user was thinking about at any one time. Therefore in the presentation element of a portfolio, posts would not be read in order, and need to be available to be called up as required at various points in pages or other presentation elements.
Building the Portfolio
Figure 3 shows a simplified template for an e-portfolio. In this scenario, there are two modules being shown in the e-portfolio, along with a Teaching Philosophy, a blog and an About page. Module 1 has several sub-pages; a page which will selectively compile all blog posts related to a theme called “Category B” (and only this category), a sub-page containing a digital artefact—e.g. an essay, audio, picture, video and so on—and a sub page containing a Bibliography of web links or links to journal articles. Finally, there is the Blogroll, which has all the posts made on the blog. The details about how to construct this architecture are provided below. Although this is a simple scenario, it covers most of what would be required in a scaled-up version of an e-portfolio.
First, we will create the pages required. After installing WordPress, two pages will be apparent—a so-called Home Page (although this is actually the Blogroll) and an About page. Therefore we need to first create five more top-level pages according to Figure 3: an actual Home Page, and pages for Module 1, Module 2, Teaching Philosophy, and the Blog. To create a Page, select Pages > Add New and type the page name in the title bar. There is no need to add content yet, but if you like you can type in a short page description for each of your pages created, except for the Blog page, which should be kept blank. We now have six pages—the five created and the already present About page.
Module 1 in our template has three pages associated with it. We create an additional three pages, as described above, except in this case we also need to configure these three pages so that they are recognised as sub-pages of Module 1. To do this, we select the Module 1 page as the Parent page in the Page Attributes box, usually in the left-hand column of the New Page. If you forget to do this now, you can always return at any stage to reconfigure a page by selecting Pages and choosing Edit for the page you wish to change from the list of pages shown. For each of the three new pages created—Module 1 Blogs, Artefact, Bibliography—we assign the Module 1 page as the parent in the Page Attributes option.
We have now created all of the Pages required according to our template. Clicking on Pages will show the list of pages in the Portfolio. You will also notice that the sub-pages are indented in a list beneath their parent page. Of course it is possible to extend this to sub-sub-pages and beyond by following a similar approach as described.
Before moving on to populate each page, there is one default option that needs to be changed. After installation, WordPress automatically uses the blogroll as the homepage. As you will not yet have made any blog posts, it is probably a generic “Hello World” post that appears on your homepage. To force WordPress to go to your new, real homepage on typing in the portfolio URL, select Settings > Reading. In the options presented, choose “A static page” for the Front Page display and set that Front Page to “Home”, the homepage just created above. We now also need to specify where the blog posts will go, and we can use “Blog”, the blog page specially created above, to house these. After saving changes, when you type in the portfolio URL, the site should go to your new actual homepage.
It is worth reviewing progress made so far on the front end of the website. Depending on the theme installed (see below), your pages will be listed along the top or down the side of the website. Usually, sub-pages are not shown or are activated by a drop-down menu. We will see later how additional customised menus can be added alongside these page links, which are usually included by default. Click around the website to ensure that pages are as you expect them to be.
Up to this point, we have concentrated on the site architecture – the underlying structure of the portfolio. Before progressing, we need to make some blog posts (assuming you want to use this feature). Adding a post is the same as adding a page, except we select Posts > Add New. Give the post a name, e.g. “Module 1 Week 1 Reflections”, and type in some content. Now, before saving, we need to categorise the post. On the right-hand side of the page, there is a Category option. Click Add New Category and type in the name of your category – in the example shown in Figure 5, I have called it “Discussion Boards”. Click “Add New Category”, un-tick “Uncategorized” and, when you’ve finished typing your blog post, hit “Update”.
While you are here, make a second blog post with some nonsense content (you can delete it later), called “Module 1 Week 2 Reflections”, and assign it the category “VLEs”. In our template above, you will notice that we wish to selectively pull some posts onto one of the Module 1 pages. This selection will be achieved using categories. We will return to this below.
In order to demonstrate the bibliography page, we need to add some weblinks. To do this, select Links > Add New. Add 3-4 web links, giving them different link categories. If you wish to link to a journal article, the most useful way is to link to the article on the journal’s page or give the DOI link. The links on the bibliography page can then be easily incorporated using the WP Render Blogroll Links plugin, described below.
Putting it all Together
We have now completed the architecture and additional components required to finish the template shown in Figure 3. While it seems laborious, everything discussed can be done as you develop your portfolio, and the advantage of doing it as you go along is that when you come to the presentation stage, everything is automagically in place. You can very easily decide what is viewable and where it goes.
Several pages (and posts) in our template just require normal text, images and other digital resources. These can be included using the standard WordPress editor—simply type in the text you wish, or click on the add image/media buttons to include pictures or media, as with any web editor.
The bulk of the portfolio is usually built in this way. For embedded material, such as YouTube videos, copy the embed code from that website and paste it into the editor. YouTube offers a range of embed sizes (e.g. 600 x 400) and the one you select will depend on the theme you choose for your portfolio (see below). Avoid choosing too large a size (> 600px in width): it may fit your theme and look good on your widescreen, but remember your viewers, who may have to look at it on a tiny tablet!
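As a sketch, the embed code copied from YouTube’s Share > Embed option looks something like the following (the video ID and dimensions here are placeholders, not a real video):

```html
<!-- Hypothetical example: replace VIDEO_ID with the ID from YouTube's embed dialog -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/VIDEO_ID"
        frameborder="0" allowfullscreen></iframe>
```

Paste it into the editor’s HTML/Text view rather than the visual view, so that WordPress does not strip the tags.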
Two pages are left to complete: the selective category page and the bibliography-by-category page. These are best achieved by using plugins. Plugins are extra bits of code written by third parties that perform specific tasks in WordPress. There are thousands, and they vary in quality. Some useful ones for e-portfolios are listed below. It is not necessary to incorporate these straight away if you are a beginner, but the use of categories in blog posts and links, above, means that when you do wish to include them, they will require very little work.
Useful Plugins for E-Portfolios
Some useful plugins are listed below. While it is not necessary to include all of these straight away, they do make for an easier life when your portfolio gets quite big.
The List Category Posts plugin allows you to list blog posts on a page by category. For example, in our template, we wanted to list only one particular category on a page. After downloading and activating List Category Posts, posts for any category can be listed simply by typing the shortcode on the required page:
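As a sketch, and assuming WordPress generated the slug discussion-boards for the “Discussion Boards” category created above (see the note on slugs below), the shortcode takes roughly this form:

```
[catlist name=discussion-boards]
```

The plugin page linked in the notes lists the full range of attributes, including listing by category ID instead of name.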
This will list on the page all of the posts assigned the category “Discussion Boards”. Of course it is possible to just type in these links manually, but the advantage of this method is that the page automatically updates every time you write a new blog post and assign it the “Discussion Boards” category.
WP Render Blogroll Links is a similar plugin, except that it organises links by their link category. To insert a list of links for any category anywhere on a page or post, enter the shortcode:
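A sketch of what that shortcode might look like, assuming a link category called “Discussion Board”; the shortcode name and attribute here are my assumption, so check the plugin page linked in the notes for the precise syntax:

```
[wp-blogroll catname="Discussion Board"]
```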
This code will list all the links given the category “Discussion Board”, showing the name of each link (rather than its URL). As with List Category Posts, it is possible to list all links by category, identify links by category ID, and so on.
Broken Link Checker is a very simple plugin that checks every link on your site and alerts you in the dashboard if a link doesn’t work. You can click to see what is wrong with the link and modify it if required from the dashboard, without having to re-enter the post/page. It is a must-have for every WordPress installation.
Akismet is a spam detector. If you plan to allow comments on your blog, then this is a must. It is already downloaded on installation and just needs to be activated with an Akismet API key. Step-by-step instructions are provided on installation.
Usernoise is a simple plugin that provides a nice way for users to contact you without revealing your email address. It is also spam-resilient. I have found that in order to get email from a WordPress installation working, I have had to download HGK SMTP.
Restrict Content is a nice plugin that is a bit more subtle than WP Password, mentioned earlier. It allows you to restrict access to parts of your site: pages, posts, or even just some paragraphs, while leaving the rest open to view. Even more usefully, you can set it up so that different registered users (set up in Users > Add New) have different levels of access—for example, in a portfolio you may wish peers, tutors, and external examiners to see different levels of content.
Showcasing your E-Portfolio
As the portfolio is usually on view—even if just to the tutor—organisation according to the template shown above will mean that it is well-structured and easy to navigate. There are some final front-end decisions to make when presenting your portfolio. The first is the portfolio’s theme.
The great beauty of WordPress is that you can customise how your page looks by choosing one of thousands of themes. This is a good thing—you have lots of choice—and a bad thing—you have too much choice! Themes can be changed at any time, and if you are new to WordPress, I recommend that you stick with the pre-loaded theme (currently “Twenty Eleven”). The theme I use in my portfolio is called “Portfolio” (coincidence!). The main reasons for choosing this theme were that it had a sharp layout and included drop-down menus for the sub-pages, allowing for easy navigation. When you are ready to consider a new theme, choose Appearance > Install Themes and search until you find your perfect choice…!
Navigation is another key component of showcasing your portfolio. For this reason, I use the home page to list all of the main components of the portfolio, with links to each. The menu bar across the top is automatically updated as new pages and sub-pages are created (and ordered using the number system, as explained above). New menus can easily be added in WordPress using the Appearance > Menus option. This allows you to build a menu comprised of whatever you like: categories, pages, web links and so on.
There is a right-hand column in my theme, which again shows some navigation options and some important pages I want to stand out. WordPress uses widgets to allow you to place what you like in these sidebars (assuming your chosen theme has sidebars—most do).
Widgets are activated in the Appearance > Widgets section of the dashboard. This uses a drag-and-drop mechanism to add widgets to your sidebar(s). For example, if you wanted to list all of your pages in the sidebar, simply drag the Pages widget over and place it where you want. Because we have defined things by category (posts, links, etc.), we can be quite specific about how we arrange the content in the sidebar using the available widgets. Common additions to the sidebar include RSS feeds and Twitter feeds (download a Twitter plugin, e.g. Twitter for WordPress; once activated, it will appear as an option in the Widget area). I would suggest, though, that the aim of the sidebar in a portfolio is to aid navigation, not to cram stuff in. I have left just the pages, and two areas I wanted to highlight—a reflective commentary and a feedback area.
Good luck with your e-portfolio. If you have any feedback on this guide, please let me know so I can improve for future versions!
 The order of pages as they appear in the menu can be changed, but it is clumsy. When editing a page, there is an option in Page Attributes called “Order”. The pages will appear left to right in increasing numerical order. Therefore, change these values (and Update to save) so that the numerical sequence matches the one you want. Feel free to leave gaps in the sequence, in case you decide to slot in a page at a later stage. It’s not great, but it’s all there is.
 Note that WordPress creates a “slug” version of the category name (i.e. no caps and without spaces)—this is viewable in the categories list. Categories are also defined by number, and it is possible to use the number rather than the name by using [catlist id=24] for category number 24. See http://wordpress.org/extend/plugins/list-category-posts/other_notes/ for the extensive range of options with this plugin.
 See http://0xtc.com/plugins/wp-render-blogroll-links/ for plugin details.
As recently as 2008, a review of clickers in Chemistry Education Research and Practice had difficulty finding reports of their use in chemistry lecture rooms. In the intervening years, the increase in usage has been nothing short of meteoric. It’s interesting to survey the recent literature to consider how clickers are used in chemistry.
Check understanding

The first category is those who use clickers as a simple check of the class’s understanding of a topic – do they know x? King (JCE, 2011, doi: 10.1021/ed1004799) describes the use of clickers to allow a class to identify the ‘muddiest point’, with the most common cause of difficulty becoming the subject of a review in the following lecture.
Initiate class/peer discussion
The second type of usage is to use clickers to gauge opinion from the class, often on a misconception, and use the initial class responses as a basis for discussion, with possible reassessment. Wagner (JCE, 2009, 86(11), 1300) describes the use of clickers in this manner, for example asking students: which of the following substances has the LD(50) value? [aspirin; DDT; nicotine; caffeine; ethanol]; and initiating a subsequent class discussion based on the student responses. Mazur’s Peer Instruction is based on this approach.
Ruder and Straumanis have a very nice paper (JCE, 2009, 86(12), 1392) on using the multi-digit response function in some clicker handsets so that students may input a sequence. Two examples illustrate the concept. In the first, students have to select which two precursors, from two lists of reagents (in this example Michael donors and acceptors), they would choose in order to prepare a desired product. In the second, students are asked to select the reagents they would add, in the correct sequence, to produce a desired product. These questions offer two advantages: they allow for a much larger set of possible answers, minimising lucky guesses, and they require students not only to know an answer but to consider that answer in the context of a whole problem. Clickers which do not allow numerical answers can still use this approach – several more incorrect answers can easily be generated and, if you are clever, common wrong answers can be included. In fact, the authors say that they only show the responses for the few most common incorrect answers.
This approach is also used by these authors to test students on curly arrow mechanisms – carbon atoms in a diagram are numbered, and students can describe their understanding of the mechanism by entering multiple-digit responses to represent it. It’s clever, but some of the very extensive models described seem a bit elaborate to expect students to be able to “code” their curly arrow mechanism into numbers. However, it shows how far the technology could be pushed. A similar approach, using numbered carbons on complex organic structures as a basis for numerical entry, is described by Flynn in her work on teaching retrosynthetic analysis (JCE, 2011, doi: 10.1021/ed200143k).
What and why?
My own use of clickers follows Treagust’s work (e.g. CERP, 2007, 8(3), 293-307), where he asks students two-stage multiple choice questions. The first stage is a simple response; the second asks students to select why they chose that response. This work by Treagust is very clever, as it allows students who may know or guess the correct answer to really challenge themselves on their understanding of why they know what they know. The wrong answers in Treagust’s work are developed from literature reports of misconceptions. He has been very generous in sharing examples of these in the past.
In the lab
My colleagues Barry Ryan and Julie Dunne completed some work using clickers to assess pre- and post-lab activities. You can find out more about that here.
Do you use clickers? If so, I’d be interested to hear how in order to compile a “Chemist’s User’s Guide”.
Prezi arrived on the scene about two (maybe three?) years ago. Since its introduction, conference attendees’ snoozing through a succession of PowerPoints has been interrupted by a sense of sea-sickness induced by well-meaning presenters and their carefully crafted Prezis. All the Prezis I have seen are like a PowerPoint presentation riding along a rollercoaster. They are linear in format, usually contain bullet-points (a major faux pas in the eyes of Prezi purists) and have a start, middle and end. They offer no additional advantage over PowerPoint. I don’t really like admitting this to the learning community, but I don’t really like Prezi.
Take the Prezi below. This was the one live presentation I gave at a chemistry education conference, I think in 2009. What attracted me to Prezi was that you could show the audience the overall presentation in one go, and highlight each section of the talk as you progressed through it. (This Prezi, for extra nerdiness, was shaped as a reaction profile diagram, which no-one in the audience noticed. For shame, physical chemists, for shame.) It had pictures and videos embedded. It took me an age to do and I was very proud of it.
I wonder, however, whether in the effort to help people get a grasp of the overall presentation, I lost them in the detail of the talk with a constantly swishing screen. Even now I can see faces in the audience following the presentation as it rolled around, probably wondering where to focus next, as I wished helplessly for the presentation to stop being so gut-churningly jolly. There is something elegant and simple about a well-designed PowerPoint slide.
Well I don’t like to pick on other people, but if you’ve made a Prezi, can you take a step back and ask yourself, what added value does it have over PowerPoint? There is an add-in for PowerPoint that allows slides to be grouped together. PowerPoint animation is becoming really quite impressive. PowerPoint doesn’t leave you fretting about whether the computer you’re giving the presentation on will have Flash Player. PowerPoint doesn’t make your audience seasick.
We all recognise there is something terrible about most PowerPoint presentations. But people interested in technology have a terrible habit of ambulance chasing the latest gig (see Exhibit A) rather than taking something that’s quite good and working to improve it. For PowerPoint, I think disabling the bullet point, imposing a maximum word count per slide and creating purposeful handouts for digestion after the presentation are three ways to improve.
Someone very cleverly suggested to me today that Prezi could be a very useful mind-mapping software. I think that has potential. But it’s not a presentation!
I want to be wrong about Prezi. I really do. It looks and feels cool. But I just don’t see it as a good alternative for presentations. Am I wrong?!!
I’ve done a few courses at DIT’s Learning Teaching and Technology Centre (LTTC) and am just finishing my latest, the brilliant MSc (Applied E-Learning). It’s proof, yet again, of the diversity of talent, the pragmatic inspiration, and the extent of expertise of the staff at the LTTC.
Not that more proof is needed. The Centre have just hosted their second DRHEA E-Learning Summer School this week, following on from in-house summer schools that have run for several years. I loitered around for the last two days of the summer school this year, and harassed a few attendees who were from institutions in the Dublin region. They were, as I was last year at the end of the week, tired from a hard week of immersion in finding out about e-learning, but enthused with a range of ideas they could take back to their practice. This is the hallmark of the Centre. I’ve read (on the Interweb) that teaching centres can be at a distance from what goes on on the ground – a common complaint in the UK, it seems. Every one of the courses I have done has been delivered so that it interweaves with the participants’ practice, allows for discussion of common ground with peers, and has given me something to take back to my teaching. That’s no mean feat, but it’s something they do well. Their loyal and ever-growing fan-base in DIT is a testament to the collegial nature, in the very best sense, that the Centre fosters. It struck me this week that it must be very satisfying that a small group of dedicated people can ultimately have a direct impact on a large number of students’ college experience.
In September, for the first time in six years, I won’t be doing a course at the Centre. I’ll be peering through the windows on Mount St., wandering the corridors looking for a flip chart, lurking online waiting for a discussion post. I might just grow a moustache and start all over again. I’m still not quite sure about my epistemology.
In the meantime, thanks to all of my Mount St friends!!
I’ll be giving a webinar as part of the fantastic Sligo IT webinar series this Wednesday at lunchtime. You can register and find out more here: http://www.eventbrite.com/event/1135441135. The webinar will cover some of the work I’ve done on my Teaching Fellowship on the area of pre-lecture resources. It’ll be my first webinar – I’m quite nervous about it, but looking forward to the instant interaction of the audience as I give the talk!
This presentation will outline the use of online pre-lecture resources to support in-lecture material. The design rationale is to develop a cyclical approach between online resources and lectures, so that the two are mutually dependent. The aim of the resources is to introduce students to some key ideas and terminology prior to the lecture, so that their working memory during the lecture can focus on application and integration, rather than on familiarisation with new terminology. Each resource had a short quiz associated with it, which was linked to the gradebook in the VLE. Some design principles behind developing these (and any) e-learning resources will be presented, along with the implementation strategy and some analysis of the effect of using these resources with my own students.
This post provides some short annotations to literature involving prelecture resources/activities – the annotations are a brief summary rather than a commentary:
Online Discussion Assignments Improve Students’ Class Preparation, Teaching of Psychology, 2010, 37(2), 204-209: Lecturer used pre-lecture discussion activities to encourage students to read the text before attending class. This had no direct influence on examination results, but students reported that they felt they understood the material better and felt more prepared for exams.
Using multimedia modules to better prepare students for introductory physics lecture, Physical Review Special Topics – Physics Education Research, 2010, 6(1), 010108: Authors introduce multimedia learning modules (MLMs), pre-lecture web-based resources for which credit is awarded to incentivize usage. The authors mention that one reason for this approach is to reduce the cognitive load in lectures. The total time required for each pre-lecture was about 15 minutes, and they covered most of what was coming up in the lecture itself. The authors argue, by presenting exam scores etc., that the prelecture resources increased students’ understanding of a topic before coming to the lecture, measured by post-prelecture-but-before-lecture questions, and will present in a subsequent paper how the lecture experience changed because of the introduction of these resources. (T. Stelzer, D. T. Brookes, G. Gladding, and J. P. Mestre, Comparing the efficacy of multimedia modules with traditional textbooks for learning introductory physics content. Am. J. Phys.). The authors provide a link to examples of their prelecture resources (Flash resource).
Benefits of prelecture quizzes, Teaching of Psychology, 2006, 33(2), 109 – 112: Tests the use of pre-lecture quizzes and found that students felt that lectures were more organised, felt better prepared for exams, and performed better on essay questions when compared to students who had not completed pre-lecture quizzes.
Student-Centered Learning: A Comparison of Two Different Methods of Instruction, Journal of Chemical Education, 2004, 81(7), 985 – 988: Lecturer introduced pre-lecture quizzes to facilitate just-in-time teaching – teaching based on student misunderstandings/difficulties identified just prior to the lecture. The students took the approach seriously as they were given some credit for it. The approach was considered successful by staff and students on the programme.
From the Textbook to the Lecture: Improving Prelecture Preparation in Organic Chemistry, Journal of Chemical Education, 2002, 79(4), 520 – 523: This paper describes attempts to encourage students to prepare for lectures. The authors argue that engagement with the textbook results in more active learning by students. Pre-lecture activities (“HWebs”) were to be completed by students prior to each lecture, and were based on the content of that lecture. The lecture itself remained relatively unchanged. The analysis found that student performance on HWebs correlated with their end-of-semester grade. While students generally liked the material, they felt that the system penalized them for being incorrect on material they had not yet been taught. Students did generally agree that use of the HWebs helped them understand the material in lectures, and the lecturers found that the nature of the lecture gradually evolved towards more explanation and discussion.
Preparing the mind of the learner, University Chemistry Education, 1999, 3, 43: This paper uses examination statistics to demonstrate the effectiveness of pre-lectures, with a particular effect noted for students who did not have a strong background in chemistry. The pre-lecture is defined as an activity prior to a block of lectures aimed at stimulating prior knowledge that may be present but inaccessible/forgotten, and/or at establishing the essential background knowledge so that learning takes place on a solid foundation. The students involved were in year 1 of a 4-year (Scottish) degree and included those who had to take chemistry in their first year as well as those who were pursuing a chemistry degree; students with a low level of prior knowledge were enrolled on the module. The pre-lecture took the form of a short quiz at the start, which students marked themselves, followed by the class breaking into groups comprised of a mixture of self-designated “needing help” and “willing to help” students. The remainder of the pre-lecture activity allowed the groups to work through activities. The evaluation compared the exam results of students in this group (who had little or no chemistry) with those of students in the group that did not have pre-lectures but had a good level of chemistry knowledge. The results demonstrated a significant difference between these groups in the years that pre-lectures were not offered, but not in the years they were offered. A range of confounding factors, including mathematics knowledge, were examined and found not to affect the results. The results are surprising, given that the students without pre-lectures received approximately 10% more teaching time, as this was the time given over to the pre-lectures for the group that had them.
Preparing the mind of the learner – part 2, University Chemistry Education, 2001, 5, 52: This is the second paper from the Centre for Science Education on this topic. Based on the evidence from the first study on the benefits of pre-lectures, this work looks at the development and implementation of “Chemorganisers”. These aimed to prepare students for their lecture course, ease the load on working memory and change students’ attitudes towards learning. The structure and purpose of the Chemorganiser design is explained in detail, along with an example. Evaluation was carried out by comparing the exam marks between the two groups described in the previous paper. In the year Chemorganisers were instigated, the difference was insignificant.
Developing Study Skills in the Context of the General Chemistry Course: The Prelecture Assignment, Journal of Chemical Education, 1985, 62(6), 509-510: This short paper reports on the use of instructional activities during a lecture course to help students develop study habits. Students are asked to read a section of a textbook prior to the lecture and are asked questions at the start of the lecture. Evaluation took the form of a student survey, in which students said that they liked the pre-lecture assignments and that they encouraged in-class discussion.
I posted a summary last time of what best practice from cognitive science research preached about designing online resources. Putting it into practice threw up some interesting considerations. I’ve summarised these below in light of developing my first pre-lecture resource, as well as reflections stimulated by conversations about it with my colleague Claire.
The first pre-resource is for my first lecture in introductory chemistry, which is based around the structure of the atom: the main components (protons, electrons and neutrons) as derived from the Rutherford model, the notion of elements, and then a progression to a discussion of isotopes, introducing the technique of mass spectrometry. There are a lot of new terms – I counted 17 in the lecture notes* – and I derived four learning outcomes for the lecture. Both of these counts exclude the case studies used in the lecture, which also incorporates a demonstration.
1. The purpose of the pre-resource:
The first step was to define the main goal of the lecture, based on Norman Reid’s advice to me on this. I decided that while it didn’t encompass everything I did in the lecture, the main goal was “to describe the structure of the atom and how this leads to the definition of an element”. This would arise out of a discussion of the Rutherford experiment. I decided to concentrate the pre-lecture resource on this goal, which threw up my first concern that the content would be very dry. I was torn between wanting to “advertise” the themes in the first lecture and rigidly focussing on the ultimate aim of the pre-resource – to introduce the viewer to some of the terminology. The resulting resource tended very much towards the latter. I suppose this makes sense, as it means the lecture can concentrate on the more interesting aspects such as applications, contexts and so on, but it was hard not to include some of this. I had to keep reminding myself that the resource was not a summary of the lecture, more a preparation for it.
2. Layout of the resource

I made a simple tabbed design which uses tabs to outline the main structure, so that everything is visible at once. There are some flaws with this – for example, a student who just clicks through the tabs will miss two pages, although a left-hand menu will highlight this.
3. Presentation of text
Keeping in mind the modality ideas discussed in the principles post, most of the text presented in the resource is spoken, with key phrases, aims and terminology given in written form. Having scripted the resource, I added the text to the notes, which can be viewed in the presentation. The first version was a bit robotic, so after reviewing other aspects, I re-recorded the audio to try to make it a bit more casual.
4. Effect on my consideration of how to deliver
Despite having taught this content for several years, being forced to choose a small amount of content meant that I really had to think again about how I introduce this topic. For example, in considering terminology, I had a dilemma about how to phrase the wording about electrons. The Rutherford model is an over-simplification, albeit a useful one, and I like to get the message across early on about its limitations; but, discussing with Claire, I decided to stick with the particle notion for the pre-resource, and gradually introduce the cloud model of electrons a little later through the lectures themselves. Other changes after the initial review included adding a definition of the atom at the beginning, as well as a rationale – why what was being presented was important. I have to say the exercise of distilling down to this core level has really made me think about how this content – the very basics of chemistry – can be effectively presented. One failing that I have not yet overcome is finding a way to integrate the content into the prior knowledge of students, although the definitions used would be familiar to students who had studied chemistry before, and the lecture is based around one of the most identifiable symbols of science – the structure of the atom (which is how I start the lecture).
I also decided that some active work could be encouraged, so I ask students to do some study of their own on the Rutherford experiment before the lecture – this will tie in with changes in the lecture itself to encourage discussion, which will be discussed elsewhere in a post on scientific literacy.
At the end of the resource, I had a short quiz. There isn’t much scope with this material at this stage to introduce fading, etc., so it is fairly cut-and-dried. Because I was initially going to tie this in with assessment, I did not include any feedback or right answers. The result was that it was a bit abrupt. Claire also felt the questions were tough, which, on looking at them again, they were, and suggested an easy starter. I therefore decided not to include a mark for the assessment – merely to log the fact that students do it (via SCORM) – and to push the assessment elements to other aspects of in-class work. This freed me up to give answer-specific feedback for each question, and to allow students to review the quiz and/or print off the sheet. I think this makes for a more useful learning object.
For comparative purposes, the resource before and after this analysis is linked here. The next stage will be to implement it – roll on next week!
*New terms include atom, electron, proton, neutron, nucleus, alpha-particles, radioactivity, element, atomic number, mass number, isotopes, deuterium, tritium, density, atomic mass unit, mass spectroscopy, ionised.
This post aims to consider cognitive load theory and what considerations should be drawn from it in the design of electronic instructional materials. Sweller (2008) discusses several strategies for harnessing the principles of CLT in e-learning design. Several of these strategies are also described by Clark and Mayer (2008), so where the two overlap they are discussed in tandem below. Mayer’s multimedia learning model (Mayer 2005) is used here as the underlying framework for the principles discussed. Before these are discussed, there is a brief explanation of what CLT is, along with the processes involved in learning new information.
What is Cognitive Load Theory?
Cognitive load theory (CLT) is a model for instructional design based on knowledge of how we acquire, process and retain new information. It proposes that successful use of the model will result in more effective learning, with information retained in long-term memory, from where it can be recalled when required in a given context. The theory distinguishes three types of cognitive load (Sweller 2008; Ayres and Paas 2009):
Intrinsic load is caused by the complexity of the material. This depends on the level of expertise of the learner – in other words it depends on the learner’s understanding of the subject.
Extraneous load depends on the quality or nature of the instructional materials. Poor materials or those that require a large amount of working memory to process will increase the load and leave little capacity for learning.
Germane load is the mental effort required for learning. Because of the limited capacity of the working memory, germane load (the extent of learning) depends on the extent of the extraneous load, and also on the material and the expertise of the learner – the intrinsic load. An expert on a topic is able to draw on prior knowledge, freeing working memory capacity for germane processing.
The mechanism of information processing was summarised succinctly by Mayer for the purposes of multimedia learning. This is similar in many respects to the information processing model familiar to many chemists through the work of Alex Johnstone (Johnstone, Sleet and Vianna 1994, Johnstone 1997). Mayer’s model is shown in the figure below (Clark and Mayer 2008).
Information is presented to users in the form of words and pictures (there are other channels too, but these are the most pertinent to e-learning). The user senses these (passing through what Johnstone refers to as a perception filter) and some of the information is processed in the working memory, which can hold and process only a small amount of information at any one time (this can be quantified using the M-capacity test). If this material can be related to existing prior knowledge, it is integrated with it and effective learning occurs – the new experiences/information are stored in the long-term memory.
Considerations for Presentation of Information
Learning materials that provide two sources of mutually dependent information (e.g. audio and visual) require the learner to process both channels and integrate them, which demands working memory. Design of the materials should therefore ensure that, as a part of a diagram is being verbalised, the corresponding part of the diagram is clearly indicated for the viewer. The alternative is that learners use up working memory searching the diagram for the element being verbalised. Clark and Mayer call this the contiguity principle, and provide two strategies for putting it into practice: place printed words near the corresponding graphics (including, for example, feedback on the same screen as the question, and integration of text legends), and synchronise spoken words with the corresponding graphics.
Because the working memory has “channels” – the most significant being the visual/pictorial and auditory/verbal channels – consideration of the nature or mode of information can be beneficial. In the split-attention effect, above, it was argued that the different modes must be integrated effectively to ensure that working memory is not overloaded. This can be teased out a little further. If learning material contains a diagram and an explanation (mutually dependent), the explanation can be in text or audio form. Presenting the explanation as text means that learners’ visual/pictorial channel will be overloaded more quickly, as they must process both the diagram and the text. If the explanation is presented as audio, both channels are being used effectively. Clark and Mayer also discuss the modality principle, advising that words should be presented as audio and not on screen. However, they limit it to situations where mutually dependent visual and auditory information is being presented (see below). Additionally, they argue that there are occasions where text is necessary – for example a mathematical formula or directions for an exercise, which learners may need to reread and process.
The split-attention and modality effects considered mutually dependent information. If there are multiple representations of the same material, each self-sufficient, or if there is material of no direct use to learning, it can be considered redundant. The time required to attend to unnecessary information, or to process multiple versions of the same information, reduces the available working memory capacity. Clark and Mayer also discuss the redundancy effect, recommending in particular that on-screen text is not used in conjunction with narration, except where there are no diagrams, where the learner has enough time to process pictures and text, or where the learner may have difficulty processing speech.
Consideration for Design of Interactions
1. Worked Examples
Worked examples have been shown to reduce cognitive load. The reason is that students who were exposed to worked examples before being required to solve problems did not need to expend working memory on searching for a solution method, and could concentrate instead on the structure of the problem itself. Clark and Mayer agree, and discuss five strategies for incorporating worked examples into e-learning instruction, including fading, below. (Crippen and Brooks (2009) have previously discussed the case for worked examples in chemistry.)
While the case for worked examples is strong, the situation becomes problematic when learners who are already expert engage with the material. In this scenario, their learning may be at best the same as solving problems without worked examples, and at worst hampered by the presence of worked examples (the expertise-reversal effect). The nature of delivery of material (considered, for example, in the split-attention and modality sections) can also differ for experts, as some material may become redundant. A potential solution offered by Sweller is to present learners with a partially completed problem and ask them to indicate the next step required; their response can then be used to direct further instruction pathways.
Fading is related to worked examples, and involves a progressive reduction in the information presented in worked examples: learners are initially provided with full details of how to work through an example, with the amount of guidance (scaffolding) reduced as more examples are provided. Clark and Mayer discuss this in some detail, and highlight it as a potential remedy for the expertise-reversal effect. For a three-step problem, they propose that in the first worked example all three steps are shown, and in each subsequent example one further step is left to the learner, until they are required to complete an entire problem. They do acknowledge, though, that there is not yet sufficient evidence for how fast fading should proceed. Clark and Mayer note that some students may not engage with the worked-out components of a faded example, and propose that a worked-out step could be coupled with a question requiring learners to state the reason or principle why that step was used. This aims to ensure learners interact with material that may otherwise be passive.
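By way of illustration only (this is my own sketch, not an implementation Clark and Mayer give; the function name and chemistry steps are invented for the example), the fading of a three-step worked example could be expressed as:

```python
# Sketch of fading: each successive example shows one fewer worked step,
# leaving progressively more of the problem to the learner.

def faded_sequence(steps):
    """Yield examples from fully worked through to fully learner-completed."""
    for shown in range(len(steps), -1, -1):
        yield {
            "worked_steps": steps[:shown],
            "steps_for_learner": len(steps) - shown,
        }

steps = ["balance the equation", "find moles of reactant", "calculate mass of product"]
examples = list(faded_sequence(steps))
# examples[0] is fully worked; examples[-1] leaves every step to the learner.
```

The open question Clark and Mayer raise – how fast fading should proceed – corresponds here to how many practice problems sit at each level of scaffolding, which this one-per-level sketch leaves fixed.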
Having considered these principles, the next task is to implement them into a design framework. This will be discussed in a subsequent post.
Ayres, P. and Paas, F. (2009) Interdisciplinary perspectives inspiring a new generation of cognitive load research, Educational Psychology Review, 21, 1-9.
Clark, R. C. and Mayer, R. E. (2008) E-learning and the science of instruction, 2nd Ed., Pfeiffer (Wiley): San Francisco.
Johnstone, A. H., Sleet, R. J. and Vianna, J. F. (1994) An information processing model of learning: Its application to an undergraduate laboratory course in chemistry, Studies in Higher Education, 19(1), 77-87.
Mayer, R. E. (2005) Cognitive theory of multimedia learning, in R. E. Mayer (ed.), The Cambridge handbook of multimedia learning, Cambridge University Press: Cambridge.
Sweller, J. (2008) Human cognitive architecture, in Spector, J. M., Merrill, M. D., van Merrienboer, J. and Driscoll, M. P. (eds.), Handbook of research on educational communications and technology, 3rd Ed., Routledge: New York.
This is a great way of representing the contributions to science over the course of 500 years. The chemistry line (tan coloured) begins with origins in alchemy and starts as chemistry proper with Robert Boyle, followed by Black, Cavendish, Lavoisier and Priestley. The station intersections show where one scientist had an impact on two or more disciplines – needless to say Newton is a central hub!
Found these on iTunesU from La Trobe University (Australia) – interviews with John Biggs (constructive alignment and problem based learning); Vaughan Prain (teaching science); Chris Scanlan (New media for journalism students); Lorraine Ling (future of education). Nice, listenable, relatively short podcast interviews.
At the DRHEA E-learning summer school this week, we had a useful session on E-portfolios. The conversation very quickly diverted into a discussion of lots of complicated things that I had never considered or worried about.
E-portfolios are simple! I decided to repay the DRHEA sponsored headset costs by making this short video explaining why: