Student Feedback


Student surveys are by far the most popular and most widely used form of feedback for educators.  However, they need to be designed with care, and the data they provide must be interpreted carefully.  Over the years I have used student surveys in a few different contexts.  At Savannah College of Art & Design, a standard form was administered by a student hired by the college.  At CIT I used online forms I designed myself, and later the standard QA1 forms.  Later still, I volunteered to use the Learner Engagement Survey.  In almost all of these cases I found that the most useful feedback came in the free-form comments that students left.  For example:

  • "Spelling and grammar instruction are sorely needed, and you give it well.  A little too blunt in lectures though."
  • "Personalised feedback is usually very specific and direct when given."
  • "Your blog is great."

A comment from a single student is, of course, not representative of the whole student body, and it is not as meaningful as the aggregate view.  But individual comments tend to be very specific and so are more easily acted on.  They might not be reliable for evaluating a lecturer's performance, but they are ideal when the objective is self-improvement.  Braskamp et al. (1981) found that many lecturers share this preference.  The risk with this approach is that one might give too much weight to a single student's view, or be biased one way or the other.
I found the QA1 forms to be particularly poorly designed.  I was a TUI branch officer immediately prior to their introduction and was involved in their development.  I formed the view then that they were poorly designed on purpose, so that any conclusions one might wish to draw from them could be easily dismissed.  The numeric scale they used didn't even identify which end was good and which was bad.  I found only the comments to be useful.

I used my blog to solicit free-form feedback, but had mixed results.


Attendance

Attendance is often overlooked as a feedback mechanism.  Poor attendance rates are an indication of something, but it is not always clear what.  Many of the courses I teach, for example, have low attendance rates.  It is tempting to suggest that if students have access to notes and recordings of lectures, then it is not a problem if they don't actually turn up: as long as they get the material they need, and study it, everything is OK.  However, an explanation such as this highlights the supposition that the primary purpose of the lecture is to distribute content.  I firmly believe that the adoption of technologies such as Learning Management Systems (LMSs) to distribute content doesn't just supplement lectures, but challenges the assumptions we make about the very purpose and nature of lectures.  Ideas such as Teaching Naked and the Inverted Classroom propose that contact time be used for student engagement rather than content distribution.  Clearly attendance is a good proxy measure of student engagement.  But, of course, student engagement is not the sole responsibility of the lecturer.

Exam Results

Formative assessment is designed to provide students with feedback on how they are doing.  However, it is often overlooked as a feedback mechanism for lecturers.  If the assessment scheme is properly designed, it will provide information about the degree to which students understand the subject.  If many students do not understand a subject, then clearly something (or perhaps even someone) is not working.  And if many do well, then it may be safe to assume that the module is going well.  Because designing assessment that is both authentic and valid is very difficult, both lecturers and students often challenge the validity of an assessment technique rather than accept the feedback it provides.
Over several years I have found that the results of assessments for my modules do not follow the typical bell curve that one might expect.  Instead, a two-hump curve is often very apparent, where students fall into one of two groups: the haves and the have-nots, so to speak.  Some of my colleagues have speculated that I am a harsh marker, or an extreme marker, or that I see things in black and white.  I wonder, however, if surface learning and deep learning generate different kinds of curves.  In general, human characteristics and abilities are distributed along a normal "bell" curve.  If we asked a large, diverse group of people to remember words from a list of fifty, we might expect the results to be normally distributed around some average.  Indeed, many results from cognitive pedagogical theory are based on such experiments.
But should one expect the same kind of curve when one is attempting to test deeper learning?  Once a student has a deep understanding of a subject, that is, an understanding of the underlying guiding principles, it is quite easy to apply that understanding to new situations.  However, with only a surface knowledge of a subject, new scenarios and situations can be particularly challenging.  Long division, for example, is very difficult until one knows how to do it, and very easy once one does.  I believe the same is true for many areas of computing, and especially for programming.  So I wonder if the expectation of a normal distribution is reasonable.  I think this is an interesting question that I haven't found directly addressed in the literature.
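The intuition can be illustrated with a small simulation.  This is only a sketch, and the group sizes and mark distributions are invented for illustration: if a class splits into a "got it" group and a "didn't get it" group, each roughly normal on its own, the combined marks form a two-hump curve rather than a single bell.

```python
import random

random.seed(1)

# Two hypothetical sub-populations: surface learners clustered around 35%,
# deep learners clustered around 75%.  Individually each is a bell curve.
have_nots = [random.gauss(35, 8) for _ in range(100)]
haves = [random.gauss(75, 8) for _ in range(100)]
marks = have_nots + haves

# Crude histogram in 10-mark bins: the combined distribution shows two
# peaks (near 35 and 75) with a trough between them, not a single bell.
bins = [0] * 11
for m in marks:
    bins[max(0, min(10, int(m // 10)))] += 1
for i, count in enumerate(bins):
    print(f"{i * 10:3d}-{i * 10 + 9:3d} {'#' * count}")
```

A statistical test for bimodality would be needed to make the claim rigorous; this simply shows the shape such a mixture produces.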

One-Sheet Exams

Last year I was among a number of lecturers in the computing department who began experimenting with one-sheet exams.  These exams were similar to normal open-book exams except that students were permitted to bring only one A4 sheet of notes (front and back).  The act of preparing the sheets was very beneficial to the students.  Students were required to identify the main topics and to assess their knowledge of those areas.  That process alone was useful and had many formative side-effects.  However, I found that looking at the sheets was also useful to me.  Seeing what the students deemed to be important gave me an understanding of how they viewed the module and whether I was getting across the main points.  In general, I found that most students opted for quantity rather than quality, and generated very detailed sheets.  I learned from that that I need to emphasise the key points more.  If the important points were fully appreciated by the students, the details would be trivial to them.

Learning Analytics

Learning Analytics is the application of user tracking technologies and data mining techniques to teaching and learning.  Historical data about student experiences can be used to identify trends and patterns that can trigger interventions or modifications in the learning environment for current students.  Imagine, for example, that a lecturer who posts notes on Blackboard has found over the years that students who do not log into Blackboard within the first four weeks of term invariably fail the module.  At the end of Week 4, the lecturer might log into Blackboard and generate a list of all the students who have not yet logged in, and might decide to take those students aside for a chat.  Learning Analytics takes this idea to a level of sophistication on a par with online advertisers who attempt to predict which ads one might click on or which products one is inclined to buy.  These systems attempt to identify the complex variables and conditions that are likely to result in student success, so that the learning experience can be optimised.
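The Week-4 check above is easy to script from an export of access data.  A minimal sketch; the column names ("name", "last_access") are illustrative, not Blackboard's actual export format:

```python
import csv

# Flag students who have never accessed the module site, working from a
# CSV export taken at the end of Week 4.  An empty "last_access" field
# is taken to mean the student has never logged in.
def never_logged_in(rows):
    """rows: an iterable of dicts, e.g. csv.DictReader over the export."""
    return [r["name"] for r in rows if not r["last_access"].strip()]

# Usage with a real export file might look like:
# with open("week4_access.csv", newline="") as f:
#     to_chat_with = never_logged_in(csv.DictReader(f))
```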
These techniques are still being developed, and much of the information that has been collected is proprietary.  The development of good learning analytics algorithms is the main objective of organisations offering free Massive Open Online Courses (MOOCs).
However, more modest objectives can be met using simpler technology.  Blogging and web page services such as Blogger and Google Pages allow publishers to collect statistical data on which content is viewed most often.

This chart (above) from Blogger shows access to my blog over the past month.  Activity has dropped off since the end of the semester.  In general, it is possible to identify the weeks when students are working just by looking at the statistics.

More interesting, and more useful, however, is the ability to identify what students are looking at.  The chart above shows the most commonly viewed pages of my blog during the month of November.  Since assignments were due in November, it is no surprise that their specifications feature significantly in that month.  During the normal course of the semester, however, it is possible to see that summaries of some classes feature more prominently than others.  This may suggest that those topics were more difficult, or more interesting, than others.

YouTube also provides analytics information.  For classes that were recorded and put online, it is possible to see which were viewed the most.  Again, this suggests that the material in those classes may have been more difficult, or more useful, than in others.  Of course, attendance records would also need to be considered when analysing the results, in case a heavily viewed class was simply one with poor attendance.

More interestingly, however, YouTube also provides second-by-second viewership statistics for any video.  This allows a lecturer with a class online to see which parts of the class were of most interest to online viewers.  The chart above shows the viewer retention rate over the length of the class.  After the first 100 seconds only 30% of the viewers remained.  However, the decline is not steady.  There are parts of the class where the number of viewers appears to increase.  This is caused by viewers skipping ahead to those portions of the class, or reviewing them.  The data is useful because these inflection points correspond to the parts of the class that viewers found most important.  This kind of insight could be very useful to lecturers teaching a class.
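Those inflection points can be picked out automatically.  A minimal sketch, assuming the retention curve has been exported as a list of percentages, one sample per second:

```python
# Find points where audience retention *rises*, i.e. where viewers skipped
# ahead to, or re-watched, a portion of the video.  Retention normally
# only declines, so any increase marks content viewers sought out.
def interest_points(retention, min_rise=1.0):
    """retention: list of percentages, one sample per second."""
    points = []
    for t in range(1, len(retention)):
        rise = retention[t] - retention[t - 1]
        if rise >= min_rise:
            points.append((t, rise))
    return points

# Example: retention falls steadily, then jumps at t=4, suggesting a
# section that viewers skipped ahead to.
curve = [100, 70, 45, 40, 46, 44, 43]
print(interest_points(curve))  # [(4, 6)]
```

The timestamps returned could then be matched against the lecture recording to see which topic was on screen at each point.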

Prediction Markets

Prediction Markets are online betting exchanges where participants use play money to buy shares in the outcomes of future events.  The price they pay for those shares is a measure of how likely they believe those events are to occur, or in some cases of the value a particular variable will have at some specified time.  For the academic year 2012-13, I ran a prediction market with first-year computing students in which I asked them to predict the average marks for some assessment components, module pass rates, and attendance rates for particular cohorts and modules.  Students won iPod Shuffles as prizes for making accurate predictions.
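The price mechanics can be made concrete.  One standard market-maker design for small prediction markets is Hanson's logarithmic market scoring rule (LMSR); this is a sketch of that mechanism, not necessarily the one the market I used implemented, and the liquidity parameter b is a free choice:

```python
from math import exp, log

# Hanson's LMSR market maker.  Prices always sum to 1 and can be read
# directly as the market's aggregate probability for each outcome.
class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b                            # liquidity: higher = prices move less per trade
        self.q = {o: 0.0 for o in outcomes}   # shares sold so far per outcome

    def _cost(self):
        return self.b * log(sum(exp(q / self.b) for q in self.q.values()))

    def prices(self):
        z = sum(exp(q / self.b) for q in self.q.values())
        return {o: exp(q / self.b) / z for o, q in self.q.items()}

    def buy(self, outcome, shares):
        """Return the play-money cost of buying `shares` of `outcome`."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

# A trade in one outcome pushes its price above 0.5: the "aggregated
# belief" reading described above.
m = LMSRMarket(["pass rate >= 70%", "pass rate < 70%"])
m.buy("pass rate >= 70%", 50)
```

With a market maker like this, even a single trader always has a counterparty, which addresses the thin-market problem described below.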

The experiment was very interesting.  It was possible to identify some key events during the semester from the shift in price of some shares.  For example, immediately after one exam, the estimated average mark for that component dropped.  That suggests the students were more confident before the exam than after, perhaps an indication that their expectations were not met.
The experiment provided some interesting results.  In hindsight, however, I think it was a mistake to use separate contracts for each class group.  This meant that the number of participants with an interest in each contract was small, and individual trades were able to move the price.  A feedback mechanism like this would work best with a large number of students in the class, and the results would only begin to be meaningful after the students had practice using the market and understood how it works.  However, as a mechanism for aggregating honest opinion about how a module is going, I think the idea has great potential.
It's possible to imagine this working on a large scale where lecturers can get a feel for how a module is going based on the current prices for various contracts.  But more research is needed.


Heart Rate

Many researchers have used heart rate as a proxy measure for student engagement, with some useful results.  The figure above, taken from Bligh (2000), shows that mean heart rates decline steadily during a lecture, but vary significantly in a classroom with a variety of activities.  It provides compelling evidence of the need to provide a variety of different activities in the classroom.  It also suggests that an hour is too long for a lecture.

Using this metric on a day-to-day basis would currently be impractical, but as wearable computers become more commonplace this may become easier.  Devices such as Jawbone's Up continuously gather information about the wearer.  It is possible to imagine an app on a lecturer's phone that provides a dashboard reading of how engaged a class is at any moment.
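As a sketch of what such a dashboard might compute, the data source, window sizes, and threshold below are all entirely hypothetical:

```python
# Hypothetical engagement indicator: compare the class's mean heart rate
# over the last minute to the mean at the start of the lecture.  A
# sustained decline, the pattern Bligh reports for unbroken lectures,
# would prompt the lecturer to switch activity.
def engagement_reading(heart_rates, baseline_window=60, recent_window=60):
    """heart_rates: one averaged class reading per second."""
    baseline = sum(heart_rates[:baseline_window]) / baseline_window
    recent = sum(heart_rates[-recent_window:]) / recent_window
    drop = (baseline - recent) / baseline
    return "switch activity" if drop > 0.10 else "ok"
```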

Conclusions (for me)

Based on student surveys, conversations with students and my head of department, and reviews of recordings of my classes, I have (perhaps somewhat unscientifically) come to the following conclusions about my teaching:

Presentation Style

  • My slides are good.  They are not too densely packed with information.  I keep them sparse and talk about them.  Students like this.  As one student put it, I am "not a human slide reader".
  • I think I do a good job of allowing students to interject and ask questions or comment.  From the recordings of classes I noticed that I am interrupted a lot.  This is a good thing.
  • I don't move around in class.  I'm sure if given the choice between having me move around or provide quality recordings, students would prefer me to keep still.  But a Bluetooth microphone might allow me to do both.
  • I save classroom activities until the end of class.  There's a lot of research that suggests they would be best in the middle, or even at the start.
  • I don't stress important points enough.  Looking at the assessment results and the one-sheets that students prepared, it is clear that I need to stress the most important points more.


Assessment

Assessment is the cause of a lot of negative feedback.  It's hard to know how to evaluate that feedback.  Some students who are not successful are prone to blame the assessment techniques rather than accept responsibility for their actions and inaction.  However, it is too easy to dismiss feedback that way.
In recent years I have attempted to be more prescriptive in my assignment specifications and to provide rubrics, marking schemes, and samples of work.  Disappointingly, that hasn't helped much.  I think students, like all people, tend to compare themselves to others, and there is a presumed grading curve.  But students who take their cues from others often miscalculate.  This is especially true in the first year of computing programmes, where retention rates at the end of the year have sometimes been as low as 50%.  This means that a student in the top half of the class may actually be borderline.

In general I find there is a lot of pressure on lecturers to pass students, and maintaining standards can be exhausting without support.
