The basic essence of our project is that we have observed an increase in the quality of work produced in a wiki (where our students work together) compared with their normal essays. However, we would like to quantify and qualify this rather than going on impressions. Our simplistic view had been to compare the two sources by hand using defined indicators, such as evidence of critical thinking or use of primary sources. These can be tested by checking for the presence of certain words or word patterns.
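As an illustrative sketch only (the indicator words and patterns below are hypothetical examples, not the project's actual quality criteria), such a check might look like this in Python:

```python
import re

# Hypothetical indicator patterns - illustrative only, not the project's criteria.
INDICATORS = {
    "critical thinking": [r"\bhowever\b", r"\bon the other hand\b",
                          r"\bevidence suggests\b"],
    "primary sources": [r"\bprimary source\b", r"\bmanuscript\b",
                        r"\boriginal text\b"],
}

def score_text(text):
    """Count pattern matches for each quality indicator in a piece of text."""
    text = text.lower()
    return {name: sum(len(re.findall(pattern, text)) for pattern in patterns)
            for name, patterns in INDICATORS.items()}

sample = ("The evidence suggests one reading; however, the original text, "
          "a medieval manuscript, supports another.")
print(score_text(sample))  # {'critical thinking': 2, 'primary sources': 2}
```

A real tool would need far richer criteria than bare word counts, but even this crude form gives a per-indicator score that could be compared across a student's wiki contributions and essays.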
Who am I?:
Dr Jessie Paterson & Dr Christian Lange, School of Divinity
We have produced a report describing possible techniques that could be used to create a practical, automated tool for giving formative feedback on student written work.
As part of this research, we produced a formalised list of quality criteria. This will be of direct practical use to students and staff in clarifying the criteria used for assessment.
Do the outcomes match the objectives:
The original aim had been to use informatics techniques to investigate the apparent quality differences between student work in the form of wikis and essays. However, this was intended only as a practical starting point to explore the broader issues of applying Informatics technology to teaching and learning in the humanities.
As the project progressed it became clear that the wikis and essays were fundamentally different, each being driven by different quality criteria. This made a meaningful comparison impractical. However, it also became clear that the techniques being studied could be used in a practical tool providing valuable formative feedback on student work. This became the focus of the remainder of the project, and has produced some very promising results.
Benefits for the future:
The exploratory nature of the project has allowed us to investigate possible applications of informatics techniques to teaching and learning in the humanities. Both partners now have a much better understanding of the requirements and possibilities.
In concrete terms, we have identified a specific application area which is currently very important (formative feedback), and convinced ourselves that these techniques could make a valuable contribution to creating a practical tool.
Opportunities for further research:
We would like to take the research forward by attempting to create a prototype of a practical tool, based on the techniques studied. We currently think that a one-year project would be an appropriate length to explore this initially - perhaps as a Masters by Research. However, potential sources of funding for this are not currently clear.
Has the project created any new shared resources?:
Only the full report on the findings.
Future research plans:
We plan to explore possible funding sources to take this work forward.
Publications and presentations:
Poster at e-assessment Sept 2010 in Dundee.
Abstracts submitted for a virtual paper to ICERI 2010 (International Conference of Education, Research and Innovation), Madrid, Spain, November 2010, and for a paper to Online Educa Berlin, December 2010 (still awaiting news of acceptance).
Links between School of Divinity and Informatics.
Links with Francisco Iacobelli, Northwestern University, and Alastair Gill (now at University of Surrey) on the computational linguistic techniques – these links will be important in taking the project forward.
Use of funding:
The funding was used as planned, except that the monies allocated for dissemination could not be spent because the appropriate conferences fell beyond the end-of-year financial cutoff. Some of this money was used instead to add additional expertise to the project by commissioning a background technology report from Francisco Iacobelli. This proved extremely valuable.
How is it novel? What is exciting about it?:
Automated essay marking has been on the agenda for years, but all the existing methods require training the systems on a large number of examples of the same essay - something our student numbers don't merit! What we are proposing is a more "crude" but still valid approach: assessing quality by using linguistic techniques to search for known indicators of quality in the Humanities. This project looks at comparing work by the same student, but the approach could be applied more widely.
What will I do next? What opportunities will it open up?:
If successful, we could try to get further funding externally (so far we have failed!).
What constitutes success? How risky is it?:
Having an automated technique for assessing the quality attributes of the work.
What resources do I bring to the project?:
Subject and educational knowledge
What resources and expertise do I need?:
We need someone with computational linguistics expertise to help build the system.
What shared resources, if any, will the project create?:
A methodology and approach that others could use or adapt.
What is the timescale?:
We don't have a fixed timescale, but we currently have students working on a wiki project who will also be producing essays (and we have the same data from last year!).