The measurement and confirmation of students’ academic achievement through official certification has become the compulsion of educational institutions (Bryan and Clegg 2006). Together with increased external institutional demands, in terms of reliability, validity and accountability (Knight 2001), this has led assessment practices to suffer a lack of pedagogical (Whitelock and Watt 2008) and technological innovation (Robles and Braathen 2002). On the other hand, literature has repeatedly described and critically evaluated games in their capacity to potentially inform and transform education (de Freitas and Griffith 2007, Gee 2005, Squire 2003, Steinkuehler 2004, Whitton 2009).

This dissertation, underpinned by an assessment for learning philosophy in ‘which the first priority in its design and practice is to serve the purpose of promoting students’ learning’ (Black et al 2004, p 10), brings games, learning and assessment principles together, in an attempt to reframe current assessment practices. In this research, I will introduce the term game-informed assessment and will develop a set of guiding principles, defining a game-informed assessment framework. In turn, the framework, mediated through digital assessment technologies, will be informed by gameplay elements and principles of learning found in good game designs. This dissertation follows and builds on my earlier attempt to construct a theoretical model focused on a game-informed approach to assessment in online contexts (Bezzina 2013).

The sub-sections that follow describe the researcher and research context, explain my professional and academic interest in the scholarly areas under investigation, and illustrate the theoretical and conceptual foundations that underpin my argument. The specific research question informing this dissertation, together with the design and methods adopted to address it, is then outlined.

The Research and Researcher Context

The state secondary education system in Malta requires students to study Physics as a mandatory subject during the last three years of compulsory secondary schooling (Forms 3 to 5, at the age of 13 to 16 years). During Forms 3 and 4, students are formally assessed twice during the scholastic year (mid-way and at the end) through summative written examinations measuring academic attainment in the different subjects. Alongside the examination mark, each student is awarded an assessment mark: a summative evaluation of the student's coursework and academic progress throughout the year, attested by the subject teachers, who 'are expected to correct and mark students' work and to assess regularly their students' achievement in the subject they teach' (Attard and Buhagiar 1996, p 49). The final secondary school examinations for Form 5 students are high-stakes, nationwide exams which determine the level of formal education achieved by the students, defined by academic attainment in the different subjects and certified in their Secondary School Certificate and Profile (Ministry of Education and Employment 2012). At the end of the secondary education cycle, students proceed to the Secondary Education Certificate (SEC) examinations, which assume a benchmarking function and provide a summative, norm-referenced assessment of academic attainment, giving access, through academic certification, to post-secondary education.

Figure 1 The Research and Researcher Context [edited and adapted by S. Bezzina] (Freepik #1 n.d.)

Notwithstanding various class-based informal formative assessment endeavours, advocated by the different local policies and authorities (Ministry of Education and Employment 2012, Ministry of Education, Employment and the Family 2011), formal academic attainment, as defined and certified by summative high-stakes examinations, continues to dominate the secondary education cycle (Pace 2003, Sultana 1999). As an unintended consequence, students have learned to study exclusively in order to pass the exam, which in turn is reflected in the quantity and quality of the learning that occurs (Boud 1995). This often results in surface learning approaches, both in terms of the strategy employed and the motive sustained (Scouller 1998). Assessments for summative purposes, mostly in the form of examinations, are considered by students to be a necessary evil. In fact, over the years, 'examinations have become an end in themselves and a form of extrinsic motivation' (Pace 2003, p 29). More worryingly still, students have come to regard class-based informal assessments, mostly designed for formative purposes, as soft assessments which carry little or no value. In this sense, assessment of learning has taken over its for learning counterpart, further reinforcing the 'pass or fail mentality brought about by an examinations-oriented system' (Ministry of Education and Employment 2012, p 50), in a context where the examination tail still wags the education dog (Sultana 1997). Unfortunately, any formative efforts promoted and encouraged by the local educational assessment authorities are being filtered out by the summative purposes of standardised testing, 'dependent on one-off performances in tests and examinations' (Ministry of Education and Employment 2012, p 63). This filtering possibly diminishes any potential benefits conveyed by these formative endeavours over the scholastic year.

My Professional and Academic Interest

Figure 2 My Professional and Academic Interest [edited and adapted by S. Bezzina] (Freepik #2 n.d.)

As a secondary school Physics teacher with 10 years' teaching experience, I am professionally committed to continuously assessing students' progress through both formative and summative strategies (Ministry of Education and Employment 2012). However, my discontent with the exclusive perception of assessment as a mechanism to measure, record and certify academic attainment, together with students' unresponsiveness to class-based informal assessment, led me to look at formative assessment from a different and innovative standpoint. Informed by my own professional and academic interest in games and gaming, the research presented here aims to rethink assessment practices in terms of the gameplay elements and principles of learning found in good game designs. Envisaged in response to the exam-oriented traditions dominant in the research scenario, and in line with the National Curriculum Framework, the research attempts to make assessment intrinsic to the learning process and attractive to our students (Pace 2003), whilst being 'supported by an e-Learning based approach' (Ministry of Education and Employment 2012, p 12).

The Theoretical and Conceptual Foundations

Figure 3 The Theoretical and Conceptual Foundations [edited and adapted by S. Bezzina] (Freepik #3 n.d.)

I start by drawing on seminal literature on the themes of play and games (Caillois 1961, Huizinga 1949, Suits 1978), in order to understand the similarities and differences that have shaped the relationship between the two terms and ultimately understand what constitutes play and games. Based on this work and on more recent literature (Avedon and Sutton-Smith 1971, Salen and Zimmerman 2004), I then propose my own definition of game. The relationship between play, game and cognitive development is then explained through the academic work of early and influential scholars in the field of psychology and education (Dewey 1910, Piaget 1962); specifically the ways in which play promotes cognitive development by acting as a more knowledgeable other (Vygotsky 1978) and how it nurtures a sense of motivation, engagement and flow (Csikszentmihályi 1990, Malone 1980). Through the literature and research conducted by current prominent academics in the field of game studies (Gee 2007, Squire 2003, Steinkuehler 2004), I analyse and critically evaluate the role of games in education, in particular how these manage to sustain and foster more complex competencies (Whitton 2009) and discuss the problems which might arise in a game-based approach (Begg et al 2005). Finally, I examine the strong relationship that exists between learning and assessment (Ramsden 1992) and critically reflect on the main themes running through current assessment practices (Black and Wiliam 1998, Carless 2007, Hounsell et al 2007, Nicol and MacFarlane-Dick 2006).

The Research Question, Design and Method

Video 2 Where am I coming from? [created by S. Bezzina]

The research question that I have identified seeks to empirically evaluate the game-informed assessment framework developed for the purpose of this dissertation, through its capacity to act as a catalyst for improvement in academic achievement. More specifically, the research question is the following:

Does a game-informed approach to assessment using digital assessment technologies improve the Physics test scores of Form 5 male students in Malta?

A quasi-experimental design spread over 10 weeks, using a control group (following a traditional assessment approach) and an experimental group (following a game-informed assessment approach), will be utilised in order to determine any statistically significant difference in academic achievement between the two groups, measured using a post-intervention test. The overall mean test scores will be adjusted for prior academic performance and statistically analysed using an analysis of covariance. Furthermore, the resulting test scores will be categorised according to the questions set, based on the different levels of Bloom's taxonomy of educational objectives for the cognitive domain (Bloom 1956), in order to establish whether statistically significant differences exist between the two groups at the individual cognitive levels. Although not immediately generalisable, due to the small sample size and the specific context in which the research occurs, the findings are meant to potentially inform and possibly transform current assessment practices.
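The analysis of covariance described above can be sketched in code. The following is a minimal illustration only, using hypothetical score arrays rather than the dissertation's actual instruments; the function name and data are invented for this sketch. It tests the group effect on post-test scores after adjusting for prior performance, by comparing the residual sums of squares of a full and a reduced linear model:

```python
import numpy as np

def ancova_group_effect(post, group, prior):
    """One-way ANCOVA: F-test for a group effect on post-test scores,
    adjusting for prior performance as a single covariate.

    Compares the full model (intercept + group + prior) against the
    reduced model (intercept + prior); the drop in residual sum of
    squares gives the F statistic for the group term.
    """
    post = np.asarray(post, dtype=float)
    X_full = np.column_stack([
        np.ones_like(post),              # intercept
        np.asarray(group, dtype=float),  # 0 = control, 1 = experimental
        np.asarray(prior, dtype=float),  # covariate: prior performance
    ])
    X_reduced = X_full[:, [0, 2]]        # same model without the group term

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, post, rcond=None)
        resid = post - X @ beta
        return resid @ resid

    rss_full, rss_reduced = rss(X_full), rss(X_reduced)
    df_num = 1                              # one parameter dropped (group)
    df_den = len(post) - X_full.shape[1]    # n - parameters in full model
    F = ((rss_reduced - rss_full) / df_num) / (rss_full / df_den)
    return F, df_num, df_den

# Illustrative (simulated) data: 20 students per group.
rng = np.random.default_rng(0)
prior = rng.normal(50, 10, 40)
group = np.repeat([0, 1], 20)
post = 5 + 0.8 * prior + 6 * group + rng.normal(0, 2, 40)

F, df1, df2 = ancova_group_effect(post, group, prior)
print(f"F({df1}, {df2}) = {F:.2f}")
```

The resulting F statistic would then be compared against the F distribution with (1, n − 3) degrees of freedom to judge significance; in practice a statistics package reports the p-value directly. The same routine could, in principle, be re-run on the score subsets categorised by Bloom's cognitive levels.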
