Although learning gain as a concept is relatively easy to define, its measurement is potentially problematic.
Building on initial work presented at SRHE2016, in this follow-up study of two “traditional” universities
we sought to replicate the feasibility of using assessment grades as a measure of learning gain. Our
multi-level growth analyses of 3,537 students across 2 × 20 degree programmes indicated that on average
students showed improvement in standardised grades, although this was only significant for one university.
Furthermore, the variance explained differed between the levels: University 2 had more variance at
the departmental level and within students than University 1, whereas at University 1 variance was mainly
nested between students. This has important implications for the TEF when assessing learning gains at an
institutional level, as aggregate learning gains estimates can result in misleading estimates of students’
learning gains.
Longitudinal analysis of students’ learning gains in Higher Education across two UK institutions
1. Longitudinal analysis of students’ learning gains in Higher Education across two UK institutions
RHONA SHARPE
JEKATERINA ROGATEN
Marius Jugariu
Ceri Hitchings
Ian Scott
Bart Rienties
Ian Kinchin
Simon Lygo-Baker
2. Defining and measuring learning gains
“LA is the measurement, collection, analysis and
reporting of data about learners and their contexts, for
purposes of understanding and optimising learning and
the environments in which it occurs” (LAK 2011)
Learning gain is growth or change in knowledge, skills
and abilities over time that can be linked to the desired
learning outcomes or goals of the course
3. Pre-post standardised testing is resource intensive, and becomes
even more so if one wants to estimate learning gains across various
disciplines and a number of universities
Advantages:
Assessment data readily available
Widely recognized as appropriate measure of learning
Relatively free from self-reported biases
Allows a direct comparison of research findings with the results of other studies
4. Can assessment data be a measure of learning gain?
5. Present Study
University 1
1,990 undergraduate students
University 2
1,547 undergraduate students
20 degree programmes within each university
DV – average yearly grade
Year | University 1 M (SD) | University 2 M (SD)
1    | 60.65 (7.62)        | 63.75 (12.66)
2    | 61.31 (6.81)        | 65.64 (12.74)
3    | 63.32 (6.63)        | 64.12 (14.02)
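The growth analyses were run on standardised grades. As an illustrative sketch (the function name is mine, not the study’s), a raw grade is standardised against its cohort’s mean and SD from the table above:

```python
def standardise(grade, cohort_mean, cohort_sd):
    """Convert a raw yearly grade into a z-score within its cohort."""
    return (grade - cohort_mean) / cohort_sd

# University 1, Year 1 cohort: M = 60.65, SD = 7.62 (table above)
z = standardise(68.27, 60.65, 7.62)   # one SD above the cohort mean
```

Standardising within each yearly cohort is what makes grades comparable across years and universities with different marking distributions.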
8. Variance partitioning
Level                                    | University 1 | University 2
Variance at department level             | 13.1%        | 22%
Variance between students                | 59.8%        | 22%
Variance within students (between years) | 27.1%        | 56%
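The percentages above are variance partition coefficients: each level’s variance component divided by the total. A minimal sketch of that arithmetic (the component values here are hypothetical, chosen only to reproduce University 1’s split):

```python
def variance_partition(components):
    """Express each variance component as a percentage of total variance."""
    total = sum(components.values())
    return {level: 100 * value / total for level, value in components.items()}

# Hypothetical variance components reproducing University 1's split
university_1 = variance_partition(
    {"department": 13.1, "student": 59.8, "year": 27.1})
```

In a fitted multilevel model, the inputs would be the estimated variance components for the department, student and residual (within-student) levels.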
9. Summary of findings
Although both universities overall showed positive gains, there were
substantial differences in variance at the departmental level.
Aggregate learning gains estimates can result in misleading
estimates of students’ learning gains on a discipline or degree
level.
Multilevel modelling is a more accurate method in comparison
with simple linear models when estimating students’ learning
gains.
10. Possible implications
Support for subject level TEF
Guidance on where to focus interventions and resources
Visualisation could promote data informed learning design
decisions
Questions still to answer:
Does grade trajectory reflect students’ learning gains?
Can we make a meaningful comparison between universities?
What impact does grade trajectory have on students? Do they need to know
their own trajectory and how it compares to others?
11. Longitudinal analysis of students’ learning gains in Higher Education across two UK institutions
https://twitter.com/LearningGains
https://abclearninggains.com/
Editor’s notes
Although learning gain as a concept is relatively easy to define, its measurement is potentially problematic. The essence of the learning gains measurement debate is that the occurrence of learning is difficult to quantify, and often based on indirect measures.
The LAK definition emphasises the utility of learning gain research, that is, it should be useful for optimising learning environments. So, it’s important to see the data in the context of the department and institution, and to present it in ways that would inform discussion and actionable insights.
The second definition is how the ABC project defined learning gain when we were at SRHE last year introducing the multilevel modelling approach. The relevant point to note here (highlighted) is that assessments are designed to test achievement of the learning outcomes, and so should be a good proxy of learning gain. If so, we could use the assessment data that is routinely collected at institutions, rather than introducing new tests.
Assessment data is also discipline based, unlike the generic pre/post test measures of learning gain used in e.g. the HEFCE mixed-method study.
The aim of this research is to establish whether academic performance within modules is a valid proxy for estimating students’ learning gains, and whether there is variance in learning gains that is due to students having shared educational experiences at the level of a module.
“Can assessment data be used as an alternative to pre/post standardised testing?”
Advantages would be:
Less resource intensive
Assessment data readily available etc…
Level 1 – grade: repeated measures on students ACROSS 3 YEARS, which tell us about students’ learning trajectories (NOTE: GLM shouldn’t be used for repeated measures as they break the assumption of independence of observations).
Level 2 – student: between-student variation
Level 3 – module: between-module variation
The 3-level modelling allows us to see where the variance is coming from: between modules or between students.
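A three-level growth model of this kind can be sketched with statsmodels — a sketch on synthetic data with my own variable names, not the study’s actual model or data. Modules form the top-level groups, and students enter as a variance component estimated within each module group:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic data: 10 modules x 20 students x 3 years, with a true growth
# of 1.5 grade points per year plus module- and student-level intercepts.
rows = []
for module in range(10):
    module_effect = rng.normal(0, 3)
    for student in range(20):        # student index, nested within module
        student_effect = rng.normal(0, 5)
        for year in range(3):
            grade = (60 + 1.5 * year + module_effect + student_effect
                     + rng.normal(0, 4))
            rows.append({"module": module, "student": student,
                         "year": year, "grade": grade})
df = pd.DataFrame(rows)

# Three-level growth model: years (level 1) within students (level 2)
# within modules (level 3). Students are a variance component inside
# each module group.
model = smf.mixedlm("grade ~ year", df, groups="module",
                    re_formula="1",
                    vc_formula={"student": "0 + C(student)"})
result = model.fit()
print(result.fe_params)   # fixed-effect intercept and average yearly gain
```

The fitted variance components then split the total variance across the module, student and within-student levels, which is exactly the partitioning reported on the variance slide.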
In our previous analysis of data from the OU (presented here last year), covering Business and Law and STEM subjects, we found most of the variance was accounted for by differences between modules. That is, the specific module that a student is enrolled in accounts for a substantial portion of variance.
In both faculties, students who studied in modules with high initial student achievement showed lower learning gains than students who studied in modules with low initial student achievement.
The present study set out to see whether this same pattern would be seen in other traditional (campus based) universities.
On average students showed improvement in standardised grades, although this was only significant for University 1
University 1 variance was mainly nested between students
University 2 had more variance at the departmental level and within students
The negative learning gains seen in some students in University 2 do not imply that these students are losing knowledge or ability per se. However, they highlight the complexity of factors that have to be taken into account when using students’ academic performance as a proxy for learning gains, such as ‘assessment difficulty’ and ‘learning design’ (Rienties & Toetenel, 2016). We need to understand these if we are to use academic performance as a proxy for learning gains within Higher Education.
We set out to see if multilevel modelling of assessment data could be a valid and useful proxy for learning gain
Aggregate learning gains estimates can result in misleading estimates of students’ learning gains on a discipline or degree level. (or show no gain hiding a more complex picture – University 2)
Multilevel modelling is a more accurate method in comparison with simple linear models when estimating students’ learning gains. The simple models are not able to detect differences between modules when looking at the department and degree level performance, whereas multilevel modelling can.
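A toy illustration of why a single pooled slope can mislead (invented numbers, not the study’s data): two modules with opposite grade trajectories average out to no gain in a simple pooled fit, while module-level fits recover the real patterns.

```python
import numpy as np

years = np.array([0, 1, 2])
module_a = np.array([55.0, 60.0, 65.0])   # gains 5 points per year
module_b = np.array([70.0, 65.0, 60.0])   # loses 5 points per year

# Simple pooled model: one slope over all observations hides both trends
pooled_slope = np.polyfit(np.tile(years, 2),
                          np.concatenate([module_a, module_b]), 1)[0]

# Module-level fits recover the opposite trajectories
slope_a = np.polyfit(years, module_a, 1)[0]
slope_b = np.polyfit(years, module_b, 1)[0]
```

The pooled slope comes out flat even though neither module is flat, which is the sense in which aggregate estimates can hide a more complex picture.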
The HEFCE funded programme of which this project is a part wants to devise a sector wide APPROACH (not method) and characterisation of learning gain, and explore possible proxies.
The contribution of the ABC project is that if we are to use LG research for optimising learning (LAK definition) then we need to use a methodology that allows observation of differences in patterns of gain between departments, courses or modules.
We have seen that there are subject level differences and so our research supports the introduction of a subject level TEF.
Ideally, we’d like to see LG not just used to judge institutions but integrated into curriculum design. Designing for learning is becoming more learner centred – moving from ‘What am I going to teach?’ to ‘How are students going to learn?’. A LG methodology such as this could provide data to support course teams in making evidence-based decisions (complementing student satisfaction data). For example, do similar modules with different types of learning activities produce different LG trajectories? We know from the field of Learning Design that visualisation is important for course teams making decisions, and the multilevel modelling approach provides data visualisation.
Some questions still to answer….