
Grade inflation or grade improvement?

08 December 2025 | news

Chris Whelan, Chief Executive
Universities New Zealand - Te Pokai Tara

In August, The New Zealand Initiative published ‘Amazing Grades – Grade Inflation at New Zealand Universities’.  Media paid attention because it suggested universities were making study easier – letting more students graduate and awarding higher grades in response to funding pressures and pressure on academics. 

The Initiative released a slightly revised version of the report in November.  Again, it made the news. 

The report, ‘Amazing Grades’, is in two parts.  The first part is the result of the author using the Official Information Act to ask the eight universities for longitudinal information on grades awarded.  The results are interesting and new.  They show a marked increase in students passing courses and in students gaining A-grades.  These numbers deserve analysis and an explanation.    

The second part tries to explain the findings.  This is where I have concerns.  It assumes that higher pass rates and higher grades are bad things.  It identifies just four possible factors that might account for better learning outcomes: (1) students are better prepared at school; (2) there are more female students at university, who generally do better than male students; (3) there is more university income and expenditure on teaching and learning; and (4) improved staff-to-student ratios. 

Speaking to anyone working in university quality assurance, learning and teaching, or senior leadership would have uncovered more than a few other factors which might also have been considered. 

Key among these are decades of deliberate work to improve pedagogy (the method and practice of teaching), to lift the quality of teaching and learning and to put more support around students. 

I’ve been in and out of the university sector at various times during my career and I’ve seen some of these things firsthand – initiatives which started 10, 20, even 30 years ago to cumulatively and deliberately lift student achievement and performance. 

I worked for the University of Canterbury for three years after the earthquakes (2011 to 2014).  I had a variety of roles there but was involved in a range of university projects including the following: 

  • Requiring anyone wanting promotion to senior lecturer (or beyond) to have completed training in teaching and to be demonstrating success as a teacher.  [That wasn't a requirement at Canterbury before that time]. 
  • A university learning and teaching strategy that focussed on better pedagogy and assessment, supported with investment in modern learning environments and learning and teaching systems.  [At the time, teaching was mostly a lecturer in front of a large classroom, plus tutorials]. 
  • A requirement that all lectures be recorded and available for any and all students to access anywhere and at any time [this was a post-quake lesson and significantly improved student participation at the university and ability to succeed]. 
  • An academic teaching unit that employed basic analytics to better understand where there were inconsistencies in learner performance and assessment.  The unit tried to get support and moderation wherever they identified potential quality issues with teaching and teachers.  
  • An enabling technology environment.  That was the period when we moved everything online – making the entire library searchable and accessible online, digitising physical resources, improving access and search capabilities, and extending access nationally and internationally.  At the same time, we moved everything to fibre speeds, made it standard for students to have their own devices, and put Wi-Fi everywhere.  This created the infrastructure for technology-enhanced and technology-enabled learning that has been extensively developed and utilised over the past decade. 

The University of Canterbury is just one university.  Other universities have taken different paths, but all have been actively focussed on continuous improvement for as long as I’ve been associated with them.  Since 2014 I’ve seen the following introduced across the university sector at slightly different times and in slightly different ways:  

  • Enterprise information technology platforms aimed at targeting support for students, embedding better pedagogy in teaching and learning, and creating a wider range of learning and assessment options.  These include student management systems, learning management systems, curriculum management systems, and enterprise analytics and insights that leverage them all for better outcomes. 
  • A deliberate focus on creating more equitable and inclusive learning environments for students traditionally underrepresented at university and traditionally less likely to complete studies. 
  • Learner success initiatives that use the analytics and smart investment in time and money to target student support where it is going to make a difference. 
  • Quality assurance processes that include a sample of assessments undergoing independent marking by people outside the student’s university.  This always happens for postgraduate research, but also happens across at least a fifth of undergraduate programmes, mainly those accredited by a professional body and leading to professional registration. 
  • Graduating year reviews that formally assess new or substantially amended qualifications 2-3 years after the first cohort of graduates emerge and which verify with the graduates themselves, and their employers, that they have the skills and capabilities expected. 
  • Programme/qualification reviews every 7-10 years that look at how well qualifications and their graduates are meeting employer needs. 

Thinking even further back in my career, I was the hapless, well-meaning student representative on the University of Canterbury Council in 1990. 

I recall one discussion at Council based on a report from what was then the College of Arts.  That was my own college, so I paid it particular attention. 

The Dean of Arts reported that Arts students were missing out on postgraduate scholarships and some career opportunities because the marking standards used in the College of Arts did not line up with the marking standards in the College of Engineering and the College of Sciences. 

In Engineering and the Sciences, student assessment generally involved calculations and solving problems where there was just one correct answer.  Students were getting all answers correct and were being granted the A+ grades that got them excellent jobs and the best scholarships. 

By contrast, the unofficial standard for getting an A-grade in the College of Arts was work that was as good as the academic could have done themselves and an A+ grade was something better than the academic might have done. 

Unsurprisingly, few academics rated their students as being as good or better than themselves, so the normal grade for an arts student at that time was a B+ or A-.  

I remember the Dean of Arts announcing a new initiative to progressively fix this problem, so that judgements applied to the awarding of higher grades would be more objective, and employers and scholarship schemes would have a more accurate view of the abilities of University of Canterbury arts graduates. 

Although universities like to see themselves as eternal, they are never unchanging.  

Many decades of initiatives such as these have been aimed at educating better with a deliberate goal of lifting student achievement and performance.  

Our graduate outcomes continue to be good by every measure we can identify.  Just 1.5% of graduates are unemployed on average. Under-employment (graduates not in degree level employment) is broadly in line with the OECD at 2.98% for bachelor’s degree holders.  Graduates continue to earn a significant premium over non-graduates over their working lives. 

I can’t conclusively prove that the trend to higher grades is the product of the decades of improvements across the teaching and learning environment.  But given the enormous time and investment in these improvements, I’d really like to see it at least considered in this sort of research. 

Here’s an idea for a research project.  Take 30-40 assignments from twenty years ago and 30-40 assignments from now – same subject, same level of study, all reformatted to remove anything that might date them, on a topic that requires the assessor to judge critical thinking and analysis.  Have them marked by a few academics at overseas universities and compare the grades given to the assignments written then versus now. 

That would bring some evidence to this discussion.