Conversations about students using artificial intelligence to cheat on their exams are masking wider discussions about how to improve assessment, a leading professor has argued.
Phillip Dawson, co-director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia, argued that "validity matters more than cheating," adding that "cheating and AI have really taken over the assessment debate."
Speaking at the conference of the U.K.'s Quality Assurance Agency, he said, "Cheating and all that matters. But assessing what we mean to assess is the thing that matters the most. That's really what validity is … We need to address it, but cheating is not necessarily the most useful frame."
Dawson was speaking shortly after the publication of a survey conducted by the Higher Education Policy Institute, which found that 88 percent of U.K. undergraduates said they had used AI tools in some form when completing assessments.
But the HEPI report argued that universities should "adopt a nuanced policy which reflects the fact that student use of AI is inevitable," recognizing that chatbots and other tools "can genuinely aid learning and productivity."
Dawson agreed, arguing that "assessment needs to change … in a world where AI can do the things that we used to assess."
Referencing, or citing sources, may be a good example of something that can be offloaded to AI, he said. "I don't know how to do referencing by hand, and I don't care … We need to take that same sort of lens to what we do now and really be honest with ourselves: What's busywork? Can we allow students to use AI for their busywork to do the cognitive offloading? Let's not allow them to do it for what's intrinsic, though."
Introducing what he called "discursive" measures to limit AI use, where lecturers simply issue instructions on how AI may or may not be used, was a "fantasy land," he said. Instead, he argued that "structural changes" to assessments were needed.
"Discursive changes are not the way to go. You can't address this problem of AI purely through talk. You need action. You need structural changes to assessment [and not just a] traffic light system that tells students, 'This is an orange task, so you can use AI to edit but not to write.'
"We have no way of stopping people from using AI if we aren't in some way supervising them; we need to accept that. We can't pretend some sort of guidance to students is going to be effective at securing assessments. Because if you aren't supervising, you can't be sure how AI was or wasn't used."
He said there are three potential outcomes for grades as AI develops. The first two are grade inflation, where students will be able to do "so much more against our current standards, so things are just going to grow and grow," and norm referencing, where students are graded on how they perform compared to other students.
The final option, which he said was preferable, was "standards inflation," where "we just have to keep raising the standards over time, because what AI plus a student can do gets better and better."
Overall, the impact of AI on assessments is fundamental, he said, adding, "The times of assessing what people know are gone."