Pearson turns artificial intelligence attention to essay marking

New tool will learn how academics grade papers and replicate their approach

August 6, 2018

Having used IBM’s Watson technology to create a virtual learning assistant, Pearson has played a significant role in driving forward the development of artificial intelligence in higher education.

Now the international education company is poised to take the next step by developing an AI tool that can grade university essays.

Pearson’s tool, which is currently being developed for piloting in US higher education, is not the first product of its type; similar platforms have been developed at the University of Manchester and at the University of California, Berkeley.

But Pearson’s global reach and its experience of using Watson – which can analyse huge amounts of text and data and use this to answer complex questions in natural language – could mean that its new tool represents a significant step forward.

Milena Marinova, who has been hired by Pearson from chipmaker Intel to lead its work on AI, told Times Higher Education that the new tool would be able to mark essays in a more sophisticated fashion than previous grading assistants.

“For any automation or assisted decision-making, abstraction is very difficult, but the new product is going to allow the professor to train the system,” explained Ms Marinova, Pearson’s senior vice-president for AI products and solutions. “So, in the first 10 essays, for example, the professor can teach the way they grade to the system, until there is enough accuracy, and confidence, that the algorithm can work to the level the professor has approved.

“They can dynamically change the product, so [that] it grades the essay [in] the way that particular professor would. That’s a key difference with our new advanced algorithm; it’s pretty unique. Previously, when we talked about AI products, they were prescriptive, pre-designed, one-size-fits-all types of program.”
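Pearson has not published how its algorithm works, so the following is purely an illustrative sketch of the workflow Ms Marinova describes: the professor grades the first few essays, the system learns from those examples, and it only grades on its own once its confidence clears a bar the professor has approved, deferring to the professor otherwise. The nearest-example approach, the `TrainableGrader` name and the cosine-similarity confidence measure are all assumptions for illustration, not Pearson's method.

```python
from collections import Counter
import math

def vectorize(text):
    """Represent an essay as a bag-of-words Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors (0.0 to 1.0)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TrainableGrader:
    """Learns from professor-graded examples; abstains below a confidence bar."""

    def __init__(self, confidence=0.5):
        self.examples = []            # (vector, grade) pairs taught by the professor
        self.confidence = confidence  # threshold the professor has approved

    def teach(self, essay, grade):
        """The professor grades an essay; the system records the example."""
        self.examples.append((vectorize(essay), grade))

    def grade(self, essay):
        """Return (grade, similarity) from the most similar taught example,
        or (None, similarity) to defer the essay back to the professor."""
        v = vectorize(essay)
        best_grade, best_sim = None, 0.0
        for ex_vec, ex_grade in self.examples:
            s = cosine(v, ex_vec)
            if s > best_sim:
                best_grade, best_sim = ex_grade, s
        if best_sim >= self.confidence:
            return best_grade, best_sim
        return None, best_sim  # not confident enough: the professor grades it
```

In this toy version, "training the system" is just accumulating graded examples, and the abstention branch captures the idea that the algorithm only takes over "until there is enough accuracy, and confidence". A production system would use far richer features and a real model, but the professor-in-the-loop shape would be the same.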

Ms Marinova emphasised that the tool was not designed to put academics out of a job. Instead, it would “free up the professor’s time to focus on more complex, critical and important tasks”, she said.

It would also allow academics to set more writing assignments for students who would then, in turn, get more feedback and develop better writing skills.

Pearson is also developing feedback capabilities for marking complex mathematical work. “It will allow the AI to do what a human grader can: recognise that a minor mistake can lead to the wrong answer, but award [the student] credit for the right steps they took,” Ms Marinova said.

Tim Bozik, head of product development at Pearson, added that the application of AI would give students real-time feedback that is tailored to them when instructors aren’t around.

“More teaching and learning involving digital experience is going to create more data and the opportunity for more meaningful experience between students and teachers,” he said.


Reader's comments (1)

"Ms Marinova emphasised that the tool was not designed to put academics out of a job. Instead, it would “free up the professor’s time to focus on more complex, critical and important tasks”, she said." Ooh, what a give-away. So assessing student performance doesn't make the list of 'complex, critical and important tasks' for an educator?