Webinar Series
Academic AI for every library

Understanding the role of AI in academic libraries
Evaluating the quality of generative AI output: Methods, metrics and best practices
March 26, 2025 | 11AM EDT / 3PM GMT
The use of generative AI is increasingly accepted in academic research and learning. As adoption expands, ensuring the quality and reliability of AI-generated content is a critical priority for the scholarly community.

Unlike traditional systems, Large Language Models (LLMs) produce variable outputs that challenge conventional quality assessment methods. How can system providers and institutions establish rigorous frameworks to assess accuracy, relevance, and trustworthiness in an academic context?
This webinar will explore key challenges and solutions in evaluating generative AI output, drawing from Clarivate’s ongoing research and product development. Topics include:
 
  • Core challenges in AI output evaluation: Addressing inconsistent output, the limitations of human testing and the need for scalable assessment methods. 
  • Key metrics for quality assessment: Establishing a structured model to ensure AI-generated content meets academic standards for reliability and relevance. 
  • Human vs. automated evaluation: Finding the right balance between human oversight and automated assessment in AI quality control.
  • Real-world applications: Insights from the Clarivate Academic AI quality evaluation methodology and lessons from our AI solutions.

Who should attend: Library professionals and academic leaders looking to navigate the AI evolution with confidence. If you’re focused on turning general-purpose AI into Academic AI that serves your community, this session is for you!

Register below and we'll send the details straight to your inbox.
Speakers

Christine Stohn

Senior Director, Strategy & Innovation, Clarivate

Marta Enciso

Director, Strategy & Innovation, Clarivate