Responsible Use of AI in Research Workflows

Monday, December 15

Join this free webinar on December 15 | 11:00 AM IST

Artificial Intelligence (AI) is rapidly transforming how researchers discover, analyse, and communicate knowledge. While AI tools offer unprecedented efficiency, from enhancing literature reviews to supporting data analysis and writing, they also introduce new ethical, methodological, and integrity challenges.

This webinar provides a clear, practical framework for using AI responsibly across the research lifecycle. We will explore how to evaluate AI-generated content, prevent inaccuracies and hallucinations, ensure transparency in AI-assisted writing, safeguard sensitive data, and recognise potential bias in AI outputs.

Participants will also gain insights into global guidelines from COPE and major publishers, alongside Clarivate’s innovations in the responsible AI space. Designed for faculty, researchers, librarians, and students, the session aims to equip institutions with the awareness needed to build a culture of ethical, accountable, and trustworthy AI use in academia.

Agenda:
  • Introduction to Responsible AI in Research: Setting the context for ethical, transparent, and accountable AI adoption.
  • Understanding AI Capabilities and Limitations: Clarifying what AI can and cannot do across discovery, writing, and analysis.
  • Ensuring Research Integrity in AI-Assisted Workflows: Addressing risks of hallucination, inaccuracies, and unverifiable claims.
  • Data Privacy, Security, and Sensitivity in AI Tools: How to handle confidential data and avoid unintended disclosure.
  • Ethical Literature Search and Review Practices: Using AI to streamline reviews without compromising methodological rigor.
  • AI-Assisted Writing: Transparency and Attribution: What to disclose, how to cite, and maintaining authorship accountability.
  • Avoiding Bias and Ensuring Fairness in AI Outputs: Identifying algorithmic bias and evaluating outputs critically.
  • Using AI for Research Design and Analysis Responsibly: Guardrails for hypothesis development, statistical suggestions, and coding.
  • Evaluating AI Tools for Trustworthiness: Key criteria: provenance, accuracy checks, audit trails, and reproducibility.
  • Institutional Policies and Global Guidelines on AI Use: Overview of policies from UGC, COPE, publishers, and research organizations.
  • Building a Responsible AI Culture in Academic Institutions: Training, governance, and role of libraries in ethical AI adoption.

Join us to learn how to integrate AI effectively without compromising research integrity.

Register below and we'll send the details.

Speaker's Bio

Dr. Subhasree Nag
Senior Business Solution Consultant
Clarivate

Dr. Subhasree Nag is a senior business solution consultant for the scholarly research and life sciences division at Clarivate.

She completed her PhD at Texas Tech University Health Sciences Center, USA, and her postdoctoral training at Pacific Northwest National Laboratory, USA.

A pharmacologist and toxicologist by training, she has more than 8 years of research experience in anticancer drug discovery and pharmacokinetics, with 25 peer-reviewed publications and more than 1,500 citations.

At Clarivate, she promotes Clarivate's scholarly research, drug discovery, and intellectual property (IP) solutions, conducts author and research capacity workshops, and delivers bespoke consulting projects.

She was a co-author of the UGC Good Academic Research Practices guidance document released in 2020. She also writes for Clarivate's pharma industry publication, BioWorld, contributing articles on early-stage discovery and preclinical studies.