This document and the resources have been adapted from the original created by James Barnett and Lisa Bird, Libraries and Learning Resources at the University of Birmingham, to help researchers make informed decisions when choosing AI tools for their research. It is licensed under CC BY 4.0: creativecommons.org/licenses/by/4.0
The UK Research Integrity Office (UKRIO) has recently launched new guidance, Embracing AI with Integrity: a practical guide for researchers.
[Image by Rosy / Bad Homburg / Germany from Pixabay, accessed 19.09.2025]
Recent advances in Generative AI are encouraging researchers to explore its role throughout the research lifecycle. At the University, Microsoft Copilot is available within the secure digital infrastructure, helping to safeguard your data. Importantly, any data entered into this service is not used to train foundation models.
Researchers have access to a wide range of AI tools, but licensing terms, especially in free or freemium versions, can pose risks. Always review terms and conditions before using any tool in your research. Two sources of support are available to help researchers and libraries navigate this landscape responsibly: the Evaluative Framework for AI tools (see below), which outlines the questions a researcher should consider before using an AI tool, and the AI Tool Licensing Guide section on the left, which assists with reviewing terms and conditions once a tool has been identified.
The Evaluative Framework for AI tools supports researchers in making informed choices. It offers key questions to consider—from assessing the tool’s relevance to ensuring compliance with institutional and stakeholder policies. It is also available as a one-page Word document (DOCX - 25 KB).
How does this tool compare to alternatives?
Is it the best tool for your purpose?
How does it impact skill acquisition?
Tip: use the TAP and TASTE methods for effective prompting.*
Are your prompts or inputs added to the service?
Do you provide the vendor with rights to your inputs?
Do you need to be the rights owner of any inputs?
Do you have ethical approval to input your research data?
How useful is the output?
Are the outputs accurate?
Are there any limits on what you can do with outputs?
Does use of the tool comply with funder policies?
Does use of the tool comply with publisher policies?
Does it comply with institutional policies?
Does use of the tool raise any ethical concerns?
Is your choice of tool the most environmentally sustainable option?
Is the underlying data suitable for your needs?
Is it transparent what the model has been trained on?
What bias may be in the corpus?
Are the date periods for the corpus suitable for your needs?
Do the terms and conditions raise any issues regarding:
*To understand what TAP and TASTE mean, read this useful article by A.J. Wallbank: Prompt engineering as academic skill: a model for effective ChatGPT interactions.
Explore the University of Derby's comprehensive Generative AI guide, which includes a Code of Practice. For research-specific guidance, visit the Policy Hub for resources on responsible and ethical AI use: Guidance on the Acceptable and Responsible use of Generative Artificial Intelligence in Research.