Generative Artificial Intelligence (AI)

Testing AI against your own Knowledge 

A good way to test an AI's knowledge is to ask questions about a reasonably complex topic that you know well. You may notice that while an AI model is mostly correct, parts of its answer may be inaccurate or missing key information. Providing context to a Large Language Model (LLM) will give a more personalised and accurate outcome (Ball, 2025).

Here are some text-based AI tools that you can use for this activity: ChatGPT, Microsoft Copilot, or another LLM of your choice (for further examples, please refer to the Generative Artificial Intelligence guide, which covers an array of LLMs you can use).

Example: Asking ChatGPT for board game rules. 

[Image: ChatGPT listing out common rules of Monopoly and Uno]

For well-known board games such as Monopoly and Uno, ChatGPT was able to identify the core rules of each game; however, it omitted several less common rules and 'unofficial rules', such as stacking +2 or +4 cards in Uno.

For newer games such as the word-guessing game Codenames, ChatGPT explained the core game but missed a few key mechanics and did not explain the rules for creating clues (a vital part of the game). If this were an academic topic used as the sole basis of research for an assignment, those gaps in knowledge would have resulted in a lower mark.

When tested on a lesser-known game released in 2022, Feed the Kraken, ChatGPT 3.5 (without web-searching capabilities) was unable to provide accurate information. Instead, it generated a fictional game based on the theme it inferred. It is crucial to note that ChatGPT was trained on data only up to September 2021. Prior to September 2023, without the ability to search the internet, ChatGPT's model could not access information beyond its training cut-off date, leading to 'hallucinations' or fabricated responses. This limitation highlights the importance of recognising that, for newer or niche topics, the model is more likely to provide biased or inaccurate information based on its training data or fail to source the information altogether. 

With any information gained through AI, it is important to question its validity. Check the information elsewhere before accepting it as true. If you need support in finding and evaluating information, please see our critical thinking guides.

Ball, J. (2025). What Is AI? Why Are Human Skills Easier for a Computer to Learn than Animal Skills? How Can We Build Talking Machines in Any Language within Five Years?