Prompt engineering is the art of asking the right questions to get the best output from Large Language Models. Large Language Models (LLMs) are Artificial Intelligence systems capable of understanding and generating human language by processing vast amounts of text data. One well-known example of an LLM is ChatGPT.
Whilst working with LLMs once required a deep understanding of data, modelling techniques and statistics, today they can be instructed in plain English as well as other languages.
Being ‘good’ at prompting AI does not require coding experience, but it does involve understanding how to communicate effectively with different models. Key skills such as creative problem-solving, resilience and adaptability play a crucial role in interacting with LLMs. These skills enable you to refine your prompts, adapt your approach and troubleshoot issues, making your engagement with Generative AI more productive and rewarding.
Google suggests that you should follow these guidelines when communicating with LLMs to generate relevant responses:
Clearly communicate what content or information is most important.
Structure the prompt: Start by defining its role, give context/input data, then provide the instruction.
Use specific, varied examples to help the model narrow its focus and generate more accurate results.
Use constraints to limit the model's output. This can help avoid meandering away from the instructions into inaccuracies.
Break down complex tasks into a sequence of simpler requests.
Ask the LLM to evaluate its own response, prompting it with questions such as "Rate your work on a scale of 1-10 for conciseness" or "Do you think this is correct?"
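The structuring advice above (define a role, supply context, then give the instruction, with examples and constraints) can be sketched as a small helper that assembles a prompt string in that order. This is an illustrative sketch, not code from Google or any particular library; the function name and section labels are hypothetical choices for the example.

```python
def build_prompt(role, context, instruction, examples=None, constraints=None):
    """Assemble a prompt in the suggested order: role, context/input data,
    then examples and constraints, ending with the instruction itself.

    (Hypothetical helper for illustration; the section labels are
    arbitrary, not required by any model.)
    """
    parts = [f"Role: {role}", f"Context: {context}"]
    if examples:
        parts.append("Examples:\n" + "\n".join(f"- {e}" for e in examples))
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    parts.append(f"Instruction: {instruction}")
    return "\n\n".join(parts)


prompt = build_prompt(
    role="You are a concise travel editor.",
    context="The reader is planning a three-day city break in Lisbon.",
    instruction="Suggest a one-paragraph itinerary for day one.",
    examples=["Day one in Porto: riverside walk, then the Livraria Lello."],
    constraints=["Keep the answer under 100 words.", "Mention one free activity."],
)
print(prompt)
```

The resulting string would then be sent to whichever model you are using; keeping the sections explicit makes it easy to swap the role or tighten the constraints without rewriting the whole prompt.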
It is important to note that the effectiveness of prompts depends on the model and the version that you are working with. LLMs also benefit from you being open-minded, so be creative with your requests, prompting the model to answer in different formats to acquire the most relevant output for your needs.
One way in which you can learn to get the most out of AI is through a series of pre-questioning. Asking the AI what it requires is often a good way to start! As you learn more about how each AI model works, you’ll learn to structure your requests accordingly.
Above: ChatGPT response
Above: Microsoft CoPilot response
For the example above, posing the question “Before we begin, what questions do you have for me about this prompt?” to ChatGPT-4o and Microsoft Copilot gives the AI an opportunity to clarify any ambiguities or gather necessary context before generating a response.
As you can see, the two models have different approaches to answering this initial question, allowing you to explore the ways different AI models communicate.
This approach allows you to guide the AI toward a more tailored and relevant outcome, ensuring that it addresses your specific needs effectively! Additionally, it helps refine the interaction by encouraging a collaborative dialogue, setting the stage for more accurate responses.
Microsoft (2025) Copilot model response screenshot
OpenAI (2025) ChatGPT-4o model response screenshot