Generative Artificial Intelligence (AI)

Artificial Intelligence is a field of research in computer science focused on developing and studying the simulation of human intelligence in machines. These machines are designed to mimic cognitive functions such as learning, reasoning, problem-solving, and decision-making.

Firstly, what are ethics?  

Ethics, as defined by the Oxford Dictionary, are: 

“Moral principles that govern a person's behaviour or the conducting of an activity” 

Overall, ethics is a philosophical discipline concerned with debating what is morally right or wrong.

What are ethics in AI?  

AI ethics refers to the governance of AI systems' behaviour and their alignment with human values. Applying ethical principles to AI helps ensure that systems are developed and used in ways that benefit society. 

It also helps ensure that AI models are built safely and fairly.

You can use the ‘Ask A Friend’ test to discover whether your use of AI is ethical or not.  

Primary Concerns with AI

  • Accountability 

    When AI systems make decisions, classifications or predictions that affect individuals, it can become challenging to hold anyone directly accountable for the outcomes. AI systems can automate complex functions that were previously handled by humans, who could be held responsible for possible negative outcomes. However, the distributed nature of AI design, development, and deployment makes it difficult to identify who is accountable for the results.

    This lack of accountability can leave individuals who experience harm or negative outcomes without protection or the ability to seek justice.

 

  • Reduced Skill 

    It is argued that over-reliance on AI could lead to a loss of skills. This dependency may undermine the development of essential cognitive abilities, such as critical thinking and creative problem-solving, which are integral to tackling challenges. In academic contexts, over-reliance on AI tools could contribute to instances of academic misconduct, where students use AI to complete assignments without fully engaging with the material. When using AI you must always cite your use to avoid being flagged for plagiarism. Learn to cite AI through the ‘referencing AI’ section of the guide. 

 

  • Bias & Discrimination 

    AI models are trained on a wide range of data from multiple sources, which can reflect existing societal structures and dynamics. Data-driven models can reproduce and reinforce the patterns of inequality and discrimination embedded in that data. Technologies are also shaped by their creators and may carry forward their creators’ own preconceptions and biases.  

 

  • Privacy 

    AI systems pose a threat to privacy through their design and development processes. Because the models are built to process vast amounts of data, their development frequently involves the use of personal data.  

    Sometimes this data is captured or extracted without proper consent, or is handled in a way that risks revealing personal information.  

    Once AI systems are deployed, users’ interactions with the model may be monitored, targeted or profiled without their knowledge or consent. That interaction data may also be used to influence or shape the further development of AI models without explicit permission, again infringing on users’ privacy. 

It is vital, as an AI model user, that you thoroughly read the terms and conditions as well as the privacy statements, so that you know what your data is being used for and who is using it. Please read the Personally Identifiable Information (PII) segment of the guide for further information concerning privacy and AI. 

 

  • Outcomes 

    Unreliable, unsafe or poor-quality AI models can harm both individuals and public welfare. These issues often stem from irresponsible data handling, negligent design and production processes, or questionable deployment, all of which can lead to flawed outcomes being implemented and distributed. Such outcomes not only risk public safety but also undermine trust in AI technologies designed for societal benefit, potentially damaging the sector's capacity to innovate.