
Generative Artificial Intelligence (AI)

Artificial Intelligence is a field of research in computer science focused on developing and studying the simulation of human intelligence in machines. These machines are designed to mimic cognitive functions such as learning, reasoning, problem-solving, and decision-making.

Firstly, what are ethics?  

Ethics, as defined by the Oxford Dictionary, are: 

“Moral principles that govern a person's behaviour or the conducting of an activity” 

Overall, the discipline of ethics is philosophical, debating what is morally right or wrong.

What are ethics in AI?  

AI ethics refers to the governance of AI systems’ behaviour in terms of its alignment with human values. Embedding ethics in AI helps ensure that systems are developed and used in ways that benefit society. 

It also helps ensure that AI models are built safely and fairly.

You can use the ‘Ask A Friend’ test to check whether your use of AI is ethical.  

Primary Concerns with AI

  • Accountability 

    When AI systems make decisions, classifications or predictions that affect individuals, it can become challenging to hold anyone directly accountable for the outcomes. AI systems can automate complex functions that were previously handled by humans, who could be held responsible for possible negative outcomes. However, the distributed nature of AI design, development, and deployment makes it difficult to identify who is accountable for the results.

    The lack of accountability can leave individuals who experience harm or negative outcomes without protection or the ability to seek justice.

 

  • Reduced Skill 

It is argued that reliance upon AI could lead to a loss of skills. This dependency may undermine the development of essential cognitive abilities, such as critical thinking and creative problem-solving, which are integral to tackling challenges. In academic contexts, over-reliance on AI tools could contribute to academic misconduct, where students use AI to complete assignments without fully engaging with the material. When using AI, you must always cite your use to avoid being flagged for plagiarism. Learn to cite AI through the ‘referencing AI’ section of the guide. 

 

  • Bias & Discrimination 

    AI models are trained on a wide range of data from multiple sources, which can encode existing societal structures and dynamics. Data-driven models can reproduce and reinforce these negative patterns of inequality and discrimination. Technologies shaped by their creators may also carry forward their creators’ own preconceptions and biases.  

 

  • Privacy 

    AI systems pose a threat to privacy through both their design and their development processes. Because these models are designed to process vast amounts of data, developing them frequently involves the use of personal data.  

    Sometimes this data is captured or extracted without proper consent, or is handled in a way that risks revealing personal information.  

    Once AI systems are deployed, users’ interactions with a model could be monitored, targeted or profiled without their knowledge or consent. These interactions may also be used to shape the further development of AI models without explicit permission, infringing on users’ privacy. 

It is vital, as a user of AI models, that you thoroughly read the terms and conditions and the privacy statements, so that you know what your data is being used for and who is using it. Please read the Personally Identifiable Information (PII) segment of the guide for further information concerning privacy and AI. 

 

  • Outcomes 

    Unreliable, unsafe or poor-quality AI models can harm both individuals and public welfare. These issues often stem from irresponsible data handling, negligent design and production processes, or questionable deployment, all of which can lead to the implementation and distribution of flawed outcomes. Such outcomes not only risk public safety but also undermine trust in AI technologies designed for societal benefit, potentially damaging the innovative sector. 

 

Environmental Impact of AI

 

One important consideration when using AI is its sustainability, particularly regarding its direct and indirect environmental impact and the overall costs of large-scale computational processes.

 

  • Why does Environmental Sustainability matter in AI?   

Training AI models at scale requires immense computational resources, which drives up consumption of both energy and water. This resource consumption raises a range of environmental, societal and ethical issues.  

Global data centres are predicted to consume around 1,000 terawatt-hours (TWh) of electricity by 2026, roughly equivalent to the entire electricity demand of Japan (IEA, 2024). Generative AI systems may use 33 times more energy than task-specific programs (Luccioni et al., 2024).  
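The scale of such figures can be made concrete with a back-of-envelope calculation. The sketch below uses the 33× multiplier quoted above; the per-query energy and daily query volume are hypothetical placeholders chosen purely for illustration, not measured values.

```python
# Illustrative back-of-envelope estimate. The 33x multiplier is the
# figure quoted above (Luccioni et al., 2024); the per-query energy
# and query volume below are hypothetical placeholders, not data.

task_specific_wh = 0.05            # hypothetical energy per query (Wh)
genai_wh = task_specific_wh * 33   # generative model at ~33x the energy

queries_per_day = 1_000_000        # hypothetical daily query volume
extra_kwh_per_day = (genai_wh - task_specific_wh) * queries_per_day / 1000

print(f"Generative query: {genai_wh:.2f} Wh vs {task_specific_wh} Wh")
print(f"Extra energy at {queries_per_day:,} queries/day: {extra_kwh_per_day:,.0f} kWh")
```

Even a small per-query difference compounds quickly at scale, which is why deployment-stage energy use matters alongside the better-known cost of training.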

 

  • Energy Use  

In many cases, the electricity powering AI systems comes from power plants burning fossil fuels. These plants may also use water for cooling and steam generation, indirectly adding to the water and carbon footprint of AI.  

Even where renewable energy is used, this both diverts electricity from other uses and requires the construction of additional power plants, with direct and indirect carbon and energy impacts (de Vries, 2023; Gupta, 2024).  

A striking example of the power demand of AI is the recently announced reopening of the Three Mile Island nuclear plant to power Microsoft’s data centres (Sherman, 2024). 

 

  • How does Artificial Intelligence consume water?  

The water footprint of AI operations is often overlooked in discussions surrounding sustainability, largely overshadowed by the focus on carbon emissions and electricity use (Saenko, 2023).   

AI’s water consumption is a genuine concern that merits attention and careful reflection. AI models are trained on servers housed in data centres, which generate significant heat during operation. Water is commonly used in cooling systems to maintain optimal operating temperatures for the servers; one common approach is to use water to lower the temperature of the air circulating around the equipment.   

Training AI models also consumes thousands of litres of water (Li et al., 2023). Many data centres are located in areas that are already experiencing water scarcity, potentially exacerbating local shortages (Gupta, 2024). 

As students, you should use AI mindfully, avoiding unnecessary or excessive use that adds to its environmental impact. However, this issue should be addressed through facts and solutions rather than fear. By fostering the development of energy-efficient AI technologies and practising responsible use, we can reduce AI’s environmental footprint while continuing to benefit from its advancements.  

 

For further information surrounding Environmental Sustainability and Sustainable Development, please take a look at the Sustainable Development Guide on Develop@Derby.

de Vries, A. (2023). The Growing Energy Footprint of Artificial Intelligence. Joule, 7(10). https://doi.org/10.1016/j.joule.2023.09.004  

Gupta, J. (2024). AI’s Excessive Water Consumption Threatens to Drown out Its Environmental Contributions. [online] Available at: https://sdgs.un.org/sites/default/files/2024-05/Gupta%2C%20et%20al._AIs%20excessive%20water%20consumption.pdf  

IEA (International Energy Agency) (2024). Electricity 2024: Analysis and Forecast to 2026.  

Leslie, D. (2019). Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529  

Li, P., Yang, J., Islam, M.A. and Ren, S. (2023). Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv (Cornell University). https://doi.org/10.48550/arXiv.2304.03271  

Luccioni, S., Jernite, Y. and Strubell, E. (2024). Power Hungry Processing: Watts Driving the Cost of AI Deployment? In: ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT ’24), 3–6 June 2024, Rio de Janeiro, Brazil.  

Saenko, K. (2023). A Computer Scientist Breaks down Generative AI’s Hefty Carbon Footprint. [online] Scientific American. Available at: https://www.scientificamerican.com/article/a-computer-scientist-breaks-down-generative-ais-hefty-carbon-footprint/