Artificial Intelligence
AI Concerns

OVERVIEW

Generative AI is a powerful tool, but it has limitations and ethical issues you should know about. It can make mistakes, show bias in what it creates, and struggle to understand the full context of a request. On the ethics side, there is a risk of people misusing AI-generated material or invading privacy, and the AI may unintentionally perpetuate biases present in its training data. It is therefore crucial to be aware of these issues and to take care when using generative AI in your studies, so that it is used responsibly and ethically.

LIMITATIONS AND BIASES

Generative AI is susceptible to generating "hallucinations", where the model produces erroneous or misleading information, including fabricated citations for articles and books, and even references to people, that do not exist.

It is important to note that, despite the authoritative tone of the output, there is no guarantee of correctness. Generative AI has no awareness of the factual context of a query; instead, it uses your prompt to generate human-like content. The model does not self-correct for accuracy and operates more like a "let's pretend" game than a reliable search tool.

Here are a few ways in which AI might exhibit hallucinations:

  • Text generation: asking for information about Mars as the "third" planet from the Sun may yield extensive, confident content about the planet, even though Mars is actually the "fourth" planet in our Solar System. ChatGPT generates content word by word, aiming to produce output that seems true, which contributes to these "hallucinations".
  • Image Generation: An AI model trained on pictures of animals might generate a fantastical creature that doesn't exist in reality.

It is crucial to recognise that these instances of "hallucination" are not failures of the AI tool but are inherent to its generative nature. Generative AI is not a research tool; it is a system designed to generate human-like output, irrespective of its accuracy.

If your prompts aren’t specific enough, the GenAI can make assumptions that lead to confirmation bias.

To avoid introducing bias or influencing the GenAI model toward a specific response, it is advisable to formulate questions in a neutral manner or present information with a neutral tone.

Furthermore, consider using the same tone in your prompt that you desire the GenAI response to reflect. For instance, if your prompt employs harsh language, the tool is likely to respond in an aggressive manner, assuming it is following your lead.

Another strategy to address this issue is to ask the GenAI to guide you through the steps it took, which can help identify logical errors and unfounded assumptions.

GenAI tools undergo training using an extensive array of texts and datasets. However, despite the diversity of these materials, they inherently carry the cultural and linguistic biases of predominant societal and historical groups.

Consequently, when GenAI generates new content, it may exhibit a tendency towards certain viewpoints, a phenomenon referred to as "Algorithmic Bias." The model produces material based on learned patterns, lacking contextual awareness. It is essential to exercise caution regarding potential biases, such as alignment with commercial objectives or societal prejudices.

For instance, GenAI might unintentionally reinforce stereotypes, such as associating "Doctor" with "Male" or "Nurse" with "Female", reflecting historical patterns in the texts it was trained on.

To navigate these potential biases effectively, employ critical thinking skills when analysing and contextualising the outputs generated by AI. This process should involve cross-verifying information obtained from the outputs and forming an independently informed perspective.

CHALLENGES AND ETHICAL IMPLICATIONS

ACADEMIC INTEGRITY

It is essential to critically evaluate content from generative AI tools due to their potential unreliability: they can produce seemingly credible yet unsupported information without proper source citations. How can we do this?

  • Question the content that has been generated.
  • Decide how you will verify the information that generative AI has created.
  • Use an evaluation method, such as the CRAAP or SIFT method.
  • Apply what has been generated to inspire your own pursuit of knowledge and creativity.

View GenAI as a starting point and summary of a topic before using it to start researching more widely.

Crediting creators and attributing content is a core part of both academic integrity and of being a digital citizen more broadly. You must always check the credibility of your information and source, as any material you use which turns out to be inaccurate or false may lead to findings of academic misconduct.

Always check with your teacher whether the use of generative AI is allowed in your unit and assignments.

If you use generative AI in any element of your work, the person who marks it needs to know what is yours and what comes from elsewhere.

For instructions on how to reference generative AI use, please refer to our Academic Honesty Guide/Artificial Intelligence.

USING AI IN ETHICAL WAYS

PRIVACY & ACCESSIBILITY

Like most online and digital tools, generative AI has the ability to collect and store data about its users. When signing up, users may unknowingly allow companies to collect this data if the terms and conditions are not read and understood properly. This data can then be used to further train and refine the models or, in some instances, sold to the highest bidder.

In April 2023, Italy became the first country to block ChatGPT due to privacy related concerns. The country’s data protection authority said that there was no legal basis for the collection and storage of personal data used to train ChatGPT. The authority also raised ethical concerns around the tool’s inability to determine a user’s age, meaning minors may be exposed to age-inappropriate responses. This example highlights wider issues relating to what data is being collected, by whom, and how it is applied in AI.

There are two main concerns around the accessibility of ChatGPT.

1. The first concern is the lack of availability of the tool in some countries due to government regulations, censorship, or other restrictions on the internet.

2. The second concern relates to broader issues of access and equity in terms of the uneven distribution of internet availability, cost and speed. Most tools have a free basic account that can be used by anyone, but these usually come with restrictions, such as limits on the number of uses within a time frame. Many tools now charge for access to the platform or for premium features. This can create barriers for those who are unable to afford these costs.

AI Unleashed: The Ethics of Artificial Intelligence

AI and the impact on Artists
