
Using generative AI at university

Can I use generative AI at university?

La Trobe University recognises that generative AI tools can be beneficial for study, assessments, and research. However, each school, subject, or even assessment task may have different conditions for the use of generative AI. It is crucial that you check with your lecturers or subject coordinators before using a generative AI tool, to avoid academic misconduct.

Start by checking the guidance for your assessment and, if in doubt, check with your subject coordinator or lecturer to find out what is permitted. If you are permitted to use generative AI tools, you must reference or acknowledge their use.

In general, submitting an assessment produced by generative AI as your own work is academic misconduct. In a few disciplines you may occasionally be asked to use a generative AI tool as part of a particular assessment. In these situations, your lecturer or subject coordinator will provide clear instructions and you must reference or acknowledge the use of the tool.

No matter how you plan to use these tools, it is important to understand the ethical considerations of using them and apply a critical lens to any content generated.
 

Checklist of key considerations:

  • Am I permitted to use generative AI by my school, course, or subject coordinator?
  • Am I missing an opportunity to learn an essential skill or gain essential knowledge?
  • Will I be including the content in my final assessment? 
  • Will I be breaching copyright by uploading someone else’s work as a prompt? 
  • Can I verify the content to ensure it is accurate and credible? 
  • Can I verify the generated content has not been stolen from someone else? 
  • Have I considered the biases which may be present in the content?

 

Academic integrity

Academic integrity refers to the shared values of the academic community, including honesty, fairness, and responsibility. It is important to understand what academic integrity is and why it matters in your studies.

Breaches of academic integrity can result in serious penalties, but help is available if you are involved in a case of suspected academic misconduct.

Generative AI is still an emerging technology, so it is good practice to question whether using it is appropriate in your study or research.

Examples of breaches of academic integrity in relation to generative AI could include:

  • Prompting a generative AI tool to write an entire response, or sections of a response, and claiming it as your own work.
  • Using an image produced by a generative AI tool without acknowledging its use.

As this is an emerging area and the rules have not yet been fully established, other situations may also be considered a breach of academic integrity.
 

Copyright, ownership and authorship

Copyright law is designed to incentivise creative human expression, so the notion of allowing copyright protection for works created by AI challenges this principle. As of 2024, copyright ownership of AI outputs is not directly addressed or defined in the Australian Copyright Act 1968. However, because the Act's current definitions protect only human (i.e. a 'person's') expression, an AI output may not be protected by copyright.

A generative AI output is typically created through a combination of user prompts and pre-existing training material. If the AI technology is responsible for creating the output, the generated material may not be considered a creation of the platform's user. Use of a platform's output may still be governed by its terms and conditions, which you should always check.
 

Privacy and online safety

Privacy and online safety are as important to consider when using generative AI tools as in any other online activity. Be cautious about entering personal data or intellectual property into generative AI tools, because many of them collect and store the information you provide. This data could be used to further train the platform or be sold to third parties.
 

Accuracy and reliability

Generative AI tools are designed to answer prompts based on an algorithm and the data they were trained on. A tool will attempt to provide the response that is most probable given the data it has access to. When a tool confidently produces a response that is wrong or fabricated, this is referred to as a 'hallucination'. One type of hallucination encountered in academic settings is false citations: the tool may give you a reference list that looks correct, but some of the articles may not exist. Using a strategy such as the SIFT method can help you determine whether the information generated is credible.
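
To see why the 'most probable' answer is not always the correct answer, here is a minimal toy sketch in Python. The words and probability values are entirely hypothetical, invented for illustration; real models work over vastly larger vocabularies, but the principle is the same:

    import random

    # Toy next-word probabilities after the prompt "The capital of Australia is"
    # (hypothetical numbers for illustration only, not from any real model).
    next_word_probs = {
        "Canberra": 0.55,   # correct, and the most probable option here
        "Sydney": 0.35,     # plausible but wrong: a common misconception
        "Melbourne": 0.10,  # also wrong
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())

    # The tool samples from this distribution with no fact-checking step,
    # so a wrong but statistically likely answer still appears ~45% of the time.
    print(random.choices(words, weights=weights, k=1)[0])

The point of the sketch is that the output reflects what was statistically common in the training data, not what has been verified as true, which is why you should always check generated content.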

Content created by generative AI has been shown to perpetuate biases and echo racial and gender stereotypes. These biases can be deliberately programmed in, or can emerge organically from inherent biases in the training datasets or in the way the platform has interpreted the data.