“Good accountability and governance are key to maintaining your professional responsibilities, as well as protect clients’ privacy and data security.”

– 2025 Solicitor’s Guide to Responsible Use of Artificial Intelligence

 

To identify and address both the benefits and concerns of generative AI tools and large language models (LLMs), the Law Society of New South Wales recently published a Solicitor’s Guide to Responsible Use of Artificial Intelligence.

 

Overview

The guide presents readers with information about several pieces of legislation impacted by the use of generative AI (“gen AI”), the ethical and legal obligations expected of legal practitioners, examples of how gen AI can be applied properly in the workplace, and ideas to consider either before or after adopting AI tools.

 

Definition

The guide defines generative AI as a learning model that produces outputs when given a prompt, with those outputs “based on probabilistic modelling applied to a set of data” – meaning the content generated reflects the data the model was trained on.

 

How can it help?

When used properly, the guide suggests, LLMs and gen AI can assist firms and solicitors – namely with repetitive, high-volume tasks, such as recording and writing up meeting minutes, drafting correspondence, and streamlining client and matter intake processes, among several other useful applications.

 

Downsides and risks

Despite having identified multiple helpful and beneficial uses, the guide warns practitioners of the numerous current downsides and risks that arise from using such technology – including how it affects several obligations in the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (“Conduct Rules”).

Accuracy of information

In the pursuit of creating conversational-style outputs, ChatGPT has been known to “hallucinate” and generate fictitious results.

Contextual Relevance

Generative AI systems produce text without any understanding of relevance or context (e.g. a client and their unique circumstances).

Biases

Because gen AI systems generate results from incomplete or finite data sets, their outputs can exhibit identifiable biases.

Intellectual Property Theft

Some of the data or information used in the creation of text outputs may be drawn from copyrighted material.

Privacy and Data Security

The terms and conditions of gen AI tools vary, including in how user data is shared with third parties and whether or not user consent is required. Furthermore, not all gen AI systems properly encrypt or destroy sensitive information, meaning confidential client information could potentially be exposed to the public.

 

Conduct Rules risks

When considering the use of gen AI tools, the guide advises legal practitioners to consider how that use impacts their obligations under the Conduct Rules, and identifies five rules that should be front of mind:

  • Rule 4: Other Fundamental Ethical Duties
  • Rule 9: Confidentiality
  • Rule 17: Independence – avoidance of personal bias
  • Rule 19: Duty to the Court
  • Rule 37: Supervision of legal services

 

Rule 4: Other fundamental ethical duties

As gen AI systems sometimes generate fictitious outputs or contain plagiarised information, it’s vital that solicitors review all generated copy to ensure Rule 4.1.2 is not violated.

Rule 9: Confidentiality

The guide considers copying confidential client information into a public gen AI system “akin to putting it into the public domain”, which could be a breach of confidentiality and result in clients losing privilege, thus violating Rule 9.

Rule 17: Independence – avoidance of personal bias

Regardless of what outcomes a gen AI system may provide, solicitors are expected to exercise their own professional judgement, and not act as “the mouthpiece of the client”.

Rule 19: Duty to the court

Solicitors have a duty to not “deceive or knowingly or recklessly mislead the court” – which is a real concern, given the tendency for gen AI systems to produce fabricated or “hallucinated” outputs. Furthermore, the guide states “solicitors should not rely on generative AI to verify sources produced by AI”.

 

Recent AI cases

To better illustrate the risks of using gen AI for court matters, the guide cites two recent cases (Handa & Mallick and Mata v. Avianca Inc.) in which gen AI was used to create materials for court that were not reviewed prior to being used. As a result, in each matter, authorities and cases were presented that were subsequently identified as fictitious.

 

Adoption considerations

The guide provides 16 ideas for firms to consider before or after adopting AI, to determine whether a firm’s use of AI is “appropriate and consistent with the Australian Solicitor Conduct Rules”. These ideas range from evaluating risk management frameworks, the principle of least privilege, and prompt engineering to professional development opportunities and relevant court protocols.

 

Key Takeaways

While large language models and generative AI tools have obvious practical and helpful uses that can assist legal practitioners on a day-to-day basis, there are demonstrable downsides and professional risks that cannot be ignored and should be considered both prior to and after adopting AI technologies.

Nyman Gibson Miralis provides expert advice and representation in cases involving purported privacy breaches, including cases of alleged cybercrimes.

Contact us if you require assistance.