THE ALGORITHM MADE ME DO IT: ACCOUNTABILITY AND AI IN LAW

Artificial Intelligence (AI) has quickly become part of our everyday lives. Less than a decade ago it was an idea and fantasy seen in movies and TV; in just a few short years, it has firmly cemented itself into the fabric of daily life. From social media and voice recognition to Large Language Models (LLMs) and our Google searches, AI seems inescapable. Whilst its integration and frequent appearances have quickly become the norm for most, its implications and consequences reach far beyond the superficial. There is, and has been, growing discussion of the need to regulate the use and development of AI, and with the technology developing as fast as it is, time is running out to do so. South Africa is not exempt. From AI being used to provide legal advice to the layman, to legal professionals relying on it to do their work for them, it has never been more important that AI regulation be adopted.

Background & Timeline

AI is largely unregulated in South Africa, and whilst existing legislation does regulate some activities of organizations making use of the technology, there is no direct regulation in place. POPIA provides some protection, as it extends to the automated processing of personal data, whilst the Copyright Act and the Patents Act contain clauses that apply to works generated by AI (one such work has already been granted a patent in South Africa). The Competition Act applies, to some extent, to the use of AI in digital mergers and on digital platforms, and in 2024 the Draft Cyber Security Bill included AI in its definition of Information and Communication Technology (ICT), aiming to require AI software to be certified against minimum standards. And yet South Africa has yet to formalize any legislation, or even introduce a bill into Parliament, that seeks to regulate AI directly.

However, that does not mean South Africa has no intention of moving forward on the matter. In April 2019, the President appointed members to the Presidential Commission on the Fourth Industrial Revolution, tasked with identifying the policies, strategies, and action plans that would position South Africa as a competitive global player in the realm of AI. In May 2019, 42 countries, including South Africa, adopted a set of non-binding intergovernmental policy guidelines on AI. In November 2022, the Department of Communications and Digital Technologies launched the Artificial Intelligence Institute of South Africa, and in April 2024 the Minister of that department convened an AI Summit to share the contents of a draft National AI Plan, which was then released in August 2024. The draft plan seeks to address the disinformation and fake news that can be generated through AI applications, as well as the bias and discrimination that can be exhibited by AI developers and users. Moreover, it aims to address copyright abuse and to protect the privacy of personal, private, and public data.

The South African National Artificial Intelligence Policy Framework

This document marked South Africa’s first major step towards regulating AI and developing AI policy. The framework was developed with the aim of driving economic growth, promoting societal well-being, and making South Africa a leader in AI innovation. It recognizes the rapid development of AI and how it has become integrated into daily life far faster than previous new technologies. However, although AI has its benefits, the framework also recognizes the risks involved, the importance of managing them effectively, and the need for ethical AI development and use.

The framework states that specific guidelines are needed to ensure that AI systems are not only transparent and accountable but also designed in a manner that promotes fairness and mitigates bias. It proposes achieving this by establishing robust data governance frameworks, protecting privacy, enhancing data security, and setting standards for AI transparency and explainability to foster trust amongst users and stakeholders. At the core of the framework is the mission to ensure that AI is developed in a human-centered manner, so that AI applications augment human decision-making rather than replace it. Through the framework, it is hoped that AI legislation will be developed that supports the development of talent and digital infrastructure, research and innovation, and implementation in the public sector.

Is it already too late to put the genie back into the bottle?

Despite growing efforts to regulate AI development and usage, the technology may be growing faster than we can manage it. A 2025 study involving nearly 300 participants (all laypeople with no legal background) found that when presented with legal advice generated by an LLM and legal advice provided by an attorney, without being told where each piece of advice came from, an overwhelming majority chose the advice provided by the LLM rather than that of the attorney.

LLMs are relied upon more and more on a day-to-day basis, largely due to their ability to provide quick answers, generate ideas, diagnose medical symptoms, and, concerningly, provide legal advice. However, LLMs often generate what have become known as “hallucinations”: outputs that contain inaccurate or nonsensical content, which, in a high-stakes environment such as the law, pose a real risk to the people involved. LLMs present their advice in a confident and condensed manner, making it easier for the layman to “understand” the information being given to them, but harder to distinguish good legal advice from bad.

Unfortunately, instances such as these do not exist solely within the realm of academia and have already bled into South African legal spaces. In Mavundla v MEC Department of Co-Operative Government and Traditional Affairs Kwazulu-Natal and others (Leave to Appeal) [2025] JOL 68108 (KZP), counsel for the First Respondent conceded that he had only tried to find the first and second cases cited in the notice of appeal. As a result, the Respondent relied upon only two cases, neither of which actually existed. It was later discovered that AI models from Google, ChatGPT, and Meta had been used to find the alleged “cases”.

This is not the first instance of AI and LLMs being misused in our Courts. In Parker v Forsyth NO and others (2023), cited in the above-mentioned case, the Plaintiff’s attorneys submitted a list of authorities for the matter. When the cases referenced could not be found, the Plaintiff’s attorneys admitted that they had neither accessed nor read the cases cited and could not source them. It was eventually revealed that the cases had in fact been sourced from ChatGPT. On the matter, the Court stated: “It seems to the court that they [the Plaintiff’s attorneys], placed undue faith in the veracity of the legal research generated by artificial intelligence and lazily omitted to verify the research”.

Conclusion

The most popular LLM today, and the one that comes to mind for most people when AI is mentioned, is ChatGPT. It was released to the public in November 2022, and as of February 2025, version 4.5 is available to the public, with version 5.0 expected in mid-2025. Even though governmental discussions surrounding AI have been ongoing since 2019, the rapid development of the technology makes it evident that regulation is moving far too slowly. In less than three years, AI has transformed from a new and fascinating technology into a regular and seemingly unnoticed part of everyday life. And whilst its implementation in social media and online searches may not have major impacts, its misuse in the legal environment is already proving damaging. It is thus imperative that legislation regulating the use and development of AI be introduced and adopted as soon as possible. The cases above are only small examples of the ways in which the technology has already been abused and mishandled, at the expense of those in need of legal assistance and of our Courts. Furthermore, there is no telling how much AI is being incorrectly relied upon in legal spaces, as any such reliance within office settings would go largely unreported. Without proper management and regulation of this technology, which is developing and expanding at an exponential rate, the safety and future of the legal landscape in South Africa may be in jeopardy.


Written by Joshua Casey - currently undergoing an internship at HJW Attorneys. 

We trust that you found this article informative. Please email info@hjwattorneys.co.za for assistance with all your legal queries.

This article is provided for informational purposes only and should not be substituted for legal advice on any specific matter. Any opinions expressed herein are subject to the law as at the time of writing and will change in accordance with any change in the law. We recommend that you contact HJW Attorneys & Conveyancers at info@hjwattorneys.co.za directly for advice applicable to your specific matter.
