So we’ve all been reading about Surendra Singh and Associates, the Pietermaritzburg law firm that’s found itself in hot water for allegedly using AI to generate fake legal citations. You couldn’t have missed it – every major news publication and even many tech sites have covered this story, generally from a legal point of view, focusing on the professional implications. But here’s a different take on the matter – is there a place for AI in legal research?

This case unfolded in the Pietermaritzburg High Court, where the firm was representing South African politician and business tycoon Philani Godfrey Mavundla in a legal dispute against the KwaZulu-Natal MEC for Cooperative Governance and Traditional Affairs (COGTA) and the Independent Electoral Commission. Mavundla had been elected mayor of the Umvoti Local Municipality but was subsequently suspended. He initially obtained an interim interdict against the municipality, but this was later discharged. Mavundla then sought leave to appeal the ruling, and it was during these appeal proceedings that the issue of AI-generated citations arose. Judge Elsja-Marie Bezuidenhout’s registrar, doing their due diligence, discovered that some of the cases cited in the firm’s application were about as real as the ANC’s promises.

The registrar, searching for the cases cited in Mavundla’s application, had been unable to find any trace of them. This discovery led to a closer examination of the case law cited in the application, revealing that many of the references were indeed fictitious. Further investigation suggested that the firm had used AI technology to generate these citations. A candidate attorney at the firm allegedly used ChatGPT to find case law and create the citations, which were then included in the briefs without proper verification by the attorneys or counsel.

Judge Bezuidenhout, understandably, wasn’t too thrilled. She called the practice “irresponsible and downright unprofessional” and sent the whole shebang over to the Legal Practice Council (LPC) for a closer look. We might be looking at disciplinary action or even new guidelines for AI use in the legal profession.

Now, before we all jump on this as judge, jury and executioner, let’s take a breath and consider this from a slightly different angle – AI in legal research isn’t inherently bad. In fact, it has the potential to be a real game changer. With AI practically taking over every aspect of our lives, and with no signs of this trend slowing down anytime soon, shouldn’t the discussion be about the regulatory framework and the responsible use of AI, rather than dismissing its utility entirely? Imagine AI tools that can sift through mountains of case law, flag relevant precedents, and even help lawyers predict the outcome of their cases. It’s like having a super-powered paralegal who never needs coffee.

But, and this is a big but, AI has a tendency to “hallucinate.” That’s tech-speak for “make stuff up.” And when you’re dealing with legal citations, accuracy is kind of a big deal. This case is a stark reminder that AI, for all its smarts, still needs a human chaperone.

It also highlights just how big an impact AI is going to have in reshaping not only the legal landscape, but our lives in general.

It’s something we can’t avoid, no matter how hard we try. A serious collective effort is required to explore the ethical dilemmas and regulatory frameworks around the use of AI, or we risk even greater challenges, because this case is certainly not the last!
