AI - 07.02.2024

The perils of using generative AI for research

A 2023 First-tier Tribunal judgment in a tax case has highlighted the dangers of using generative AI to conduct research, particularly if the results are then relied on in the defence of an employment tribunal claim. What happened and what lessons can be learned?

Risks of AI

There are several risks to be aware of when using generative AI applications, such as ChatGPT and Google Bard. For example, their use increases the risk of breaching data protection legislation and infringing copyright.

You are also at increased risk of using information that is inaccurate, unreliable, outdated or even totally fabricated. This is because generative AI isn’t programmed to care about truthfulness, so it can make things up, producing highly plausible but incorrect results - often referred to as “hallucinating”.

Fabricated authorities

In a case before the First-tier Tribunal (FTT), Ms Harber (H) appealed against an HMRC penalty. In support of her appeal, she provided the names, dates and case summaries of nine previous FTT decisions which appeared to back her position. H said the cases had been provided to her by a friend and that she had neither copies of the full judgments nor the FTT case reference numbers.

Following consideration, the FTT concluded that all nine decisions were fabricated and had been generated by an AI application such as ChatGPT (see The next step). This was on the basis that:

  • none of the cases appeared on the FTT website or on any other legal website
  • H accepted it was “possible” that the cases had been generated by AI and she could offer no alternative explanation for why they couldn’t be located on any legal database
  • the Solicitors Regulation Authority (SRA) has highlighted that hallucination is an issue with generative AI.

Lessons to learn

The cases that H cited were plausible but non-existent. She hadn’t deliberately cited fabricated authorities and was unaware that she had done so, but she had relied on generative AI without carrying out any further checks herself, e.g. on the FTT website or other legal websites.

Although the incident made no difference to the outcome of H’s appeal, it does highlight that the submission of non-genuine cases to employment tribunals and other courts wastes time and public money, and it certainly won’t win you any favour with the tribunal or court. It may also harm the reputation of judges who are falsely named as the authors of invented judgments.

Tip. When using AI to carry out any type of research, always carefully review and fact-check the information it generates to ensure the content is accurate before relying on it, e.g. by verifying the information against more credible sources.

Tip. If you’re going to let your employees use generative AI for work purposes, first conduct a risk assessment covering which work-related activities you’ll let them use AI for and which AI applications they can use, and then put an appropriate policy in place (see The next step). Ensure you also provide staff with training on the proper use of AI in the workplace.

For the FTT’s ruling in this case and a generative AI policy, visit https://www.tips-and-advice.co.uk, Download Zone, year 26, issue 4.

The individual cited nine judgments in support of her appeal against a tax penalty, all of which had been produced by generative AI and were, unbeknown to her, completely fabricated. When using AI for research, always double-check the information it produces, e.g. by verifying it against more credible sources.

© Indicator - FL Memo Ltd
