The perils of using generative AI for research
Risks of AI
There are several risks to be aware of when using generative AI applications, such as ChatGPT and Google Bard. For example, their use increases the risk of breaching data protection legislation and infringing copyright.
You are also at increased risk of using information that is inaccurate, unreliable, outdated or even totally fabricated. This is because AI isn't programmed to care about truthfulness, so it can make things up, giving you highly plausible but incorrect results - often referred to as "hallucinating".
Fabricated authorities
In a case before the First-tier Tribunal (FTT), Ms Harber (H) appealed against an HMRC penalty. In support of her appeal, she provided the names, dates and case summaries of nine previous FTT decisions which appeared to back her position. H said the cases had been provided to her by a friend and that she had neither copies of the full judgments nor the FTT case reference numbers.
Following consideration, the FTT concluded that all nine decisions were fabricated and had been generated by an AI application such as ChatGPT (see The next step). This was on the basis that:
- none of the cases appeared on the FTT website or on any other legal website
- H accepted it was “possible” that the cases had been generated by AI and she could offer no alternative explanation for why they couldn’t be located on any legal database
- the Solicitors Regulation Authority (SRA) has highlighted that hallucination is an issue with generative AI.
Lessons to learn
The cases that H cited were plausible but non-existent. She hadn't deliberately cited fabricated authorities and was unaware that she had done so, but she had relied on generative AI without carrying out further checks herself, e.g. on the FTT or other legal websites.
Although the incident made no difference to the outcome of H's appeal, it highlights that submitting non-genuine cases to employment tribunals and other courts wastes time and public money, and it certainly isn't going to win you any favours with the tribunal or court. It may also harm the reputation of judges who are falsely named as the authors of invented judgments.
Tip. When using AI for any type of research, always carefully review and fact-check the generated output to ensure the content is accurate before relying on it, e.g. by verifying the information against more credible sources.
Tip. If you're going to let your employees use generative AI for work purposes, first conduct a risk assessment covering which work-related activities you'll allow them to use AI for and which AI applications they can use, and then put an appropriate policy in place (see The next step). Ensure you also provide training to staff on the proper use of AI in the workplace.
For the FTT’s ruling in this case and a generative AI policy, visit https://www.tips-and-advice.co.uk , Download Zone, year 26, issue 4.