

Episode 4: ChatGPT's potential in the workplace


Published on: April 19, 2023


Please note, this is a script from episode four of our video series, the digital download: Generative AI, which you can access here.


Welcome to the digital download.

In this episode, we're going to be looking at the impact of tools like ChatGPT on the legal sector.

Generative AI models can be trained for specific disciplines and organisations. This means they can be used in sectors with their own specialised language, such as the legal, medical and financial sectors.

One current example of a law firm using generative AI is global law firm Allen & Overy, which recently announced an exclusive partnership with Harvey AI, becoming the first law firm in the world to use generative AI based on OpenAI's latest model.

In a judicial context, it's been reported that a judge in Colombia admitted using ChatGPT in a legal ruling earlier this year. As well as using ChatGPT as a research tool, the judge went as far as citing ChatGPT in their decision. This has raised concerns about the appropriateness of using ChatGPT and other generative AI tools in important decision-making processes, including in the judicial context.

One reason that tools like ChatGPT could be a game changer in the legal profession is that, unlike most other AI-based tools to date, they're fast to get working live in production.

However, human oversight and verification of outputs will be critical to ensure quality and accuracy, and this is particularly important given the regulated nature of legal services.

One of the key concerns with using ChatGPT for legal advice is that it is not currently connected to a web search function and is trained on data only up to 2021.

ChatGPT also doesn't understand the context of a prompt. It draws on data it was trained on and may not have other data available to make a full analysis.

Another limitation of tools like ChatGPT is that they operate essentially as a black box. They don't provide references or explain their workings or confidence levels, and because these tools answer fluently, users can be lulled into a false sense of security. Again, the advice is not to blindly trust the outputs, but to verify the information against trusted third-party sources before relying on it.

In our next episode, we're going to be looking at some of the IP issues relevant to the use of generative AI, including the curly question of who owns the IP in the outputs.

