Episode 3: Risks when using Generative AI

Published on: April 12, 2023

Please note, this is a script from episode three of our video series, The Digital Download: Generative AI, which you can access here.

Welcome to the Digital Download.

Like any technology, generative AI tools can be used in ways that are unlawful or that present risks to individuals and organisations. While the tool itself is neutral, the way it's used can be problematic.

In this episode, we'll be looking at some of these risks, as well as strategies to mitigate them.

Security

Generative AI can amplify security risks by making it easier to produce code and content that can be used for nefarious purposes. For example, it can create code that contains malware, or convincing, personalised messages that trick users into disclosing sensitive information such as usernames and passwords. It can also be used to generate large volumes of phishing emails.

Website owners should ensure that they take appropriate measures to secure their ChatGPT integration and prevent it from being used for malicious purposes. For example, vulnerabilities in the website hosting ChatGPT could result in unauthorised access to user conversations, or a man-in-the-middle attack, where an attacker intercepts and modifies a user's conversation with ChatGPT.
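
As a purely illustrative sketch of one such measure, the example below keeps the OpenAI API credentials and conversation handling on the server rather than in the visitor's browser, exposing only a single endpoint that should be served over HTTPS. It assumes Python with the Flask web framework and OpenAI's official client library, with the API key supplied via an environment variable; the endpoint name and model are placeholders, not a recommended configuration.

```python
# Illustrative sketch only: a server-side proxy for a website's ChatGPT feature.
# Assumptions: Flask and the official "openai" Python client are installed, and the
# API key is provided via the OPENAI_API_KEY environment variable (never embedded
# in front-end code, where site visitors could read it).

import os
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key stays on the server

@app.post("/chat")
def chat():
    # Only the user's message crosses the network from the browser.
    user_message = (request.get_json(silent=True) or {}).get("message", "")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": user_message}],
    )
    return jsonify({"reply": response.choices[0].message.content})

if __name__ == "__main__":
    # In production, serve this behind a reverse proxy that enforces HTTPS, which helps
    # guard against man-in-the-middle interception of user conversations.
    app.run(host="127.0.0.1", port=8000)
```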

Privacy

ChatGPT itself advises users to be careful about the information that they disclose to it and recommends that users don't upload personal or other sensitive information. Organisations that collect and process personal information in New Zealand will have statutory obligations under the Privacy Act 2020. If any of that personal information were to be uploaded to a tool like ChatGPT, that organisation would be fully responsible for any unauthorised use or disclosure that results.

There's also the risk that employees may inadvertently breach the confidentiality of their organisation or another organisation by entering confidential information into prompts. The safest approach when using ChatGPT is to assume that anybody may be able to access your user conversations. Employers may also wish to remind employees of their confidentiality obligations.

Regulatory Compliance

The use of generative AI tools may also be subject to various regulatory requirements, including advertising, consumer protection and financial regulation. OpenAI's terms also require that AI-generated content be identified in a clear way.

Risk Mitigation

We recommend that any organisation looking to incorporate AI technologies implements risk mitigations appropriate to the particular use and to the sector in which it operates. For example, these could include conducting risk assessments to identify and mitigate privacy and security risks, and ensuring that leaders understand how the technology can be used and misused.

Organisations should also consider controlling or managing its use by personnel, either by blocking it or by restricting its use to certain people.

If you do allow people to use it, you should ensure that they adhere to OpenAI's terms and any additional restrictions that are relevant to your organisation.

It's important to remember that the risks associated with using generative AI will differ depending on the proposed use and the relevant sector.

If you'd like any specific advice in relation to this, please don't hesitate to get in touch.

Next week, we're going to be looking at the impact of generative AI on the legal sector. We look forward to you joining us.
