

Episode 4: SaaS and AI


Published on: September 12, 2022


In this episode of the Digital Download, we look at Artificial Intelligence (AI) in the SaaS context, including insights from the Deputy Chair of New Zealand's AI Forum, our very own Louise Taylor.

AI applications can be used to improve efficiency, unlock insights from data, and improve profitability in almost every sector. SaaS suppliers are increasingly using AI as part of their solutions to enhance functionality, a trend that is playing out globally.

We've helped clients implement AI across many different parts of their businesses, from security software through to customer experience (CX) improvement tools. Irrespective of the application, it's critical that clients understand the risks associated with implementing AI and the options for mitigating those risks.
 
There's been a lot of media focus on AI tools that can target and influence consumer behaviour or make automated decisions affecting individuals. This kind of functionality has sparked widespread debate, with calls for the responsible development of AI technologies, particularly where individuals are affected.

While your customers may not be concerned about an algorithm making shopping recommendations, they may be more concerned if it's supporting decision-making that could limit their access to services or opportunities, such as credit, insurance, or employment. Particularly in these higher-risk areas, liability can accrue, sometimes in areas you might not expect, such as discrimination under the Human Rights Act.

Before implementing AI functionality, it's important to diligence the algorithm itself to check that it's reliable and won't lead to unintended consequences. For example, the application should be checked for racial, gender, or other bias before it's rolled out. Businesses that lack this expertise often engage third-party consultants to help with the algorithm audit.
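To make the idea of a pre-rollout bias check concrete, here is a minimal sketch of one common test: comparing the rate of positive outcomes across demographic groups. The column names, sample data, and 0.8 threshold are purely illustrative assumptions, not a prescribed legal or technical standard.

```python
# A minimal sketch of one pre-deployment bias check: comparing selection
# (positive-outcome) rates across demographic groups.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes for each group in group_col."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Lowest group selection rate divided by the highest; 1.0 means parity."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Illustrative example: auditing hypothetical credit-approval outputs before rollout.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],  # hypothetical protected attribute
    "approved": [1,   0,   0,   1,   1,   1],    # 1 = credit approved
})

ratio = demographic_parity_ratio(decisions, "gender", "approved")
if ratio < 0.8:  # illustrative threshold only, not a legal standard
    print(f"Possible disparity (parity ratio {ratio:.2f}) - investigate before rollout")
else:
    print(f"Parity ratio {ratio:.2f} is within the illustrative threshold")
```

In practice an audit would look at a range of metrics and protected attributes; this sketch simply shows the kind of automated check a third-party consultant might build into a pre-deployment review.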

AI is only as good as the data it's been trained on, so diligencing data sets is a critical part of implementing AI into your business. As a general rule of thumb, data sets need to be sufficient, fit for purpose, and free from bias. Having appropriate human checkpoints is also critical to ensure that the algorithm is running appropriately and that it's fit for organisational needs. It also lets you pick up any issues early and address them quickly in order to mitigate any ongoing risk and liability.
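As a rough illustration of how data-set diligence and a human checkpoint can fit together, the sketch below runs a few automated checks (required columns, minimum size, missing values) and flags anything that needs sign-off before training proceeds. The schema and thresholds are hypothetical assumptions for illustration only.

```python
# A minimal sketch of automated training-data checks feeding a human checkpoint.
import pandas as pd

REQUIRED_COLUMNS = {"age", "income", "region", "outcome"}  # hypothetical schema
MIN_ROWS = 10_000             # illustrative sufficiency threshold
MAX_MISSING_FRACTION = 0.05   # illustrative completeness threshold

def dataset_issues(df: pd.DataFrame) -> list:
    """Return a list of issues a human reviewer should consider before training."""
    issues = []
    missing_cols = REQUIRED_COLUMNS - set(df.columns)
    if missing_cols:
        issues.append(f"missing required columns: {sorted(missing_cols)}")
    if len(df) < MIN_ROWS:
        issues.append(f"only {len(df)} rows; below the {MIN_ROWS} sufficiency threshold")
    if len(df) > 0:
        worst_missing = df.isna().mean().max()  # worst per-column missing fraction
        if worst_missing > MAX_MISSING_FRACTION:
            issues.append(f"a column has {worst_missing:.0%} missing values")
    return issues

def human_checkpoint(df: pd.DataFrame) -> bool:
    """Block the pipeline until a reviewer has seen any flagged issues."""
    issues = dataset_issues(df)
    if not issues:
        return True
    print("Issues for human review before training proceeds:")
    for issue in issues:
        print(f" - {issue}")
    return False
```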
 
Existing legal frameworks will apply to the use of AI, including privacy legislation, and suppliers will need to ensure that the use and development of AI is carried out in a lawful way.

Suppliers working with Government clients are also likely to be required to comply with the Algorithm Charter for Aotearoa New Zealand, which the Government released in July 2020. This requires public sector agencies adopting AI to ensure that they're operating in a trustworthy and ethical way and that they're appropriately transparent.
 
We may also see specific AI regulation in New Zealand in the future as part of the Government's work on a digital strategy and potentially a national AI strategy. As part of that work, we may see a regulatory focus on particular AI applications that are perceived as high risk for New Zealand society in terms of public safety, human rights, or privacy.

When you're scaling up in other countries, you're going to need to think about local AI regulation in those jurisdictions as well, for example the EU's AI Act. If you're targeting customers in the EU, you'll need to think about compliance with the AI Act in the same way as you might with the GDPR.


You can view the Digital Download Series episodes here.
