Do your homework first: Data, algorithms and human checkpoints

Contributed by: Liz Blythe and Zoe Sims

Published on: March 05, 2019


AI and other emerging technologies have the potential to create efficiencies, unlock valuable insights from data, transform customer experience and increase profitability. Whilst these opportunities are enticing, before diving in, it is essential that organisations fully understand the technology, how it will work in the context of their business and where the key risk areas lie. Below we explore three key factors that an organisation should consider before implementing new AI technology into its business.


Data

AI learns by analysing patterns and features in data. AI technology is therefore only as good as the data it has been trained on, and it is important to ensure that the right data has been used. What constitutes the "right data" will depend in part on the context in which the algorithm will be deployed. Data sets used to train algorithms must be sufficient, fit for purpose and free from bias.

  • Sufficient: A data set will be "sufficient" if it contains enough examples to enable the algorithm to recognise, and appropriately deal with, exceptions such that it does not react in unexpected ways and produce unreliable results. 
  • Fit for purpose: A data set exhibiting consumer preferences in France may not be able to teach an algorithm how to accurately predict consumer preferences in New Zealand. In this sense, the data may not be "fit for purpose" and may cause an algorithm trained on it to produce unreliable results if applied outside of France, unless augmented with additional context-specific data.
  • Free from bias: Data sets can contain inherent biases. If a data set contains biases, algorithms trained on that data may learn, and perpetuate, that bias. It is therefore important to check, and normalise, data prior to exposing an algorithm to it. For example, a data set of historic employment data could be used to train a recruitment algorithm, but if the industry has historically suffered from a gender imbalance then the algorithm may learn to prefer the characteristics of the dominant gender, and perpetuate the imbalance.
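
To make the bias point above concrete, the sketch below shows the kind of simple pre-training check an organisation might run on historic data before using it to train a recruitment algorithm. It is an illustration only: the records, the `gender` attribute and the 40% threshold are hypothetical, and a real review would use richer statistical tests.

```python
from collections import Counter

def check_balance(records, attribute, threshold=0.4):
    """Flag groups of a protected attribute (e.g. gender) whose share of
    the training data falls below a minimum, since an underrepresented
    group may cause a trained model to prefer the dominant group."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [group for group, share in shares.items() if share < threshold]
    return shares, flagged

# Hypothetical historic hiring records with a gender imbalance
records = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": True},
]

shares, flagged = check_balance(records, "gender")
# "female" is flagged: it makes up only 25% of the data set
```

A flagged result would prompt rebalancing or augmenting the data set before training, rather than discovering the skew after deployment.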

These factors are particularly important when using AI outputs for decision-making purposes, as inaccurate or incomplete data sets could cause organisations to make ill-informed decisions. This carries the risk of significant legal liability, including under the Human Rights Act 1993, where relying on the outputs of an algorithm leads to discriminatory decision-making.

An additional point to consider is that, if an organisation is planning to use any data concerning individuals (particularly if it constitutes personal information), it is important to have those individuals' consent to process their information in this way. This has become increasingly important in the context of the European Union's new privacy law, the General Data Protection Regulation, which has raised the bar for obtaining consent and for organisations' use of the personal data of customers who ordinarily reside in the EU.


Algorithms

Prior to implementing AI technology in your business, it is important to diligence the algorithm itself to ensure that it is acting reliably, transparently and not in a biased fashion. Inaccurate or discriminatory outputs are key risks associated with AI technology, and these problems can result not just from issues in the data sets that the algorithm was trained on, but also from the programming of the algorithm itself.

A popular theory, commonly termed AI's "white guy" problem, suggests that if a population of coders lacks diversity, the algorithms they write will inherit that particular group's unconscious biases, resulting in discriminatory outcomes. Whether or not that theory is correct, it is essential that organisations diligence algorithms thoroughly prior to implementation to ensure that they produce reliable results, in a transparent way and without bias.

To assist with this, many businesses are now engaging external advisors to conduct algorithmic audits. Although algorithmic auditing is still a relatively new practice, companies such as Deloitte and Accenture do already provide services in this area.
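
One simple metric an algorithmic audit might compute is the disparity in selection rates between groups in the algorithm's actual decisions. The sketch below is illustrative only: the group names and decisions are hypothetical, and real audits apply a range of fairness metrics rather than a single ratio.

```python
def selection_rates(outcomes):
    """Per-group rate of positive outcomes (e.g. shortlisting) produced
    by an algorithm, given each group's list of 0/1 decisions."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate; values well
    below 1.0 suggest the algorithm may be treating groups unequally."""
    return min(rates.values()) / max(rates.values())

# Hypothetical recruitment-algorithm decisions (1 = shortlisted)
outcomes = {
    "group_a": [1, 1, 1, 0, 0],  # 60% shortlisted
    "group_b": [1, 0, 0, 0, 0],  # 20% shortlisted
}

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)  # roughly 0.33 here
```

A low ratio would not itself establish unlawful discrimination, but it is the kind of red flag an audit surfaces early, before the tool is relied on for decision-making.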

Whether using consultants or in-house expertise, it is worth diligencing algorithms thoroughly prior to implementation rather than relying solely on contractual remedies later if things go wrong. This is particularly important when you are looking to procure technology from start-ups or growth-stage organisations that may not have deep pockets.

Human checkpoints

Although AI technology can create vast efficiencies through automation and by reducing the amount of human involvement required in a given process, it is important not to underestimate the continuing role of humans.

Having appropriate human checkpoints to scrutinise the technology and its outputs helps ensure that the algorithm is at all times working as intended, is not acting in a biased fashion and is still meeting organisational needs. This approach allows issues to be addressed at the earliest opportunity, before they result in significant cost to the business. However, these human checkpoints need to be built into the technology, so it is important to consider the transparency of the tools you are looking to employ and whether they give you the ability to understand and assess their ongoing operation.
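
A minimal sketch of such a checkpoint is shown below, assuming the tool exposes a confidence score alongside each output: decisions the algorithm is unsure about are escalated to a person rather than actioned automatically. The function name and the 0.9 threshold are hypothetical, not features of any particular product.

```python
def route_decision(prediction, confidence, threshold=0.9):
    """Human checkpoint: accept the algorithm's output only when it is
    sufficiently confident; otherwise escalate for human review."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

# High-confidence output is actioned automatically
status_a = route_decision("approve", 0.95)   # ("auto", "approve")

# Low-confidence output is escalated to a person
status_b = route_decision("decline", 0.60)   # ("human_review", "decline")
```

Logging every escalation also gives the organisation an ongoing record of where the tool struggles, which feeds back into the data and algorithm reviews discussed above.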

Each of the above factors is essential to mitigating the risks of implementing new AI technology in your business and ensuring that the implementation effort really is worth the investment. These initiatives also help an organisation gain, and maintain, a sound understanding of the technology, how it is operating and any associated issues.

If you would like any advice regarding the issues discussed in this article, or assistance in getting the right legal protections in place for your business before implementing AI technology in your organisation, please do not hesitate to contact us.

This article was first published by CIO New Zealand. 

To view the other articles in our "Implementing AI in your business" series, please visit our landing page here.

Talk to one of our experts:

Liz Blythe | Partner, Technology – [email protected]
Auckland | DD +64 9 367 8145

Zoe Sims | Solicitor, Technology – [email protected]
Auckland | DD +64 9 367 8074

This publication is intended only to provide a summary of the subject covered. It does not purport to be comprehensive or to provide legal advice. No person should act in reliance on any statement contained in this publication without first obtaining specific professional advice. If you require any advice or further information on the subject matter of this newsletter, please contact the partner/solicitor in the firm who normally advises you, or alternatively contact one of the partners listed below.
