Tech firms must be transparent about AI

The Government has announced an audit of algorithms operating in agencies like the Department of Health and the Transport Agency, writes Richard MacManus for Newsroom.

In particular, the audit will look closely at artificial intelligence (AI) algorithms. The project aims to “ensure transparency and fairness in decisions that affect citizens.”

The announcement came just a few weeks after Communications Minister Clare Curran promised “an action plan and ethical framework” for AI in this country.

Wouldn’t it be great if technology companies reviewed the impact of their AI systems too?

Even an internal audit, which is basically what this Government project is, would give users of products from Google, Facebook, Amazon and Apple some reassurance that they aren’t being spied on or taken advantage of by nefarious “big data” firms like Cambridge Analytica.

Certainly, Curran would like big tech companies to be more accountable. In a speech last week, she said “Facebook, Google are not the gentle giants they make out to be, they are the collectors of vast amounts of data that is used with sometimes dubious permissions to conduct extraordinary experiments on our whole civilisation.”

The reason Curran and many others think this way is that transparency is almost nonexistent with Internet companies. Instead, giant corporations like Google and Facebook default to opacity. These companies are so obsessed with keeping their algorithms secret that often we – the humble users – have no clue what they’re doing with our data.

However, this lack of transparency backfires when something goes wrong with AI-powered software.

The most recent example of an AI fail came from Amazon just last week, when its Alexa AI system mistakenly recorded a couple’s conversation on an Echo device and sent the audio to one of the husband’s employees.

Many Echo users do not realise the device can record household conversations, much less send them out into the world. Some might say that’s just another case of users not understanding the risks of devices like Echo, but to me it’s precisely why Amazon (and other tech companies) should be more transparent about those risks.

Google also got into trouble recently, with a tone-deaf demonstration of new voice AI software called Duplex. The AI called up a couple of hair salons to demonstrate that it could make appointments without human intervention. But bizarrely, the AI inserted ums and ahs into its conversation – thus hiding the fact that it was a machine. Google later admitted it needs to let people know when they are talking to AI.

It’s this kind of obfuscation around AI that concerns me, so I welcome the Government’s audit of algorithms. I’d heartily recommend technology companies follow its example. If corporations really want to front-foot embarrassing stories such as the recent Alexa and Duplex failures, they should offer up an internal audit of what their AI systems do with our data – and the risks associated with that.

Which brings me to a related issue with AI software. Are we sure some of the technology being touted as AI these days isn’t just smoke and mirrors? Again, because companies are so opaque about what precisely their software does, it’s hard to know for sure what is AI and what is not.

In a recent column I profiled Parrot Analytics, a Kiwi startup success story. Parrot Analytics is attempting to usurp Nielsen and become the leading television ratings tool for the streaming era. The problem it’s trying to solve is that the leading streaming companies, Netflix and Amazon, do not release their usage data. Parrot Analytics gets around this with a proprietary algorithm it calls “Demand Expressions.”

The trouble is, we really have no idea what kind of data makes up a “Demand Expression”. We’re told it includes social media chatter, streaming data from smaller services like Lightbox, even torrent data. All of this is given some kind of magical AI treatment, and – voilà! – we get a “Demand Expression”.
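From the outside, the best we can do is guess at the general shape of such a metric. Here is a purely hypothetical sketch in Python of how demand-style signals might be weighted and combined – the signal names, weights and normalisation are my own inventions for illustration, not Parrot Analytics’ proprietary method:

```python
# Purely hypothetical sketch of a demand-style metric. The signal
# names, weights and normalisation are invented for illustration;
# Parrot Analytics' actual algorithm is proprietary and undisclosed.

SIGNAL_WEIGHTS = {
    "social_media_mentions": 0.4,  # e.g. tweets and posts about a show
    "streaming_plays": 0.4,        # data from smaller services
    "torrent_downloads": 0.2,      # peer-to-peer activity
}

def demand_score(signals: dict[str, float], baselines: dict[str, float]) -> float:
    """Combine raw signals into one score by weighting each signal
    relative to a baseline (e.g. the average title's numbers)."""
    score = 0.0
    for name, weight in SIGNAL_WEIGHTS.items():
        raw = signals.get(name, 0.0)
        baseline = baselines.get(name, 1.0) or 1.0
        score += weight * (raw / baseline)  # normalise before weighting
    return score

# Example: a show with double the average social chatter, average
# streaming plays, and half the average torrent activity.
print(demand_score(
    {"social_media_mentions": 2000, "streaming_plays": 10000,
     "torrent_downloads": 500},
    {"social_media_mentions": 1000, "streaming_plays": 10000,
     "torrent_downloads": 1000},
))  # -> 0.4*2 + 0.4*1 + 0.2*0.5 = 1.3
```

Even a rough, published weighting like this would let outsiders sanity-check the ratings; without one, we simply have to take the number on faith.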

Parrot Analytics says it uses “advanced artificial intelligence.” But what does that mean exactly? The closest I found to an explanation is a November 2015 article on the company’s website, but that confused me even more. In the article the company claims to “have adopted and explored techniques from various disciplines such as computer science, information retrieval and natural language processing, machine learning, physics, bio-medical engineering and signal processing (since our data is temporal or time-series based), economics, statistics for product development.”

So, it uses AI plus everything but the kitchen sink?

Look, I think Parrot Analytics is an exciting company and is doing New Zealand proud on the global stage. But until I know more about how it calculates “Demand Expressions”, there is a nagging question in my head about how accurate its ratings really are.

That’s not to say that AI isn’t doing useful work in our economy. There’s clear evidence it is. In the AI Forum report Artificial Intelligence: Shaping a Future New Zealand, launched by Curran in May, three companies were singled out for praise. According to the report, Air New Zealand, Soul Machines and Xero are “leading the development of AI nationally.”

The report goes on to note that in March 2017, Xero launched a pilot program that uses AI to code invoices: “Using machine learning, the program can learn the individual invoice coding behaviours for all of Xero’s customers.” In October 2017, the same AI functionality was added to Xero’s billing feature.

What I like about Xero’s AI program is that the company is upfront about how it is using these algorithms (for coding invoices and sending bills). Plus, Xero has provided statistics about its success: “The AI system will only start suggesting codes once businesses have 150 approved bills, and is currently 70 to 75 percent accurate on average for supplier bills.”
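For readers wondering what “learning invoice coding behaviours” might look like under the hood, here is a minimal sketch in Python using scikit-learn. To be clear, this is my own illustration under assumptions, not Xero’s implementation – only the 150-bill threshold comes from the figure Xero has published:

```python
# Minimal sketch of learning to suggest account codes from invoice
# text, in the spirit of the Xero feature described above. This is
# an illustration, not Xero's implementation; only the 150-bill
# threshold mirrors the figure quoted in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

MIN_APPROVED_BILLS = 150  # only suggest once enough history exists

# Historical bills: (supplier/description text, approved account code)
history = [
    ("Acme Stationery Ltd - printer paper", "429 Office Expenses"),
    ("CityPower - monthly electricity", "445 Utilities"),
    ("Acme Stationery Ltd - toner", "429 Office Expenses"),
    # ... in reality, at least MIN_APPROVED_BILLS examples
]

def build_suggester(bills):
    """Train a simple text classifier on approved bills, or return
    None if there is not yet enough history to make suggestions."""
    if len(bills) < MIN_APPROVED_BILLS:
        return None  # not enough approved bills yet
    texts, codes = zip(*bills)
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(texts, codes)
    return model

suggester = build_suggester(history)
if suggester is not None:
    print(suggester.predict(["Acme Stationery Ltd - staples"])[0])
```

The design choice worth noting is the threshold: refusing to make suggestions until enough approved history exists is exactly the kind of behaviour a company can state plainly, as Xero has.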

This is the kind of explanatory detail I’d like to see more technology companies offer about their AI systems. We don’t necessarily need to know the secret sauce, but more data about what the AI is doing would be helpful to us all.

A little transparency would go a long way to help the reputations of technology companies like Amazon and Google, especially in this new GDPR era of privacy.