The rise of AI is damaging user trust

Kevin Triggle


We’re coming to rely on artificial intelligence in many ways – from chatbots to Siri, the technology is becoming integral to our lives. However, it’s also coming under scrutiny. Although its popularity is growing, so is users’ distrust of it.


AI has huge potential for good, but there are two sides to every coin, and there is also ample opportunity for misuse. The Elon Musk-backed company OpenAI was so concerned about this potential for abuse that it refused to publicly release its research into text-generating AI. The worry was that releasing it would fuel fake news and make it even harder for people to tell what’s real from what’s not. So, what’s the way forward?


Know AI’s inner workings

Without the industry stepping in to allay fears, the future of AI hangs in the balance. If consumers don’t trust the technology, then they won’t use it – it can often be as simple as that. AI’s development is limited when organisations like OpenAI refuse to share information in the public domain.

Indeed, such a move will ultimately make AI less trustworthy. ‘Black box’ AI – systems developed without any visibility into their internal workings – is of significant concern. If organisations refuse to share the algorithms that power their AI, there is little to no oversight or accountability from their peers. Plus, if something goes wrong, nobody will understand how to fix it.

It’s the lack of knowledge around algorithms that has caused several scandals, including one at Google, where its image recognition technology produced racist results. To solve this issue, we need more insight into AI’s inner workings.

To date, some progress in opening the black box has been made, with IBM developing a system that automatically detects potential bias and explains how an algorithm has come to a decision. But, at the moment, there is still some way to go.
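IBM’s system is proprietary, but the basic idea behind automatic bias detection – comparing outcomes across groups – can be sketched simply. The example below is a hypothetical illustration (not IBM’s actual tool) of one common check, the disparate impact ratio, applied to made-up decision data:

```python
# Illustrative sketch of one common bias check: the disparate impact ratio.
# This is a hypothetical example, not IBM's actual bias-detection system.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged group / privileged group.

    outcomes:   list of decisions, 1 = favourable, 0 = unfavourable
    groups:     list of group labels, same length as outcomes
    privileged: the label treated as the privileged group
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical loan decisions for two groups. A ratio below 0.8
# (the "four-fifths rule") is a common red flag for potential bias.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"disparate impact ratio: {disparate_impact(outcomes, groups, 'A'):.2f}")
```

Real toolkits compute many such metrics at once and pair them with explanations of individual decisions, but the principle is the same: make the algorithm’s behaviour measurable so bias can be spotted and challenged.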

Ensure your AI developers are diverse

Another way to combat bias in AI is to ensure the team developing it is diverse. Currently, only 12 per cent of machine learning researchers are women, and the numbers are even lower for ethnic minorities. Timnit Gebru, co-founder of ‘Black in AI’, recalls attending the Neural Information Processing Systems conference and counting just six other black people among 8,500 attendees.

Improving diversity is a challenge facing the entire tech industry, but it is of particular importance in the field of AI. Better representation of minorities is vital to developing systems that don’t have unconscious biases built in, and it also broadens the problems AI is designed to tackle by taking into account the issues of an entire society, instead of just a subset.

Educate consumers about AI

Communication is key for any company using AI. People are worried about it taking their jobs, faking the news or even becoming our evil overlord. Business leaders need to openly convey the abilities and limitations of AI if they’re going to get buy-in.

Some education around spotting AI’s work is also important, with the rise of deepfakes – artificially doctored images, videos and even entirely synthetic people – causing concern. It’s becoming easier than ever to fabricate scenarios that never happened. To restore and maintain user trust, such content will need some kind of compulsory labelling system in the future, perhaps similar to the way in which Microsoft, YouTube and Facebook flag potential fake news.

Just the start

The thorny issue of trust in AI is a broad and difficult one. Improvement begins with the three main pillars of trustworthy AI: knowing its workings, increasing diversity and communicating openly. From there, other steps can be taken, such as appointing an ethical AI leader, securing your systems and training your AI on high-quality data. Some businesses are already taking steps to address this, but there is still much progress to be made. The problem is not going away, and those that combat the distrust in AI now will be on solid ground in the future.

If you’re just beginning to test the water of artificial intelligence, it can be difficult to truly appreciate its full potential and the impact it could have on your business. If you’d like to further your understanding, get in touch with a member of the team by emailing or calling 01225 220155.


