Can We Trust AI?

The following is an excerpt from our latest ebook, The Marketer’s Field Guide to Machine Learning.

As AI becomes more pervasive, human-machine interaction and cooperation are becoming commonplace. For such a relationship to be fruitful, there must be some level of trust; otherwise the human actors will keep second-guessing everything the machine does. How does an AI system gain and maintain trust? How do we humans build up that trust, and how easily can it be lost?

Will We Trust an AI System If It Makes Mistakes?

Both as individuals and as a society, we have to work out our trust mechanisms. We know how much we trust the people around us, and we know when that trust deepens or when it is being lost. When we board a bus, we implicitly trust that the company operating it keeps its vehicles well maintained and that the driver it hired is a safe one. When we hear about accidents involving transit vehicles, we might be shaken, but we typically don’t lose trust in the entire transit system. But what if the vehicle involved in the accident was a self-driving bus? What if the driver was an AI system, not certified by any driver’s licensing agency?

When a human makes an error, we are more likely to forgive them, because they are human and can learn from their mistakes. When it comes to machines, we expect them to be perfect and never to make errors.

Certainly there are times when a machine makes a mistake, but so do humans. While many human mistakes can be attributed to fatigue, distraction, or lack of attention to detail, machines don’t err for those reasons. The most likely culprit in a machine’s case is that it has not seen a particular situation before and has not learned how to handle such cases correctly. But let’s not forget that AI is built on machine learning, where the “learning” is just as important as the “machine.” On the one hand, being (and acting) like a human includes admitting to not knowing things, or even making mistakes; on the other hand, our expectation of intelligent machines is that they be all-knowing and always perfect.
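
To make that failure mode a little more concrete, here is a minimal, purely illustrative sketch (not taken from the ebook) of one way a system can recognize a situation unlike anything in its training data and hand the decision back to a human; the data, threshold, and function name are invented for the example.

```python
# Purely illustrative: flag inputs that fall far outside the training data,
# since unfamiliar situations are where learned systems are most likely to err.
import numpy as np

# Hypothetical training data: the system has only ever seen values between 0 and 10.
training_inputs = np.linspace(0, 10, 50)

def is_familiar(new_input, tolerance=2.0):
    """Return True if the new input lies close to something seen during training."""
    distance_to_nearest = np.min(np.abs(training_inputs - new_input))
    return distance_to_nearest <= tolerance

for value in [4.2, 9.5, 250.0]:
    if is_familiar(value):
        print(f"{value}: familiar situation, use the model's prediction")
    else:
        print(f"{value}: unfamiliar situation, escalate to a human reviewer")
```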

Improving AI to Improve Trust

Researchers are still hard at work making AI systems better and more accurate at the tasks they are trying to solve. These efforts focus on improving the system’s predictions: making the self-driving car require fewer and fewer driver interventions, making the voice assistant understand more accents and more complex sentences, and so on. Such work goes a long way toward establishing trust, as near-flawless execution will win over skeptics. But at the same time, these AI agents must also pass more general tests, such as explainability, fairness, ethics, and security, to fully gain human trust.

Get the full story and continue reading by downloading a copy of our latest eBook, The Marketer’s Field Guide to Machine Learning, here.

Image Credits

Featured Image: Unsplash/Cherry Laithang

Chandal Nolasco Da Silva

With nearly a decade of digital marketing experience, Chandal has created content strategies for both the biggest and sometimes the most unexpected markets, while developing strategic relationships with editors and publishers. Chandal contributes to some of the highest-authority industry publications, has been featured at industry events, and is thrilled to be Acquisio’s Content Director.
