
AI and Ethics: Struggle for Trust

I recently attended the AI Congress and Data Science Summit in London, where I made a point of catching the panel discussion titled Ethics and AI. The panelists were Harriet Kingaby, Co-Founder of Not Terminator, and Emma Prest, Executive Director of DataKind UK.

Ethics is such an interesting discussion, and perhaps the most important aspect of the development of artificial intelligence (AI). Last year we released The Marketer's Field Guide to Machine Learning, which discussed AI and ethics alongside topics like fairness, explainability and security. If AI were developed completely unethically, we would lose trust in it entirely, so we have to keep ethics front of mind.

In our ebook we discussed:

To date, ML and AI have mainly been deployed to perform lower-level, mostly mundane tasks that help improve productivity. But soon they could be in a position to literally decide life or death. A self-driving car will be in charge of not only getting its occupants safely to their destination but keeping everyone around them safe. It is only a matter of time before a self-driving car finds itself in an impossible situation: an accident is unavoidable, and its only choice is between steering towards pedestrian A on the left or pedestrian B on the right. How would the AI system under the hood decide which action to take? Based upon size? Age? Social status? And when accident investigators try to determine what influenced the outcome, would they find ethically troubling logic built into it?

Indeed, these questions perplexed us then, but we've since seen some of the outcomes. The topic of ethics and AI is not only being discussed in conference rooms and universities; it has made its way onto legislative pages and will soon be woven into the fabric of how we operate as a society.

AI Good and Bad – A Lot Can Happen in One Year

Since the release of our machine learning ebook less than one year ago, there have been many AI developments (for better or for worse).

Tesla has reported Autopilot accidents in its self-driving cars, and technologies like deepfakes have emerged, in which deep learning is used to fabricate digital media by superimposing images of real people into situations they never took part in, with the intent of creating fake news or hoaxes.

In one terribly unfortunate incident, an Uber self-driving car killed a pedestrian. This tragedy occurred because our society put its trust in an AI technology. Although it was later found that human error played a part in the accident, once you label something as AI it is difficult to then argue that the technology is not intelligent enough to be left to its own devices. Despite this horrible tragedy, car companies (and Ikea) continue to announce new self-driving car plans.

And while the ethics of AI is up for discussion because of its potential to do harm, it is that same trust in its development that has also produced the many amazing outcomes we currently benefit from.

Technology, as always, is part of the problem and part of the solution. Consider the fast pace of development, with new applications of AI cropping up daily.

You may or may not have heard about these fascinating and beneficial technologies; media sensation around tragedy is much more prevalent. Mistakes by AI technology attract far more hype and attention, from the mundane, often hilarious failures of AI assistants to stories about more serious privacy concerns.

The point is, whether you hear about it or not, AI is doing many positive things despite its heavily publicized mistakes. These trials and tribulations have people talking, and the discussions happening at a higher level will certainly play a part in shaping our future.

An Organized Effort to Develop AI Ethically

Highly publicized mistakes, academic breakthroughs and broken technological boundaries surrounding AI development have caught the attention of leaders worldwide. After all, AI is already in use, and civil society is preparing for widespread acceptance in a variety of ways.

Governments and associations need to be monitoring and talking about it, and it's a good thing they are. Here's one example that stands out:

Ethical OS

Ethical OS is a framework that attempts to future-proof technological development by anticipating and minimizing technical and reputational risks. It considers not just how a new technology may change the world for the better, but how it could damage things or be misused.

The framework proposes eight general risk zones to consider:

1. Truth, disinformation and propaganda
2. Addiction and the dopamine economy
3. Economic and asset inequalities
4. Machine ethics and algorithmic biases
5. Surveillance state
6. Data control and monetization
7. Implicit trust and user understanding
8. Hateful and criminal actors

There are a lot of areas to consider, and each carries implications that are no laughing matter. Ethical OS holds that once the risks of a potential AI development are identified, they can be shared among stakeholders to fully vet the problem.

Moving Towards an Intelligent and Ethical Future

The panel I attended at the AI Congress and Data Science Summit concluded with additional strategies to help AI developers move forward more ethically. The panelists said tech ethics should be built into business culture as part of the corporate vision, and that ethical AI bounty hunters could one day operate much the way bug bounty hunters do!

With major consumer privacy legislation like the GDPR and the California Consumer Privacy Act of 2018 coming into play, we are already seeing how advances in data science will be shaped by policy.

Some hypothesize that regulation will slow down AI development. That may happen, but if consumers gain confidence that their data and personal information are not being misused, regulation can also increase trust. That could even lead to increased use and adoption – who knows.

We certainly don't have all the answers, but we're paying close attention to the discussion.

 

Image Credits

Feature Image: Unsplash / Markus Spiske