
AI and Ethics: Struggle for Trust

I recently attended the AI Congress and Data Science Summit in London and made a point of attending the panel discussion titled "Ethics and AI." The panelists were Harriet Kingaby, Co-Founder of Not Terminator, and Emma Prest, Executive Director of DataKind UK.

Ethics is a fascinating discussion, and perhaps the most important aspect of the development of artificial intelligence (AI). Last year we released The Marketer’s Field Guide to Machine Learning and discussed AI and ethics alongside topics like fairness, explainability and security. If AI were developed completely unethically, we would lose trust in it entirely, so we have to keep ethics front of mind.

In our ebook we discussed:

To date, ML and AI have mainly been deployed to perform lower-level, mundane tasks to help improve productivity. But soon they could be in a position to decide, quite literally, between life and death. A self-driving car will be in charge not only of getting its occupants safely to their destination but of keeping everyone around them safe. It is only a matter of time before a self-driving car finds itself in an impossible situation: an accident is unavoidable, and its only choice is between steering toward pedestrian A on the left or pedestrian B on the right. How would the AI system under the hood decide which action to take? Based on size? Age? Social status? And when accident investigators try to determine what influenced the outcome, would they find ethically troubling logic built into it?

Indeed, these questions perplexed us then, but we’ve since seen some of the outcomes. The topic of ethics and AI is not only being discussed in conference rooms and universities; it has made its way into legislation and will soon be woven into the fabric of how we operate as a society.

AI Good and Bad – A Lot Can Happen in One Year

Since the release of our machine learning ebook less than one year ago, there have been many AI developments (for better or for worse).

Tesla has reported Autopilot accidents involving its self-driving features, and technologies like deepfakes have emerged, in which deep learning is used to superimpose images of real people into situations they never took part in, with the intent of creating fake news or hoaxes.

In one terribly unfortunate incident, an Uber self-driving car struck and killed a pedestrian. This tragedy occurred because our society placed its trust in an AI technology. Although it was later found that human error played a part in the accident, once these systems are labeled AI it becomes difficult to argue that the technology is not intelligent enough to be left to its own devices. Despite this horrible tragedy, car companies (and Ikea) continue to announce new self-driving car plans.

And while the ethics of AI is up for discussion because of its potential to do harm, it is that same trust in its development that has also produced the many amazing outcomes we currently benefit from.

Technology, as always, is part of the problem and part of the solution. Consider the fast pace of development and the new applications of AI cropping up daily.

You may or may not have heard about these fascinating and beneficial technologies. Media sensation around tragedy is much more prevalent: mistakes by AI technology attract far more attention and hype, from the mundane, often hilarious failures of AI assistants to stories about more serious privacy concerns.

The point is, whether you hear about it or not, AI is doing many positive things despite its heavily publicized mistakes. These trials and tribulations have people talking, and the discussions happening at the highest levels will certainly play a part in shaping our future.

An Organized Effort to Develop AI Ethically

Highly publicized mistakes, academic breakthroughs and broken technological boundaries surrounding AI development have caught the attention of leaders worldwide. After all, AI is already in use, and civil society is preparing for its widespread acceptance in a variety of ways.

Governments and associations need to monitor and talk about it, and it’s a good thing they are. Here’s a short list of examples off the top of my head:

Ethical OS

Ethical OS is a framework that attempts to future-proof technological development by minimizing future technical and reputational risks. It considers not just how a new technology may change the world for the better, but how it could damage things or be misused.

The framework proposes eight general risk areas to consider:

  • Truth and Disinformation
    Can the tech you’re working on be turned into a tool that can be used to “fake” things?
  • Addiction
    It’s great for the creator of a new tool when it becomes so popular that people spend a lot of time using it, but is that good for their health? Can the tool be made more efficient, so people spend their time well rather than endlessly? How can it be designed to encourage moderate use?
  • Inequality
    Who will have access and who won’t? Will those who won’t have access be negatively impacted? Is the tool negatively impacting economic well-being and social order?
  • Ethics
    Is the data used to build the technology biased in any way? Is the technology reinforcing existing bias? Is the team developing the tool sufficiently diverse to spot biases along the way? Is the tool transparent enough for others to “audit” it? Consider AI built to eliminate bias in hiring: what biases might its own creators hold? (A minimal sketch of such an audit follows this list.)
  • Surveillance
    Could a government or military turn the technology into a surveillance tool, or use it to otherwise limit the rights of citizens? Does the data gathered make it possible to follow users through their lives? Who would you not want to have access to this data, and for what purposes?
  • Data Control
    What data are you collecting? Do you need it? Do you profit from it? Are your users sharing in that profit? Do the users have rights to their data? What would bad actors do with this data? What happens to the data if your company is acquired?
  • Implicit Trust
    Does your tech respect user rights? Are the terms clear and easy to understand? Are you hiding information from users that they may care about? Can users opt in and out of certain aspects while still using the tech? Are all users created equal?
  • Hate and Other Crimes
    Can the tech be used for bullying or harassment? Can it be used to spread hate or to discriminate against others? Can it be weaponized?
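
To make the bias “audit” question in the Ethics risk zone concrete, here is a minimal sketch of one common fairness check, demographic parity: comparing a model’s positive-outcome rates across groups. Everything below is a hypothetical illustration; the hiring decisions, group labels and the 80% threshold are assumptions made for the example, not part of Ethical OS or any real system.

```python
# Minimal sketch of a demographic-parity audit for a hypothetical hiring model.
from collections import defaultdict

def demographic_parity(predictions, groups):
    """Return the positive-outcome rate per group.

    predictions: list of 0/1 model decisions (1 = shortlisted)
    groups: list of group labels, parallel to predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit data: model decisions and each applicant's group label.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = demographic_parity(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}

# One common (and debated) heuristic, the "four-fifths rule": flag the model
# if the lowest group's rate falls below 80% of the highest group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact - review the model and its training data.")
```

The four-fifths threshold is only one heuristic; a real audit would look at multiple fairness metrics and at the training data itself, not a single number.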

There are a lot of areas to consider, and each has implications that are no laughing matter. Ethical OS holds that once the risks of a potential AI development have been identified, they can be shared among stakeholders to fully vet the problem.

Moving Towards an Intelligent and Ethical Future

The panel I attended at the AI Congress and Data Science Summit concluded with additional strategies to help AI developers move forward more ethically. The panelists said tech ethics should be built into the business culture as part of the corporate vision, and that ethical AI bounty hunters could operate much the way bug bounty hunters do!

With major consumer privacy legislation like GDPR and the California Consumer Privacy Act of 2018 coming into play, we are already seeing how advances in data science will be shaped by policy.

Some hypothesize that regulation will slow down AI development. That may happen along the way, but if consumers gain more confidence that their data and personal information are not being misused, trust in the process can increase. That could even lead to increased use and adoption; who knows.

We certainly don’t have all the answers but we’re paying close attention to the discussion.


Image Credits

Feature Image: Unsplash / Markus Spiske

Tamas Frajka

Tamas, who personally holds two patents on handwriting rendering, is the Lead Research Scientist at Acquisio. He holds an MSc in Computer Science from the Budapest University of Technology and Economics and a PhD in Electrical and Computer Engineering from the University of California, San Diego. He is a globe trotter and a PADI-certified divemaster, and when he's not volunteering with Hungarian cultural organizations he is the volunteer governor and PTA co-chair at his children's school.
