Can AI be trusted?

On July 30, 2018

Illustration by Sarah Martin

The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances.

Incidents like the fatal crash of a self-driving Uber that killed a Tempe, Ariz., woman and the death of a test driver of a semi-autonomous vehicle being developed by Tesla put our trust in AI to the test.

“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Keng Siau, professor and chair of business and information technology, and Weiyu Wang, a graduate student in information science and technology, in a research article in the February 2018 Cutter Business Technology Journal. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”

The Uber and Tesla incidents indicate a need to rethink the way such AI applications are developed, and for their designers and manufacturers to take certain steps to build greater trust in their products, Siau says.

Despite these recent incidents, he sees a strong future for AI, but one fraught with trust issues that must be resolved.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems:

  1. Representation. The more “human” a technology is, the more likely humans are to trust it. “That is why humanoid robots are so popular,” Siau says, adding that it is easier to “establish an emotional connection” with a robot that looks and acts more like a human or a robotic dog that acts more like a canine.
  2. Image or perception. Science fiction books and movies have given AI a bad image, Siau says. People tend to think of AI in dystopian terms, colored by movies like The Terminator and Blade Runner or the novels of Isaac Asimov and Philip K. Dick. “This image and perception will affect people’s initial trust in AI,” Siau and Wang write.
  3. Reviews from other users. People tend to rely on online product reviews, and “a positive review leads to greater initial trust.”
  4. Transparency and “explainability.” “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” Siau says.
  5. Trialability. The ability to test a new AI application before being asked to adopt it leads to greater acceptance, Siau says.

But after users develop a sense of trust, AI creators must also work to maintain it. Siau and Wang offer suggestions for developing continuous trust, including:

  • Usability and reliability. AI “should be designed to operate easily and intuitively,” Siau and Wang write. “There should be no unexpected downtime or crashes.”
  • Collaboration and communication. Developers must focus on creating AI applications that smoothly and easily collaborate and communicate with humans.
  • Sociability and bonding. Building social activities into AI applications, like a robotic dog that can recognize its owner and show affection, is one way to strengthen trust.
  • Security and privacy protection. AI applications rely on large data sets, so ensuring privacy and security will be crucial to establishing trust in the applications.

Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.
