The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances.
Incidents like the fatal crash of a self-driving Uber vehicle that killed a woman in Tempe, Ariz., and the death of a test driver of a semi-autonomous vehicle being developed by Tesla put that trust to the test.
“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Keng Siau, professor and chair of business and information technology, and Weiyu Wang, a graduate student in information science and technology, in a research article in the February 2018 Cutter Business Technology Journal. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”
The Uber and Tesla incidents indicate a need to rethink the way such AI applications are developed, and a need for designers and manufacturers to take certain steps to build greater trust in their products, Siau says.
Despite these recent incidents, he sees a strong future for AI, but one fraught with trust issues that must be resolved.
Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems.
But after users develop that initial sense of trust, AI creators must also work to maintain it. Siau and Wang offer suggestions for developing such continuous trust.
Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.