Can AI be trusted?

The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances.

Illustration by Sarah Martin

Incidents like the fatal crash of a self-driving Uber that killed a Tempe, Ariz., woman and the death of a test driver of a semi-autonomous vehicle being developed by Tesla put our trust in AI to the test.

“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Keng Siau, professor and chair of business and information technology, and Weiyu Wang, a graduate student in information science and technology, in a research article in the February 2018 Cutter Business Technology Journal. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”

The Uber and Tesla incidents indicate a need to rethink the way such AI applications are developed, and for their designers and manufacturers to take certain steps to build greater trust in their products, Siau says.

Despite these recent incidents, he sees a strong future for AI, but one fraught with trust issues that must be resolved.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems:

  1. Representation. The more “human” a technology is, the more likely humans are to trust it. “That is why humanoid robots are so popular,” Siau says, adding that it is easier to “establish an emotional connection” with a robot that looks and acts more like a human or a robotic dog that acts more like a canine.
  2. Image or perception. Science fiction books and movies have given AI a bad image, Siau says. People tend to think of AI in dystopian terms, colored by movies like Terminator and Blade Runner or the novels of Isaac Asimov and Philip K. Dick. “This image and perception will affect people’s initial trust in AI,” Siau and Wang write.
  3. Reviews from other users. People tend to rely on online product reviews, and “a positive review leads to greater initial trust.”
  4. Transparency and “explainability.” “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” Siau says.
  5. Trialability. The ability to test a new AI application before being asked to adopt it leads to greater acceptance, Siau says.

But after users develop a sense of trust, AI creators must work to maintain it. Siau and Wang offer suggestions for developing continuous trust. They include:

  • Usability and reliability. AI “should be designed to operate easily and intuitively,” Siau and Wang write. “There should be no unexpected downtime or crashes.”
  • Collaboration and communication. Developers must focus on creating AI applications that smoothly and easily collaborate and communicate with humans.
  • Sociability and bonding. Building social activities into AI applications, like a robotic dog that can recognize its owner and show affection, is one way to strengthen trust.
  • Security and privacy protection. AI applications rely on large data sets, so ensuring privacy and security will be crucial to establishing trust in the applications.

Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.


Around the Puck

Seeking TBI therapies

By Delia Croessmann, croessmannd@mst.edu

Complications from TBI can be life-altering. They include post-traumatic seizures and hydrocephalus, as well as serious cognitive and psychological impairments, and the search for treatments to mitigate these neurodegenerative processes is on.


Understanding the invisible injury

Students advance traumatic brain injury research

By Sarah Potter, sarah.potter@mst.edu

“Research is creating new knowledge.” –Neil Armstrong

Research keeps professors on the vanguard of knowledge in their fields and allows students to gain a deeper understanding of their area of study. For students and recent graduates researching traumatic brain injury (TBI) at Missouri S&T, the work […]


Analyzing small molecules for big results

By Delia Croessmann, croessmannd@mst.edu

At only 28 years old, Casey Burton, Chem’13, PhD Chem’17, director of medical research at Phelps Health in Rolla and an adjunct professor of chemistry at Missouri S&T, is poised to become a prodigious bioanalytical researcher.


To prevent and protect

By Peter Ehrhard, ehrhardp@mst.edu

Traumatic brain injuries (TBIs) are an unfortunate but all too common occurrence during military training and deployment. Because mild TBIs often present no obvious signs of head trauma or facial lacerations, they are the most difficult to diagnose at the time of the injury, and patients often perceive the impact as […]


Q&A

Toughest class … ever

Some of your classes may have been a breeze, but others kept you up at all hours studying, and some of you struggled just to pass. As part of his research for the S&T 150th anniversary history book, Larry Gragg, Curators’ Distinguished Teaching Professor emeritus of history and political science, asked […]
