Can AI be trusted?

The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances.

Illustration by Sarah Martin

Incidents like the fatal crash of a self-driving Uber that killed a Tempe, Ariz., woman and the death of a test driver of a semi-autonomous vehicle being developed by Tesla put our trust in AI to the test.

“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Keng Siau, professor and chair of business and information technology, and Weiyu Wang, a graduate student in information science and technology, in a research article in the February 2018 Cutter Business Technology Journal. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”

The Uber and Tesla incidents indicate a need to rethink the way such AI applications are developed, and for their designers and manufacturers to take certain steps to build greater trust in their products, Siau says.

Despite these recent incidents, he sees a strong future for AI, but one fraught with trust issues that must be resolved.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems:

  1. Representation. The more “human” a technology is, the more likely humans are to trust it. “That is why humanoid robots are so popular,” Siau says, adding that it is easier to “establish an emotional connection” with a robot that looks and acts more like a human or a robotic dog that acts more like a canine.
  2. Image or perception. Science fiction books and movies have given AI a bad image, Siau says. People tend to think of AI in dystopian terms, as colored by Terminator or Blade Runner movies or Isaac Asimov and Philip K. Dick novels. “This image and perception will affect people’s initial trust in AI,” Siau and Wang write.
  3. Reviews from other users. People tend to rely on online product reviews, and “a positive review leads to greater initial trust.”
  4. Transparency and “explainability.” “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” Siau says.
  5. Trialability. The ability to test a new AI application before being asked to adopt it leads to greater acceptance, Siau says.

But after users develop initial trust, AI creators must also work to maintain it. Siau and Wang offer suggestions for building continuous trust. They include:

  • Usability and reliability. AI “should be designed to operate easily and intuitively,” Siau and Wang write. “There should be no unexpected downtime or crashes.”
  • Collaboration and communication. Developers must focus on creating AI applications that smoothly and easily collaborate and communicate with humans.
  • Sociability and bonding. Building social activities into AI applications, like a robotic dog that can recognize its owner and show affection, is one way to strengthen trust.
  • Security and privacy protection. AI applications rely on large data sets, so ensuring privacy and security will be crucial to establishing trust in the applications.

Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.

Around the Puck

“Forged in Gold: Missouri S&T’s First 150 Years”

In the 1870s, Rolla seemed an unlikely location for a new college. There were only about 1,400 residents in a community with more saloons than houses of worship. There were no paved streets, sewers or water mains. To visitors, there seemed to be as many dogs, hogs, horses, ducks and geese as humans walking the dusty streets.

By the numbers: Fall/Winter 2019

Bringing clean water to South America

Assessing water quality, surveying mountaintop locations and building systems to catch rainwater — that’s how members of S&T’s chapter of Engineers Without Borders spent their summer break.

Geothermal goals exceeded

After five years of operation, Missouri S&T’s geothermal energy system continues to outperform expectations. S&T facilities operations staff originally predicted the geothermal system would reduce campus water usage by over 10% — roughly 10 million gallons per year. The system, which went online in May 2014, cut actual water usage by 18 million to 20 […]

What happens in Vegas…may appear in print

In his latest volume of Las Vegas lore, historian Larry Gragg says it was deliberate publicity strategies that changed the perception of Sin City from a regional tourist destination where one could legally gamble and access legalized prostitution just outside the city limits, to a family vacation spot filled with entertainment options and surrounded by […]
