Can AI be trusted?

The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances.

Illustration by Sarah Martin

Incidents like the crash of a self-driving Uber vehicle that killed a woman in Tempe, Ariz., and the death of a test driver of a semi-autonomous vehicle being developed by Tesla have put our trust in AI to the test.

“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Keng Siau, professor and chair of business and information technology, and Weiyu Wang, a graduate student in information science and technology, in a research article in the February 2018 Cutter Business Technology Journal. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”

The Uber and Tesla incidents indicate a need to rethink the way such AI applications are developed, and for their designers and manufacturers to take certain steps to build greater trust in their products, Siau says.

Despite these recent incidents, he sees a strong future for AI, but one fraught with trust issues that must be resolved.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems:

  1. Representation. The more “human” a technology is, the more likely humans are to trust it. “That is why humanoid robots are so popular,” Siau says, adding that it is easier to “establish an emotional connection” with a robot that looks and acts more like a human or a robotic dog that acts more like a canine.
  2. Image or perception. Science fiction books and movies have given AI a bad image, Siau says. People tend to think of AI in dystopian terms, colored by movies like The Terminator and Blade Runner or the novels of Isaac Asimov and Philip K. Dick. “This image and perception will affect people’s initial trust in AI,” Siau and Wang write.
  3. Reviews from other users. People tend to rely on online product reviews, and “a positive review leads to greater initial trust.”
  4. Transparency and “explainability.” “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” Siau says.
  5. Trialability. The ability to test a new AI application before being asked to adopt it leads to greater acceptance, Siau says.

But after users develop a sense of trust, AI creators must work to maintain it. Siau and Wang offer suggestions for developing continuous trust. They include:

  • Usability and reliability. AI “should be designed to operate easily and intuitively,” Siau and Wang write. “There should be no unexpected downtime or crashes.”
  • Collaboration and communication. Developers must focus on creating AI applications that smoothly and easily collaborate and communicate with humans.
  • Sociability and bonding. Building social activities into AI applications, like a robotic dog that can recognize its owner and show affection, is one way to strengthen trust.
  • Security and privacy protection. AI applications rely on large data sets, so ensuring privacy and security will be crucial to establishing trust in the applications.

Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.

Around the Puck

Q&A: Miners got game

What was the most memorable sports team during your time on campus? As part of his research for the S&T 150th history book, Larry Gragg, Curators’ Distinguished Teaching Professor emeritus of history and political science, asked you to share your memories. Here are a few of your answers.

Honoring new academy members

In October, 12 alumni and friends were inducted into Missouri S&T academies. Academy membership recognizes careers of distinction and invites members to share their wisdom, influence and resources with faculty and students. Some academies hold induction ceremonies in the fall, others in the spring.

Boosting cyber-physical security

A wide array of complex systems that rely on computers — from public water supply systems and electric grids to chemical plants and self-driving vehicles — increasingly come under not just digital but also physical attack. Bruce McMillin, professor and interim chair of computer science at Missouri S&T, is looking to change that by developing stronger safeguards […]

MXene discovery could improve energy storage

In spite of their diminutive size, 2-D titanium carbide materials known as MXenes are “quite reactive” to water, a discovery S&T researchers say could have implications for energy storage and harvesting applications such as batteries, supercapacitors and beyond. Their findings were published in 2018 in the American Chemical Society journal Inorganic Chemistry.

A faster charge for electric vehicles

One drawback of electric vehicles (EVs) is the time it takes to charge them. But what if you could plug in your EV and fully charge it as quickly as it takes to fill up a conventional car with gasoline? Missouri S&T researchers, in collaboration with three private companies, are working to make speedy charging […]
