Can AI be trusted?

The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances.

Illustration by Sarah Martin

Incidents like the fatal crash of a self-driving Uber that killed a Tempe, Ariz., woman and the death of a test driver of a semi-autonomous vehicle being developed by Tesla put our trust in AI to the test.

“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Keng Siau, professor and chair of business and information technology, and Weiyu Wang, a graduate student in information science and technology, in a research article in the February 2018 Cutter Business Technology Journal. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”

The Uber and Tesla incidents indicate a need to rethink the way such AI applications are developed, and for their designers and manufacturers to take certain steps to build greater trust in their products, Siau says.

Despite these recent incidents, he sees a strong future for AI, but one fraught with trust issues that must be resolved.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems:

  1. Representation. The more “human” a technology is, the more likely humans are to trust it. “That is why humanoid robots are so popular,” Siau says, adding that it is easier to “establish an emotional connection” with a robot that looks and acts more like a human or a robotic dog that acts more like a canine.
  2. Image or perception. Science fiction books and movies have given AI a bad image, Siau says. People tend to think of AI in dystopian terms, colored by the Terminator and Blade Runner movies or the novels of Isaac Asimov and Philip K. Dick. “This image and perception will affect people’s initial trust in AI,” Siau and Wang write.
  3. Reviews from other users. People tend to rely on online product reviews, and “a positive review leads to greater initial trust.”
  4. Transparency and “explainability.” “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” Siau says.
  5. Trialability. The ability to test a new AI application before being asked to adopt it leads to greater acceptance, Siau says.

But after users develop a sense of trust, AI creators must also work to maintain it. Siau and Wang offer suggestions for developing continuous trust. They include:

  • Usability and reliability. AI “should be designed to operate easily and intuitively,” Siau and Wang write. “There should be no unexpected downtime or crashes.”
  • Collaboration and communication. Developers must focus on creating AI applications that smoothly and easily collaborate and communicate with humans.
  • Sociability and bonding. Building social activities into AI applications, like a robotic dog that can recognize its owner and show affection, is one way to strengthen trust.
  • Security and privacy protection. AI applications rely on large data sets, so ensuring privacy and security will be crucial to establishing trust in the applications.

Already, Siau is working to prepare MBA students at Missouri S&T for the AI age through Artificial Intelligence, Robotics, and Information Systems Management, a course he introduced in 2017. As part of the coursework, Siau asks each student to present an article on a new artificial intelligence or machine learning technology or application.


