Do Trucks Mean Trump? AI shows how humans misjudge images


A study of the types of errors humans make when evaluating images may enable computer algorithms that help us make better decisions about visual information, such as when reading an X-ray or moderating online content.

Researchers from Cornell and partner institutions analyzed more than 16 million human predictions of whether a neighborhood voted for Joe Biden or Donald Trump in the 2020 presidential election based on a single Google Street View image. They found that humans as a group did well at the task, but that a computer algorithm was better at distinguishing between Trump country and Biden country.

The study also categorized common ways people get it wrong and identified objects, such as pickup trucks and American flags, that misled people.

“We’re trying to figure out, when an algorithm has a better prediction than a human, can we use that to help the human, or create a better human-machine hybrid system that gives you the best of both worlds?” said first author JD Zamfirescu-Pereira, a graduate student at the University of California, Berkeley.

He presented the work, titled “Trucks Don’t Mean Trump: Diagnosing Human Error in Image Analysis,” at the 2022 Association for Computing Machinery (ACM) Conference on Fairness, Accountability and Transparency (FAccT).

Recently, researchers have paid a lot of attention to the issue of algorithmic bias, which is when algorithms make errors that systematically disadvantage women, racial minorities, and other historically marginalized populations.

“Algorithms can go wrong in a myriad of ways and that’s very important,” said senior author Emma Pierson, assistant professor of computer science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion with the Cornell Ann S. Bowers College of Computing and Information Science. “But humans themselves are biased and error-prone, and algorithms can provide very useful diagnostics of how people go wrong.”

The researchers used anonymized data from an interactive New York Times quiz that showed readers snapshots of 10,000 locations across the country and asked them to guess how each neighborhood voted. They trained a machine learning algorithm to make the same prediction by feeding it a subset of the Google Street View images along with the actual voting results, and then compared the algorithm’s performance on the remaining images with that of the readers.
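For readers curious what that setup looks like in practice, here is a minimal Python (PyTorch/torchvision) sketch, not the authors’ code: it fine-tunes a pretrained image classifier on a hypothetical folder of Street View snapshots labeled by how their neighborhoods voted, holds out a portion of the images, and reports accuracy on that held-out set. The directory layout, model choice, and training budget are all illustrative assumptions.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, random_split
    from torchvision import datasets, models, transforms

    # Hypothetical layout: street_view/{biden,trump}/<image>.jpg, one folder per label.
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    data = datasets.ImageFolder("street_view", transform=tfm)

    # Hold out 20% of the images; the held-out score is what gets compared with readers.
    n_train = int(0.8 * len(data))
    train_set, test_set = random_split(data, [n_train, len(data) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)

    # Fine-tune a pretrained backbone to predict Biden vs. Trump.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):          # small, illustrative training budget
        model.train()
        for images, labels in train_loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

    # Accuracy on the held-out images.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    print(f"held-out accuracy: {correct / total:.2%}")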

Overall, the machine learning algorithm predicted the correct answer about 74% of the time. When their answers were averaged together to capture “the wisdom of the crowd,” humans were right 71% of the time, but individual readers were right only about 63% of the time.
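The gap between the crowd’s 71% and an individual’s 63% is the familiar wisdom-of-the-crowd effect. The toy simulation below (purely illustrative, not the study’s data) shows how majority-voting over many independent guessers lifts accuracy well above the individual rate; the per-image guess count is only a rough figure implied by 16 million predictions over 10,000 locations, and the number of simulated images is kept small for speed.

    # Purely illustrative simulation: majority-voting many independent
    # guessers beats the typical individual guesser.
    import random

    random.seed(0)

    N_IMAGES = 1_000              # simulated images (kept small for speed)
    READERS_PER_IMAGE = 1_600     # roughly 16,000,000 guesses / 10,000 locations
    P_INDIVIDUAL_CORRECT = 0.63   # typical individual accuracy from the study

    individual_correct = 0
    crowd_correct = 0
    for _ in range(N_IMAGES):
        # Each simulated reader independently guesses right with probability 0.63.
        guesses = [random.random() < P_INDIVIDUAL_CORRECT
                   for _ in range(READERS_PER_IMAGE)]
        individual_correct += sum(guesses)
        # The crowd's answer for this image is the majority vote.
        crowd_correct += sum(guesses) > READERS_PER_IMAGE / 2

    print(f"individual accuracy: {individual_correct / (N_IMAGES * READERS_PER_IMAGE):.2f}")
    print(f"crowd accuracy:      {crowd_correct / N_IMAGES:.2f}")

Note that with truly independent readers the simulated crowd scores far above the real crowd’s 71%; in practice readers’ mistakes are correlated (the bias described below) and some images carry little signal (noise), which caps how much aggregation can help.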

People often wrongly guessed Trump when the street view showed pickup trucks or open skies. In a New York Times article, participants noted that American flags also made them more likely to predict Trump, even though neighborhoods with flags were evenly split between the candidates.

The researchers categorized human errors as the result of bias, variance, or noise, three categories commonly used to assess the errors of machine learning algorithms. Bias represents errors in the wisdom of the crowd, such as consistently associating pickup trucks with Trump. Variance encompasses poor individual judgments: one person makes a bad call even though the crowd, on average, was right. Noise occurs when the image simply doesn’t provide useful information, such as a house with a Trump sign in a predominantly Biden-voting neighborhood.
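As a rough illustration of how such a taxonomy could be applied, here is a minimal Python sketch, not the authors’ method: given one image’s true label, all readers’ guesses for it, and a hypothetical flag marking whether the image shows a cue known to mislead readers systematically (a pickup truck, a flag), it tallies the wrong guesses into the three buckets.

    from collections import Counter

    def categorize_image_errors(true_label, guesses, has_systematic_cue):
        """Tally one image's wrong guesses into bias / variance / noise."""
        counts = {"bias": 0, "variance": 0, "noise": 0}
        majority_label, _ = Counter(guesses).most_common(1)[0]
        for g in guesses:
            if g == true_label:
                continue                    # correct guesses are not errors
            if majority_label == true_label:
                counts["variance"] += 1     # the crowd was right; this reader was not
            elif has_systematic_cue:
                counts["bias"] += 1         # the crowd was misled by a recurring cue
            else:
                counts["noise"] += 1        # the image itself carries little signal
        return counts

    # A Biden-voting block shown with a pickup truck, a cue that pushes
    # readers toward "Trump": mostly bias errors.
    print(categorize_image_errors("Biden", ["Trump"] * 8 + ["Biden"] * 2,
                                  has_systematic_cue=True))
    # A Biden-voting block whose lone Trump yard sign makes the image
    # unrepresentative of its neighborhood: noise errors.
    print(categorize_image_errors("Biden", ["Trump"] * 7 + ["Biden"] * 3,
                                  has_systematic_cue=False))

The cue flag is the weak point of this sketch: separating bias from noise really requires looking across many images to see which cues mislead people systematically, not just at one image in isolation.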

Being able to break down human errors into categories can help improve human decision-making. Take radiologists reading X-rays to diagnose disease, for example. If there are many errors due to bias, physicians may need retraining. If diagnoses are accurate on average but vary from radiologist to radiologist, second opinions may be warranted. And if the X-rays themselves contain a lot of misleading noise, a different diagnostic test may be needed.

Ultimately, this work may lead to a better understanding of how to combine human and machine decision-making for human-in-the-loop systems, where humans contribute to otherwise automated processes.

“You want to study the performance of the whole system together — the humans plus the algorithm, because they can interact in unexpected ways,” Pierson said.




More information:
JD Zamfirescu-Pereira et al., “Trucks Don’t Mean Trump: Diagnosing Human Error in Image Analysis,” 2022 ACM Conference on Fairness, Accountability, and Transparency (2022). DOI: 10.1145/3531146.3533145

Provided by Cornell University


