Despite increasing concern over the intrusion of algorithms in daily life, people may be more willing to trust a computer program than their fellow humans, especially if a task becomes too challenging, according to new research from data scientists at the University of Georgia.
From picking the next song on a playlist to choosing the right size of pants, people are relying more on the advice of algorithms to help make everyday decisions and streamline their lives.
“Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day,” said Eric Bogert, a Ph.D. student in the Terry College of Business Department of Management Information Systems.
Bogert worked with management information systems professor Rick Watson and assistant professor Aaron Schecter on the paper, “Humans rely more on algorithms than social influence as a task becomes more difficult,” which was published April 13 in Nature's Scientific Reports journal.
For this study, the team asked volunteers to count the number of people in a photograph of a crowd and supplied them with suggestions generated both by a group of other people and by an algorithm.
As the number of people in the photograph grew, counting became more difficult, and people were more likely to follow the suggestion generated by an algorithm rather than count themselves or follow the “wisdom of the crowd,” Schecter said.
“This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,” Schecter said. “One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there, like income and credit score, so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren't considered.”
Facial recognition and hiring algorithms have come under scrutiny in recent years as well because their use has revealed cultural biases in the way they were built, which can cause inaccuracies when matching faces to identities or screening for qualified job candidates, Schecter said.
“The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior,” Schecter said.