Image: 1917 United States Federal Reserve Board (Wikimedia Commons)

When guessing the weight of an ox or estimating how many marbles fill a jar, the many have been shown to be smarter than the few. These collective displays of intelligence have been dubbed "the wisdom of crowds," but exactly how many people make a crowd wise?

New research by SFI Professor Mirta Galesic and her colleagues from the Max Planck Institute for Human Development in Berlin suggests that larger crowds do not always produce wiser decisions. In fact, when it comes to qualitative decisions such as “which candidate will win the election” or “which diagnosis fits the patient’s symptoms,” moderately sized “crowds” of around five to seven randomly selected members are likely to outperform larger ones. In the real world, these moderately sized crowds manifest as physician teams making medical diagnoses; top bank officials forecasting unemployment, economic growth, or inflation; and panels of election forecasters predicting political wins.

“When we ask ‘how many people should we have in this group?’ the impulse might be to create as big a group as possible because everyone’s heard of the wisdom of crowds,” Galesic says. “But in many real-world situations, it’s actually better to have a group of moderate size.”

Whereas previous research on collective intelligence deals mainly with quantitative decisions of how much or how many, the current study applies to this-or-that decisions made by majority vote. The researchers mathematically modeled group accuracy under different group sizes and combinations of task difficulties. They found that in situations similar to a real-world expert panel, where group members encounter mostly easy tasks peppered with more difficult ones, small groups proved more accurate than larger ones. This effect is independent of other influences on group accuracy, such as following an opinion leader or holding group discussions before voting.

Why? It’s a matter of probabilities, she says. For easy decisions, a group of experts of any size will very likely get it right. For more difficult decisions, moderately sized groups are noisier representations of the overall population of experts (the large crowd), and can by chance be correct even when most experts in the population are wrong. A large group, by contrast, almost always mirrors the population majority, so on hard tasks it almost always inherits the majority’s error.
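This probability argument can be sketched with a short calculation. The parameters below are illustrative assumptions for the sketch, not the paper’s actual values: 90% easy tasks where each expert is independently right with probability 0.9, and 10% hard tasks where most experts are wrong (probability 0.4).

```python
from math import comb

def majority_correct(n, p):
    """P(a strict majority of n voters is right when each is
    independently right with probability p). Use odd n to avoid ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def expected_accuracy(n, tasks):
    """Average majority-vote accuracy over a mix of tasks,
    given as (probability an expert is correct, share of tasks) pairs."""
    return sum(share * majority_correct(n, p) for p, share in tasks)

# Assumed mix: 90% easy tasks (p = 0.9), 10% hard tasks (p = 0.4).
tasks = [(0.9, 0.9), (0.4, 0.1)]
for n in [1, 5, 7, 21, 101]:
    print(n, round(expected_accuracy(n, tasks), 3))
```

Under these assumed parameters, expected accuracy peaks for groups of roughly five to seven and declines as groups grow much larger: big groups solve the easy tasks essentially perfectly, but on hard tasks they almost surely vote with the (wrong) majority, while a moderate group can get lucky.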

“In the real world we often don’t know whether a group will always encounter only easy or only difficult tasks,” Galesic says. “And in many real-world situations, an expert group will encounter a combination of mostly (for them) easy tasks and a few difficult tasks. In these circumstances, moderately sized crowds will perform better than larger groups or individuals. Organizations might take this research to heart when designing groups to solve a series of problems.”

What about voting as a means of determining the majority opinion of a populace? "These results, of course, do not mean that we should abandon large-scale referendums like Brexit and national elections,” Galesic adds. “Choices between different policies and candidates often do not have a 'right' and a 'wrong' answer: different people simply prefer different things, and the outcomes of these decisions are complex, with a spectrum of consequences. It is important to account for everyone's opinion about the general direction in which they want their country to go -- including underrepresented groups.

“But when it comes to decisions with a more clear 'right' and 'wrong' answer -- where everyone can, at least after the fact, agree that one course of action was better than the other -- then moderately sized groups of experts can often be better than larger groups or individuals,” she says.

Read the article in Dublin News (June 29, 2016)

Read the article in R&D (June 28, 2016)

Read the paper on Research Gate (May 30, 2016)

Read the paper in Decision (May 30, 2016, subscription required)

Listen to the broadcast on Southern California Public Radio's Lab Notes (June 30, 2016)

Read the article in HowStuffWorks (July 8, 2016)

Read the column in Bloomberg View (July 7, 2016)

Read the column in MoneyScience (July 10, 2016)

Read the article in Deutschlandradio Kultur (August 11, 2016)