In 1951, Solomon Asch ran one of the most unsettling experiments in the history of social psychology. He put people in rooms with confederates and asked them to identify which of three lines matched a reference line. The correct answer was obvious. But when the confederates gave the wrong answer first, 75% of participants went along with it at least once. Overall, about a third of all responses were wrong — not because participants couldn't see, but because they couldn't hold their position against the group.
Groups, it turned out, were powerful in ways nobody had fully reckoned with. And that was just the beginning of fifty years of research trying to figure out what those powers actually were — and whether they were working for us or against us.
The short answer, built on decades of careful study: small groups make better decisions than individuals under the right conditions. The longer answer involves groupthink, hidden profiles, social loafing, polarization, and a body of evidence that reframes how any serious operator should think about the people in their room.
The Basic Advantage: Diversity of Information
The most reliable finding in the small group literature is also the most intuitive: groups outperform individuals primarily because they can pool information. More minds hold more relevant data. The collective can surface angles, counterexamples, and considerations that no single person would have reached alone.
Francis Galton demonstrated this effect long before the research formalized it. In 1907, he analyzed entries from a weight-judging competition at a livestock fair — roughly 800 people had guessed the weight of an ox. Individual guesses scattered widely. But the median of all guesses was 1,207 pounds. The actual weight: 1,198 pounds. Off by less than 1%. The crowd's collective estimate was closer than the overwhelming majority of its individual members.
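The aggregation itself is trivially simple, which is part of the point. A toy sketch in Python — the guesses below are invented for illustration, not Galton's data:

```python
import statistics

# Toy illustration of median aggregation (made-up guesses, not Galton's data).
true_weight = 1198  # actual weight of the ox, in pounds

# Hypothetical individual guesses, each off by a wide margin:
guesses = [950, 1050, 1100, 1150, 1180, 1210, 1250, 1300, 1400, 1500]

crowd_estimate = statistics.median(guesses)        # 1195.0
crowd_error = abs(crowd_estimate - true_weight)    # 3.0 pounds
best_individual_error = min(abs(g - true_weight) for g in guesses)  # 12 pounds

print(crowd_estimate, crowd_error, best_individual_error)
```

The median's error (3 pounds) beats even the best single guess (12 pounds off), because individual errors in opposite directions cancel out in the aggregate.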
The effect scales down. Multiple studies have shown that small groups of 3–5 people make more accurate judgments and better decisions than their best individual member — not just their average member, but their best one. The group catches what the individual misses. This benefit is most pronounced on complex tasks with verifiable correct answers. It diminishes when the task is simple or when one person in the group is dramatically more expert than the others.
David Johnson and Frank Johnson synthesized decades of cooperative learning and group performance research and found that cooperative groups consistently outperform competitive groups and individuals on complex decision tasks. The mechanism isn't just more information — it's the process of genuine exchange. Groups that interact generate ideas that none of the members would have produced independently.
The Problem Nobody Wants to Talk About: Hidden Profiles
Here's where it gets complicated. In 1985, Garold Stasser and William Titus ran a series of experiments that revealed a structural flaw in how groups share information. They gave group members packets of information about candidates for a decision — but distributed the information unevenly. Some facts were shared by everyone. Others were unique to only one or two members.
What they found was that groups spent most of their discussion time talking about the information everyone already had — the shared pool. The unique pieces, which were often the most critical for reaching the correct answer, were systematically underweighted or never surfaced at all.
They called this the "hidden profile" problem. The group technically contained everything it needed to make the right call. But its conversational dynamics ensured it would never use that information properly.
A 2012 meta-analysis by Li Lu and colleagues reviewed 65 studies on the hidden profile paradigm and confirmed: groups consistently fail to share unique information at the rate they should. The people with non-redundant perspectives tend to stay quiet while the room converges on what everyone already agreed on.
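The arithmetic behind the bias is worth seeing. If an item of information surfaces whenever at least one member who holds it mentions it, shared items get many chances to come up and unique items get one. A simplified sketch of that sampling intuition — the per-member mention rate `p` is an assumed number for illustration, not a measured value:

```python
# Why shared facts dominate discussion: an item surfaces if at least one
# member who holds it brings it up. Simplified sampling intuition; the
# mention rate p is an assumption for illustration.

def mention_probability(holders: int, p: float) -> float:
    """Chance that an item held by `holders` members comes up at all."""
    return 1 - (1 - p) ** holders

p = 0.3                                  # assumed per-member mention rate
shared = mention_probability(5, p)       # a fact all five members hold
unique = mention_probability(1, p)       # a fact only one member holds

print(f"shared: {shared:.2f}")   # shared: 0.83
print(f"unique: {unique:.2f}")   # unique: 0.30
```

Even with identical willingness to speak, the fully shared fact is nearly three times as likely to enter the conversation as the unique one — before any social pressure is added on top.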
This is not a trivial finding for anyone who's ever been in a meeting. The hidden profile problem explains why diverse groups often fail to leverage their diversity. Having different perspectives in the room doesn't automatically mean those perspectives will be heard. You need explicit structures — a designated devil's advocate, rounds where each person shares something the others don't know, decision processes that surface dissent before consensus forms — to overcome what groups do naturally.
Groupthink: When Cohesion Becomes a Liability
Irving Janis popularized the term "groupthink" in 1972 after analyzing a series of spectacular decision failures: the Bay of Pigs invasion, the failure to anticipate Pearl Harbor, the escalation of Vietnam, the appeasement of Nazi Germany in the 1930s. In each case, Janis found the same pattern. A cohesive, high-status group, under pressure, had converged on a flawed decision by suppressing dissent.
The symptoms he catalogued remain diagnostically sharp: the illusion of invulnerability, collective rationalization, belief in the inherent morality of the group, stereotyped views of outgroups, pressure on dissenters, self-censorship, the illusion of unanimity, and self-appointed "mindguards" who protect the group from disturbing information.
What made these groups fail wasn't incompetence or bad intentions. The members were, in many cases, among the most capable people in their fields. What failed was the process. When cohesion is high, when a powerful leader signals the preferred answer early, when the stakes feel urgent — groups produce confident, unanimous, wrong decisions.
The lesson isn't that cohesion is bad. It's that cohesion without explicit structures for dissent is dangerous. The psychological safety research confirms this from the other direction: groups where members feel safe enough to disagree, raise concerns, and say "I think we're missing something" don't just feel better to be in — they perform measurably better.
Group Polarization: The Drift Toward Extremes
In 1961, MIT graduate student James Stoner was running a master's thesis study on risk-taking and stumbled into something unexpected. When people made decisions alone, then discussed them in groups, the groups tended to recommend riskier choices than the average individual had. He called it the "risky shift."
Subsequent research refined the finding into something more precise. It wasn't always toward risk — it was always toward whichever direction the group was already leaning. Serge Moscovici and Marisa Zavalloni formalized this in 1969 as "group polarization": after discussion, groups adopt positions more extreme than the pre-discussion average of their members.
The mechanism is persuasive arguments theory: in any discussion, arguments favoring the majority view outnumber arguments against it. The more you hear arguments for the position you already lean toward, the further you move toward it. Social comparison adds another layer — members who discover they're less committed to the group's direction than their peers often shift to match or exceed the group norm.
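The direction of the drift follows from the arithmetic alone. In this entirely made-up toy model (not one from the research literature), each member simply shifts in the direction of the group's pooled pre-discussion lean:

```python
# Made-up toy model of argument-driven drift, not a model from the research.
# Attitudes run from -1 (against) to +1 (for).

def discuss(attitudes, k=0.5):
    """Shift each member in the direction of the group's mean lean."""
    mean_lean = sum(attitudes) / len(attitudes)
    return [a + k * mean_lean for a in attitudes]

before = [0.2, 0.4, 0.6]   # everyone already leans "for", mean 0.4
after = discuss(before)    # every member shifts further "for"

# Post-discussion mean is about 0.6 — more extreme than the 0.4 they
# walked in with, even though nobody changed sides.
```

If the starting attitudes were balanced around zero, the same rule would produce no shift at all — which is exactly why the starting distribution of opinions matters so much.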
Polarization is not always bad. A group of experienced founders who are moderately confident that a market opportunity is real will, after discussion, become more confident — and that confidence may well be warranted, because they've pooled their individual evidence into a more complete picture. But the same mechanism produces investor groups who take catastrophically overleveraged positions, and management teams who convince themselves a struggling product needs to double down rather than pivot.
The implication for anyone running a peer group or making high-stakes decisions with a small team: pay attention to the starting distribution of opinions. If everyone walking in already leans the same way, the group won't balance them. It will push them further.
The Conditions That Predict Good Group Decisions
After decades of this research, a fairly clear picture has emerged. The groups that make excellent decisions share identifiable characteristics. The ones that make terrible ones do too.
Size matters. Groups of 3–5 tend to outperform both individuals and larger groups on complex decisions. Small enough that everyone participates, large enough to provide genuine diversity of information and perspective. Above seven or eight people, process losses accelerate — coordination becomes harder, social loafing increases, and the benefits of additional perspectives are outweighed by the costs of getting everyone heard.
Diversity of information, not just diversity of identity. Groups with members who hold genuinely different information and mental models outperform groups whose members share the same knowledge base. This is distinct from demographic diversity, which matters for different reasons. The critical variable is epistemic diversity — are there people in the room who know different things?
Psychological safety is load-bearing. Amy Edmondson's research at Harvard, which Google's Project Aristotle confirmed at scale, found team psychological safety to be the strongest single predictor of team effectiveness. It predicts whether unique information gets shared, whether concerns get raised, and whether the group actually uses the diversity of thought it theoretically contains. A group where people are afraid to be wrong is a group that's already making decisions alone.
Process structures beat good intentions. Unstructured discussion reliably fails to surface hidden information, tends toward polarization, and is vulnerable to groupthink when the group is cohesive. Research consistently shows that nominal groups — people who work individually and then combine their outputs — often outperform interactive groups on idea generation, specifically because interactive groups are subject to production blocking (only one person can speak at a time) and evaluation apprehension (people censor themselves). The groups that outperform individuals aren't doing what comes naturally. They're following deliberate structures that correct for natural tendencies.
Accountability to the decision process, not just the outcome. Groups where members know they'll be accountable for their reasoning — not just whether they got it right — make more careful, calibrated decisions. Outcome bias is a real problem: we judge decisions by results rather than by whether the process was sound. Without structured accountability, even well-intentioned groups drift back into comfortable consensus.
What This Means for Founders and Operators
The research is not a case for committees. It's a case for small, well-structured groups with genuine diversity of information, explicit permission to dissent, and accountability mechanisms that push members to actually share what they know.
Most founders and operators aren't making decisions in those kinds of groups. They're making decisions alone, or in groups so susceptible to hierarchy and social pressure that the group structure adds liability rather than value. The board meeting where everyone nods. The team offsite where the CEO's opening remarks determine the outcome. The peer call where nobody wants to be the one who says "I think you're wrong."
Founders who make decisions alone don't just miss the upside of collective intelligence. They carry the full cognitive load of surfacing their own blind spots, challenging their own assumptions, and correcting their own errors — a task humans are, by evolutionary design, terrible at.
Fifty years of research hasn't produced a simple answer. "Groups are better than individuals" is true under specific conditions and false under others. What the research has produced is a map of those conditions — and it turns out the conditions that make groups reliable and excellent are the same conditions that define a well-run peer advisory group: small, trust-based, with honest information sharing, real accountability, and explicit norms around dissent.
The science isn't pointing toward more meetings. It's pointing toward fewer, better-structured ones — with people whose perspectives you don't already share, in rooms where the social cost of being difficult is lower than the cost of being wrong.
That's not a natural thing for humans to build. It has to be designed. But the research is unambiguous about what it produces when you do.
GoodGrowth places founders and operators in small groups of 3–5 peers — structured for exactly the conditions the research describes. Diverse information, genuine psychological safety, and accountability built into every session. The model has a 300-year track record.