In 1960, Peter Wason ran an experiment that still makes cognitive scientists wince.
He showed subjects three numbers — 2, 4, 6 — and told them this sequence followed a certain rule. Their task was to guess the rule by proposing their own sets of three numbers, to which Wason would respond “fits” or “does not fit,” until they felt confident enough to state the answer.
Most subjects immediately formed a hypothesis: “must be even numbers” or “increasing by 2.” Then they started testing:
- 8, 10, 12: “Fits”
- 14, 16, 18: “Fits”
- 100, 102, 104: “Fits”
“Got it! The rule is even numbers increasing by 2!”
Wason’s actual rule was: any three ascending numbers. 1, 2, 3 fits. 1, 50, 1000 also fits. Subjects almost never tried these. They only asked questions whose answers would be “fits” — never once proactively testing something that might shatter their hypothesis.
This is confirmation bias in its purest form: not deliberate fabrication, but the thought “try to disprove myself” never even arising in the first place.
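The asymmetry is easy to see in code. Below is a minimal sketch of the 2-4-6 task; the rule and the typical hypothesis are exactly as described above, and the specific probes are only illustrative:

```python
# Minimal sketch of Wason's 2-4-6 task.
def true_rule(a, b, c):
    """Wason's actual rule: any three ascending numbers."""
    return a < b < c

def my_hypothesis(a, b, c):
    """The typical subject's guess: even numbers increasing by 2."""
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirming probes: every test is chosen so the hypothesis predicts "fits".
for triple in [(8, 10, 12), (14, 16, 18), (100, 102, 104)]:
    print(triple, "fits" if true_rule(*triple) else "does not fit")
    # All print "fits" -- the hypothesis survives, but nothing was learned.

# One disconfirming probe: the hypothesis says this should NOT fit.
probe = (1, 2, 3)
print(probe, "fits" if true_rule(*probe) else "does not fit")
# It fits -- a single falsification attempt exposes the gap between
# "even numbers increasing by 2" and the real rule.
```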
Cognitive biases are not occasional mistakes — they are systematic bugs built into the brain’s information processing. These 7 are the ones most likely to lead you astray during data analysis and decision-making.
24. Confirmation Bias (Analysis Phase): Seeing Only What You Want to See
Wason’s subjects were not stupid — they simply never thought to “try to disprove themselves.” We tend to seek, interpret, and remember information that supports our preexisting views, while ignoring or downplaying contradictory evidence.
In product development, this bias is especially lethal. When you have strong emotional investment in a feature, you unconsciously cherry-pick positive reviews to share with the team, dismissing negative feedback as trolls or “users who don’t get the product.” The danger of confirmation bias here is not just that you reach the wrong conclusion — it is that your decision-making process looks completely normal. You are collecting feedback, you are analyzing data — it is just that your filter is operating where you cannot see it.
In A/B testing, when results do not match expectations, a person with confirmation bias will “torture the data” until they find some narrow segment where the new version “actually looks decent,” then interpret that as a win.
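A quick simulation shows why that “narrow segment win” is usually noise. The setup below is entirely made up (20 segments, 500 users per arm, zero real effect) and uses numpy and scipy purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_segments, users_per_arm = 20, 500
lucky_segments = 0

for _ in range(n_segments):
    # Both arms come from the same distribution: the true lift is exactly zero.
    control = rng.normal(loc=1.0, scale=1.0, size=users_per_arm)
    variant = rng.normal(loc=1.0, scale=1.0, size=users_per_arm)
    _, p_value = stats.ttest_ind(variant, control)
    if p_value < 0.05:
        lucky_segments += 1

print(f"'Significant' segments under a true null: {lucky_segments} / {n_segments}")
# At a 5% threshold, about 1 in 20 null comparisons looks significant by chance,
# which is all a motivated analyst needs to find a slice where the new version
# "actually looks decent".
```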
The strongest defense against confirmation bias: actively seek counter-evidence. Before reaching any conclusion, invest equal effort in asking “what evidence would prove me wrong?”
25. Anchoring Bias: The First Number You See Controls You
Imagine you are evaluating a new system’s performance. The engineer tells you the first number: “Current latency is around 500ms.” That “500ms” becomes an anchor. In subsequent testing, if latency drops to 400ms, you think “Wow, significant performance improvement!” But if the initial anchor had been 100ms, your evaluation of 400ms would be entirely different — “Why is it so slow?”
In budget planning and effort estimation, anchoring bias is everywhere. Once someone throws out a number (say, “this feature should take about 3 days”), all subsequent discussion revolves around fine-tuning that number, unable to break free and assess objectively. Even when we know the first number is unreasonable, the brain still involuntarily “anchors” to it.
The fix: before hearing any number, form your own independent estimate first. Let the first anchor be yours, not someone else’s.
26. Availability Heuristic: Easy to Recall Does Not Equal Likely to Happen
Why do we feel that flying is more dangerous than driving? Because plane crash news always features terrifying footage and wall-to-wall coverage — vivid memories that are easily retrieved (available) from our minds. The brain mistakenly concludes: “things that are easy to recall = things that happen frequently.”
In engineering, we often overestimate the severity of certain bugs simply because one particular bug caused a memorable 3 AM on-call incident. At the same time, we might underestimate chronic but less dramatic performance issues that lack “theatricality,” even though the latter may cause far more user churn.
The availability heuristic lets the vividness of a memory hijack our risk assessments, when what should drive them is the actual incidence rate.
27. Representativeness Heuristic: Stereotypes Override Actual Probabilities
This is the classic “Linda Problem”: Linda is 31, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with discrimination and social justice and participated in anti-nuclear demonstrations. Which is more likely?
A. Linda is a bank teller
B. Linda is a bank teller and an active feminist
Most people choose B because B’s description “fits” Linda better. But logically, B is a subset of A — B’s probability cannot be greater than A’s. We let “how well the description fits” override statistical base rates.
In technical evaluations, we often judge probability by “how well the description fits.” See an engineer described as “rigorous thinker, antisocial, uses Linux,” and intuition says they are more likely a backend engineer. But if the base number of frontend engineers is larger, this judgment violates the base rate. Stereotypes are the fuel of this heuristic, and a heuristic is not a probability calculation.
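A back-of-the-envelope Bayes calculation makes the point. The base rates and match probabilities below are invented purely for illustration:

```python
# Hedged, made-up numbers: how base rates can override "fit with the description".
# Suppose frontend engineers outnumber backend engineers 3 to 1, and the
# "rigorous, antisocial, uses Linux" description matches 40% of backend
# engineers but only 15% of frontend engineers.
p_backend, p_frontend = 0.25, 0.75
p_desc_given_backend, p_desc_given_frontend = 0.40, 0.15

# Bayes' rule: P(backend | description)
p_desc = p_desc_given_backend * p_backend + p_desc_given_frontend * p_frontend
p_backend_given_desc = p_desc_given_backend * p_backend / p_desc

print(f"P(backend | description) = {p_backend_given_desc:.2f}")
# ~0.47: even though the description "fits" a backend engineer much better,
# the larger frontend base rate keeps the posterior below a coin flip.
```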
28. McNamara Fallacy: Measuring Only What Can Be Measured, Ignoring What Cannot
During the Vietnam War, U.S. Secretary of Defense Robert McNamara built a rigorous quantitative metrics system to assess war progress: enemy body counts, bombing sorties, square miles of territory controlled. He completely ignored “morale,” “public support,” and “strategic objective achievement” — factors that could not be quantified but were critically important.
The result: the U.S. military “won” every battle by the numbers but lost the entire war. McNamara admitted late in life that their fundamental mistake was treating “what can be measured” as a substitute for “what truly matters.”
In product development, this fallacy often shows up as an obsession with DAU (daily active users). Users can open the app and immediately close it — DAU looks perfect, but the actual value users derive from the product might be zero. We measure engineer productivity by “lines of code” while ignoring code quality and architecture, and the result is engineers writing more verbose code to game the metric.
The structure of the McNamara Fallacy: find a measurable metric -> treat it as a proxy for the real goal -> start optimizing the metric -> the metric improves while the real goal deteriorates.
29. Goodhart’s Law: When a Measure Becomes a Target, It Ceases to Be a Good Measure
This is different from the McNamara Fallacy but closely related. The McNamara Fallacy is “measure only what’s measurable, ignore what’s not”; Goodhart’s Law is “once you turn a metric into a target, the metric itself gets corrupted.”
A customer service team uses “average response time” as a performance metric. Agents start rapidly closing tickets — problems unsolved, tickets closed, metrics perfect. Customer satisfaction plummets.
SEO once used keyword density to gauge content quality. Websites began churning out articles stuffed with keywords — high density, terrible reader experience. Google’s ranking algorithm has been forced to continuously evolve precisely to combat the effects of Goodhart’s Law.
Use App Store ratings to measure app quality, and companies start popping up “Rate us 5 stars!” prompts at just the right moment — deliberately timed for when users are happiest. Ratings look great, but what they now represent is not genuine user evaluation — it is “how cooperative users are when asked.”
Goodhart’s Law is not a human nature problem; it is a systems problem. When a metric is tied to rewards, optimizing the metric becomes rational behavior regardless of the cost. Understanding this law is fundamental to designing incentive systems.
30. Path Dependence: The Roads You Have Walked Limit the Roads You Can Walk
Early decisions constrain later options. This is not just an engineering architecture issue — it is a cognitive trap: even when new data shows the current direction is wrong, people tend to keep going down the wrong path.
The QWERTY keyboard is the most commonly cited example: it was originally designed to prevent typewriter keys from jamming — a need that no longer exists on modern keyboards, yet decades of user habit make it nearly impossible to replace with a better layout.
In software development, path dependence manifests as “technical debt”: compromises made for a quick launch become boundary conditions for every subsequent architectural decision. The cost of refactoring compounds over time until it becomes so high that no one is willing to touch it — even though everyone knows it is wrong.
In decision-making, the cognitive version of path dependence is the sunk cost fallacy: “We have invested so much in this path that we cannot abandon it now.” This is an emotional argument, not a data argument. Costs already paid are sunk — whether you continue or stop, those costs are gone. Decisions should be based only on future costs and benefits, not past investments.
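Here is what “decide only on future costs and benefits” looks like as arithmetic, with hypothetical figures:

```python
# Sketch with hypothetical numbers: a sunk-cost-free comparison looks only at
# what happens from today forward.
sunk_cost = 800_000  # already spent on the current path -- irrelevant either way

options = {
    # remaining cost to finish, expected value once finished
    "continue current path": {"future_cost": 400_000, "expected_value": 500_000},
    "switch to new approach": {"future_cost": 250_000, "expected_value": 550_000},
}

for name, o in options.items():
    net = o["expected_value"] - o["future_cost"]  # sunk_cost never enters the math
    print(f"{name}: net future value = {net:,}")

# continue current path: net future value = 100,000
# switch to new approach: net future value = 300,000
# The 800k already spent is identical under both options, so it cannot change
# which option is better going forward.
```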
Cognitive biases are hard to overcome because they operate at the subconscious level. Understanding them is the first step, but awareness alone is not enough — the brain under pressure is faster than reason. What is more effective is designing the defenses into the process: establish a “devil’s advocate” role that forces everyone to confront counter-evidence, create a fixed checklist of questions that must be answered before any decision, and let systems do the work that willpower alone cannot sustain against the brain’s built-in self-protection mechanisms.