During a quarterly meeting, a product manager excitedly pulled up a slide. The line on the chart shot upward at a 45-degree angle. “Since we shipped dark mode last week, user engagement has seen explosive growth!” The room erupted in applause.
The data was real. The chart was auto-generated by software. Nobody fabricated anything.
Someone in the audience noticed two details. First, the Y-axis started at 95%, not 0. The so-called “explosive growth” was engagement going from 95.1% to 95.3%. Second, the chart only showed data from “last Tuesday to last Thursday.” Zoom out to a full month, and that steep line was just normal fluctuation noise.
This is the essence of selective reporting: telling the truth, but only half of it. It is more dangerous than fabrication because it wears the cloak of “science” and “objectivity.” The data is fine, the chart is fine — the person choosing which data to share with you is the problem.
51. Truncated Y-Axis: Fooling Your Eyes with Proportional Distortion
When the absolute change in data is small but the chart maker wants it to look “dramatic,” the simplest trick is to not start the Y-axis at 0.
CPU usage fluctuating from 82% to 84% — with a Y-axis from 0 to 100%, it is a nearly flat line. With a Y-axis from 80% to 85%, that 2% change fills the entire chart height and looks like a rocket launch. The numbers have not changed — only the scale has — but your brain’s intuitive response to slope overrides its rational assessment of magnitude.
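The 82%-to-84% example can be made concrete with a few lines of Python. A minimal sketch, using the hypothetical CPU values and axis ranges from the paragraph above: the fraction of the chart's height that a change occupies is just the change divided by the axis span.

```python
# Quantify how Y-axis truncation changes the *visual* size of the same change.
# Hypothetical CPU readings from the example above: 82% -> 84%.
lo_val, hi_val = 82.0, 84.0

def visual_fraction(value_change, axis_min, axis_max):
    """Fraction of the chart's height that a given value change occupies."""
    return value_change / (axis_max - axis_min)

full = visual_fraction(hi_val - lo_val, 0, 100)    # Y-axis runs 0..100
trunc = visual_fraction(hi_val - lo_val, 80, 85)   # Y-axis runs 80..85

print(f"Full axis:      change fills {full:.0%} of chart height")   # 2%
print(f"Truncated axis: change fills {trunc:.0%} of chart height")  # 40%
# Same 2-point change, but a 20x difference in visual impression.
```

The data is identical in both cases; only the denominator of the visual fraction changed.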
Sometimes truncating the Y-axis is legitimate: a body temperature chart starting at 35 degrees C rather than 0 degrees C makes sense because 0 degrees C body temperature is meaningless for medical decisions. The question is not “whether the Y-axis starts at 0” but “whether truncating the Y-axis changes the substantive meaning of the data — making a trivial gap look like a massive one.”
First move when looking at any chart: check the Y-axis start and end points, then ask yourself “if the Y-axis started at 0, would this line still have a story?”
52. Dual-Axis Manipulation: Two “Perfectly Synced” Lines That Differ by an Order of Magnitude
The technique: on a single chart, two Y-axes with different scales present two variables so that both lines’ visual trajectories appear to match, implying a strong relationship between them.
A company report: “Revenue” and “User Count” growth curves overlap perfectly on the chart, painting a picture of “user growth driving revenue growth.” But look at the scales closely: the left axis (revenue) is 0 to 10 million, the right axis (user count) is 0 to 1 million. Put both on the same scale and the revenue line is nearly flat while the user line rises steeply. Their trajectories look nothing alike — the chart maker just adjusted the scales to make them “visually align.”
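This is easy to demonstrate numerically. Tuning each axis to fit its own series is, visually, the same as min-max rescaling each series to the chart height, and min-max rescaling makes any two roughly linear series look identical. A sketch with hypothetical numbers loosely based on the example:

```python
# Hypothetical series: revenue grows ~3.3% over the period, users grow 300%.
revenue = [9.0e6, 9.1e6, 9.2e6, 9.3e6]
users   = [2.5e5, 5.0e5, 7.5e5, 1.0e6]

def rescaled(series):
    """Min-max rescale to [0, 1] -- what a tuned secondary axis does visually."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

print(rescaled(revenue))  # [0.0, 0.333..., 0.666..., 1.0]
print(rescaled(users))    # identical positions -- "perfectly synced" lines

# Relative growth tells the real story:
print(revenue[-1] / revenue[0] - 1)  # ~0.033 (3.3%)
print(users[-1] / users[0] - 1)      # 3.0    (300%)
```

Once each axis is free to stretch independently, visual overlap carries no information about the actual relationship between the two variables.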
Dual-axis charts are not inherently problematic. The problem is using them to imply a causal relationship between two variables that you have not demonstrated, or deliberately aligning scales to manufacture a visual impression of “synchronization.”
53. Cherry-Picking / Texas Sharpshooter Fallacy
A Texan fires a bunch of shots at a barn wall, then paints a bullseye around the densest cluster of bullet holes. Everyone marvels at what a sharpshooter he is.
In A/B testing, if you simultaneously test 20 different design variations, purely by probability, one of them is likely to “happen to” show a statistically significant lift (p < 0.05). If you only report this one “success” and hide the other 19 non-results, you have painted the bullseye around the holes — not because your design worked, but because you ran enough trials.
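A quick Monte Carlo sketch of this scenario, assuming all 20 variants are duds (no real effect) and each test alone has a 5% false-positive rate:

```python
import random

# 20 ineffective variants, alpha = 0.05: how often does at least one
# variant look "significant" purely by chance?
random.seed(0)
ALPHA, VARIANTS, RUNS = 0.05, 20, 100_000

hits = 0
for _ in range(RUNS):
    # Under the null hypothesis a p-value is uniform on [0, 1],
    # so "significant" is simply random() < ALPHA.
    if any(random.random() < ALPHA for _ in range(VARIANTS)):
        hits += 1

print(f"P(at least one 'winner' among 20 duds) ~= {hits / RUNS:.2f}")
# Analytically: 1 - 0.95**20 ~= 0.64 -- a 64% chance of holes to paint around.
```

In other words, running 20 null tests gives you better-than-even odds of a reportable "success" before any design has done anything at all.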
This is closely related to the multiple comparisons fallacy (covered in the statistical methods article): mechanistically they are the same issue, but cherry-picking is a trap of “selective presentation,” while multiple comparisons is a trap of “statistical interpretation.”
The key question for identification: “Do these results represent all attempts, or only the ones selected for presentation? Behind these success stories, how many unreported failures are there?”
54. File Drawer Problem: Silence Is an Organized Lie
Fifty labs worldwide simultaneously test whether a certain vitamin can prevent the common cold. Purely by probability, about 2–3 labs will produce significant positive results (p < 0.05) even if the vitamin is completely ineffective. The other 47 labs get “no significant difference,” then lock their reports in a drawer because journals do not like to publish “we didn’t find anything” studies.
Years later, you see three published papers showing the vitamin works, and you think it is scientific consensus. But you will never know what is inside those 47 drawers.
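The “about 2–3 labs” claim is a straightforward binomial calculation. A sketch with the numbers from the example (50 labs, a completely ineffective vitamin, a 5% false-positive rate per lab):

```python
from math import comb

LABS, ALPHA = 50, 0.05

def binom_pmf(k, n, p):
    """Probability of exactly k false positives among n independent tests."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

expected = LABS * ALPHA                                    # mean count
at_least_one = 1 - binom_pmf(0, LABS, ALPHA)
two_or_three = binom_pmf(2, LABS, ALPHA) + binom_pmf(3, LABS, ALPHA)

print(f"Expected false-positive labs:   {expected:.1f}")       # 2.5
print(f"P(at least one 'positive' lab): {at_least_one:.2f}")   # ~0.92
print(f"P(exactly 2 or 3 such labs):    {two_or_three:.2f}")   # ~0.48
```

Even with zero real effect, a positive-looking published literature is close to guaranteed; the 47 null results are what balance the picture, and they are the ones in the drawer.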
This is not an individual researcher’s moral failing — it is a structural bias in the entire knowledge production system. The personal version of the file drawer problem appears inside companies: each team runs 20 experiments per quarter, only the 3 successful ones make it into the quarterly report, and the other 17 are quietly shelved. Over time, the product decision-making knowledge base is filled exclusively with “success stories,” while repeatedly failed directions go unrecorded and unknown. The next new hire may repeat the exact same mistakes in a different form.
55. Publication Bias: Journals Are Filters That Only Select Good News
Publication bias is the institutional version of the file drawer problem — scaling from “individuals choosing not to report” to “the entire publishing industry systematically favoring positive results.”
Studies of equal quality that find “an effect exists” are more likely to be accepted by journals than those that find “no effect.” This is not malicious — it is the journal’s incentive structure: “no effect” studies seem to lack “narrative,” readers are less interested, and citation rates suffer.
The result: the scientific literature we can read systematically overrepresents positive results. A therapy may appear effective only because the studies showing it is ineffective were never published, while the few showing effectiveness happened to pass through the filter. This is a major contributor to the “replication crisis” in medicine.
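The distorting effect of the filter can be simulated directly. A sketch under stated assumptions: 1,000 hypothetical studies of a therapy whose true effect is zero, each producing a noisy estimate, with only estimates clearing a significance-style cutoff getting “published.”

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT, SE, CUTOFF = 0.0, 1.0, 1.96  # z > 1.96 plays the role of p < 0.05

# Each study's estimate = true effect + sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(1000)]

# The "journal filter": only significant-looking results get published.
published = [e for e in estimates if e / SE > CUTOFF]

print(f"All studies, mean effect:    {statistics.mean(estimates):+.2f}")  # near zero
print(f"Published only, mean effect: {statistics.mean(published):+.2f}")  # biased up
print(f"Fraction published:          {len(published) / len(estimates):.1%}")
```

Analytically, the mean of a standard normal truncated above 1.96 is about 2.34 standard errors, so a reader who only sees the published studies infers a large effect where none exists.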
The difference between the file drawer problem and publication bias: the former is selective silence at the individual or team level (“I choose not to share this result”); the latter is structural filtering at the publishing system level (“journals choose not to publish this kind of result”). Both work together to ensure that published literature cannot represent all completed research.
Presentation and reporting traps have a special quality: they require no lies. Not mentioning the failed experiments, not presenting the inconvenient comparisons, not bringing up the 47 studies locked in drawers — each step is a legitimate choice, but the cumulative effect is systematic misdirection.
Before accepting any chart or conclusion, the most important question is not “is this number correct” but “am I seeing all the attempts, or only the ones selected for me to see.”