“Tell it by the fireside or in a marketplace, or in a movie—almost any story is almost certainly some kind of lie.”
— Orson Welles, F for Fake
“You Eat a Credit Card’s Worth of Microplastics Every Week”
This claim has been floating around countless platforms for the last five years. It’s been cited by the BBC, referenced in letters from members of Congress, and even featured on the UN’s own website. Here’s one example from January 2025, from Stanford University, discussing the issue.
There’s just one problem—it’s almost certainly not true.
The idea that humans ingest a credit card’s worth of microplastics every week stems from a 2019 study conducted by the University of Newcastle in Australia. The research suggested a wide possible range of ingestion, from as little as 0.1 grams per week to—at the uppermost extreme—5 grams per week. Even back then, the researchers flagged 5 grams as a worst-case scenario. And since then, newer data suggests even the low end of this figure may have been overestimated.
But here’s the thing—what chance do facts like these have against a viral meme like this?
How We Got Here
The credit card image above was produced by the WWF, which commissioned the University of Newcastle study in 2019. Their justification is easy to see—highlighting the higher figure (5 grams) packs a more powerful emotional punch. But here’s where nuance got swallowed by simplicity—and the simplifications kept cascading.
After WWF’s announcement, major outlets like Reuters published reports with slightly misleading headlines that largely ignored the range, replacing it with “up to 5g”. Then, as the story was rapidly shared, the inconvenient “up to” vanished. Ultimately, the internet did what it does best, and “5 grams per week” became the de facto standard, with some creators suggesting it was an underestimate.
Origins of Misinformation
Sometimes, misinformation starts at the source—fraud, manipulation, or deceit. Think of Bernie Madoff, who knowingly deceived investors, or Elizabeth Holmes at Theranos, whose lies misled an entire industry. Even journalists, such as Jayson Blair, have fabricated stories for esteemed outlets like The New York Times.
But not all origins are malicious. The microplastics case shows a different, subtler trajectory. It’s an example of how well-meaning research can get distorted through repetition, simplification, and sensationalism.
Even more complex myths—such as the idea that humans use only 10% of their brains or that the World Series was named after the New York World newspaper—often have no clear origin. Yet, their longevity proves this point clearly:
Lies Outlast Truth When They’re Compelling
As natural storytellers, we humans want to share surprising and emotional information, regardless of whether it’s true.
Do All Lies Matter?
Lies aren’t inherently bad. All of us lie at least some of the time, and sometimes for very good reasons. “White lies” work as social tools that preserve harmony in relationships. Fiction uses its own set of facts so that the author can reveal a deeper truth. And some great work has been created that explores the very boundary between truth and fiction (alongside F for Fake, check out In Cold Blood and Man on the Moon). These works bend the truth to help us see how easily it bends.
But—there’s an obvious difference between harmless lies and those deployed to manipulate power structures. Misinformation turns dangerous when it influences critical decisions, systems, and beliefs. And we’ve hit an era where lies not only spread—they persist.
Now ask yourself—why does this epidemic feel worse today?
The Misinformation Explosion Over 25 Years
Something fundamental shifted in the past few decades, and it boils down to three big drivers:
1. The Fragmentation of Truth
Centuries ago, the truth was communal. Until the advent of the printing press, society relied on orally shared traditions to pass down knowledge. But later, books, and eventually radio and television, centralized narratives in a massive way—creating eras defined by shared facts.
Today we’re fragmenting again. Our current media landscape is an explosion of micro-audiences, each inhabiting its own narrative universe. When everyone is broadcasting to their own tightly defined audience, the social cost of being wrong goes down. Being interesting becomes more important than being accurate.
2. The Virality Premium
A while back, I wrote about how the economics of the internet favor outrage (The Outrage Economy). But those economics also favor falsehoods.
It’s not that algorithms were designed to prioritize lies—platforms reward engagement. And as people, we’re drawn to drama, novelty, and shareability. That makes the internet fertile ground for bold claims like, “You eat 5 grams of plastic weekly.”
Sometimes this is directly exploited by savvy influencers who leverage unfounded conspiracy theories on a large scale, but these individuals are just the tip of the iceberg. Connected to them are hundreds of millions of us becoming just a little more interesting online by spreading things that sound like they should be true, even if they are not.
3. Summarizing Summaries
This is how we used to do research: we’d go to a library and read books from real publishers with real editors. Non-fiction books were typically well researched, vetted by human fact-checkers, and supported by dozens of citations that we could check directly.
The Internet made many of these sources easier to access. And soon we began to summarize and link our work to that of others. I’ve done it several times already in this article.
On the surface, this is great. After all, if you click on every link in this article, you will go down some fascinating rabbit holes and likely improve your understanding of the world.
But let’s face it, you probably won’t read the articles I have linked to. Instead, you will let me do the work for you, and simply assume I’m not misleading you when I summarize a small part of the article for you. And then, someday, someone will quote this article and take you one degree of separation further away from the source. The further we drift from the original material, the harder it becomes to spot the distortions. And the easier it becomes to amplify them.
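As a toy illustration of that drift (the numbers here are invented purely for the sake of the example), suppose each retelling independently preserves a given nuance of the original claim with some fixed probability. The chance the nuance survives n hops away from the source then decays exponentially:

```python
# Toy model: probability that a nuance of the original source survives
# n retellings, if each retelling preserves it with probability p.
# Both the 0.9 figure and the hop counts are hypothetical.

def survival_probability(p_per_hop: float, hops: int) -> float:
    """Chance a detail survives `hops` independent retellings."""
    return p_per_hop ** hops

for hops in (1, 3, 5, 10):
    print(f"{hops:2d} hops: {survival_probability(0.9, hops):.0%}")
```

Even a 90%-faithful retelling, compounded over ten hops, leaves only about a one-in-three chance that the detail arrives intact.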
Lying AI
Today, many of us never even touch source materials; we just rely on AI’s response. It’s even built into Google now. We are in a giant game of AI-powered telephone, with AI’s guesses becoming our new truth.
Let’s not kid ourselves here: this is fundamentally different from what’s gone before. With the advent of generative AI, we’ve moved from technology aiding and abetting lies (through search and social media algorithms) to AI actively creating them. Of course, it doesn’t intend to lie; that’s just the nature of a probabilistic system.
This is particularly worrying because, as humans, we are not primed for it. Multiple studies have shown that humans are highly inclined to trust powerful technology systems, even when warned that they may be lying (Science, 2018). That shouldn’t be surprising. We’ve spent decades using technology tools like spreadsheets and databases, which we expect to provide us with accurate and truthful answers.
This was already problematic, but AI is making it worse.
Real-World Deceptions Amplified by AI
This newsletter wasn’t supposed to be about lies; it was supposed to be about professional networking.
When I began my research for that topic, I started as I often do: using AI to explore ideas and conduct research.
The main idea I wanted to explore was whether extroverts actually enjoy in-person networking, or whether they find it as cringeworthy as the rest of us do. So I asked my preferred AI research tool, Perplexity.
Perplexity offered back something that confirmed my going-in position: that pretty much everyone is allergic to networking. And it helpfully supplied links to back up the point.
But as I began to look a bit more closely, I found that every link directly or indirectly cited the work of one researcher, Francesca Gino.
So I figured I would go to the source, and this is what I found (highlighting mine):
Now, of course, the fact that Harvard Business School states that Gino’s methods have been discredited doesn’t in itself make her hypotheses false. But here’s the thing: Perplexity did not flag any problems with Gino’s research, and even provided me with a table claiming that the claim was broadly supported.
Gino lied, and AI turned it back into a “truth”.
Human-Centered Solutions—What Can We Do?
One thing is obvious: technology companies need to take this more seriously. It’s not surprising that companies whose mantra is often to move fast and break things are neglecting steps they could take to safeguard the truth, such as providing confidence estimates alongside every answer their AI gives.
But as humans, we also need to work on our own skills, so we can be safe and productive. In our Thrive with AI program at BillionMinds, we focus on four areas that are vitally important in this new world:
1. Honing Critical Thinking
Asking key questions like “Who benefits from this claim?” or “What’s missing here?” before accepting statements at face value.
2. Learning Probabilistic Thinking
Recognizing uncertainty. Judgment isn’t about choosing absolutes; it’s about understanding likelihoods (e.g., why cancer screening statistics often confuse non-experts).
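To make the screening example concrete, here is a minimal sketch of the classic base-rate calculation using Bayes’ rule. All of the rates below are hypothetical, chosen only to illustrate the effect: even a fairly accurate test, applied to a rare condition, produces mostly false positives.

```python
# Bayes' rule for a screening test. All rates are hypothetical,
# chosen only to illustrate the base-rate effect.

def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              false_positive_rate: float) -> float:
    """P(condition | positive test result)."""
    true_pos = prevalence * sensitivity            # sick and flagged
    false_pos = (1 - prevalence) * false_positive_rate  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# A condition affecting 1% of people, a test that catches 90% of cases
# but falsely flags 9% of healthy people:
ppv = positive_predictive_value(0.01, 0.90, 0.09)
print(f"Chance a positive result is real: {ppv:.0%}")  # roughly 9%
```

Most people intuit the answer as close to 90%; the rarity of the condition drags it down to under 10%, which is exactly the kind of likelihood reasoning the point above is about.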
3. Building Fact-Checking as Habit
Practicing returning to original papers, distinguishing between primary sources and derivative news/commentary.
4. Auditing Our Own Biases
Humans process narratives favorably when they align with beliefs (confirmation bias). If we lean into self-awareness, we can slow the spread of misinformation.
Why All This Matters
It’s very clear that not all lies are the same. Many lies do very little harm, and as I’ve mentioned, some even serve us well.
But it’s also clear that some lies are hugely damaging, so as participants in a world where information is spread faster than ever before, it’s incumbent on us all to be thoughtful about the information and misinformation we spread. Remember, even a simple like on a social media platform, or just lingering over content, gives it oxygen. Even if you have never written an online post in your life, you are part of the system.
So, think about what you watch, read, like, and share. If something feels right, that might be because it is, but it could be because you want it to be right.
And if you actively create fact-based content, you have extra responsibility. Misinformation will spread, of course, but you don’t have to be part of the cause.
As Ronald Reagan said (and he actually did): Trust, but verify.