The Age of Unreality: Why It’s Time to Rethink Our Social Media Habits

In January 2024, thousands of New Hampshire Democrats received robocalls featuring President Joe Biden’s voice urging them not to vote in the primary election. The message was convincing, the tone familiar, the cadence perfect. There was only one problem: President Biden never made that call. It was a deepfake, an AI-generated fabrication that fooled countless voters (Council on Foreign Relations, 2024). This incident wasn’t an isolated anomaly; it was a harbinger of our new reality, in which the convergence of social media, misinformation, and deepfake technology has created an information ecosystem so compromised that reducing our social media intake is no longer just wellness advice. It’s a necessity for maintaining our grip on reality itself.

The Amplification Machine: How Social Media Is Built to Spread, Not Verify

Social media platforms have fundamentally transformed how information moves through society, but not in the way their creators promised. Rather than democratizing truth, these platforms have built sophisticated systems that prioritize engagement over accuracy, creating what researchers now call an “amplification machine” for false information.

The numbers tell a stark story. A study from Indiana University revealed that just 0.25% of users on X (formerly Twitter) were responsible for between 73% and 78% of all tweets containing low-credibility information or outright misinformation (U.S. PIRG Education Fund, 2023). To put that concentration in perspective, a group that small producing roughly three-quarters of the low-credibility content means each of those accounts spreads on the order of a thousand times as much of it as a typical user. This tiny fraction of users has outsized influence because platform algorithms reward content that generates engagement, regardless of whether that content is true.
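To make the mechanism concrete, here is a deliberately simplified Python sketch of engagement-based ranking. It is a toy illustration, not any platform’s actual code: the posts and scoring weights are invented for demonstration. The structural point is that nothing in the ranking function ever consults accuracy.

```python
# A toy, invented illustration of engagement-based ranking; no platform's
# real algorithm. The key structural point: accuracy never enters the score.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to us for the demo, invisible to the ranker

def engagement_score(post: Post) -> int:
    # Invented weights: shares and comments count more than likes because
    # they push content into new feeds.
    return post.likes + 3 * post.comments + 5 * post.shares

feed = [
    Post("Carefully sourced report", likes=120, shares=10, comments=15, is_accurate=True),
    Post("Outrage-bait fabrication", likes=300, shares=90, comments=200, is_accurate=False),
]

# Rank purely by engagement; note that is_accurate plays no role at all.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>5d}  {post.text}")
```

Run as written, the fabricated post ranks first simply because outrage attracts more shares and comments; changing the weights alters the margin but not the blind spot.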

The speed at which false information spreads on social media vastly outpaces the methodical work of fact-checking. By the time independent fact-checkers verify or debunk a claim, it has already circulated through millions of feeds, embedded itself in countless conversations, and shaped public opinion. As of Q1 2025, over 72% of internet users globally encounter misinformation on at least one social platform monthly, with Facebook the most affected: 52% of its users report weekly exposure to false information (SQ Magazine, 2025).

This creates a psychological phenomenon known as the “illusory truth effect” — when people encounter the same false claim repeatedly, they begin to believe it, even if they initially questioned its validity. Social media’s algorithmic architecture exploits this cognitive vulnerability, showing users similar content repeatedly and creating echo chambers where false narratives are reinforced rather than challenged. Approximately 45% of U.S. adults now say they find it difficult to determine whether information on social media is true or false (SQ Magazine, 2025), a troubling statistic that reflects the erosion of our collective ability to discern reality from fabrication.

The Misinformation Crisis: When Lies Go Viral

The scale of misinformation on social media has reached crisis proportions, with real-world consequences that extend far beyond individual confusion. The economic impact alone is staggering: misinformation is estimated to cost the global economy $89 billion in 2025, factoring in public health missteps, election security costs, and business reputational damage (SQ Magazine, 2025). In the United States, COVID-era misinformation led to an estimated $4.2 billion in unnecessary healthcare spending between 2020 and 2025 (SQ Magazine, 2025).

But the human cost extends beyond dollars. Misinformation has fueled political polarization, undermined public health initiatives, enabled financial fraud, and even incited violence. During major events such as elections, pandemics, and international conflicts, false information proliferates at exponential rates. During the 2024 U.S. elections, misinformation posts on X surged by nearly 240% in the 48 hours before and after Election Day (SQ Magazine, 2025). In conflict zones, the consequences are even more severe: during the 2025 Israel-Gaza escalation, over 13,000 AI-generated visuals were identified across social media within the first week (SQ Magazine, 2025).

Health misinformation poses particularly acute dangers. In sub-Saharan Africa, health-related misinformation increased by 75% in 2025, largely due to emerging epidemics and low digital literacy rates (SQ Magazine, 2025). False health claims during disease outbreaks create panic, drive poor medical decisions, and undermine trust in legitimate healthcare institutions. When people can’t distinguish between credible medical advice and viral falsehoods, lives are literally at stake.

The demographic patterns of susceptibility reveal troubling vulnerabilities across all age groups. Older adults (65+) remain the most susceptible, with 61% unable to consistently identify false content across platforms (SQ Magazine, 2025). Yet younger users aren’t immune: teenagers aged 13–17 saw a 24% spike in exposure to misinformation in 2025, especially via meme-driven formats on TikTok and Snapchat (SQ Magazine, 2025). Perhaps most concerning, one in three Gen Z users admitted to unknowingly sharing misinformation, believing it to be factual at the time of posting (SQ Magazine, 2025).

The Deepfake Revolution: When Seeing Is No Longer Believing

If misinformation represents the corruption of words, deepfakes represent the corruption of reality itself. Deepfake technology, AI-generated synthetic media that can convincingly simulate real people saying or doing things they never did, has evolved from a theoretical threat to a ubiquitous weapon of deception.

The technology has become alarmingly accessible and sophisticated. What once required specialized equipment and expertise can now be accomplished with smartphone apps and free online tools. AI-generated fake content saw a 300% increase from early 2023 to mid-2025, particularly via deepfake videos (SQ Magazine, 2025). More troubling still, deepfake incidents surged by 171% in the first half of 2025 compared to the total number recorded since 2017, with financial losses from deepfake fraud reaching $897 million, $410 million of it in the first half of 2025 alone (Surfshark, 2025).

Elections have become primary targets for deepfake manipulation. Since 2021, 38 countries have faced deepfake incidents during elections, potentially influencing a population of 3.8 billion people (Surfshark, 2025). Among countries that held elections from 2023 onwards, 33 out of 87 experienced deepfake incidents. These aren’t theoretical exercises — they’re active attempts to manipulate democratic processes.

The examples are as varied as they are disturbing. In India’s 2024 general election, political parties spent an estimated $50 million on AI-generated content, with millions of voters exposed to AI-generated deepfakes that mimicked politicians, celebrities, and even deceased leaders (Surfshark, 2025). In Indonesia’s presidential election, deepfake videos falsely showed candidates speaking languages they didn’t speak or in situations they never experienced (Surfshark, 2025). In Turkey’s contested 2023 election, manipulated videos falsely depicted opposition leader Kemal Kılıçdaroğlu in meetings with militants, fueling distrust among nationalist voters (Surfshark, 2025).

Foreign interference through deepfakes has also emerged as a significant threat. In Germany, the Russian-run “Storm-1516” network established over 100 AI-powered websites to distribute deepfake videos and false reports attacking politicians ahead of national elections (Surfshark, 2025). Taiwan’s 2024 presidential election was targeted by waves of deepfake videos, fake audio, and synthetic images believed to originate from foreign actors (Surfshark, 2025).

The platforms where these deepfakes spread reveal the central role of social media in this crisis. Social media platforms (Twitter/X, Facebook, Instagram) are mentioned in 92% of analyzed election-related deepfake incidents, with X mentioned most frequently at 53%, followed by Facebook at 39% (Surfshark, 2025).

The detection arms race between deepfake creators and those trying to identify fakes consistently favors the creators. While AI-powered detection models can now identify false text posts with up to 93% accuracy, image and video verification still lags at around 67% (SQ Magazine, 2025). More critically, over three in four Americans believe they are not prepared to detect fake photos, video, and audio on their own (ABC News, 2024). When the average person cannot trust their own eyes and ears, the foundation of shared reality begins to crumble.

The Perfect Storm: Convergence and Collapse

The true danger lies not in any single component (social media, misinformation, or deepfakes) but in their convergence. Together, they create what scholars are calling an “epistemic crisis,” a breakdown in our collective ability to determine what is true and what is false.

Social media provides the distribution mechanism, with its algorithms ensuring that engaging content, regardless of veracity, reaches the largest possible audience. Misinformation provides the content, crafted to appeal to emotions, confirm existing biases, and spread rapidly. Deepfakes provide the credibility, offering seemingly incontrovertible visual and audio “evidence” that bypasses our critical thinking and appeals directly to our sensory perception.

This convergence has led to what researchers call “truth decay,” a gradual but persistent erosion of shared reality. When people cannot agree on basic facts because they inhabit entirely different information ecosystems, productive civic discourse becomes impossible. Families divide over false information. Communities fracture along lines of competing unrealities. Trust in institutions such as media, government, science, and medicine plummets as people retreat into information bubbles that reinforce rather than challenge their existing beliefs.

Sixty-four percent of people now say fake news causes a great deal of confusion about current events (SQ Magazine, 2025). This confusion breeds anxiety, decision paralysis, and a pervasive sense that reality itself has become unmoored. The psychological toll is measurable: continuous exposure to misinformation increases anxiety and decision fatigue, noted in 34% of social media users in a 2025 digital wellness survey (SQ Magazine, 2025).

The Personal Price: Mental Health and Cognitive Costs

Beyond the societal implications, there are profound personal costs to sustained engagement with compromised information ecosystems. The mental health impacts of social media use, particularly when combined with exposure to misinformation and deepfakes, are increasingly well-documented and deeply concerning.

Children and adolescents who spend more than three hours a day on social media face double the risk of mental health problems, including symptoms of depression and anxiety (U.S. Department of Health and Human Services, 2025). This is alarming, considering that teenagers now spend an average of 3.5 hours per day on social media (U.S. Department of Health and Human Services, 2025). When asked about social media’s impact on their body image, 46% of adolescents aged 13 to 17 said social media makes them feel worse (U.S. Department of Health and Human Services, 2025).

The impacts extend beyond mood disorders. Among users who consume five or more hours of social content daily, depressive symptoms were 23% higher if exposed to misinformation regularly (SQ Magazine, 2025). False health claims caused misdiagnosis anxiety in nearly 18% of surveyed adults, leading to self-treatment without medical consultation (SQ Magazine, 2025). Gen Z users reported feeling “digitally manipulated,” with 41% stating they mistrust even verified sources after repeated misinformation exposure (SQ Magazine, 2025).

The cognitive costs are equally troubling. Constant engagement with social media fragments attention, reduces the capacity for sustained concentration, and erodes critical thinking skills that develop through slower, more contemplative information consumption. The exhaustion of constant verification, of trying to fact-check every claim, second-guess every image, and question every source, creates a state of perpetual vigilance that is psychologically unsustainable.

Cognitive overload caused by rapid, contradictory information led to digital withdrawal symptoms in 12% of surveyed daily users (SQ Magazine, 2025). Exposure to political misinformation was associated with increased polarization and hostility, especially in group discussion environments (SQ Magazine, 2025). Misinformation-heavy users scored 28% lower on trust perception tests compared to those with balanced content exposure (SQ Magazine, 2025).

Why Platform Changes Aren’t Enough

Faced with mounting evidence of harm, social media platforms have implemented various measures to combat misinformation and deepfakes. Meta introduced an AI fact-checking tool in 2025 that processes over one million posts daily. TikTok expanded its content labeling system. YouTube removed over 11 million videos containing health and election misinformation in the last 12 months. X launched “Community Notes 2.0” for real-time annotation of tweets (SQ Magazine, 2025).

These efforts sound impressive until you examine their effectiveness. Meta’s AI fact-checking tool has a real-time success rate of just 37%. TikTok’s content labeling system still leaves 30% of flagged content live for over 48 hours. X’s Community Notes system reduced misinformation spread by only 12% within pilot regions (SQ Magazine, 2025). Despite all platform efforts, misleading content engagement is only down 9% year-over-year, suggesting severely limited success in suppression tactics (SQ Magazine, 2025).

The fundamental problem is structural: social media companies’ business models depend on maximizing user engagement, which directly conflicts with the slower, more careful approach required for truth verification. Algorithmic amplification accounts for 64% of total engagement on misinformation content across major platforms (SQ Magazine, 2025). Platform moderation remains chronically under-resourced, with the average content moderator now reviewing over 1,200 posts per day (SQ Magazine, 2025).

Regulatory efforts, while well-intentioned, move too slowly to keep pace with rapidly evolving technology. Thirty-two countries have enacted new or amended legislation directly addressing social media misinformation as of mid-2025, with the European Union’s Digital Services Act now in full enforcement and having issued €750 million in collective fines (SQ Magazine, 2025). Yet despite regulatory progress, enforcement remains inconsistent, particularly in jurisdictions without dedicated digital watchdogs.

The unavoidable conclusion is that we cannot rely on platforms to fix themselves or on governments to regulate effectively in real time. The profit motive, technological complexity, and speed of innovation consistently outpace both voluntary reforms and legislative remedies.

The Case for Personal Action: Reduction as Self-Preservation

If we cannot fix the platforms and regulation cannot keep pace with technological change, what remains is personal action. Reducing social media intake isn’t about rejecting technology or disconnecting from modern life; it’s about consciously choosing where we direct our attention and which information ecosystems we inhabit.

The evidence for reduction is compelling. When people take breaks from social media or significantly reduce their usage, measurable improvements follow. A large-scale experimental study with over 35,000 participants found that temporarily deactivating Facebook or Instagram improved happiness and reduced depression and anxiety, with deactivation roughly 15–22% as effective as typical psychological interventions such as cognitive behavioral therapy (Northeastern University, 2025). More broadly, reducing time spent in compromised information environments allows for the restoration of critical thinking abilities, a reduction in anxiety and decision fatigue, and the rebuilding of trust in verifiable information sources.

Reduction offers benefits that platform reforms cannot provide. It allows individuals to take back control of their information inputs, shifting from passive consumption of algorithmically curated content to active selection of reliable sources. It enables people to rebuild direct community connections that don’t depend on mediated digital platforms. It provides mental health benefits through decreased anxiety, improved sleep, and reduced feelings of manipulation and confusion.

This isn’t about abandoning social media entirely — a proposal both unrealistic and unnecessarily extreme. Instead, it’s about intentional use: establishing boundaries, curating feeds carefully, diversifying information sources, and prioritizing slower, more reliable forms of information consumption alongside highly selective social media engagement.

What Conscious Reduction Looks Like

Practical implementation of social media reduction varies by individual circumstances, but several principles apply broadly:

Time-limiting strategies: Use built-in screen time controls or third-party apps to set strict daily limits on social media use. Many find that reducing usage to 30–60 minutes per day (from the average of 3.5 hours for teenagers) creates significant benefits while maintaining necessary connections.

Diversifying information sources: Supplement or replace social media news consumption with direct sources: established news organizations with editorial standards, primary documents, academic publications, and books. These slower-paced sources allow for depth, context, and verification that social media cannot provide.

Creating alternative social connections: Invest in direct, unmediated human contact. Phone calls instead of DMs. In-person gatherings instead of likes and comments. Community involvement that doesn’t require digital platforms. These connections provide social benefits without exposure to misinformation ecosystems.

Developing media literacy skills: Learn to recognize common misinformation patterns, understand how algorithms work, identify manipulated media, and approach viral content with healthy skepticism. These skills make necessary social media engagement less risky.

The 80/20 rule for platforms: Identify which aspects of social media provide genuine value (keeping in touch with distant relatives, professional networking, specific interest communities) and ruthlessly eliminate the rest. Often, 20% of social media use provides 80% of the benefits.

Tech-free zones and times: Establish physical spaces (bedrooms, dinner tables) and time periods (first hour after waking, last hour before sleep) that are completely free of social media and digital news consumption. For those inclined to automate such boundaries, a small scripting sketch follows this list.
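That scripting sketch, in Python, is below. It is a minimal illustration, assuming a Unix-like system where the hosts file lives at /etc/hosts and the script can run with administrator rights on a schedule; the domain list and the 9 p.m. to 8 a.m. window are placeholders to adapt, not recommendations.

```python
# block_hours.py: a minimal sketch of self-imposed tech-free hours.
# Assumptions: a Unix-like system, sufficient privileges to edit /etc/hosts,
# and periodic scheduling (e.g., hourly via cron). Domains and hours are
# illustrative placeholders, not recommendations.
from datetime import datetime

HOSTS_PATH = "/etc/hosts"
MARKER = "# tech-free-hours"  # tags the lines this script manages
BLOCKED_DOMAINS = [
    "facebook.com", "www.facebook.com",
    "instagram.com", "www.instagram.com",
    "tiktok.com", "www.tiktok.com",
    "x.com", "twitter.com",
]
BLOCK_START, BLOCK_END = 21, 8  # block from 9 p.m. until 8 a.m.

def in_block_window(hour: int) -> bool:
    # The window wraps past midnight, so check the two halves separately.
    return hour >= BLOCK_START or hour < BLOCK_END

def main() -> None:
    with open(HOSTS_PATH) as f:
        # Keep every line except the ones this script added on earlier runs.
        lines = [line for line in f if MARKER not in line]
    if in_block_window(datetime.now().hour):
        # Map each blocked domain to localhost so connections to it fail.
        lines += [f"127.0.0.1 {domain} {MARKER}\n" for domain in BLOCKED_DOMAINS]
    with open(HOSTS_PATH, "w") as f:
        f.writelines(lines)

if __name__ == "__main__":
    main()
```

Scheduled hourly (for example, via a root cron entry along the lines of 0 * * * * python3 /path/to/block_hours.py, with a path of your choosing), the script adds or removes the block to match the window. Built-in tools such as iOS Screen Time and Android Digital Wellbeing accomplish the same goal without any scripting.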

Reclaiming Reality

We stand at a critical juncture. The information ecosystem that has emerged over the past decade, characterized by algorithmic amplification, rampant misinformation, and increasingly sophisticated deepfakes, poses unprecedented challenges to our ability to maintain a shared sense of reality.

The stakes could not be higher: our mental health, our democratic institutions, our social cohesion, and our capacity to address collective challenges all depend on our ability to distinguish truth from fiction.

The platforms will not save us. Their business models prevent fundamental reform. Regulation, while necessary, cannot keep pace with technological change. What remains is individual and collective action: conscious choices about where we direct our attention, how we consume information, and which digital ecosystems we choose to inhabit.

Reducing social media intake is not a retreat from modern life but an act of self-preservation in an age of information warfare. It is choosing signal over noise, depth over breadth, and reality over its increasingly convincing simulations. It is recognizing that our most precious cognitive resources, including attention, critical thinking, and trust, are under assault and require protection.

The path forward requires courage to step away from platforms engineered to be addictive, wisdom to recognize that not all information sources are equal, and discipline to prioritize long-term wellbeing over short-term engagement. It requires building new habits, fostering new communities, and developing new literacies adequate to our technological moment.

Most importantly, it requires understanding that the question is no longer whether to reduce social media intake, but how quickly we can implement meaningful change before the erosion of shared reality becomes irreversible. The age of unreality is here. Our response will determine whether we lose ourselves in the confusion or find our way back to firmer ground.

The choice, ultimately, is ours. But we must choose soon, and choose wisely. Our grip on reality depends on it.

References

ABC News. (2024, October 25). AI deepfakes a top concern for election officials with voting underway. https://abcnews.go.com/Politics/ai-deepfakes-top-concern-election-officials-voting-underway/story?id=114202574

Council on Foreign Relations. (2024, February 2). Election 2024: The deepfake threat to the 2024 election. https://www.cfr.org/blog/election-2024-deepfake-threat-2024-election

Northeastern University. (2025, May 1). Taking a break from Facebook and Instagram can boost emotional well-being, research finds. https://news.northeastern.edu/2025/05/01/social-media-break-mental-health-study/

SQ Magazine. (2025). Social media misinformation statistics 2025: How social platforms amplify false content (with data). https://sqmagazine.co.uk/social-media-misinformation-statistics/

Surfshark. (2025). 38 countries have faced deepfakes in elections. https://surfshark.com/research/chart/election-related-deepfakes

U.S. Department of Health and Human Services. (2025). Social media and youth mental health. https://www.hhs.gov/surgeongeneral/reports-and-publications/youth-mental-health/social-media/index.html

U.S. PIRG Education Fund. (2023, August 14). How misinformation on social media has changed news. https://pirg.org/edfund/articles/misinformation-on-social-media/