Grain — Case for Impact

Grain is a new social media app with nonprofit governance that uses reflective satisfaction, not engagement, as its north star metric. What follows is an honest assessment of the counterfactual case for better social media and five impact channels for Grain, ranked by evidence quality and practical magnitude.

The Counterfactual Case for Better Social Media

The counterfactual question

Much of the public debate about social media is organized around a single question: is social media harmful? Researchers fight over effect sizes. Advocates cite teen depression statistics. Skeptics point to meta-analyses showing correlations near zero. That debate matters, but it is not the only question worth asking.

For the purpose of deciding whether to work on improving social media, the more useful question is: how large is the gap between social media as it currently exists and social media as it could exist?

If that gap is large, then there is enormous counterfactual value in closing it — regardless of where social media currently falls on the spectrum from slightly net positive to slightly net negative.

The impact surface is enormous

Social media is, by almost any measure, one of the most consequential technologies in American life. The numbers are worth spelling out because the sheer scale is easy to understate.

Users. Roughly 253 million Americans use social media — about three-quarters of the population (Pew Research, 2025). The typical user actively visits nearly seven different platforms per month. YouTube and Facebook each reach a majority of all U.S. adults; Instagram reaches half.

Time. The average American spends about 2 hours and 9 minutes per day on social media (BroadbandSearch). For young adults aged 18–24, it’s over 3 hours. For teens, it’s 4.8 hours. Aggregated across the U.S. user base, that comes to roughly 545 million hours of social media use per day, or about 200 billion hours per year. For comparison, total annual working hours in the entire U.S. economy are roughly 300 billion — meaning Americans collectively spend about two-thirds as much time on social media as they spend at work. Put differently, social media consumption is the equivalent of a workforce of 100 million full-time employees doing nothing but scrolling.

Money. U.S. social media advertising revenue reached roughly $94 billion in 2025, with the U.S. accounting for the largest share of global social ad spend (Statista). Advertisers spend about $335 per U.S. social media user per year, more than seven times the global average. This revenue is the direct financial expression of how much attention platforms capture, and it flows almost entirely to a handful of companies whose business model is to maximize that attention.

Share of life. Over a lifetime, current usage patterns mean the average American will spend roughly 6–7 years on social media. For heavy users and younger cohorts, substantially more. Social media is the dominant medium through which hundreds of millions of Americans maintain relationships, form identities, consume news, encounter political ideas, and spend their leisure time.

The sheer scale means that even small per-person improvements in quality — slightly less regret, slightly more meaningful connection, slightly better information — aggregate to massive population-level impact. A design change that makes the median session 5% more satisfying across 253 million users represents roughly 10 billion hours of improved human experience per year in the U.S. alone. Few interventions have a comparable surface area.
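As a quick sanity check, the aggregate figures above can be reproduced in a few lines of Python (all inputs are the numbers already cited in this section; nothing new is assumed):

```python
# Back-of-envelope check of the aggregate-time claims above.
users = 253e6            # U.S. social media users (Pew Research, 2025)
minutes_per_day = 129    # 2 hours 9 minutes of average daily use

daily_hours = users * minutes_per_day / 60    # ~544 million hours/day with these inputs
annual_hours = daily_hours * 365              # ~199 billion hours/year
fte_equivalent = annual_hours / 2000          # at ~2,000 hours per full-time work-year
improved_hours = annual_hours * 0.05          # the illustrative 5% quality improvement

print(f"{daily_hours/1e6:.0f} million hours/day")
print(f"{annual_hours/1e9:.0f} billion hours/year")
print(f"{fte_equivalent/1e6:.0f} million full-time equivalents")
print(f"{improved_hours/1e9:.1f} billion improved hours/year")
```

The small rounding differences (544 vs. 545 million, 199 vs. 200 billion) reflect rounding in the cited inputs, not an error in the claims.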

The current equilibrium is clearly suboptimal

We do not need to prove that social media is harmful to establish that it could be dramatically better. We only need to show that the current design is far from the optimum. The evidence for this is strong and comes from multiple independent sources.

Users themselves report it. A Carnegie Mellon study analyzing 34,000 smartphone screenshots found that users regretted at least some of their social media use in 60% of sessions and regretted all of it in nearly 40% of sessions (CHI 2025). The content most regretted is not messages from friends — it is algorithmically recommended content that users did not seek out. This is a direct, revealed measure of the gap between what platforms deliver and what users actually want.

People try and fail to use less. 41% of users report having tried and failed to cut back on social media. Among teens who attempt breaks, 61% return within a day. A pattern of repeated failed attempts to reduce a behavior the person themselves considers undesirable is not evidence that the product is well designed for human flourishing. It is evidence of a design optimized for a metric (engagement) that diverges from user welfare.

The collective action trap. Bursztyn, Gonzalez, and Yanagizawa-Drott (2024) ran deactivation experiments at scale and found that most users would prefer a world in which they and their peers use social media less, but no individual can achieve this outcome alone because their social graph remains on the platform. This is a textbook collective action failure: the current equilibrium is stable but Pareto-suboptimal. Everyone would be better off under a different arrangement, but no one can unilaterally switch.

Platform incentives guarantee suboptimality. Advertising-funded platforms maximize engagement, not user satisfaction, because advertisers pay for attention. When the business model rewards time-on-app rather than quality-of-experience, the resulting product will systematically diverge from what users would choose if they could. This is not a conspiracy theory — it is a straightforward principal-agent problem. The user is not the customer; the advertiser is.

Taken together, these findings do not merely suggest that social media could be slightly better. They suggest that the current design is trapped in a local optimum that serves platform economics at the expense of the people using it. The gap is structural, not marginal.

Direct Impact

Today, your social graph is bundled with entertainment. Instagram and TikTok hold your connections hostage alongside algorithmically curated content designed to maximize time-on-app. This bundling traps people in a dynamic they don’t want: you open the app to see how a specific friend is doing, then lose that goal as the algorithm pulls you into an infinite feed of unrelated content. Carnegie Mellon researchers found that users are distracted from their intended purpose 28% of the time on social media, and the CHI 2025 screenshot study found that users who opened an app intending to message a friend ended up doing something else over 60% of the time. Goal loss is not a bug in these products; it is a core feature.

Grain unbundles the social graph from entertainment. By becoming a home for your primary social connections, separate from platforms optimized for engagement, Grain makes it possible to maintain your relationships without being trapped by the content machinery around them. This is fundamentally about user choice: if your friendships live on Grain, you can leave Instagram’s entertainment features behind without losing the people you care about.

The evidence strength varies across five channels of direct impact.

| Theory of Change | Evidence | Per-User Value | Notes |
| --- | --- | --- | --- |
| Reducing Unwanted Screen Time for Average Users | Strong | $100–150/year | Cleanest mechanism, best-supported |
| Improving Quality of Social Connection and Relationships | Moderate | Hard to quantify | Most distinctive to Grain |
| Improving Mental Health, Especially for the Most Vulnerable | Moderate | Heterogeneous; $500+/year for 5–10% most affected | Strongest for adolescent girls |
| Reducing Political Polarization and Partisan Animosity | Weak to moderate | ~Zero individually | Meta 2020 studies complicate the story |
| Reducing Exposure to Misinformation | Weak | Negligible | WhatsApp problem is a cautionary tale |

For a grant application, lead with time recapture (strongest evidence, cleanest mechanism, most quantifiable), pair it with social connection quality (most distinctive to Grain, hardest for a skeptic to dismiss as trivial), acknowledge mental health (important for the narrative but honest about the heterogeneity), and be cautious with polarization and misinformation (real but the evidence either doesn’t support big claims or the mechanism is indirect).

1. Reducing Unwanted Screen Time for Average Users

Evidence: Strong. Estimated per-user value: $100–150/year

This is the strongest theory of change. The mechanism is direct: Grain replaces an app with infinite algorithmic content with one where your friend feed naturally runs out.

The 31% self-control gap from Allcott et al. (2022) establishes that excess use is real and large. The 78% adoption rate for binding limits shows people want help. The design features driving excess use — infinite scroll, variable reinforcement, algorithmic content injection — are precisely what Grain removes by construction. The deactivation and limit-setting RCTs consistently show that reduced time improves wellbeing by 0.06–0.09 SD, replicated across seven studies and multiple countries.

The gap in the evidence is that no study has tested the specific intervention Grain proposes: a substitute platform that preserves social function while removing extractive design. The closest analog is the chronological feed experiments from Guess et al. (2023), which did reduce time spent. But Grain goes further than chronological ranking — it eliminates non-friend content entirely.

The behavioral economics framework is well-established, the mechanisms are well-documented, and the intervention logic follows directly. The uncertainty is about whether Grain can actually capture the social graph, not about whether the intervention would work if it did.

What this looks like at scale. Suppose 10% of U.S. social media users (~25 million people, per Pew Research) adopted Grain and it displaced half of their time on other platforms. The average American spends about 129 minutes per day on social media, and the Allcott et al. self-control gap suggests roughly 31% of that time is unwanted — about 40 minutes per day that people would prefer not to spend. If Grain’s finite, friend-only design helped users recapture even half of their unwanted time on the displaced portion (~20 minutes per day), that would aggregate to roughly 3 billion hours of recaptured time per year across 25 million users. Using the per-user welfare estimate of $100–150/year from the deactivation and limit-setting literature, that translates to roughly $2.5–3.8 billion per year in aggregate welfare gains. For context, this is comparable to the annual economic value of a mid-sized federal public health program — from a single design change to a single product category.
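The scenario arithmetic above can be verified directly; this sketch uses only the figures already cited (the Pew user count, the Allcott et al. self-control gap, and the $100–150/year welfare range):

```python
# Sketch of the adoption-scenario arithmetic above; all inputs are
# the section's own cited figures, not new data.
adopters = 25e6                      # 10% of ~253M U.S. users
unwanted_min = 129 * 0.31            # ~40 unwanted minutes/day (self-control gap)
recaptured_min = unwanted_min / 2    # half the unwanted time on the displaced portion

hours_per_year = adopters * recaptured_min / 60 * 365
low, high = adopters * 100, adopters * 150   # per-user welfare range, $/year

print(f"{hours_per_year/1e9:.1f} billion hours/year recaptured")
print(f"${low/1e9:.2f}-{high/1e9:.2f} billion/year in welfare gains")
```

This recovers the roughly 3 billion hours and $2.5–3.8 billion figures quoted above.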

2. Improving Quality of Social Connection and Relationships

Evidence: Moderate. Estimated per-user value: Hard to quantify

This might be the second-strongest theory of change after time recapture, and the one most distinctive to Grain.

The CHI 2025 screenshot study found person-to-person communication had the lowest regret of any smartphone activity. The deactivation studies found people socialized more with friends and family when off Facebook. There’s a plausible case that Grain, by centering direct messaging and friend updates rather than content consumption, shifts the ratio of social interaction to passive consumption.

The dating recession evidence is suggestive — in-person time with friends down 50% since 2010, declining confidence in social skills among young adults. Grain can’t fix this directly, but if the platform’s design encourages genuine back-and-forth (responding to friend posts, direct messaging) rather than passive scrolling, it might preserve or strengthen social ties that Instagram’s design lets atrophy. You scroll past your friend’s post, maybe drop a like, but never actually talk to them. Grain’s finite feed and lack of competing algorithmic content might increase the likelihood of genuine interaction.

The evidence gap is that no study has compared relationship quality across platform designs. We know deactivation increases face-to-face socializing, and we know person-to-person communication has low regret, but we don’t know whether a friend-focused platform produces more or better social interaction than a general-purpose one.

3. Improving Mental Health, Especially for the Most Vulnerable

Evidence: Moderate. Estimated per-user value: Heterogeneous; $500+/year for 5–10% most affected

This is where the evidence gets genuinely complicated. The population-average effect of social media on mental health is small — Orben and Przybylski’s 0.4% of wellbeing variation, the meta-analysis showing negligible active/passive effect sizes. But the population average obscures substantial heterogeneity.

Braghieri et al. (2022) found that Facebook’s introduction increased depression by 7% and anxiety by 20% relative to baseline, and that was pre-algorithmic Facebook — meaning the social comparison dynamics that Grain would still contain were sufficient to cause harm. Facebook’s internal research found that 32% of teen girls with body image concerns said Instagram made them worse, though those numbers have methodological issues.

The problem for Grain specifically is that some of the mental health harm comes from social comparison, which is inherent to seeing your friends’ lives — not an artifact of algorithmic amplification. A chronological friend feed might reduce exposure to influencer content and algorithmically surfaced appearance-focused content, but it doesn’t eliminate the “everyone’s life looks better than mine” dynamic. It might even concentrate it, since every post you see is from someone you actually know and compare yourself to.

The strongest case is for adolescent girls, where the evidence (despite being contested) is most suggestive of meaningful harm, and where the mechanism plausibly runs through algorithmic content — explore page, beauty content recommendations, engagement-bait body image content — rather than just friend posts. Grain’s friend-only model would eliminate the algorithmic exposure channel while preserving the social connection channel.

There is a genuine unresolved question about whether social comparison among friends is itself harmful. No study has isolated friend-only social media from algorithmic social media for mental health outcomes.

4. Reducing Political Polarization and Partisan Animosity

Evidence: Weak to moderate. Estimated per-user value: ~Zero individually

The headline findings seem supportive: Milli et al. showed engagement algorithms amplify partisanship by 0.24 SD, Jia et al. showed reranking reduced partisan animosity by 2 points. Grain eliminates the algorithmic amplification channel entirely. But the Meta 2020 election experiments are the biggest problem — across four large-scale studies, chronological feeds, reduced like-minded content, and removed reshares produced no significant effects on affective polarization or political attitudes. Nyhan et al. found reducing like-minded content by a third moved nothing.

The reconciliation might be that Grain’s mechanism isn’t about feed ranking within a political content ecosystem — it’s about removing political content exposure altogether. A friend-only feed has very little political content compared to Instagram’s explore page or X’s algorithmic timeline. You’re not reranking political posts; you’re replacing them with photos of your friend’s dog. That’s a different intervention than the one the Meta experiments tested.

But Boulianne’s meta-analyses found social media has small positive effects on civic participation, and the deactivation studies showed reduced political knowledge. If Grain pulls people off platforms where they encounter political information, you might reduce polarization while also reducing engagement with democratic processes. That’s a tradeoff, not a pure win.

The direct experimental evidence (Meta 2020 studies) suggests feed-level interventions don’t move polarization. Grain’s mechanism is different — displacement rather than redesign — but the displacement pathway to reduced polarization hasn’t been tested. And the potential cost to civic engagement complicates the story.

5. Reducing Exposure to Misinformation

Evidence: Weak. Estimated per-user value: Negligible

Guess et al. (2023) found that removing reshared content on Facebook reduced exposure to untrustworthy news sources. Grain’s friend-only model has no resharing mechanism and no algorithmically surfaced news content, so exposure to misinformation through the platform would be near zero compared to Instagram, Facebook, or X.

But the honest question is whether this matters much in practice. Misinformation exposure on social media is concentrated among a small share of heavy news consumers. Most people’s Instagram feed is already mostly friends, food, and entertainment, not political misinformation. And to the extent that Grain displaces Instagram but people still use X or YouTube for news, you haven’t reduced their misinformation exposure — you’ve just moved it to a different app.

Grain could also create its own misinformation channel through DMs and friend posts. Friend-sourced misinformation (“my friend shared this so it must be credible”) can be more persuasive than algorithmically surfaced content from strangers. WhatsApp’s misinformation problem in India and Brazil is the cautionary example of what happens when misinformation spreads through trusted social graphs.

Research Impact

Grain’s nonprofit governance and open design create a second, independent source of impact: Grain as a platform for generating high-quality research on social media’s effects that benefits everyone, not just its own users.

The knowledge gap is a supply problem, not a demand problem

Billions of dollars flow into social science and public health research every year through NSF, NIH, Wellcome Trust, and dozens of foundations. The demand for rigorous research on social media’s effects is enormous. The problem is that almost nobody can run the studies, because platforms won’t give researchers access.

The few times researchers have gotten access, the results have gone straight to the top journals. The Meta 2020 election experiments produced four papers in Science and Nature. That level of publication impact from a single data-access event tells you how starved the field is. And even those studies were constrained: Meta controlled what got published and researchers had limited ability to study design-level questions.

What Grain uniquely enables

A nonprofit-governed platform with reflective satisfaction as its metric has no incentive to suppress unflattering findings. This is structurally different from every major social media company, where research that would justify reducing engagement is an existential threat to the advertising business model. Facebook’s internal research showed they knew about harms and buried them. That’s not a failure of individuals; it’s a predictable consequence of incentives.

Grain could offer researchers something that doesn’t currently exist: a platform designed to be studied. This means IRB-approved randomized experiments on design choices (chronological vs. no feed, finite vs. infinite scroll, friend-only vs. mixed content) with real users in naturalistic settings. It means longitudinal data on the relationship between platform design and wellbeing, with the platform’s own north star metric as a built-in outcome measure. And it means open publication regardless of findings, something no ad-funded platform can credibly commit to.

Scale and feasibility

You don’t need Facebook-scale data for publishable research. Many of the most influential social media studies have sample sizes in the thousands or even hundreds. A platform with 50,000–100,000 engaged users could support serious, peer-reviewed research on design-level questions that the field currently cannot answer.
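To make the feasibility claim concrete, here is a standard two-sample power calculation (an illustration, not from the source; it assumes a two-sided alpha of 0.05, 80% power, and the 0.06–0.09 SD effect sizes cited earlier in this document):

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a standardized effect d
    in a two-sample, two-sided test: n = 2 * (z_alpha + z_beta)^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

for d in (0.06, 0.09):
    print(f"d = {d}: ~{n_per_group(d):,.0f} users per arm")
```

Even at the smaller effect size, roughly 4,400 users per arm suffice, comfortably within a 50,000–100,000-user platform.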

The self-selection concern (Grain users chose a “healthier” platform, so they’re not representative) is real but also interesting rather than disqualifying. What happens to people who switch from engagement-optimized to satisfaction-optimized social media? How do their habits, wellbeing, and social connections change over time? These are questions nobody can currently study because the intervention doesn’t exist yet.