Privacy · 7 min read
Digital Disconnections
Lena & Heather

Why Scaring People About Privacy Doesn’t Work

Fear-based messaging triggers the exact cognitive patterns that keep people locked in. The only thing that moves behavior is autonomy.

There is a certain kind of privacy article that begins with a number.

Hundreds of data points collected every hour. Billions of records exposed in the last breach. Seventeen companies you’ve never heard of that know your income, your health history, your 3 a.m. search habits. The number is always large enough to be incomprehensible and precise enough to feel authoritative. You read it, feel a cold wash of something — dread, maybe, or the vague nausea of confirmation — and then you close the tab and open Instagram.

The article did its job. You are now more aware than you were before. You are not, however, doing anything differently.

This is the fear paradox. And it has been quietly undermining privacy advocacy for twenty years.


The Awareness Trap

The theory behind fear-based messaging is intuitive: if people understood what was being done to their data, they would be outraged. And if they were outraged, they would act. Awareness leads to action. Therefore, create awareness.

The theory is wrong — or rather, it is wrong about humans.

Kahneman and Tversky showed that we feel losses more sharply than equivalent gains. But there is a caveat the privacy world keeps ignoring: loss aversion is most powerful for immediate, tangible losses. When the loss is abstract, probabilistic, distributed across years — which describes nearly every data privacy harm — the same framing that should motivate instead produces cognitive overload. The brain downgrades the threat because it can’t locate it. It can’t locate it because it has no address, no face, no specific day on which anything was actually taken.

The result is not outrage. Researchers who study this call it privacy fatigue — a measurable state of withdrawal that looks, from the outside, exactly like apathy. Choi et al. (2018) documented it as a clinical response pattern: repeated exposure to warnings without corresponding agency produces burnout, not action. Hargittai and Marwick found the same thing across demographics — awareness of surveillance correlated with resignation. The more people knew, the less they tried.

Privacy fatigue is not apathy. It’s the rational response to being told repeatedly that you are in danger and having no clear path to safety. The mind decides, more or less consciously: if I can’t fix it, I won’t feel it.

You closed the tab. You opened Instagram. That wasn’t failure of will. That was a perfectly logical response to a message that offered no exit.


Why Fear Makes the Cage Bigger

Here is the crueler part: fear-based messaging doesn’t just fail to motivate. It actively reinforces the behaviors it’s trying to interrupt.

Brehm’s reactance theory — later extended by Cialdini in Influence — holds that when people perceive their autonomy is under threat, they resist, often by intensifying the very behavior being discouraged. Tell someone their favorite platform is dangerous and you have made it more psychologically attractive, because abandoning it now feels like conceding agency rather than exercising it. Ariely showed that default choices become stickier under cognitive load, and nothing generates cognitive load quite like being told your entire digital life is compromised. Samuelson and Zeckhauser found that status quo bias strengthens in proportion to the number of alternatives and the complexity of the choice. Fear messaging about privacy does both: it multiplies the perceived alternatives (which platform is “safe enough”?) while making every choice feel more fraught.

When you tell someone that the platforms they use are surveilling them, you are threatening their identity, not just their data. Most people have years of social context, memories, and relationships stored on those platforms. To say “this is bad for you” is also, implicitly, to say “your choices have been bad.” Identity threat triggers defensiveness, not reflection.

The status quo is powerful in proportion to how much we’ve already invested in it. The longer we’ve been somewhere, the more reasons we find to stay. Fear doesn’t erode that math — it confirms it. If everything is unsafe and surveillance is everywhere, the familiar unsafe thing is still safer than the unfamiliar unknown.

Fear messaging, paradoxically, is an argument for staying put.

There is a harder version of this argument, from Seligman’s learned helplessness research. Subjects exposed repeatedly to uncontrollable negative stimuli stopped attempting escape — even when conditions changed and escape became trivially easy. The mechanism is what Seligman called explanatory style: when a cause is perceived as permanent (“surveillance is everywhere”), pervasive (“every platform does this”), and uncontrollable (“you already agreed to the terms”), the brain reclassifies the problem from something to solve to something to endure. Every privacy headline that emphasizes the scale and inevitability of data collection — however accurate — is training its audience in precisely this explanatory style.

The advocacy meant to liberate people ends up confirming their helplessness. That is the full arc of the paradox.


What the Research Says Actually Works

The good news — and there is good news, which is also the point — is that researchers have a pretty clear picture of what does move behavior.

Not fear. Not shame. Not the 400-million-records statistic.

Autonomy.

Deci and Ryan’s self-determination theory identifies autonomy, competence, and relatedness as the three core psychological needs that drive sustained behavior change. When messaging activates a sense of competence and agency — here is something you can do, and it works — behavior follows, reliably if not instantly. When messaging undermines those needs, as fear messaging does by positioning the user as a victim of forces beyond their control, the response is withdrawal. Peer et al. tested this directly in privacy contexts: gain-framed messages (“this tool keeps your data on your device”) significantly outperformed loss-framed messages (“your data is being collected”) in driving adoption of privacy tools. And Acquisti, Brandimarte, and Loewenstein’s review in Science concluded that the gap between privacy attitudes and privacy behavior — the famous “privacy paradox” — is best explained not by hypocrisy but by the absence of perceived behavioral control.

People don’t act because they’re scared. They act because they believe acting will work.

This is not a communications insight. It is a design insight. You can’t solve a messaging problem with better messaging if the underlying experience doesn’t offer a real exit.

The question isn’t “how do we scare people into caring?” It’s “what does caring actually look like, and is that thing available?”


The Kintsugi Principle

Kintsugi is the Japanese practice of repairing broken ceramics with lacquer mixed with gold. The philosophy behind it is not that the break didn’t happen. The philosophy is that the break is part of the object’s history — that acknowledging it, even honoring it, is more honest and more durable than pretending it away.

We’ve been trying to pretend the break away by making people afraid of it.

The kintsugi approach to privacy is not denial and it is not alarm. It’s acknowledgment. Yes, the data economy is extractive. Yes, the defaults were designed for someone else’s benefit. Yes, you have been tracked. That’s true. It is also not the end of the story. The break is visible. You can work with visible.

What you can’t work with is a threat that has no resolution, a statistic with no action attached, a warning with no door marked “exit.”


What Digital Disconnections Does Instead

Private Assistant doesn’t work by warning you about surveillance. It works by making surveillance structurally unnecessary for this tool, in this moment. The model runs on your device. Your questions do not leave the room. There is no server to log your input, no terms of service requiring data retention, no third-party integrations quietly inheriting your conversation history.

We don’t lead with what we’re against. We lead with what you gain.

You gain a tool that answers to you. You gain an AI that processes your thinking in your presence instead of at a facility you’ve never seen. You gain the ordinary, underrated experience of a private thought remaining private.

We’re not asking you to be afraid of the alternative. We’re asking you to notice how good the alternative feels.

That distinction — between fear-as-motivation and capability-as-invitation — is the whole argument.


A Note on Preachiness

One of the hazards of writing about privacy is the tendency to collapse into moral instruction. To tell the reader what they should do, what their choices really mean. Privacy advocates, myself included, are not immune to this tendency.

But preachiness is another form of identity threat. Nobody changes their behavior because someone disapproves of their behavior. They change because they see something worth moving toward.

So: this is not a rebuke. If you’ve been closing the scary privacy articles and going back to the familiar apps, that was a reasonable response to a poorly designed message. The message didn’t offer you a door.

There is a door.

It’s not even a hard door. It opens from the inside.


Sources

Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221), 509–514.
Ariely, D. (2008). Predictably Irrational. HarperCollins.
Brehm, J. W. (1966). A Theory of Psychological Reactance. Academic Press.
Choi, H., Park, J., & Jung, Y. (2018). Is the Internet user’s privacy fatigue real? Cyberpsychology, Behavior, and Social Networking, 21(7).
Cialdini, R. (2021). Influence (revised ed.). Harper Business.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic Motivation and Self-Determination in Human Behavior. Plenum.
Hargittai, E., & Marwick, A. (2016). “What can I do?” How people manage their digital privacy concerns. Communication Research, 43(6).
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7–59.
Seligman, M. E. P., & Maier, S. F. (1967). Failure to escape traumatic shock. Journal of Experimental Psychology, 74(1), 1–9.

Fear got their attention. That was never the problem. We just need to give them somewhere to go.

Lena & Heather
Digital Disconnections

Lena crafts the narrative arc and brand voice. Heather layers in the behavioral psychology — Kahneman, Cialdini, Seligman — that turns intuition into evidence. Together they write about why the words we use about privacy matter less than the architecture underneath.