The First Move is Data Collection
Data minimization is security architecture.
In my last Field Note, I proposed that if victory is decided before conflict, then withholding knowledge guarantees defeat before the first move is ever made. But in some cases the opposite is true: failing to withhold, gathering and keeping everything, is what guarantees defeat before the first move is ever made. Sometimes the problem is not that people lack information. Sometimes the problem is that we collect too much of it.
In December 2025, the Electronic Frontier Foundation published their “Breachies” list, a satirical awards-style roundup of the year’s worst breaches. Their conclusion wasn’t complicated: “the best way to avoid catastrophic data exposure is to stop collecting data you do not need in the first place.”
That idea sounds obvious. But in practice, it cuts against how most systems are built. We have normalized collecting everything: identity data, location, analytics, behavioral tracking, photos, and verification artifacts. We tell ourselves we will “secure it later.”
And then the breach happens.
Sometimes the most dangerous breaches aren’t the flashy ones. They’re the quiet ones, the ones hiding inside “normal software”: analytics components and tracking technology that end up embedded everywhere. That’s the terrain now. Your organization doesn’t carry only the risk you designed for; it also inherits the risk of what your developers included, what your vendors required, and what your workflows quietly depended on.
And this is where Sun Tzu’s supply line wisdom becomes uncomfortably modern. In Waging War, he wrote: “Bring war material with you from home, but forage on the enemy. Thus the army will have food enough for its needs.” A few lines later, he makes it even more direct: “Hence a wise general makes a point of foraging on the enemy. One cartload of the enemy’s provisions is equivalent to twenty of one’s own…” That isn’t just a battlefield lesson. It’s an economic one. The smartest adversaries don’t want to sustain themselves with their own resources if they can capture yours. And in modern systems, our “provisions” are often the data stockpiles we collect and retain without thinking twice.
Over-reliance on cryptography is one of the greatest threats in our industry. This is why the idea of “Q-day” matters, even if it’s not tomorrow. “Harvest now, decrypt later” attacks don’t start in the future; they started years ago. If an adversary can harvest encrypted data today and decrypt it later, then “we encrypted it” was never the finish line. The safest sensitive dataset is the one you never collected.
Attackers don’t have to outsmart your tools in a fair fight. They just need you to build the depot. Once they get inside, they can use what you gathered to sustain themselves: identity fraud, account takeovers, extortion, resale, long-tail exploitation. That’s why the worst breaches are not always defined by the initial access. They’re defined by what was waiting on the other side of the door. And it gets worse when the data you collect can’t be replaced. A password can be rotated. A credit card can be canceled. But the modern push for identity checks and biometric verification creates a different kind of harm. More and more organizations are collecting IDs, selfies, face scans, and verification artifacts. When those are exposed, there is no clean reset.
You don’t get to rotate your fingerprint when it ends up on the dark web.
This is the part most teams miss: security is not only about preventing compromise. It’s also about limiting what compromise can take from you. And that limiting work takes more than one vantage point, because no single person ever holds the full picture of a complex system. Engineers see one set of constraints. Operators see another. Leaders see priorities and pressures that never appear in diagrams. Users interact with systems in ways designers didn’t anticipate. Risk emerges in the gaps between those perspectives.
Collaboration is not a soft value in this context; it’s a strategic necessity. But collaboration needs to include one uncomfortable question early, not after the breach: Why are we collecting this data at all?
Asking that question is the first move. As Sun Tzu taught: “The supreme art of war is to subdue the enemy without fighting.” Because security controls can be perfect on paper and still fail in real life. Even experts get caught. As the Breachies conversation bluntly points out, you can’t nerd your way out of a lot of these problems. People have bad days. Vendors fail. Misconfigurations slip through. Threat actors exploit timing, not just weakness. So the strategic advantage is not only being smarter than an adversary. It’s designing systems where the worst day does not become a permanent catastrophe.
Here are five questions worth asking before anything ships:
What decision does this data enable? If there isn’t one, don’t collect it.
Can we verify without retaining? Prove the fact, don’t store the artifact.
What third parties touch this data? Every vendor expands the blast radius.
How long do we truly need it? Put retention on a timer by default.
What happens if this leaks? If harm is irreversible, treat it as radioactive.
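The second question, “prove the fact, don’t store the artifact,” can be made concrete. Here is a minimal sketch in Python of a hypothetical age check during sign-up (the function name and return shape are mine, not from any specific product): the birthdate is read off the ID document, compared, and discarded; only the yes/no outcome and the date of the check are ever handed back for storage.

```python
from datetime import date

def verify_minimum_age(birthdate: date, minimum_age: int, today: date) -> dict:
    """Prove the fact without storing the artifact.

    The caller passes the birthdate extracted from an ID document.
    Only the boolean outcome and the check date are returned for
    retention; the birthdate itself never leaves this function.
    """
    # Has this year's birthday already happened?
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    age = today.year - birthdate.year - (0 if had_birthday else 1)
    return {"old_enough": age >= minimum_age, "checked_on": today.isoformat()}
```

The design point is that the record you keep answers the business question (“was this user old enough on the day we checked?”) while being worthless in a breach. A leaked boolean cannot be resold the way a leaked ID scan can.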
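The fourth question, retention on a timer, is cheapest when expiry is the default at write time rather than a cleanup job bolted on later. A hypothetical in-memory sketch of that pattern (a real system would lean on a database TTL or an object-lifecycle policy; the 30-day default below is an illustrative number, not a recommendation):

```python
import time

DEFAULT_TTL_SECONDS = 30 * 24 * 3600  # illustrative 30-day default

class ExpiringStore:
    """Minimal sketch: every record gets an expiry the moment it is written.

    Retention beyond the default is an explicit exception, not the
    silent result of nobody running a cleanup job.
    """

    def __init__(self):
        self._records = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl_seconds=DEFAULT_TTL_SECONDS, now=None):
        now = time.time() if now is None else now
        self._records[key] = (value, now + ttl_seconds)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._records.get(key)
        if entry is None or now >= entry[1]:
            self._records.pop(key, None)  # lazy purge of expired records
            return None
        return entry[0]
```

Making `ttl_seconds` a defaulted parameter inverts the usual failure mode: forgetting to think about retention now produces deletion, not an indefinite stockpile.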


