Why Take9 Won't Improve Cybersecurity

There's a new cybersecurity awareness campaign: Take9. The idea is that people (you, me, everyone) should pause for nine seconds and think harder about the link they are about to click, the file they are about to download, or whatever it is they are about to share.

There's a website, of course, and a video, well-produced and alarming. But the campaign won't do much to improve cybersecurity. The advice it offers is unworkable, it won't make individuals or nations appreciably safer, and it deflects attention from the real causes of our insecurity online.

To begin with, the advice is unrealistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; set a timer. Then count how many links you click on and how many things you forward or reply to. Are we supposed to pause for nine seconds after every text message? Every Slack notification? Does the count reset if someone replies mid-pause? What about browsing: do we pause before clicking each link, or after every page load? The logistics quickly become unworkable, and I seriously doubt anyone tested this with real users.

Second, it mostly won't help. The industry should know better, because we've tried this before. "Stop. Think. Connect." was an awareness campaign from 2016, run by the Department of Homeland Security (before CISA existed) and the National Cybersecurity Alliance. The message was similar: Pause and think before doing things online. It didn't work then, either.

Take9's website states, "Research indicates: In stressful situations, wait 10 seconds before reacting." The problem is that clicking a link isn't a stressful situation. It's a routine one, repeated hundreds of times a day. Maybe you can train someone to count to 10 before throwing a punch in a bar, but not before opening a file.

There is also no scientific basis for the advice. It's folk wisdom that circulates online without any reliable research behind it, much like the five-second rule for food dropped on the floor. And in genuinely stressful situations, most people are already overwhelmed and cognitively loaded, hardly in a state where a deliberate rational pause works as smoothly as the advice assumes.

Pausing Offers Little

Pauses help us break habits. If we click, share, download, and connect out of habit, a pause really can interrupt that pattern. But habit isn't the main problem here. The core problem is that people can't tell the difference between something legitimate and an attack.

The Take9 website claims that nine seconds is "sufficient time for a better decision," but there's no point in telling people to stop and think if they don't know what to think about once they've stopped. Pause for nine seconds and… then what? Take9 offers no guidance. It assumes people already have the cognitive tools to spot the many possible threats and to work out which of the thousands of actions they take online could be harmful. Without that knowledge, pausing longer, even a full minute, does nothing to raise awareness.

The three-part model of suspicion, cognition, and automaticity (SCAM) offers a framework for understanding this. The first element is the knowledge gap: not knowing what is risky and what isn't. The second is habituation: people doing what they have always done. The third is the misuse of faulty mental shortcuts, such as assuming that PDFs are safer than Microsoft Word documents, or that opening a suspicious message on a phone is safer than on a computer.

These processes don't operate in isolation; they can run concurrently or sequentially, and they can reinforce or cancel each other. A knowledge gap, for instance, can push someone toward faulty mental shortcuts, and those same shortcuts can entrench the knowledge gap. Meaningful behavioral change therefore requires more than a pause; it requires cognitive scaffolding and system designs that account for these dynamic interactions.

An effective awareness campaign would do more than urge people to pause. It would walk them through a two-step process. First, trigger suspicion, prompting them to look more closely. Then direct their attention by telling them what to look at and how to evaluate it. When that happens, a person is far more likely to make a good decision.

This means pauses need to be context-dependent. Consider email clients that add banners such as "EXTERNAL: This email originates from outside your organization" or "You have not received correspondence from this sender previously." Those warnings are specific and actionable. One could imagine an AI plugin that flags: "This isn't how Bruce usually communicates." Of course, this is an arms race; attackers will adapt and find ways around such systems.
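
As a rough illustration, here is a minimal sketch of what such context-dependent warnings could look like in code. Everything in it is assumed for the example: the Message fields, the known_contacts lookup, and the org_domain parameter stand in for whatever a real mail client would actually provide.

    # Minimal sketch of context-dependent email warnings like the banners
    # described above; the Message fields and known_contacts set are
    # hypothetical stand-ins, not any real mail client's API.
    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str   # e.g., "alice@example.com"
        subject: str

    def warning_banners(msg, org_domain, known_contacts):
        """Return the contextual warnings to display above a message."""
        banners = []
        sender = msg.sender.lower()
        sender_domain = sender.rsplit("@", 1)[-1]
        if sender_domain != org_domain:
            banners.append("EXTERNAL: This email originates from outside your organization.")
        if sender not in known_contacts:
            banners.append("You have not received correspondence from this sender previously.")
        return banners

    # Example: an outside, first-time sender triggers both warnings.
    msg = Message(sender="billing@invoices.example", subject="Overdue invoice")
    for banner in warning_banners(msg, "example.com", {"bob@example.com"}):
        print(banner)

The point of the sketch is the design, not the code: the warning fires at the moment of decision and tells the reader exactly what to be suspicious of.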

All of this is hard. The traditional cues no longer work; modern phishing messages are far more polished than the old Nigerian scams riddled with typos and grammatical errors. Text, voice, and video scams are even harder to catch. A text message often lacks the context a system needs to flag it, and in a voice or video interaction it's much harder to trigger suspicion without derailing the conversation. And every false positive, every legitimate interaction the system misidentifies as a potential scam, erodes people's trust. Eventually they start ignoring their own suspicions, just as many already ignore the warnings their computers throw at them.
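
To see why the false positives are so corrosive, consider a back-of-the-envelope calculation. The numbers are assumed purely for illustration and come from no study:

    # Assumed figures, for illustration only: even a modest false-positive
    # rate buries the rare real attack under a pile of bogus warnings.
    legit_messages_per_day = 200   # assumed volume of legitimate messages
    false_positive_rate = 0.01     # assumed: 1% of legitimate mail gets flagged
    real_scams_per_month = 1       # assumed: one genuine phish a month

    false_alarms_per_month = legit_messages_per_day * 30 * false_positive_rate
    print(f"False alarms per month: {false_alarms_per_month:.0f}")  # 60
    print(f"Real scams per month: {real_scams_per_month}")          # 1

Sixty bogus warnings for every real one, and the warnings become noise that people learn to click past.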

And even if we did all of this perfectly, we couldn't make people invulnerable to social engineering. Recently, both cyberspace advocate Cory Doctorow and security expert Troy Hunt, two people you would expect to be expert phishing detectors, were successfully phished. In both cases, it was simply the right message at the right moment.

The problem is even worse for large organizations, whose security rests not on the average employee's ability to spot a malicious email but on the least capable person: the weakest link. Even if awareness training raises the average, it won't be enough.

Don’t Assign Blame Where It’s Misplaced

Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a moment and making a better decision. What it leaves unsaid, but strongly implies, is that if they don't take that moment and don't make better decisions, then an attack is their fault.

That simply isn't true, and this blame-the-user mentality is one of the worst mistakes our industry makes. Stop trying to fix the user. It isn't a person's fault if they click a link that compromises their system. It isn't their fault if they plug in an unfamiliar USB drive or dismiss a warning they don't understand. It isn't even their fault if they're fooled by a counterfeit bank website and lose their money. The fault is that we have built systems so insecure that ordinary, non-technical users can't operate them with confidence, and we are using awareness campaigns to paper over bad system design. Or, as security researcher Angela Sasse put it in 1999: "Users are not the enemy."

We wouldn't accept this mindset anywhere else. Imagine Take9 in other contexts. Food service: "Before sitting down at a restaurant, take nine seconds: Inspect the kitchen, maybe check the temperature of the walk-in cooler, make sure the cooks' hands are clean." Aviation: "Before boarding a plane, take nine seconds: Examine the engine and cockpit, check the maintenance log, ask the pilots if they feel rested." The advice is obviously absurd. The average person has neither the training nor the expertise to assess restaurant or aircraft safety, and we don't expect them to. Regulations and laws exist precisely so that people can eat at restaurants and board flights without fear.

But, we get it, the government isn't about to step in and regulate the Internet. These insecure systems are what we have. Security awareness education, and the blame-the-user mentality that comes with it, are all that's on offer. So if we want meaningful behavioral change, it will take much more than a pause. It will take cognitive support and system designs that account for all the dynamic interactions behind a decision to click, download, or share. And that takes real work: far more than an ad campaign and a slick video.

This essay was co-authored with Arun Vishwanath and originally published in Dark Reading.

