Why Your Phishing Program Is a Waste of Time and Money

How much time do you spend on building, running, and reporting on internal phishing email tests each year?

Many organizations run phishing tests quarterly or even monthly. These tests span the range of complexity, from classics such as foreign royalty seeking help moving some money, to more advanced scenarios tailored specifically to the target. The test is run and a report is produced. In most cases, the focal point of the report is the percentage of recipients who failed. Failure can mean different things in different situations, but the two most common are clicking on the link or providing credentials to the attackers. Perhaps the people who fail are given some additional awareness training.

If the program stops here, I would say you are wasting your own time and your end users' time, and are likely wasting funds on a tool that would be better used elsewhere.

The above situation is far too common and is not designed to move the needle on the overall security posture of your environment. Reporting on click rates and nothing more is the equivalent of reporting how many thousands of “intrusion attempts” your perimeter firewall blocked last month. Both are vanity metrics that sound impressive but provide no context. When I hear something like “10% of our users clicked on last month’s phishing test,” a flood of questions comes to mind, including:

  • Is that good?
  • What was the context of the event?
  • How does that compare to previous tests?
  • How do we compare to peer organizations in our industry?
  • Were other controls bypassed to allow testing?
  • Did anyone report the phishing email?
  • How quickly did the first end user report the email to security/help desk?
  • How many failures occurred after the first suspicious email report?

I could go on, but my point is that simply reporting a click rate or credential compromise rate alone does not matter. I would argue that programs focusing on these metrics are doing more harm than good. Organizations are being lulled into a false sense of security when the click rate is brought into the single digits. I say this is a false sense because, according to the 2023 Verizon Data Breach Investigations Report, “74% of all breaches include the human element, with people being involved either via Error, Privilege Misuse, Use of stolen credentials or Social Engineering.”

We, as an industry, have been running these phishing email tests for years, and clearly they are not working.

When looking at the above phishing scenario, the analogy I like to use is that of an accounting department. I work in security; I do not want the accounting department to expect me to help close the books at month end. I am not an accountant and would not be the right person to help. What is fair is that the accounting department has created an expense reimbursement policy it expects me to follow, which allows it to do its job of closing the books. It should also build in controls to ensure expenses have appropriate approvals, coding for the general ledger, and so on.

To bring this back to the phishing email scenario, security needs to stop expecting end users to be security experts. That is not their job. What is fair is for security to design a simple process that end users are expected to follow, such as “click this phish alert button” or “forward the email to the phishing@ email address”. That should be the extent of end user involvement. From there, the security team should build a layered control environment to:

  • Limit the likelihood of a malicious email reaching an end user
  • Increase the response capabilities to triage suspicious emails
  • Reduce the impact of a successful phishing email against the organization

How to Design a Phishing Program That Isn’t a Waste of Time

Start by building the program with your end goal in mind. Ideally, you will be using these phishing tests to influence a specific end user behavior. You should also identify which metrics would allow you to measure the influence these tests have on the intended behaviors. Some examples of phishing metrics by maturity:

  • How many people clicked on the link/document in the email?
  • How many people subsequently provided credentials?
  • How many recipients reported the phishing email?
  • How many failures occurred before the first suspicious email report?
  • How fast did your MSP/MSSP triage the phishing attempt and take proper actions?
  • How many legitimate phishing emails (not tests) were reported by end users?
  • How many end users were prevented from interacting with legitimate phishing emails because other end users reported them?

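As a sketch of how the more mature metrics above might be computed, consider a single test's event log. The event schema, field names, and sample data below are hypothetical, not taken from any particular phishing platform's export format:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record for one phishing test; "action" is one of
# "clicked", "credentials", or "reported" (illustrative labels only).
@dataclass
class Event:
    user: str
    action: str
    time: datetime

def summarize(events, recipients):
    """Compute behavior-focused metrics for a single phishing test."""
    events = sorted(events, key=lambda e: e.time)
    reports = [e for e in events if e.action == "reported"]
    failures = [e for e in events if e.action in ("clicked", "credentials")]
    first_report = reports[0].time if reports else None
    # Failures occurring after the first report suggest the response
    # process (triage, quarantine) was too slow to protect later victims.
    failures_after = sum(
        1 for e in failures if first_report and e.time > first_report
    )
    return {
        "click_rate": len({e.user for e in failures}) / recipients,
        "report_rate": len({e.user for e in reports}) / recipients,
        "first_report": first_report,
        "failures_after_first_report": failures_after,
    }

# Sample log: 100 recipients, three failures, one report.
events = [
    Event("bob", "clicked", datetime(2024, 1, 1, 9, 2)),
    Event("alice", "reported", datetime(2024, 1, 1, 9, 5)),
    Event("carol", "clicked", datetime(2024, 1, 1, 9, 10)),
    Event("dave", "credentials", datetime(2024, 1, 1, 9, 12)),
]
metrics = summarize(events, recipients=100)
```

The point of the sketch is the shift in emphasis: the click rate is one output among several, while the time of the first report and the number of failures after it measure the behavior and response capability you actually want to improve.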
At the end of the day, if you want the program to be a valuable tool in your toolkit, you need to design it to avoid the vanity of out-of-context metrics and focus on having a meaningful impact on end user behavior. Also, look at what real-world attackers are doing: they target end user credentials because those credentials have value. You can focus on reducing the likelihood of an end user exposing their credentials, but that approach has not worked to date. Consider instead reducing the value of those credentials by minimizing the impact a compromised password has on your environment.