Are your security controls actually doing their job? Verizon estimates that 82% of enterprise breaches should have been stopped by existing security controls but weren’t. Why? Because security controls fail repeatedly and silently.

It’s a given that knowing whether your security controls are effective requires testing. Yet for many organisations, formal testing still means the annual pen test. This raises the question: are we doing enough?

Is our testing comprehensive enough to validate our exposure to the issues making headlines and to protect us against the most common, and most embarrassing, attacks? Can it keep up with the pace of change we see on a daily, if not hourly, basis? What we used to test annually may now need testing far more frequently, and updating regularly, to remain relevant.

We’ve created a testing spectrum mapping out seven different styles of testing and the results each delivers. Consider where your organisation sits on the spectrum, and how effective that position really is:

Figure 1: Testing spectrum – ranging from least to most effective.

With entire workforces connected to home networks and surrounded by unmanaged devices, cybersecurity is forced to adopt a new lens, fuelling demand for more frequent, automated testing, with Breach and Attack Simulation (BAS) at the forefront. BAS mimics the multitude of attack strategies and tools that attackers deploy, allowing you to assess your true preparedness to handle cybersecurity risks effectively.
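To make the idea concrete, here is a minimal sketch of the kind of check a BAS-style tool automates: fire a harmless simulated attacker action and verify the relevant control stops it. The sinkhole domain and the specific control (an egress filter or web proxy) are illustrative assumptions, not a real product’s behaviour:

```python
"""Minimal sketch of a BAS-style control check (illustrative only).

Simulates one harmless attacker action: an outbound HTTP 'beacon' to a
sinkhole domain that an egress filter or web proxy should block. The
domain and timeout below are hypothetical placeholders.
"""
import urllib.request

SIMULATED_C2_URL = "http://blocked-test-domain.example/beacon"  # hypothetical

def egress_filter_blocks_beacon(url: str, timeout: int = 5) -> bool:
    """Return True if the simulated beacon was blocked (the control worked)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except OSError:
        # Connection refused, filtered, or timed out: the control did its job.
        # (urllib's URLError is a subclass of OSError, so this covers DNS
        # sinkholing and proxy blocks as well.)
        return True
    return False  # the request succeeded: the control failed silently

if __name__ == "__main__":
    if egress_filter_blocks_beacon(SIMULATED_C2_URL):
        print("PASS: egress control blocked the simulated beacon")
    else:
        print("FAIL: simulated beacon reached its destination")
```

A real BAS platform runs hundreds of such simulations across many attack vectors on a schedule; the value is in the breadth and repetition, not any single check.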

But while these tools are an essential step forward, they are not the only component in your security stack, and you can’t just set them and forget them. You must continuously test both a control’s effectiveness and the response to it: did an alert land in the SIEM? Did the P1 email arrive in your inbox? Ultimately, testing exists to improve your security posture and make your organisation more resilient, not just to create jobs for IT. If you are retesting the same things year after year, what have you actually learned?
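Testing the response can itself be automated. The sketch below shows one way to check that a simulated attack actually surfaced as an alert: poll the SIEM’s search API for an alert tagged with the test’s ID until it appears or a deadline passes. The endpoint, token, and query parameter are hypothetical placeholders; substitute your own platform’s real API:

```python
"""Sketch of validating the *response* to a test, not just the control.

After firing a simulated attack, poll the SIEM for the expected alert.
The SIEM URL, token, and 'tag' query field are hypothetical assumptions.
"""
import json
import time
import urllib.request

SIEM_SEARCH_URL = "https://siem.example.internal/api/alerts"  # hypothetical
API_TOKEN = "REDACTED"  # hypothetical placeholder

def alert_arrived(test_id: str, wait_seconds: int = 300, poll_every: int = 30) -> bool:
    """Poll the SIEM until an alert tagged with our test ID appears, or time out."""
    deadline = time.time() + wait_seconds
    while time.time() < deadline:
        req = urllib.request.Request(
            f"{SIEM_SEARCH_URL}?tag={test_id}",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            alerts = json.load(resp)
        if alerts:
            return True  # the detection pipeline produced an alert for our test
        time.sleep(poll_every)
    return False  # the control may have fired, but the alert never surfaced

if __name__ == "__main__":
    ok = alert_arrived("bas-test-0042")
    print("PASS: alert reached the SIEM" if ok else "FAIL: no alert within the window")
```

The same pattern extends to the human side of the response: checking that the P1 notification email arrived, or that a ticket was raised, closes the loop from control to detection to action.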

So, What Does Good Actually Look Like?

Good is continuous testing of critical controls, run by highly trained staff who can quickly identify what to do when a test fails and understand why it failed: was it a change in the information flow, or a change to the tooling configuration? Good is also regularly reviewing your completed tests and improving or correcting them as the threat landscape changes. Results should be easy to comprehend and shared with the relevant teams in a timely manner.

And while the importance of the human element cannot be overstated, since a human can change direction mid-test when countermeasures are encountered, we don’t believe human-only or machine-only testing is the answer. A blended approach addresses the scale of the problem while still covering hard-to-test items such as processes. We also believe testing should answer the real questions a CIO might ask, such as: how long would it take an attacker on the network to compromise my backups?

We’re not saying that human testing with a pen tester has had its day. We’re saying that not enough vectors are being covered, that testing needs to be more frequent, and that a blend of automated and manual testing is the right way to determine whether your controls are ‘doing what it says on the tin’.