In today’s digital world, cybersecurity is a top priority for every organisation, or at least it should be. But the current economics of storing and processing enterprise security data make it expensive, and often practically impossible, to defend against cyber threats effectively.

In this article, we share key security operations modernisation considerations for organisations looking to improve their ability to address potential cyber threats in a smarter, faster, and more cost-effective manner.

Today’s security operations challenges

The challenge is that the attack surface is expanding, and the threat landscape is adapting so quickly that it is increasingly difficult for organisations to keep up with the sheer volume of threats. In a recent Forbes article, Jess Leroy, Google Cloud’s Director of Product Management, explained, “The biggest problem in the industry when it comes to SOCs themselves is really that people have been doing things in a way for a long time now that is really no longer sustainable. The old model just doesn’t work.”

This comes down to two things: volume and speed. Jess cited a few relevant statistics, such as a 600% year-over-year growth in crimeware and the fact that there are over 100 zettabytes of data out there. Meanwhile, threat actors are becoming more sophisticated and increasingly use automation to streamline attacks. Combine that with a severe shortage of cybersecurity talent and skill sets, and Security Operations teams are often overworked and undertrained for the challenges they face.

Modern security operations requirements

Organisations generate vast amounts of data but are often limited in how much of it they can store and process, because traditional setups charge based on data ingestion rate. As a result, most organisations use only 30% or 40% of their telemetry. They are not correlating and analysing all of the data, because they simply don’t have a framework capable of doing it.

The standard is often a 90-day retention period for security logs, yet IBM statistics show that the average dwell time from infiltration to detection is 250 days. This means that should an incident occur, there is no forensic record available for incident responders to identify where the threat came from and how to mitigate it. Organisations are therefore forced to make educated guesses based on partial snapshots.
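To make that gap concrete, here is a minimal sketch of the arithmetic. The 90-day retention and 250-day dwell figures come from the article; the variable names and the notion of a "visibility gap" are illustrative, not a formal metric:

```python
# Illustrative calculation of the forensic visibility gap between
# typical log retention and average attacker dwell time.
RETENTION_DAYS = 90    # common security-log retention period
AVG_DWELL_DAYS = 250   # IBM's average time from infiltration to detection

# Days of attacker activity for which no logs survive by detection time.
visibility_gap = max(AVG_DWELL_DAYS - RETENTION_DAYS, 0)

# Fraction of the intrusion timeline that is forensically recoverable.
recoverable = RETENTION_DAYS / AVG_DWELL_DAYS

print(f"Visibility gap: {visibility_gap} days")
print(f"Recoverable timeline: {recoverable:.0%}")
```

On these averages, roughly 160 days of attacker activity, almost two thirds of the intrusion timeline, has already aged out of the logs by the time the breach is detected.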

To ensure that organisations have the appropriate protections in place to respond to the threat landscape and the risks that they face, security operations requirements should include the following:

  • Lower and predictable cost with petabyte scalability for analysing enterprise security telemetry across on-premises, cloud, and hybrid environments.
  • Google search speed against petabytes of data for rapid investigations.
  • Instant intelligence by correlating indicators of compromise against 1+ years of security telemetry.
  • SOC productivity multipliers with automated response playbooks.

Future of security operations considerations

Whether an organisation’s SOC team can be counted on one hand or runs a robust 24/7 operation, maturing security operations capabilities provides the data and strategy needed to communicate security capabilities effectively to the executive team and the board.

Reforming the security operations strategy means ensuring resources go toward enhancing security maturity and improving cyber resilience to ultimately reduce overall risk to the organisation. The right strategy will be scalable to meet the evolving and diverse range of security threats and tailored to fit the company’s unique needs. The result is enhanced threat detection and response across the entire environment, greater visibility, and reduced silos among teams. Read more in our eBook – A New Era of SOC: Detect, Hunt and Respond to Cybersecurity Threats like Google. 

Although the journey looks a little different for every business, there are a few key security telemetry questions every organisation should address when getting started:

  1. Where to host? Scale and speed matter.
  2. Retention? At least 365 days of high-availability data is a must.
  3. Scalability? Choosing which data to collect or drop due to scalability constraints restricts visibility.
  4. File types? It’s got to be searchable so you can retroactively match newly discovered indicators of compromise.
  5. Accessibility? When you need it, external incident responders might need it also.
  6. Reliability? If a log is missing or lost, you cannot prove it hasn’t been tampered with.
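Point 4 above, retroactively matching newly discovered indicators of compromise, is worth illustrating. This is a minimal sketch only: the record fields (`timestamp`, `dest_ip`, `host`) and the `retro_hunt` helper are hypothetical, not a real log schema or product API. The idea is that a year of searchable telemetry lets you find historical contact with an IOC that was only published today:

```python
from datetime import datetime

# Hypothetical retained log records; field names are illustrative.
retained_logs = [
    {"timestamp": datetime(2023, 1, 10), "dest_ip": "203.0.113.7", "host": "ws-042"},
    {"timestamp": datetime(2023, 5, 2),  "dest_ip": "198.51.100.9", "host": "db-01"},
    {"timestamp": datetime(2023, 9, 18), "dest_ip": "203.0.113.7", "host": "ws-113"},
]

# A newly published indicator of compromise, e.g. a known C2 address.
new_ioc = "203.0.113.7"

def retro_hunt(logs, ioc):
    """Retroactively match a newly discovered IOC against historical telemetry."""
    return [rec for rec in logs if rec["dest_ip"] == ioc]

hits = retro_hunt(retained_logs, new_ioc)
for rec in hits:
    print(f"{rec['host']} contacted {new_ioc} on {rec['timestamp']:%Y-%m-%d}")
```

Note that with only 90 days of retention, the January contact in this example would already be gone; 365 days of searchable data is what makes the older match findable at all.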

See how Secrutiny and Google Cloud have got your back

“Customers choose our managed security operations service (SOCaaS) for its disruptive approach to building a modern threat detection, investigation, and response program,” says Ian Morris, Co-Founder of Secrutiny. “Built on Google Cloud and leveraging its security infrastructure, most notably Chronicle, we’re able to connect the dots faster across multiple data sources and accurately mitigate threats.”

Check out this case study to see how, together, we helped Morgan Sindall analyse 100% of its available telemetry and see threats far and wide.

Our technical and industry experience is helping organisations detect sophisticated threats in an efficient, analytically driven, and continuously improving model. Speak to our team today to see how Secrutiny and Google Cloud can have your back.