Garrett O’Hara

Garrett O’Hara is the Chief Field Technologist, APAC at Mimecast, having joined in 2015 with the opening of the Sydney office and led the growth and development of the local team. With over 20 years of experience across development, UI/UX, technology communication, training development and mentoring, Garrett now works to help organisations understand and manage their cyber resilience strategies and is a regular industry commentator on the cyber security landscape, data assurance approaches and business continuity.


Many businesses are now embracing artificial intelligence in some shape or form.

Law firm Pillsbury predicts that AI spending in cybersecurity will soar in the next few years, rising from $10.5 billion in 2020 to potentially $46.3 billion by 2027. In 2022, Mimecast found that around half of respondents had already adopted machine learning or AI – with benefits including more accurate threat detection (65% of organisations), reduced human error (65%) and better threat prevention (57%).

That rise is partly down to AI tools becoming more sophisticated, but it’s also because AI is a great fit for today’s threats. As networks have become more complex and distributed thanks to trends such as the rise of remote work, AI’s ability to analyse millions of security events across different platforms has become increasingly valuable. Its speed in detection and response, meanwhile, means AI can often stop incursions before they turn into the modern cybersecurity nightmare: a ransomware attack.

Here we'll show what AI can do, and what it can’t – and why its role in incident response can be a game-changer.


Real AI is capable of learning

AI can be transformative, but organisations should still look carefully before they leap. AI has something of a marketing problem: many solutions out there claim to be AI but are actually nothing of the sort. Plenty of offerings use automated tools to analyse data and act on the results, but while these static processes may be sophisticated and useful, they fall under data analytics, not AI.

Real AI uses overlapping approaches such as machine learning (in which the system applies algorithms and statistical models to data and builds on the results), neural networks (weighted decision-making) and deep learning (multi-layered artificial neural networks). It has knowledge, intelligence and the ability both to learn and to use that learning.
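To make the machine-learning idea concrete, here is a deliberately minimal sketch of the principle the article describes: a statistical model that learns a baseline from data and then flags deviations from it. The login-count scenario, the numbers and the threshold are all hypothetical, and real security tooling would use far richer models and features.

```python
import statistics

# Hypothetical example: learn a per-user baseline of daily login counts,
# then flag days that deviate sharply from the learned baseline.
history = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13]  # observed normal activity
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(logins: int, threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations from baseline."""
    return abs(logins - mean) > threshold * stdev

print(is_anomalous(13))   # a typical day -> False
print(is_anomalous(250))  # a sudden burst of logins -> True
```

The point is the shape of the approach, not the arithmetic: the model is derived from the data rather than hard-coded, so as the baseline data changes, so does what counts as “anomalous”.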

These characteristics allow AI cybersecurity tools to constantly gather and analyse data, spotting patterns at an almost unimaginable scale. They can scan your assets, survey the threat landscape and even predict breaches. The more data and the more networks they have access to, the more useful they become.

Why AI plays a crucial role in incident detection

Artificial intelligence’s incident response capability starts early. Security checks, behavioural analytics, monitoring and intelligent prediction help AI identify anomalies across users, servers and software. When it comes to large pattern analysis across a distributed network, AI beats human observers hands-down.

By combining constant monitoring with information about past incidents and behaviours, AI can both spot and evaluate risks. Rather than simply relying on signatures to identify malware, AI can assess various characteristics – such as encrypting multiple files at once, or seeking to hide from observation – to assess the danger software may pose.
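The behaviour-based idea above can be sketched as a weighted scoring of observed characteristics rather than a lookup against known signatures. The behaviour names, weights and threshold below are all hypothetical illustrations, not real detection logic.

```python
# Hypothetical sketch of behaviour-based malware assessment: weight several
# observed characteristics of a process instead of matching a known signature.
WEIGHTS = {
    "mass_file_encryption": 0.5,    # encrypting many files at once
    "evades_monitoring": 0.3,       # attempts to hide from observation
    "unusual_network_beacon": 0.2,  # periodic callbacks to unknown hosts
}

def risk_score(observed: set) -> float:
    """Sum the weights of the behaviours actually observed (0.0 to 1.0)."""
    return sum(w for name, w in WEIGHTS.items() if name in observed)

score = risk_score({"mass_file_encryption", "evades_monitoring"})
print(round(score, 2))  # 0.8 -> well above a typical alerting threshold
```

Because the score reflects behaviour rather than a fixed fingerprint, previously unseen malware that acts like ransomware still scores highly.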


AI is more than just a messenger – it can play an active role in response

AI can reduce alert fatigue and mute the background noise that drives analysts up the wall and causes real threats to be missed amid the false positives. By spotting issues early, it enables security teams to triage or fight back against incidents fast, preventing them from escalating. But while its role as facilitator is crucial, AI is increasingly playing a more active part.

It can identify repetitive and relatively routine tasks by assessing the level of risk and the amount of context it has, then respond to such incidents at scale. For example, automatically shutting down a device that has been infected by ransomware is an easy win that can avert a major incident.
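A response policy of that kind can be sketched as a simple decision rule: act automatically only when both the risk and the available context are high, and otherwise hand off to a human. The action names and thresholds here are hypothetical placeholders for whatever a real platform exposes.

```python
# Hypothetical response policy: automate only high-risk, high-confidence
# incidents; escalate the ambiguous ones to a human analyst.
def respond(risk: float, confidence: float) -> str:
    if risk >= 0.8 and confidence >= 0.9:
        return "isolate_device"       # e.g. clear ransomware behaviour on one endpoint
    if risk >= 0.5:
        return "escalate_to_analyst"  # serious but ambiguous: humans decide
    return "log_and_monitor"          # routine noise: record it, take no action

print(respond(0.95, 0.95))  # isolate_device
print(respond(0.60, 0.40))  # escalate_to_analyst
print(respond(0.10, 0.90))  # log_and_monitor
```

The design choice worth noting is the two-sided gate: high risk alone is not enough to automate an action, because acting on low-confidence context is how automation causes outages of its own.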

For more serious or complex incidents, AI can use its knowledge of resources – such as the availability and expertise of engineers – to suggest how an incident can be managed. Teams will be able to hit the ground running, too: multiple data streams can be collated and pulled into a report for workers to scan and decide on a course of action.


Cyberattackers use AI, and so should you

Of course, AI isn’t just used by the good guys. Attackers may use machine learning to identify vulnerabilities before seeking to exploit them via phishing or Distributed Denial of Service (DDoS) attacks, and may even use AI to personalise social engineering emails. Organisations should use machine learning of their own to identify and manage such attacks.

AI is also a part of the wider incident landscape. Its ability to scan great swathes of your networks at speed can help you recover from incidents in shorter timeframes. It can surface the root causes of vulnerabilities and put data and analysis at your fingertips, helping CISOs make a case for future cybersecurity measures, and safeguard against future incidents.


But AI isn’t everything

For all its potential, AI still has its limitations. To be effective, AI must be implemented well, with tools integrated into your existing ecosystem and adjusted to your workflows.

It’s also worth taking some of the boldest claims about AI with a pinch of salt. While such systems will learn from the changes around them, they can be blindsided by shifts that might seem obvious to a human observer – like the pandemic’s impact on working patterns. AI’s ability to identify anomalies can be hugely useful, but it can still leave thousands of false positives to investigate, especially if it’s drawing from a limited data pool. The truth is that AI is a tool that is only as good as the people using it. AI can support human teams and extend their capabilities, but it is no match for human insight yet. But the right security team, supported by the right AI tools, can make a tremendous difference to your cybersecurity posture.


AI is a crucial part of modern cybersecurity

As organisations’ attack surfaces become distributed and complex, AI’s ability to operate at scale and with relative autonomy has become crucial for cybersecurity. And it isn’t just about threat detection: as we’ve seen, incident response is a growing area in which AI can transform companies' efforts.

AI is not a one-stop solution to invest in at the expense of other defences: rather, it should augment the defences you already have, from firewalls and threat hunting to awareness training and zero trust. Use it wisely, and artificial intelligence can reduce risk, manage routine incidents and free up your security team to direct their expertise where it is needed most.
