Scott McKellar

    Scott McKellar is a Technical Consultant at Mimecast, where he has worked since early 2019. Scott has been in the technology industry for fifteen years and is passionate about technology and security. He enjoys understanding his customers’ and prospects’ often complex business challenges and aligning technology with them to solve problems and add value. Prior to his role at Mimecast, Scott headed the technology team at Discovery Technology (a Data#3 company), a leading Australian Wi-Fi analytics SaaS and IaaS provider.


Artificial intelligence (AI) and machine learning (ML) are being touted as the cutting-edge solution for every tech problem under the sun: from solving traffic jams to automating manufacturing to powering voice-operated digital assistants. AI/ML fever has also taken the cybersecurity world by storm, and there’s a whole crop of tech companies out there claiming to have some proprietary AI/ML technology that can solve whatever challenge you can throw at it.

The hype has everyone believing that AI/ML is already here in its fully developed form and is ready to replace all the human workers in the cybersecurity industry. But the truth is a little more complicated. When it comes to the AI/ML tools available today, it’s a very ‘buyer beware’ kind of situation. Let’s take a look and see if we can separate the myths from the machines.


How AI/ML works in cybersecurity

There’s no denying that AI/ML tools have a lot to offer in terms of cybersecurity. Their strength lies in their ability to process vast amounts of data from a variety of sources. The scale of their processing power is useful for flagging incidents for human teams to investigate and making analytics more efficient. 

For example, AI/ML can filter out and identify potential incidents using anomaly detection and clustering algorithms. They can also triage those incidents, helping cybersecurity workers focus on urgent issues that need human attention, while automated tools take care of the rest. 
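To make the idea concrete, here is a deliberately simplified, hypothetical sketch of anomaly detection: it flags outliers using a basic z-score rule on synthetic data. Real security products use far more sophisticated models (isolation forests, clustering, and the like), so this is only an illustration of the pre-filtering principle, not any vendor’s actual method.

```python
# Minimal sketch (synthetic data, simple z-score rule) of how an anomaly
# detector can pre-filter events so analysts only review the outliers.
from statistics import mean, stdev

# Hypothetical metric: failed login attempts per user in the last hour
events = {
    "alice": 2, "bob": 3, "carol": 1, "dave": 2,
    "eve": 48,   # an obvious spike worth an analyst's attention
    "frank": 4, "grace": 2,
}

values = list(events.values())
mu, sigma = mean(values), stdev(values)

def is_anomalous(value, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mu) / sigma > threshold

# Only the statistical outliers get escalated for human investigation
flagged = [user for user, count in events.items() if is_anomalous(count)]
print(flagged)
```

Everything below the flagging threshold can be logged or handled by automated playbooks, which is exactly the division of labour described above.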

This is quite an important achievement. The ever-increasing sophistication and volume of threats are forcing security operations teams to pick their battles and prioritise their responses. The old-school way of running a security operations centre (SOC) as a manned system working around the clock, using security information and event management (SIEM) tooling to flag alerts for manual investigation, is no longer sustainable.

The industry is shifting towards intelligent SIEM/SOC platforms where automation takes over the heavy lifting that would normally fall on the shoulders of human analysts. AI/ML can help human teams make decisions faster and more accurately, enabling them to respond much more quickly to security threats and incidents.
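As a hypothetical sketch of the triage idea, the snippet below scores alerts by severity and asset criticality so that only the highest-scoring alerts reach human analysts, while the rest are routed to automation. The scoring scheme and thresholds are invented for illustration; real SIEM/SOC platforms use far richer context.

```python
# Illustrative alert triage: route high-scoring alerts to humans,
# the rest to automated playbooks. Scores/thresholds are made up.
from dataclasses import dataclass, field

SEVERITY = {"low": 1, "medium": 5, "high": 10}

@dataclass
class Alert:
    source: str
    severity: str
    asset_criticality: int  # 1 (lab box) .. 10 (domain controller)
    score: int = field(init=False)

    def __post_init__(self):
        # Simple priority score: severity weighted by asset importance
        self.score = SEVERITY[self.severity] * self.asset_criticality

alerts = [
    Alert("ids", "low", 2),
    Alert("edr", "high", 9),
    Alert("mail-gateway", "medium", 6),
]

# Sort by priority; anything scoring 50+ goes to the human queue
queue = sorted(alerts, key=lambda a: a.score, reverse=True)
human_queue = [a for a in queue if a.score >= 50]
print([a.source for a in human_queue])
```

The point is the division of labour: the machine handles the sorting and the routine tail, and people handle the judgement calls at the top of the queue.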

The limitations of AI/ML security tools

But there’s a limit to how many functions AI/ML tools can successfully take over. An algorithm is only as good as its inputs: any unexpected data or event outside its programmed parameters can compromise its effectiveness. That’s why human intelligence – especially from skilled security analysts – is still critical. Highly trained security teams who specialise in detecting, identifying and protecting against a wide range of cybersecurity threats are still required, and will be for the foreseeable future.

Machine learning systems are also prone to reporting false positives, particularly unsupervised learning systems, where algorithms must infer categories from the data available. There is also a lack of transparency about how the technology actually works. A common concern among CISOs is that they’re not given full disclosure on the inner workings of proprietary AI/ML solutions, and they can find it difficult to put their full trust in a vendor proposing a solution they do not fully understand.
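The false-positive trade-off can be shown with a tiny synthetic example: every event in the data below is benign, so anything the detector flags is a false positive, and tightening the anomaly threshold directly inflates that count. This is a toy illustration of the tuning dilemma, not any product’s behaviour.

```python
# Sketch (synthetic, all-benign data) of the false-positive trade-off:
# a stricter (lower) anomaly threshold flags more harmless events.
from statistics import mean, stdev

benign = [2, 3, 1, 2, 4, 2, 3, 2, 5, 3]  # normal hourly failed logins
mu, sigma = mean(benign), stdev(benign)

def false_positives(threshold):
    """Count benign events whose z-score exceeds the threshold."""
    return sum(1 for v in benign if abs(v - mu) / sigma > threshold)

for t in (1.0, 1.5, 2.0):
    print(f"threshold={t}: {false_positives(t)} false positives")
```

Tune the threshold too low and analysts drown in noise; too high and real incidents slip through, which is why human oversight of these systems still matters.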

To muddy the waters even further, many AI/ML security solutions on the market may not even be doing machine learning. Commercial AI/ML solutions are typically trained on malware samples within the vendor’s cloud environment. These are then downloaded to customer businesses like antivirus signatures. There’s no actual ‘learning’ from the customer’s environment happening at all.


What’s more, the data samples these AI/ML solutions are trained on are limited. Having algorithms that work well on controlled data sets in the lab is one thing, but getting AI/ML cyber defences working at scale in live, complex networks is another matter entirely.

A hybrid model is the best approach

AI/ML tools can be incredibly useful, but they aren’t a magic bullet. At this time, they are only suitable for a narrow range of applications, so any claims to the contrary should be taken with a healthy dose of scepticism. But there is immense potential in the underlying principles of the technology. For any cybersecurity strategy that relies on these tools, it’s important to understand that the best results are achieved when the tools are used to support and augment a well-trained human cybersecurity team. Your people will always be your greatest asset when it comes to cybersecurity, and while AI/ML can greatly expand their capabilities, it can’t replace them.

 

Learn more about Mimecast's acquisition of MessageControl to increase efficacy in the fight against advanced phishing and impersonation attacks.

