
Sounds of the scam: the increasing sophistication of voice phishing attacks and deepfakes


Don’t believe everything you hear. The voice on the phone may just be a deepfake voice phishing attack designed to slip by your defences.

As technology continues to change the way we live and work, we’re seeing more and more of our tasks shift online, making face-to-face interaction a little less prominent in our daily work lives.
As incredible as this has been for productivity and collaboration, it has also given rise to a new generation of scammers and con artists, who are finding a plethora of new vulnerabilities to exploit in our digital workflows.

Business Email Compromise (BEC) attacks are spreading like wildfire, with cybercriminals using impersonation attacks, invoice scams and spear phishing to steal data and sneak malware into their targets’ systems. Deepfake voice attacks, a form of voice phishing, extend the BEC playbook with a new tactic in the attacker’s arsenal.


What is a deepfake?

The term “deepfake” is a blend of “deep learning” and “fake”. Deepfakes use machine learning and artificial intelligence to combine and superimpose existing images, audio and video, enabling attackers to generate fake voices, videos and images that seem authentic. This form of social engineering aims to trick or coerce individuals into taking actions that benefit the attacker. On a smaller scale, this can involve impersonating an authority figure’s voice to get someone to carry out a transaction or share sensitive data. On a larger scale, considering today’s charged global political climate, deepfakes can potentially be used to sow distrust, sway public opinion and defame individuals.


How it works

The first step in creating a convincing deepfake is to train a deep learning model by using a large, labelled dataset of audio and video samples until it reaches an acceptable level of accuracy.
Public figures tend to have large volumes of audio and video content freely available on social media sites, so it’s not too difficult to gather enough samples.

The model can then synthesize a face or voice that matches the training data accurately enough to seem authentic. Many of us are already aware of fake videos of politicians, but with companies becoming more visible on social media and CEOs delivering talks through videos and podcasts, there is a risk that influential business leaders may soon find themselves victims of deepfake attacks.
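To make the idea concrete, here is a toy sketch in Python of the embedding step described above. This is purely illustrative and assumes nothing about any real voice-cloning product: actual systems learn a compact “speaker embedding” from audio samples and condition a generator on it, whereas here a crude average magnitude spectrum stands in for that learned embedding. The point it demonstrates is simply that even short clips of audio carry a measurable, recognisable voice signature.

```python
import numpy as np

def toy_voice_embedding(waveform: np.ndarray, frame: int = 256) -> np.ndarray:
    """Average magnitude spectrum over fixed-size frames -- a crude
    stand-in for the learned speaker embeddings real systems use."""
    n_frames = len(waveform) // frame
    frames = waveform[: n_frames * frame].reshape(n_frames, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return spectra.mean(axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voice embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Simulate "four seconds" of audio at 16 kHz for two voices:
# a pure tone plus noise stands in for each speaker's recordings.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 4 * 16000)
speaker_a  = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
speaker_a2 = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
speaker_b  = np.sin(2 * np.pi * 240 * t) + 0.1 * rng.standard_normal(t.size)

emb_a, emb_a2, emb_b = map(toy_voice_embedding, (speaker_a, speaker_a2, speaker_b))

# Two clips of the same voice embed close together; a different voice does not.
print(similarity(emb_a, emb_a2) > similarity(emb_a, emb_b))  # True
```

A real attack pipeline replaces the hand-rolled spectral average with a neural speaker encoder, but the principle is the same: a few seconds of audio is enough to place a voice at a distinctive point in the embedding space, which a generator can then imitate.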
 


How voice phishing enhances BEC attacks

Deepfake audio is also being used to enhance BEC attacks. Reports already show a marked rise in deepfake audio attacks over the last year and it looks like they will become even more common as the technology matures.

In fact, recent research demonstrated that a convincing cloned voice can be developed from just four seconds of source audio. Even within that small sample, distinguishing personal voice traits such as pronunciation, tempo, intonation, pitch and resonance can be captured to create a convincing deepfake. The more source audio and training samples the model has, the more convincing the output and the harder it is to detect.


Some products even allow users to select a voice of any gender and age, rather than emulating the intended target. This method has the potential to allow for real-time conversation or interaction with a target, which makes detection even more difficult.

 
 

Dangerous new tactics, same old motive

The threat is no longer just theoretical. Large-scale deepfake audio deceptions have already been attempted, to the tune of $243,000. Leaders must remain aware of the unconventional cyber threats they are exposed to and maintain a robust security awareness training program that evolves alongside current threats like voice phishing.

 

Even though the methods cyber attackers use are growing more sophisticated, the goal is typically the same: tricking humans for financial gain. Cybersecurity technology is always evolving, and it’s important to understand how much these scams rely on human behaviour to succeed. Training and awareness are some of your best defences. Making sure employees do their due diligence before sharing any sensitive information and checking for confirmation before carrying out any unusual transactions will thwart most deepfake attempts. That also means empowering your people to decline or withhold any request from senior management that doesn’t follow company policy until they’ve confirmed its authenticity. No exceptions. 

 

Culture and cyber resilience go hand-in-hand. By cultivating a cyber-aware culture that factors in both technology and human behaviour, you can dramatically improve the cyber resilience of your organisation as a whole. 

Principal Technical Consultant

Garrett O’Hara is the Principal Technical Consultant at Mimecast having joined in 2015 with the opening of the Sydney office, leading the growth and development of the local team. With over 20 years of experience across development, UI/UX, technology communication, training development and mentoring, Garrett now works to help organisations understand and manage their cyber resilience strategies. When not talking about the cyber security landscape, data assurance approaches and business continuity Garrett can be found running, surfing or enjoying the many bars and eateries of Sydney's Northern Beaches.
