Deepfakes
Deepfakes combine existing video footage with an artificial-intelligence-driven synthesis of whatever the creator wants the person to appear to be saying. These are likely to be used against banks, according to Tim Dunn, commercial director at ValidSoft. “As deepfake technology evolves, it will represent an advanced method of social engineering,” he says. “The widespread use of web and app-based video channels will provide the opportunity for advanced deepfakes to fool both agents and AI bots alike.” The best bet for future mitigation looks to be biometric detection of synthesised voices that can identify the fakes.
Counter-incident response
Staying inside the target network is key to a successful cyberattack; the longer attackers are in, the more they can get out. Counter-incident response does what it says on the tin: attackers switch off antivirus, firewalls and anything else that might trigger detection. “The longer they have to achieve their goal, be that lateral movement, island hopping further up the supply chain or data collection, the better chance they have of success,” says Rick McElroy, head of security strategy at Carbon Black. Mitigation will have to include machine-learning-powered technology that filters the noise in incident reporting so security analysts can respond to the real events as quickly as possible.
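The idea of filtering incident noise can be sketched without any heavyweight machine learning: the illustrative triage below simply ranks alerts so that rare, high-severity events surface above frequent, low-severity chatter. The `Alert` shape, the source names and the scoring rule are hypothetical assumptions for this sketch, not any vendor's product.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alert:
    source: str    # hypothetical rule or sensor name
    severity: int  # 1 (informational) to 5 (critical)

def triage(alerts, top_n=3):
    """Rank alerts so the rarest, most severe surface first;
    frequent low-severity noise sinks to the bottom."""
    freq = Counter(a.source for a in alerts)
    scored = sorted(alerts, key=lambda a: a.severity / freq[a.source],
                    reverse=True)
    return scored[:top_n]
```

Given ten routine antivirus heartbeats and a single critical lateral-movement alert, the lone critical event is ranked first.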
Automated attack methodology
Just as security operations centres are starting to apply intelligent automated incident response filtering, so criminals are finding that automated attack methodology works well on their side of the security fence. “Automated, active attacks are escalating and causing massive damage to organisations that have been targeted,” says Chester Wisniewski, principal research scientist at Sophos. They begin with automated mass reconnaissance scans and basic malware infections, with human involvement coming later to see what’s been caught in the net. “These are essentially criminal penetration tests,” Wisniewski warns. Mitigation? Keeping your cybersecurity emergency basics in place.
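The automated mass reconnaissance scans Wisniewski describes have a simple statistical signature: one source probing many distinct ports. The sketch below flags that pattern on the defender's side; the threshold and the connection-record format are illustrative assumptions, not a production scan detector.

```python
from collections import defaultdict

def flag_scanners(connections, threshold=20):
    """Flag source IPs that probe many distinct ports -- the
    signature of automated mass reconnaissance.
    connections: iterable of (src_ip, dst_port) pairs."""
    ports_seen = defaultdict(set)
    for src_ip, dst_port in connections:
        ports_seen[src_ip].add(dst_port)
    return [src for src, ports in ports_seen.items()
            if len(ports) >= threshold]
```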
Big game hunting
Rather than adopt a scattergun approach of automated malware infection, big game hunters take their time to target key organisations for the best return. The weapon of choice is ransomware, which employs well-tested and human-powered reconnaissance, delivery and lateral-movement tactics, techniques and procedures. “The wider e-crime network has been seen to be leveraging this approach more widely,” explains Zeki Turedi, technology strategist at CrowdStrike. “As a highly devastating yet effective tactic, we can only see this continuing.” Mitigation requires proactive monitoring for indicators of attack by capturing all raw events to detect malicious activity not identified by traditional prevention methods.
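Hunting raw events for indicators of attack can be illustrated in miniature: the sketch below flags hosts where a credential dump is followed shortly by remote execution, a classic lateral-movement chain. The event labels, tuple format and time window are hypothetical, not CrowdStrike's actual telemetry schema.

```python
def detect_ioa(events, window=300):
    """Flag hosts where a credential-dump event is followed by remote
    execution within `window` seconds -- a classic lateral-movement
    chain. events: iterable of (timestamp, host, action) tuples."""
    hits = []
    last_dump = {}  # host -> timestamp of most recent credential dump
    for ts, host, action in sorted(events):
        if action == "credential_dump":
            last_dump[host] = ts
        elif (action == "remote_exec" and host in last_dump
              and ts - last_dump[host] <= window):
            hits.append(host)
    return hits
```

Neither event is suspicious alone; it is the sequence within the window that raises the flag.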
Disinformation
Disinformation is the deliberate act of producing and spreading inaccurate information with the aim of manipulating perceptions and influencing decisions, be they political or business in nature. “Lies are a fact of life,” says Rodney Joffe, former cybersecurity adviser to the White House and currently senior vice president and fellow at Neustar. “But the Internet has enabled this to occur at scale with close to 100 per cent reach.” The lack of jurisdiction and the absence of physical validation make it almost impossible to tell truth from lies. “The best we can do is continually develop methods of validation and authentication, and ensure this is part of every process, both commercially and personally,” Joffe concludes.
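One concrete form of the validation and authentication Joffe calls for is a message authentication code, which lets a recipient check that a statement really came from the holder of a shared key and was not altered in transit. A minimal sketch using Python's standard library; the key and messages are placeholders, and real deployments would use managed keys or public-key signatures.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical shared key

def sign(message: bytes) -> str:
    """Produce an authentication tag for the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag in constant time; False means forged or altered."""
    return hmac.compare_digest(sign(message), tag)
```

A tampered statement fails the check: a tag produced for one message does not verify against an altered one.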