Imagine a constantly evolving and evasive cyberthreat that could target individuals and organisations remorselessly. This is the reality of cybersecurity in an era of artificial intelligence (AI).
AI has shaken up the cybersecurity industry, with automated threat prevention, detection and response revolutionising one of the fastest growing sectors in the digital economy.
However, as is so often the case, there’s a dark side. What if cybercriminals get their hands on AI, and use it against public and private sector organisations?
The more AI cybersecurity solutions, the more tempting AI becomes for hackers
“The edge in cyberdefence is speed. AI is transforming cyberdefence, allowing businesses to detect ever more complex threats from ever more sophisticated attackers,” says Andre Pienaar, founder of C5 Capital.
Nevertheless, the more AI security solutions there are, the more cybercriminals will adopt the technology; it’s a case of fighting fire with fire. Newton’s third law describes the situation aptly: for every action, there is an equal and opposite reaction.
Before the advent of AI in cyberattacks, the security landscape was already challenging. But the use of AI in targeted criminal attacks has made cybersecurity more treacherous. Not only are attacks more likely to be successful and personalised, but detecting the malicious piece of intelligent code and getting it out of your network is likely to be much more difficult, even with AI security in your corner.
Adoption of AI by cybercriminals has led to a new era of threats that IT leaders must consider, such as hackers using AI to learn and adapt to cyberdefence tools, and the development of ways to bypass security algorithms. It won’t be long before a continuous stream of AI-powered malware is in the wild.
“In the short term, cybercriminals are likely to harness AI to avoid detection and maximise their success rates,” says Fraser Kyne, Europe, Middle East and Africa (EMEA) chief technology officer at Bromium. “For example, hackers are using AI to speed up polymorphic malware, causing it to constantly change its code so it can’t be identified. This renders security tools like blacklisting useless and has given old malware new life.”
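To see why polymorphism renders blacklisting useless, consider how a hash-based blacklist works: it matches known-bad files by their fingerprint, so even a one-byte mutation produces an entirely different hash. A minimal, harmless sketch in Python (the “payload” here is plain text, not malware):

```python
import hashlib

# A signature blacklist matches known-bad files by hash. A polymorphic
# variant that changes even one byte gets a completely new hash and
# slips past the list unchallenged.
original = b"harmless stand-in for a malware payload"
variant = original + b" "  # trivial one-byte mutation

blacklist = {hashlib.sha256(original).hexdigest()}

def is_blacklisted(data: bytes) -> bool:
    return hashlib.sha256(data).hexdigest() in blacklist

print(is_blacklisted(original))  # the known sample is caught
print(is_blacklisted(variant))   # the mutated variant is not
```

This is why defenders are shifting from signature matching towards behavioural detection, which watches what code does rather than what it looks like.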
Hackers using AI because it takes less effort and yields greater rewards
What about some particular threats? AI-based malware, such as Trickbot, will plague organisations more regularly. This Trojan, a piece of malicious code that enters a network much like the Trojan Horse of Greek legend, can propagate and infect systems automatically. Its authors can make changes on the fly, so it is very difficult to detect and remediate.
The same autonomy that benefits AI security also serves cybercriminals and their nefarious activities, enabling them to analyse large stolen datasets in the blink of an eye and, in turn, create personalised emails or messages to target unsuspecting individuals.
AI trumps humans every time, as was shown in an experiment conducted by two data scientists from security firm ZeroFOX. Their AI, called SNAP_R, sent spear-phishing tweets to more than 800 users at a rate of 6.75 tweets a minute, capturing 275 victims. The human competitor, by contrast, sent malicious tweets to 129 users at 1.075 tweets a minute, capturing only 49. It’s no contest, and another reason hackers are adopting AI: it takes less effort and yields greater rewards.
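The figures above are worth unpacking: at those rates, both contestants spent roughly the same amount of time, yet the AI captured more than five times as many victims per minute. A quick back-of-the-envelope calculation, using only the numbers quoted above:

```python
# Figures from the ZeroFOX SNAP_R spear-phishing experiment cited above.
ai = {"targets": 800, "victims": 275, "tweets_per_min": 6.75}
human = {"targets": 129, "victims": 49, "tweets_per_min": 1.075}

for name, run in (("AI", ai), ("Human", human)):
    minutes = run["targets"] / run["tweets_per_min"]  # time to message all targets
    rate = run["victims"] / minutes                   # victims captured per minute
    print(f"{name}: {run['victims']} victims in ~{minutes:.0f} min ({rate:.2f}/min)")
```

Both runs took around two hours, which is what makes the comparison fair: the AI’s advantage is throughput, not luck.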
“Traditionally, if you wanted to break into a business, it was a manual and labour-intensive process,” says Max Heinemeyer, director of threat hunting at Darktrace. “But AI enables the bad guys to perpetrate advanced cyberattacks, en masse, at the click of a button. We have seen the first stages of this over the last year with advanced malware that adapts its behaviour to remain undetected.”
Cybersecurity models must be data-centric to be truly effective
To cope with this emerging AI security threat, organisations need to adapt their security strategies not only to accommodate AI and innovation, but also to prioritise protecting the corporate gold: data. In the digital economy, the main aim of hackers is to exploit data; it’s where the money is. Crucially, AI is no silver bullet.
“Organisations should use data-centric security models underpinned by information assurance to protect data, as well as continue all the innovations surrounding AI, while continuing to adopt a prevent, detect and respond strategy,” says Dan Panesar, vice president and general manager, EMEA, at Certes Networks. “This combination is the best way for organisations to protect themselves in this digital world.”
Cybersecurity, while not the only consideration, must be front and centre in the minds of IT leaders. The consequences of a breach are certainly great enough to keep any chief executive awake at night.
Make no mistake, we’re engaged in cyberwar, in which AI is both the weapon of mass destruction and part of the sophisticated solution. And the AI arms race is just beginning.
Isolating the threat
Application isolation, developed by Bromium, is a unique technology that renders malware harmless by allowing it to execute fully in a completely isolated, contained environment. As the malware is trapped in a micro virtual machine, it has no means of escape and no data to steal, ultimately preventing damage to the enterprise. This helps to protect against the most common attack vectors, such as malicious downloads, plug-ins and email attachments.
It also provides unique threat data. By allowing malware to run, security teams can track the full kill chain to see what it is trying to do or steal. As this data is captured in the virtual machine, AI can then be applied to spot patterns, identify gaps and recommend the next-best actions for response. Knowing how an attack works enables organisations to deal with it in minutes and mitigate the threat. However, it is important that this solution is used alongside other protection tools to secure an organisation.