27 Mar 2023 · 4 min read

How is AI changing cyber security?

By Aude Martin

Rapid progress in artificial intelligence (AI) is enabling new cyber security solutions – but also creates the possibility of a new era of cyber crime. We examine how both sides are adapting.


ChatGPT has led to a surge of interest in AI in recent months, and blogs on these pages have explored what generative AI means for the future of work and which industries are most likely to be disrupted.

Beyond raising hopes of increased labour productivity, AI has the potential to help solve some of humanity’s most urgent challenges. From interpreting vast quantities of genomic data to aid the fight against cancer, to optimising urban transport strategies to tackle climate change, the potential applications of AI are far-reaching.

However, as with any new tool, technological developments mean new risks. One area where this double-edged nature is particularly apparent is cyber security, where both cyber criminals and companies providing security solutions are exploring how rapid progress in AI could give them the upper hand.

The hacker’s POV: lower barriers to entry and smarter spam

The ability of ChatGPT and other generative AI systems to write functional code (alongside programming-specific tools such as GitHub Copilot) will lower the barriers to entry for creating computer programs of all types, potentially including malware. Creators of AI systems will come under pressure to prevent misuse, but the ingenuity of those looking to harness AI for criminal gain will make this a constant battle – recall today’s disputes over who is ultimately responsible for harmful content shared on social media, and over the practicality of policing such spaces.

Lower technical barriers to entry could accelerate an existing trend: the rise of ‘malware as a service’, in which criminal groups hire out tools and expertise to the highest bidder, has already made many aspects of cyber crime available to non-specialists.1

However, AI-written code may ultimately prove less valuable to cyber criminals than machine-generated content targeting the weakest link in cyber security: the user. The commoditisation of apparently human writing could allow criminals to greatly refine phishing emails by drawing on information gleaned from social media and company websites to create bespoke attacks.

Initially, individualised attacks will take place via text, but ‘deepfakes’ – synthetic media produced using AI – will add a new dimension to criminal efforts to deceive victims. This technology is also increasingly likely to feature in disinformation campaigns.2 Erroneous AI-generated content has highlighted that the source material used to train these systems may itself be vulnerable to manipulation.

The cyber security POV: real-time threat monitoring

For cyber security companies, AI promises new ways to identify potential attacks, enabling them to be stopped before they cause harm.

For example, imagine a phishing or extortion attack perpetrated by email. To defend against this attack vector, cyber defence companies have historically identified and blocked specific email addresses, domains or message contents. This stops known attacks, but attackers constantly change these features: the content of fraudulent emails often draws on topical and emotive subjects, as fraudsters know these engage readers, while domains and sender addresses are rotated continually.
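To make the contrast concrete, here is a minimal sketch of what signature-based blocking amounts to: a set of exact-match lookups. The addresses, domain and hash below are invented for illustration; this is not any vendor’s implementation.

```python
import hashlib

# Minimal sketch of signature-based email filtering: block known-bad
# senders, domains and exact message bodies. All values are invented
# for the example.
BLOCKED_SENDERS = {"billing@secure-update.example"}
BLOCKED_DOMAINS = {"secure-update.example"}
BLOCKED_BODY_HASHES = {
    # SHA-256 digest of a previously observed fraudulent email body
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_blocked(sender: str, body: str) -> bool:
    """Return True if the email matches a known static signature."""
    domain = sender.split("@")[-1].lower()
    body_hash = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return (
        sender.lower() in BLOCKED_SENDERS
        or domain in BLOCKED_DOMAINS
        or body_hash in BLOCKED_BODY_HASHES
    )

print(is_blocked("billing@secure-update.example", "Pay now"))   # True: known sender
print(is_blocked("billing@secure-updates.example", "Pay now"))  # False: one character changed
```

A single changed character in the sender’s domain defeats every lookup – precisely the churn described above.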

Rather than identifying malicious emails based on these static signatures, AI allows identification of malicious content based on what it is seeking to achieve.

Darktrace* uses this approach in its AI systems,3 which assign each email a score in four ‘inducement categories’: extortion, solicitation, phishing and other spam. Because these scores are based on underlying features of email structure rather than static signatures, they can identify malicious content regardless of superficial changes.
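As a toy illustration of intent-based scoring, the sketch below rates an email against the four inducement categories named above using crude keyword and structural features. The patterns and weights are our own invented assumptions for the example – they bear no relation to Darktrace’s actual model, which the cited research describes as analysing email structure.

```python
import re

# Toy sketch of intent-based email scoring across four inducement
# categories. The patterns are invented for illustration only.
CATEGORY_FEATURES = {
    "extortion":    [r"\bbitcoin\b", r"\bcompromising\b", r"\bpay within\b"],
    "solicitation": [r"\binvestment opportunity\b", r"\bguaranteed returns\b"],
    "phishing":     [r"\bverify your account\b", r"\bpassword\b", r"https?://\S+"],
    "other_spam":   [r"\bunsubscribe\b", r"\bfree\b", r"!{2,}"],
}

def score_email(body: str) -> dict:
    """Score an email body from 0 to 1 in each category."""
    scores = {}
    for category, patterns in CATEGORY_FEATURES.items():
        hits = sum(bool(re.search(p, body, re.IGNORECASE)) for p in patterns)
        scores[category] = round(hits / len(patterns), 2)
    return scores

print(score_email("Please verify your account password at https://bad.example now!!"))
# {'extortion': 0.0, 'solicitation': 0.0, 'phishing': 1.0, 'other_spam': 0.33}
```

Because the scores reflect what a message is trying to do rather than exact strings, cosmetic changes to the sender or wording leave them largely intact; a production system would learn such features statistically rather than hand-code them.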

The blurring line between cyber and physical security

As digital systems become increasingly interwoven with physical security, we believe the role of AI in tackling cyber crime will be crucial.

2022’s World Cup in Qatar provided a good example: the ‘connected stadium’ concept allowed lighting, gate access and communications across all eight stadiums to be controlled from the Aspire Command Centre in Doha.4 As well as potentially jeopardising coverage of the football, a successful cyber attack on the World Cup would have endangered spectators.

AI’s ability to identify threats at machine speed formed a valuable part of the defences that allowed the World Cup to avoid disruption. Away from the excitement of the World Cup, 2021’s ransomware attack on the Colonial Pipeline,5 which resulted in fuel shortages in several states, provided an example of how cyber attacks can cripple physical systems.

Precedence Research estimates that the AI cyber security market was worth $17.4 billion in 2022 and forecasts that it will reach $102.8 billion by 2032, implying a compound annual growth rate of 19.4%.
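The implied growth rate can be verified with the standard compound-growth formula, CAGR = (end ÷ start)^(1/years) − 1, applied to the figures above:

```python
# Check the compound annual growth rate implied by the market-size
# figures quoted above ($bn, 2022 to 2032).
start, end, years = 17.4, 102.8, 10
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 19.4%
```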

As ChatGPT has demonstrated, AI has made remarkable progress in a short space of time. Whether cyber criminals or security companies will be first to fully capitalise on this progress remains to be seen.

 

*For illustrative purposes only. Reference to a particular security is on a historic basis and does not mean that the security is currently held or will be held within an LGIM portfolio. The above information does not constitute a recommendation to buy or sell any security.

1. Source: https://assets.sophos.com/X24WTUEQ/at/b5n9ntjqmbkb8fg5rn25g4fc/sophos-2023-threat-report.pdf

2. Source: https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html

3. Source: https://darktrace.com/research/analysis-of-email-structure-to-detect-malicious-intent

4. Source: https://www.qatar2022.qa/en/tournament/stadiums

5. Source: https://www.bbc.co.uk/news/technology-57063636

Aude Martin

ETF Investment Specialist

Aude joined L&G ETF in July 2019 as a cross-asset ETF Investment Specialist. Prior to that, Aude worked as a delta one trader at Goldman Sachs and within the structured-products sales teams at HSBC and Credit Agricole CIB. As an investment specialist, she contributes towards the design of investment strategies and actively supports the ETF distribution and marketing efforts. She graduated from EDHEC Business School in 2016 with an MSc in Financial Markets.
