What good AI cyber security software looks like in 2022
This article originally appeared in issue 28 of IT Pro 20/20. To sign up to receive each new issue in your inbox, click here.

Weaponised artificial intelligence (AI) is no longer some futuristic sci-fi nightmare. Autonomous killer robots aren't out to get us just yet, but AI technologies such as machine learning have been adopted by criminal gangs who, like any ambitious organisations, want to give their operations an edge.

One of the best-known botnets, TrickBot, is a prime example of a once standard Trojan that's now brimming with AI capabilities. Its creators have added intelligent algorithm-based modules which, for instance, calculate how to hide in a specific target system, making it almost impossible to detect.

Imaginative attackers are also using AI to scan for minute vulnerabilities in systems, process vast stores of personal data, and create deepfakes so realistic they'd fool a CEO's mum. Tools to achieve this nefarious magic are widely available through the dark web, but more frightening still is the prospect of criminals weaponising organisations' own AI by infiltrating and manipulating the data that informs it.

The implications for global security are indeed grim. Business leaders also fear lagging behind in the AI security race, with 60% of those surveyed by Darktrace last year suggesting human-driven responses are failing to keep up. Nearly all (96%) have begun to guard against AI, but with threats escalating, what tools and systems are available?

How AI learns to guard your data

To face down AI threats, you need AI defences. More than two-thirds (69%) of organisations surveyed in a Capgemini study said AI security is urgent, and this number is likely to grow as more are hit by AI-driven attacks. "I don't know any IT security vendor that hasn't included machine learning algorithms in security toolsets," says Freeform Dynamics analyst Tony Lock. "Security was one of the earliest sectors to use machine learning because it's so good at looking for patterns, especially anomalies that might indicate a threat."

Traditional security tools can't keep pace with the sheer scale of malware and ransomware created every week. AI, by contrast, can detect even the tiniest potential risk before it enters the system, without constant scans or signature updates telling it what threats to look out for. Instead, it learns a baseline of normal activity and then automatically flags anything out of the ordinary.
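To make that baseline-then-flag idea concrete, here's a deliberately minimal sketch in Python. It uses a simple statistical baseline (mean and standard deviation of "normal" activity); the thresholds, feature (login counts) and function names are illustrative assumptions, not any vendor's actual method, and real products model far richer behaviour.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple baseline (mean and spread) from normal activity."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag anything further than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# "Train" on typical daily login counts, then score new observations.
normal_logins = [98, 102, 100, 97, 103, 99, 101, 100]
baseline = build_baseline(normal_logins)

print(is_anomalous(101, baseline))   # a typical day -> False
print(is_anomalous(250, baseline))   # a sudden spike -> True
```

Nothing here needs to be told what a threat looks like in advance; anything sufficiently far from learned normality gets flagged, which is the same basic logic the article describes at much larger scale.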

AI apps and components are available in cloud services from the likes of Amazon and Microsoft, and can be added to existing systems without interrupting workflows. Everyone can get on with their jobs with minimal risk of mistakes, and the tools are designed to scale as required. Microsoft Azure's secure research environment for regulated data is a good example. It uses smart automation to supervise and analyse the user's business data, while its machine learning is ready to leap into action if it detects a blip. Similarly, email scanners such as Proofpoint use machine learning to detect malicious emails by spotting clues far too subtle for a human to see.

The more these tools are used, the faster and more accurate they get. Response times are slashed as AI tools learn from their own experiences and from those of other organisations, through analysis of samples shared in the cloud. "The AI might miss the first attack, but then it'll share that knowledge with other AI systems and create new ways to detect the new attack, and so on," says Adam Kujawa, security evangelist at Malwarebytes. Eventually, says Kujawa, the user won't encounter threats at all.

Beyond anomalies: Automation, scale and prediction

Automated threats can't be tackled using legacy security tools, but AI-powered cyber security tools can help. Deployed in a system, algorithms build a thorough understanding of activity such as website traffic, and learn to automatically and instantly distinguish between humans, good data, bad data, and bots.
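A toy version of that human-versus-bot distinction might look like the following. Real systems learn these boundaries from labelled traffic rather than hand-coding them; the features, thresholds and weights below are hypothetical, chosen only to show the shape of the decision.

```python
def classify_request(requests_per_minute, has_browser_headers, follows_links):
    """Toy traffic classifier: score suspicious signals, then decide.

    A deployed system would learn these weights from data instead of
    hard-coding them; this just illustrates the kind of decision made.
    """
    score = 0
    if requests_per_minute > 60:      # hammering the server
        score += 2
    if not has_browser_headers:       # missing the headers a browser sends
        score += 1
    if not follows_links:             # jumping straight to URLs, never browsing
        score += 1
    return "bot" if score >= 2 else "human"

print(classify_request(200, False, False))  # -> bot
print(classify_request(5, True, True))      # -> human
```

The value of the machine learning approach the article describes is precisely that these rules and cut-offs don't have to be guessed by a human; they're refined continuously as traffic is observed.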

Martin Rehak, CEO of security firm Resistant AI and lecturer at Prague University, gives the example of large-scale financial fraud that exploits organisations' own automation systems. "AI and machine learning are the only scaling factors that can supervise these systems effectively in real-time," he says. The system will then continuously refine relationships between algorithms, getting better at evaluating documents and behaviour in real-time, potentially uncovering all kinds of fraud.

AI also prioritises risks far more intuitively than a human can. "Technology has evolved to allow prioritisation backed by AI algorithms, which compute a risk score," explains Naveen Vijay, VP of threat research at risk analytics firm Gurucul. "This approach allows it to automate not only the detection of incidents but also the mitigation process."

AI helps you prioritise resources, too. By enabling you to analyse vast amounts of data and create a detailed record of all your assets, an AI system can predict how and where you're most likely to be compromised, so you can organise your defences to protect the most vulnerable areas.

Deep learning, attack simulations and beyond

At the moment, AI defences can't do all the work by themselves. They still have to be correctly managed by humans. "The common mistake I see is companies paying for AI systems then not configuring them correctly," says Jamie King, information and cyber security manager at IT provider TSG. "I personally like Microsoft Sentinel as part of a security strategy, because it's cost-effective and works well. But organisations need to be aware that it is an option, and quality management needs to be in place."

AI is great for spotting anomalies, but a human is still needed to make the final call, agrees Phil Bindley, MD of cloud and security at Intercity. "Having a blend that uses both AI and humans helps to spot false positives. Solutions like Check Point Harmony inform about potential threats based on AI and machine learning, then require human interaction to make a choice on the best course of action."


Just as driverless cars are set to transform transport, though, autonomous AI systems may render human supervision unnecessary. Already, the most advanced AI security services offer elements of deep learning, which doesn't depend on human-designed algorithms but instead on neural networks, which comprise many layers of analytical nodes and are effectively artificial brains. Such a system could learn to "know" the difference between benign and malicious activity.
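In miniature, "learning the difference between benign and malicious" looks something like training the sketch below: a single artificial neuron (the building block of the multi-layer networks the article mentions) fitted to a handful of labelled examples. The two features and all the numbers are invented for illustration; real deep learning stacks many layers of such units over far richer data.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash a raw score into a 0-1 'probability of malicious'."""
    return 1 / (1 + math.exp(-x))

# Hypothetical features: [failed logins per hour, MB of outbound data].
# Label 0 = benign activity, 1 = malicious activity.
data = [([0.1, 0.0], 0), ([0.2, 0.1], 0), ([5.0, 3.0], 1), ([8.0, 6.0], 1)]

w = [random.random(), random.random()]  # weights, one per feature
b = 0.0                                 # bias
lr = 0.5                                # learning rate

# Gradient-descent training: nudge weights to reduce prediction error.
for _ in range(2000):
    for x, y in data:
        pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = pred - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(predict([0.1, 0.0]) < 0.5)  # benign-looking activity -> True
print(predict([7.0, 5.0]) > 0.5)  # malicious-looking activity -> True
```

No one hand-wrote a rule saying what malicious looks like; the boundary emerged from the examples, which is the sense in which such systems "know" the difference.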

Security teams can already harness the predictive powers of AI by building models that help them predict what malware will do next, and then build AI workflows that swing into action automatically when an attack or variant is detected. AI prediction is evolving fast, however. Firms such as Darktrace are developing smart attack simulations that'll autonomously anticipate and block the actions of even the most inventive AI-tooled cyberpunk.

"Proactive security and simulations will be incredibly powerful," says Max Heinemeyer, VP of cyber innovation at Darktrace. "This will turn the tables on bad actors, giving security teams ways to future-proof their organisations against unknown and AI-driven threats."


Jane Hoskyn has been a journalist for over 25 years, with bylines in Men's Health, the Mail on Sunday, BBC Radio and more. In between freelancing, her roles have included features editor for Computeractive and technology editor for Broadcast, and she was named IPC Media Commissioning Editor of the Year for her work at Web User. Today, she specialises in writing features about user experience (UX), security and accessibility in B2B and consumer tech. You can follow Jane's personal Twitter account at @janeskyn.