
Claims of AI in Cybersecurity Are Highly Exaggerated


Artificial intelligence is an exciting and innovative technology. Cybersecurity is one of technology’s biggest problems, so it’s natural to apply the former to the latter. It’s also natural, then, that every cybersecurity company claims to use AI. However, most of those claims are highly exaggerated.

ThreatWarrior is different. Real AI is the heart of ThreatWarrior. We use unsupervised learning and deep neural networks to protect your network, but more on that later.

The term ‘AI’ is ambiguous. Some people envision walking, talking, self-aware robots, or fictional characters like the Terminator. Technologists like us think of deep learning systems that can caption images or generate speech. But beware – AI also includes everything from simplistic “if this, do that” rulebooks to decades-old statistical techniques.

When you break it down, there are massive differences in the power and capabilities of artificially intelligent technologies.

Get past the marketing hype, and most so-called ‘AI’ cybersecurity platforms are more like Siri than C-3PO

Many marketers slap the letters “AI” on everything. Products that have never functioned as true AI and never will are now proudly proclaiming it. But saying something doesn’t make it true, and not all AI is created equal anyway.

Many User and Entity Behavior Analytics (UEBA) platforms claim to use AI. They track metrics like bits received per second and alert when a value strays too high or too low. Many analytics platforms claim to use AI for malware detection, when what they really do is watch for traffic to domain names made up of random-looking letters.

AI can technically be defined as any technology that demonstrates intelligence by being aware of its environment and acting to successfully achieve a goal. So, these examples can be called AI. But it’s a stretch, and that type of AI doesn’t pack much of a punch.
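To make the point concrete, here is a minimal sketch of that kind of ‘if this, do that’ logic, with invented thresholds and a hypothetical entropy cutoff. It technically perceives its environment and acts toward a goal, but there is nothing deep about it:

```python
# Illustrative sketch only: a simplistic, rule-based "AI" of the kind described
# above. The thresholds and the entropy cutoff are hypothetical, hand-picked values.
import math
from collections import Counter

def bandwidth_alert(bits_per_second, low=1_000, high=50_000_000):
    """Flag traffic whose volume falls outside a fixed, hand-picked range."""
    return bits_per_second < low or bits_per_second > high

def looks_random(domain, entropy_cutoff=3.5):
    """Crude DGA check: high character entropy suggests a machine-generated name."""
    name = domain.split(".")[0]
    counts = Counter(name)
    entropy = -sum((c / len(name)) * math.log2(c / len(name)) for c in counts.values())
    return entropy > entropy_cutoff

print(bandwidth_alert(120_000_000))        # True: "too high"
print(looks_random("xjqwkzpvrtyn.com"))    # True: random-looking label
print(looks_random("threatwarrior.com"))   # False: ordinary word-like label
```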

Supervised vs. Unsupervised Learning

There are malware detectors that apply deep neural networks, but they use supervised learning. The AI learns from files labeled good or bad, then detects malware that behaves like the bad files it trained on. But what happens when an attack uses a technique it has never learned about?

Software agents running on laptops don’t have much computing power available without impacting the user experience, so the design is generally one-size-fits-all. A centralized AI model is trained by the vendor on proprietary data, and the result is delivered to every client. These systems don’t know your environment, so they can’t use context and history to weigh threat severities. They’re not going to detect novel zero-day attacks. They just detect malware that uses techniques like the ones they trained on, which is only one of the many problems plaguing agent-based solutions.
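As a rough illustration of that limitation, here is a toy supervised detector, assuming made-up file features and labels (this is not any vendor’s actual model). It can only flag what resembles the ‘bad’ examples it was shown:

```python
# Hedged illustration of supervised malware detection: the model only learns the
# boundary between the labeled examples it is shown. The feature vectors and
# labels below are invented placeholders, not real malware telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per file: [entropy, import_count, writes_registry, spawns_shell]
X_train = np.array([
    [2.1, 40, 0, 0],   # benign
    [2.4, 55, 0, 0],   # benign
    [7.8, 12, 1, 1],   # known malware family A
    [7.5,  9, 1, 1],   # known malware family A
])
y_train = np.array([0, 0, 1, 1])  # 0 = good, 1 = bad

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# A novel attack that doesn't resemble family A (low entropy, no registry writes)
novel = np.array([[2.3, 45, 0, 0]])
print(clf.predict(novel))  # [0] -- looks "good" because nothing like it was labeled bad
```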

ThreatWarrior uses unsupervised learning. We train separate deep neural networks for every client. We don’t depend on a centralized library of known anomalies. We build private datasets of normal behavior for each client. ThreatWarrior knows the environment and context in which it operates. It looks for any deviations from expected behavior, so it can catch problems it’s not specifically taught to detect.
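Here is a simplified sketch of that per-client idea. An off-the-shelf isolation forest stands in for a per-client deep neural network, and the traffic features are invented; the point is that each model is fit only on its own client’s unlabeled baseline, so the same flow can be normal in one environment and a deviation in another:

```python
# Rough sketch of per-client unsupervised anomaly detection. IsolationForest is a
# stand-in for a per-client deep model; the traffic features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-client baselines: columns = [bytes_out, distinct_ports, new_peers]
clients = {
    "client_a": rng.normal([5_000, 8, 2], [500, 2, 1], size=(1000, 3)),
    "client_b": rng.normal([80_000, 30, 10], [8_000, 5, 3], size=(1000, 3)),
}

# One model per client, trained only on that client's own (unlabeled) traffic.
models = {name: IsolationForest(random_state=0).fit(traffic)
          for name, traffic in clients.items()}

# Score the same flow against each client's private baseline.
flow = np.array([[75_000, 28, 9]])
for name, model in models.items():
    print(name, model.predict(flow))  # +1 = fits that client's baseline, -1 = deviates
```

The flow above sits comfortably inside client_b’s baseline but is far outside client_a’s, so only client_a’s model flags it.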

Can you apply unsupervised learning without neural networks?

Yes. However, your mileage will vary. For example, another cybersecurity company utilizes Bayesian machine learning, an unsupervised approach without deep neural networks. It’s an older technique that was likely an attractive option when that product was designed: Bayesian estimation can be done on cheap hardware, whereas neural nets need modern GPUs. But the Bayesian method lacks the power, sophistication and results now achievable through next-generation techniques like deep neural networks.

To be clear, Bayesian estimation and deep learning have similar goals. Both posit hidden variables and probabilistically home in on them. The biggest difference is that deep learning excels at feature extraction: building complex hierarchies of meaning to express information from raw data. That capability is the great strength of deep neural networks over earlier techniques like Bayesian estimation.
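For a flavor of the Bayesian style, here is a deliberately simplified, hypothetical example (not any vendor’s actual model): a conjugate Gamma-Poisson update that estimates a hidden per-host connection rate and flags counts that are improbable under it. It runs happily on cheap hardware, but notice that it works on a single hand-chosen feature; nothing here extracts features from raw data:

```python
# Simplified, hypothetical Bayesian anomaly check: estimate a hidden per-host
# connection rate with a conjugate Gamma-Poisson model, then flag counts that
# are improbable under the learned rate. Cheap to run; no GPU needed.
import numpy as np
from scipy import stats

observed_counts = np.array([12, 9, 14, 11, 10, 13, 12, 8])  # connections/minute (made up)

# Gamma prior over the unknown rate, updated in closed form by the Poisson likelihood.
alpha0, beta0 = 1.0, 0.1
alpha = alpha0 + observed_counts.sum()
beta = beta0 + len(observed_counts)
rate_estimate = alpha / beta  # posterior mean of the hidden rate

def is_anomalous(count, threshold=1e-3):
    """Flag a new count whose tail probability under the estimated rate is tiny."""
    return stats.poisson.sf(count, rate_estimate) < threshold

print(round(rate_estimate, 1))   # ~11.1
print(is_anomalous(15))          # False: within normal variation
print(is_anomalous(60))          # True: wildly unlikely at this rate
```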

Why is feature extraction so important?

Say you want to recognize faces in photos. The challenge is that photos of the same face vary. Computer vision struggled with this for decades. People spent entire careers handcrafting filters to identify features like eyes, noses, and lips. They took many photos of the same face and blended them together to create composites. Nothing worked well before deep neural networks.

Deep neural nets make their own filters, and they do a better job than people can. They tease sophisticated features out of raw data, starting with lines and edges in images, then simple shapes, with the features becoming progressively more abstract the deeper the network goes. For photos, they can ultimately generate filters that detect people or even emotional states.
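Here is a tiny sketch of what that hierarchy looks like in code, assuming a toy convolutional network (the layer sizes are arbitrary and the filters here are untrained; a real system learns their weights from data):

```python
# Toy sketch of a feature hierarchy built from stacked convolutional layers.
# Each layer operates on the previous layer's output, so the features it can
# express grow more abstract with depth.
import torch
import torch.nn as nn

feature_hierarchy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early filters: edges, color blobs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid filters: corners, simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # deep filters: larger parts and patterns
    nn.ReLU(),
)

image = torch.randn(1, 3, 64, 64)   # a dummy RGB image
features = feature_hierarchy(image)
print(features.shape)               # torch.Size([1, 64, 16, 16])
```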

When ThreatWarrior runs on a client’s traffic, it extracts features from the raw data of ‘Who talked to Whom about What’ to conceptualize higher-order patterns in the environment. It finds efficient and effective representations of normal traffic. Abnormal traffic can then be measured as the AI’s inability to express what it sees.
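One common way to turn “inability to express what it sees” into a number is reconstruction error: a network trained to compress and rebuild normal traffic reconstructs familiar patterns well and unfamiliar ones poorly. The sketch below is a generic, hypothetical illustration of that idea, not ThreatWarrior’s implementation, and the traffic vectors are random placeholders:

```python
# Generic, hypothetical illustration of scoring anomalies by reconstruction error.
# Not ThreatWarrior's implementation; the traffic features are invented.
import torch
import torch.nn as nn

# Autoencoder: compress 8 traffic features into 3, then rebuild them.
model = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),
    nn.Linear(3, 8),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Train only on "normal" traffic vectors (an unlabeled baseline).
normal_traffic = torch.randn(2000, 8) * 0.1 + 1.0
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(normal_traffic), normal_traffic)
    loss.backward()
    optimizer.step()

def anomaly_score(flow):
    """High score = the network cannot 'express' this traffic = suspicious."""
    with torch.no_grad():
        return loss_fn(model(flow), flow).item()

print(anomaly_score(torch.randn(1, 8) * 0.1 + 1.0))  # low: looks like the baseline
print(anomaly_score(torch.full((1, 8), 5.0)))        # much higher: unlike anything seen
```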

In short, using our unsupervised neural networks to perform deep learning allows us to observe and extract significantly more detail, resulting in higher accuracy (fewer false positives). Think of it like watching a movie on its own versus watching it with the director’s commentary adding nuance and understanding to what’s on screen.

Pay attention to claims about what AI cybersecurity platforms can do

When evaluating AI security platforms, be skeptical when a vendor won’t explain much about the inner workings. Of course, no company is going to hand out proprietary information, but if they won’t provide much detail at all, it’s likely a sign that the solution doesn’t have much power behind it.

Some systems only amount to fancy database queries. Supervised learning systems are one-size-fits-all and won’t be tailored to your network. Systems that don’t use deep learning will struggle to extract details from what they see.

ThreatWarrior uses unsupervised deep learning to protect your network, and that’s what makes it so powerful. After some research, we think you’ll agree.
