Written by Tomas Honzak
I was recently reading a survey that found that 32% of CISOs report being highly reliant on AI software. To me, that puts far too much stock in AI's abilities where cybersecurity is concerned. As I outlined for Dark Reading, AI brings significant benefits, but—like any technology—it has its limitations.
AI can fool other AI
For one, AI is perfectly positioned to fool other AIs. If AI is useful for detecting threats, it's equally useful for detecting (and evading) security systems—for example, by monitoring and analyzing their detection patterns. And once inside, an attacker can keep using AI as a shield, morphing the malware's fingerprint and behavior to remain unnoticed. All it takes is one such attack to compromise everything.
AI requires a lot of technology resources
AI also needs a lot of, well, everything—memory, computing power, data—to run as it's supposed to. That's fine for larger devices, but a low-power IoT device, which typically has only limited memory, processing capacity, and data, isn't really suited for AI. Malware at this level likely won't be detected until data has been sent to the cloud for processing, and by then it's far too late. In a best-case scenario, the AI might alert you that something's wrong before you've lost control of your whole IoT infrastructure.
AI and today’s networks
AI doesn't account for the loose boundaries of today's networks. Once upon a time, workplace devices existed almost exclusively within the walls of the office. With shadow IT, bring-your-own-device programs, SaaS systems, and remote employees to consider, those boundaries have blurred. Think of an employee trying to be proactive by catching up on email over the weekend. A nice thought, but checking web-based company email from a personal laptop over an unsecured Wi-Fi network can easily result in a serious security leak. You might be able to use AI to protect your own application, but you can't use AI to protect a device you didn't know someone was using, or a system you were unaware employees were loading company data into.
AI software as a security tool
AI is definitely a useful technology, and it can be transformational when used correctly. But through a cybersecurity lens, these limitations mean it's best suited for surveillance and monitoring, not as the cornerstone of your security efforts. It may sound boring, but the best way to ensure security is to keep solving the same problems we've been working on for years: lack of control, lack of monitoring, and lack of understanding of potential threats. Without addressing these issues, AI cannot deliver what we hope it will.