Last November, detectives investigating a murder case in Bentonville, Arkansas, accessed utility data from a smart meter to determine that 140 gallons of water had been used at the victim’s home between 1 a.m. and 3 a.m. It was more water than had been used at the home before, and it was used at a suspicious time—evidence that the patio area had been sprayed down to conceal the murder scene.
As technology advances, we have more detailed data and analytics at our fingertips than ever before, and that data can offer new insights for crime investigators.
One area crying out for more insight is cybersecurity.
By 2020, 60 percent of digital businesses will suffer a major service failure due to the inability of IT security teams to manage digital risk, according to Gartner. If we pair all this new Internet of Things (IoT) data with artificial intelligence (AI) and machine learning, there’s scope to turn the tide in the fight against cybercriminals.
We’re not just talking about identifying vulnerabilities, risks and cybercrimes, but also automatically combating them.
Automated threat detection and mitigation
Security professionals face a difficult task in keeping enterprise networks safe. They must uncover vulnerabilities in a continuously growing and increasingly complex landscape of devices and software. When data breaches do occur, they must identify them, limit the damage and track those responsible. Investigations take time, and false positives are all too common.
What if AI platforms or cognitive security solutions could be employed to cut through the noise? Researchers from MIT were able to create a virtual AI analyst that successfully predicted 85 percent of cyber attacks by incorporating input from human experts. Not only is that three times better than most current, rules-based systems, but it also reduced the number of false positives by a factor of five.
The secret sauce here is that the system is constantly learning. Every time a human analyst identifies a false positive or a genuine threat, the system adjusts to accommodate that feedback and creates new models to detect threats. The more feedback it gets, the more accurate it becomes. Not only does this improve threat detection, but it also frees up human analysts to investigate the complex cases that really require their attention. If they’re not bogged down in false positives, it’s possible to make better use of their expertise.
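This feedback loop can be sketched in code. The following is a minimal, hypothetical illustration of the idea, not the MIT system itself: events are scored by a linear model, and each analyst verdict (genuine threat or false positive) nudges the feature weights, so similar events score differently next time. The feature names and update rule are assumptions for the sake of the example.

```python
# Hypothetical sketch of a feedback-driven threat scorer. Not MIT's actual
# system: the features, threshold, and perceptron-style update are
# illustrative assumptions.

class FeedbackScorer:
    def __init__(self, features, rate=0.1):
        # Start with no opinion: every feature weight is zero.
        self.weights = {f: 0.0 for f in features}
        self.rate = rate

    def score(self, event):
        # Higher score = more suspicious.
        return sum(self.weights[f] * event.get(f, 0.0) for f in self.weights)

    def flag(self, event, threshold=0.5):
        return self.score(event) > threshold

    def learn(self, event, is_threat):
        # Analyst feedback: raise weights on features of confirmed threats,
        # lower them on features of false positives.
        target = 1.0 if is_threat else 0.0
        error = target - self.score(event)
        for f in self.weights:
            self.weights[f] += self.rate * error * event.get(f, 0.0)


scorer = FeedbackScorer(["failed_logins", "off_hours", "new_device"])
event = {"failed_logins": 1.0, "off_hours": 1.0}

# An analyst repeatedly confirms events like this are genuine threats;
# the model's score for similar events climbs with each correction.
for _ in range(10):
    scorer.learn(event, is_threat=True)

print(scorer.flag(event))  # → True
```

The key property is the one the article describes: accuracy is not fixed at deployment but improves with every correction, which is what lets the system gradually take false positives off the analysts' plates.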
Optimism about the potential of AI and machine learning
With nearly 60 percent of security professionals in agreement that cognitive security solutions can significantly hamper cybercriminals, according to IBM research, there’s reason to be optimistic. Among the top benefits cited by 700 security professionals surveyed were improved intelligence (40 percent), speed (37 percent) and accuracy (36 percent).
Several Fortune 500 companies are enrolled in IBM’s Watson for Cyber Security beta program. It can help organizations identify suspicious behavior, weed out false-positive anomalies and tackle the genuine threats. Many other major companies, from Google to Cisco, are working on analytical AI that might also offer cybersecurity insights.
As these systems evolve, they might go from highlighting threats to autonomously mitigating them by changing policies, automating updates and even rewriting software to close loopholes.
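The policy-changing step of that progression can be illustrated with a small sketch. Everything here is hypothetical (the alert fields, the severity threshold, and the rule format belong to no real vendor API): a high-severity alert triggers an automatically drafted block rule, while anything below the threshold is escalated to a human.

```python
# Hypothetical sketch of autonomous mitigation: alert fields, threshold,
# and rule format are illustrative, not a real product's API.

def mitigate(alert, threshold=0.8):
    """Draft a block rule for high-severity alerts; escalate the rest."""
    if alert["severity"] >= threshold:
        # Confident detection: change policy without waiting for a human.
        return {
            "action": "block",
            "source_ip": alert["source_ip"],
            "reason": alert["signature"],
        }
    # Uncertain detection: hand off to an analyst instead.
    return {"action": "escalate", "source_ip": alert["source_ip"]}


alert = {"source_ip": "203.0.113.7", "severity": 0.93,
         "signature": "credential-stuffing burst"}
print(mitigate(alert)["action"])  # → block
```

The design choice worth noting is the threshold: autonomy is applied only where the model is confident, which is how such systems can act quickly without fully removing human oversight.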
Race against cybercriminals
Vulnerabilities, exploits, malware and data breaches are all inevitable to some extent. The sheer rapidity of IoT adoption is creating enormous risk. In many ways, we are engaged in a race to find threats and mitigate them before the cybercriminals can take advantage. Security coverage is always balanced against convenience and usability, so if we can’t create an impregnable system, it’s vital that we detect and respond to threats as fast as possible.
Consider that 60 percent of enterprise information security budgets will be allocated for rapid detection and response approaches by 2020, up from just 30 percent in 2016, according to Gartner.