Why businesses should focus on protection, instead of relying on detection

The threat landscape is constantly evolving, but I think the biggest change is how modern cybercrime has morphed into a professional, commoditized industry worth more than $680 billion. Cybercriminals are well-funded and have access to a sophisticated supply chain that enables them to innovate rapidly, creating new threats such as polymorphic malware and even zero-day exploits, with the goal of evading detection and compromising high-value IT systems.

Added to this, the dark web is making it easier than ever to obtain the skills and tools hackers need to disseminate malware and launch campaigns with very little risk of exposure; it also provides a marketplace to trade and sell access and data. For example, our latest threat report showed that Emotet campaigns rose by more than 1,200% between July and September compared with the previous quarter. Emotet infections are often a precursor to human-operated ransomware attacks; threat actors have been observed using access to compromised systems to perform reconnaissance of victim networks before deploying ransomware families such as Ryuk.

Another big area of innovation is in the lures used to reel users into taking action, which is making phishing campaigns much more effective. For example, ‘thread hijacking’ is increasingly being employed, where email clients on compromised machines are used to reply to existing conversations with malware-laden messages that can appear very convincing. We are also seeing attackers use COVID-19 related warnings to grab users’ attention and play on people’s fears. Our latest threat report found that 5% of identifiable Emotet phishing emails used a COVID-19 related lure, such as a fake report.

What’s the problem with a detection-first approach?

There are a few issues – firstly, the inescapable fact that predicting the future is hard. If 2020 has taught us anything, it’s that the world is unpredictable. This is especially true in cybersecurity, where the threat landscape is constantly evolving. Internet security tools rely on detection, trying to identify code that is already known to be malicious based on known Indicators of Compromise (IoCs). All a hacker needs to do is tweak their code so that it looks new and evades detection. They can do this in an automated fashion, using machine generation to create variants of their malware, which are then automatically tested against all the security products they expect to encounter, repeating until they get a ‘pass’. They can then distribute this malware with the expectation that it will successfully compromise the victims who receive it. Security vendors will eventually catch up and start blocking the attack, but the attackers can simply repeat the cycle.
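To see why this cycle works, consider a minimal, hypothetical sketch of signature-style detection: a scanner that flags a file only when its hash matches a blocklist of known-bad hashes. The payload strings and hashes below are invented for illustration, and real IoC matching is more elaborate, but the weakness is the same – a single changed byte yields a ‘new’ sample.

```python
import hashlib

# Hypothetical blocklist of known-bad SHA-256 hashes (IoCs) - invented values.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_flagged(sample: bytes) -> bool:
    """Signature-style check: flag only exact matches against known IoCs."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

original = b"malicious payload v1"
variant = original + b" "  # one trivial tweak: a single appended byte

print(is_flagged(original))  # True  - matches a known IoC
print(is_flagged(variant))   # False - functionally identical, yet 'unknown'
```

An attacker’s automated pipeline simply keeps generating such variants until every scanner in its test harness returns the equivalent of False.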

Hackers are also getting better at disguising malware within legitimate code and documents, so that it can get past scanners. Malware often evades detection in network sandboxes that use behavioral analysis by either detecting the presence of the sandbox, or by requiring human interaction (such as a mouse click or cursor movement) to detonate. Others use communication with a web server to control the time of detonation. There are many very effective techniques that hackers use to evade detection.

Detection-based approaches also suffer from false positives, a problem exacerbated by vendors dialing up sensitivity thresholds in an effort to reduce false negatives. As a result, security operations center (SOC) teams are placed under even greater pressure, some receiving over 10,000 alerts per day that they need to sift through to ‘pan for gold’. This can result in real threats to the business being missed.
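The arithmetic behind that alert volume is worth spelling out. The sketch below uses invented, illustrative numbers (not figures from our threat report): even a detector with a modest 1% false-positive rate overwhelms analysts when it screens a million benign events a day, because real threats are rare by comparison.

```python
# Illustrative numbers only - not drawn from any real deployment.
benign_events_per_day = 1_000_000
malicious_events_per_day = 50

false_positive_rate = 0.01  # 1% of benign events wrongly flagged
true_positive_rate = 0.99   # 99% of malicious events caught

false_alerts = benign_events_per_day * false_positive_rate   # 10,000
true_alerts = malicious_events_per_day * true_positive_rate  # ~50

total_alerts = false_alerts + true_alerts
precision = true_alerts / total_alerts

print(f"Alerts per day: {total_alerts:,.0f}")           # ~10,050
print(f"Share that are real threats: {precision:.2%}")  # ~0.49%
```

Dialing up sensitivity improves the catch rate only by inflating the pile of benign-event false positives that analysts must wade through.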

No matter how good the detection product you use is, it can’t detect everything; it’s inevitable that sooner or later a user is going to click on something that results in an endpoint becoming compromised. The issue then becomes whether the attacker can remain hidden, and whether they can leverage the compromised endpoint, and the data and credentials contained on it, to move around the network. Spotting such activity again relies on detection, and at this point, security tools on the endpoint are likely disabled or crippled. You might get lucky and be able to stop the attack before damage is done, but you’ve been breached and won’t really know for sure what has happened and whether you have successfully evicted the attackers. Relying on detection is going to result in an unsatisfactory outcome sooner or later, so a more architecturally robust approach to security is required.

Is user education working?

Cybercriminals know that employees and their devices are the soft underbelly of IT security. Last year, 68 percent of IT security professionals claimed their company experienced one or more endpoint attacks that compromised data assets or IT infrastructure, and 94 percent of malware was delivered by email. User education helps, but will never be 100 percent effective. People are busy and distracted, and attackers make effective use of social engineering. It’s now common to see phishing emails employing machine generation techniques to tailor them to organizations or individuals. As mentioned earlier, hackers will also use tactics such as thread hijacking to improve their odds. Often, users don’t even know they have been compromised, leaving gaping holes in organizational security.

What’s more, some users need to engage in ‘risky’ behavior to do their jobs. For example, HR departments need to open unsolicited attachments to read submitted CVs, finance needs to open invoices from suppliers, and digital teams need access to social media channels; it is essential to their work. Ultimately, users shouldn’t bear the burden of security; they should be protected and able to make mistakes.

How should the approach to security change?

Incremental innovation in security is failing to disrupt threat actors, so a new approach to security is needed that builds protection from the hardware up. Hardware-enforced technologies, like micro-virtualization, can help protect users and leave malicious actors with nowhere to go and nothing to steal, while also collecting threat intelligence to inform the business.

With hardware-enforced micro-VMs (virtual machines), risky tasks like opening email attachments, clicking on links, or opening downloaded files are all executed within an isolated virtual environment. This virtual ‘cage’ runs on its own virtualized hardware, meaning it can’t access anything else on the system. This ensures that malware cannot infect the host PC, access other files, or spread through the corporate network. Isolation through virtualization dramatically reduces the attack surface by architecturally protecting key attack vectors – for instance, browsers, downloads, email, chat, USB, and more.
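As a conceptual model of that lifecycle, here is a toy sketch. The names are hypothetical – this is not HP’s actual API, and real micro-VMs derive their isolation from CPU virtualization hardware rather than anything a script can model – but it shows the key idea: each risky task gets its own disposable cage that is destroyed when the task ends.

```python
import contextlib

class MicroVM:
    """Toy stand-in for a disposable, hardware-isolated micro-VM.

    Real isolation comes from CPU virtualization extensions; this class
    only models the lifecycle: create, run one risky task, discard."""

    def __init__(self, task_name: str):
        self.task_name = task_name

    def run(self, untrusted_action):
        # Whatever the untrusted action does stays confined to this VM.
        return untrusted_action()

    def discard(self):
        # The entire VM - including any infection - is thrown away.
        print(f"micro-VM for {self.task_name!r} discarded; nothing persists")

@contextlib.contextmanager
def isolated(task_name: str):
    """Create a fresh micro-VM for one task and always discard it after."""
    vm = MicroVM(task_name)
    try:
        yield vm
    finally:
        vm.discard()

# Each risky task gets its own short-lived cage:
with isolated("open email attachment") as vm:
    vm.run(lambda: print("rendering invoice.pdf inside the micro-VM"))
```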

From a user perspective it is business as usual: the technology is transparent. They can click on links in emails, visit webpages, download files and open documents just as they normally would, but with the added benefit of knowing that even if something is malicious, the malware will be rendered harmless, giving users the ability to click with confidence. This approach does not rely on detection at all; it does not even attempt to prevent attacks from taking place. Instead, it allows attacks to unfold, but within a secure, isolated environment where there is nothing to steal and no way for the malware to persist. The SOC team gets high-fidelity intelligence on real threats, yet no remediation is required.

What is the value of threat intelligence?

Micro-virtualization also has some unique advantages when collecting threat intelligence, which can be used to arm the SOC with detailed knowledge of how the organization is being attacked. Isolation through micro-virtualization means you don’t need to stop the attack as soon as an anomaly is spotted: the system can capture the full kill-chain of the attack as it takes place within the VM, safe in the knowledge that no harm can occur.

As noted earlier, malware often attempts to evade detection in network sandboxes that use behavioral analysis by either detecting the presence of the sandbox or by requiring human interaction. With micro-virtualization, the malware always runs within an isolated VM, so there is always a real user present to interact with it and trigger detonation. This forces hackers to tip their hand by getting the malware to show its true purpose – for example, capturing its communication with a command-and-control server, recording a second-stage payload being downloaded and executed, and observing how the malware tries to persist and what modifications it attempts to make to the filesystem and registry.

Another advantage is that each VM is created to run a particular application. It is a very low-noise environment in which the expected behavior of the application is well understood and can be compared in real time to the actual behavior. Deviations indicate that something interesting is happening, and the trace can be streamed to the SOC for further automated analysis. It may be an innocent bug in the application, but it could also be a brand-new zero-day exploit in use in the wild. Contrast this with the difficulty of doing anomaly detection within the host operating system, where many applications are running at once, perhaps with users running command shells, Windows updates and other software installers executing, and Active Directory Group Policy Objects making changes to the registry; it is very difficult to spot anomalies in such a complex environment.
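A minimal sketch of that comparison, with invented event names, might look like the following: because the micro-VM runs exactly one application, its expected behavior can be captured as a small allow-list, and anything outside it is worth streaming to the SOC.

```python
# Hypothetical per-application baseline - the event names are invented.
EXPECTED_BEHAVIOR = {
    "pdf_reader": {"open_file", "render_page", "read_font_cache"},
}

def find_deviations(app, observed_events):
    """Return observed events that fall outside the app's known baseline."""
    baseline = EXPECTED_BEHAVIOR.get(app, set())
    return [event for event in observed_events if event not in baseline]

trace = ["open_file", "render_page", "spawn_process", "write_registry"]
anomalies = find_deviations("pdf_reader", trace)
if anomalies:
    # In practice the full trace would be streamed to the SOC for
    # automated analysis; printing stands in for that here.
    print(f"deviations for pdf_reader: {anomalies}")
```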

Micro-virtualization uses introspection to monitor the VM from the outside looking in, capturing a black-box flight-recorder trace of the malware execution taking place inside. This is highly beneficial, since the monitoring can’t be disabled or tampered with by malware running within the VM. It allows organizations to mobilize their army of endpoints to capture unique threat intelligence that can be shared with other security tools to help harden their overall security stance – for example, SOC teams can identify key indicators of compromise (IoCs) that can then be used to update other detection-based tools.
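To make that sharing step concrete, here is a hypothetical sketch of distilling a captured trace into IoCs. The event format and field names are invented, and the hash and registry path are placeholders; real pipelines would typically emit a standard format such as STIX, but the distillation step is the same idea.

```python
import json

# Invented event format standing in for a captured micro-VM trace.
trace = [
    {"type": "network", "domain": "evil-c2.example", "port": 443},
    {"type": "file_write", "path": r"C:\Users\Public\stage2.dll",
     "sha256": "9f2b..."},  # placeholder hash, not a real sample
    {"type": "registry", "key": r"HKCU\Software\...\Run", "value": "stage2"},
]

def extract_iocs(events):
    """Distill trace events into IoCs other security tools can consume."""
    iocs = {"domains": [], "file_hashes": [], "registry_keys": []}
    for event in events:
        if event["type"] == "network":
            iocs["domains"].append(event["domain"])
        elif event["type"] == "file_write":
            iocs["file_hashes"].append(event["sha256"])
        elif event["type"] == "registry":
            iocs["registry_keys"].append(event["key"])
    return iocs

print(json.dumps(extract_iocs(trace), indent=2))
```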

  • Ian Pratt, Global Head of Security for Personal Systems at HP Inc.