Why Linux’s biggest strength is also its biggest weakness

Unpatched vulnerabilities are one of the main points of entry for cyberattacks. Attacks on infrastructure are increasing, and IT teams are struggling to keep up with the swathe of new issues that are discovered. Patch management should therefore be a key focus for IT and security teams in the race to keep ahead of attackers.

Linux is responsible for the vast majority of public cloud infrastructure: around 90 percent, according to the Linux Foundation's 2017 Linux Kernel Development Report. It also powers 82 percent of the world's smartphones and nine of the top ten public clouds. On top of that, Linux has a strong reputation for security, especially compared with other operating systems.

However, a recent spate of serious Linux-related vulnerabilities has shown that Linux needs to be managed just as closely as any other set of IT assets.

About the author

Shailesh Athalye is SVP of Product Management at Qualys.

How can we better protect our infrastructure over time? Are we overconfident in Linux's security? And how can we manage the patching process more efficiently?

Understanding the patch management process

Software is complex. Design flaws and programming errors will naturally arise, and these flaws can lead to security vulnerabilities. What's important is that those vulnerabilities are spotted and dealt with quickly, before they can be exploited.

Proprietary software companies have full control over their update processes. The most recognizable approach is the industry-wide monthly "Patch Tuesday" releases by the likes of Microsoft and Adobe.

These releases highlight vulnerabilities, assign severity levels, and help IT teams prioritize patching based on risk. This predictable cadence lets IT and security teams plan their patching in advance.

For Linux, the process is very different. Because Linux is open source, issues can be discovered by community members and updates issued at any time. The process is coordinated so that all those affected - from the largest distributions run by global vendors through to smaller ones maintained by community teams - can incorporate the updates into their releases.

Companies like Red Hat and SUSE run mailing lists that alert the community to known vulnerabilities and associated patches in real time, rather than on a fixed monthly cadence. This process upholds the core principles of open source: openness, transparency and traceability for all.
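
To make this concrete, the short sketch below queries a host's pending security updates straight from the package manager, which consumes the same advisory metadata those lists describe. It is a minimal illustration only, assuming either a dnf- or apt-based system; commands and output formats vary by distribution.

```python
import shutil
import subprocess

def pending_security_updates() -> list[str]:
    """Return raw lines describing pending security updates on this host."""
    if shutil.which("dnf"):
        # RHEL/Fedora family: security advisories ship as repo updateinfo metadata.
        cmd = ["dnf", "-q", "updateinfo", "list", "--security"]
    elif shutil.which("apt-get"):
        # Debian/Ubuntu: simulate an upgrade, then keep security-pocket packages.
        cmd = ["apt-get", "-s", "upgrade"]
    else:
        raise RuntimeError("no supported package manager found")
    out = subprocess.run(cmd, capture_output=True, text=True, check=False).stdout
    lines = out.splitlines()
    if cmd[0] == "apt-get":
        lines = [l for l in lines if l.startswith("Inst") and "-security" in l]
    return lines

if __name__ == "__main__":
    for line in pending_security_updates():
        print(line)
```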

The significance of Linux

It’s important not to be complacent around Linux and security. Firstly, the sheer number of distributions and variants powered by Linux means that one issue can lead to multiple sets of patches that have to be deployed, one for each distribution or asset used. 

This can become incredibly complex to keep up with. It’s easy to see how teams can fall behind as a result, particularly where there is an assumption that Linux is more secure.

Arguably the winning feature of Linux - the fact that it’s open source - is also its biggest challenge. 

When vulnerabilities become public knowledge, they are open for everyone to look into, and proof of concept code is often created to demonstrate the issues. While this assists those responsible for Linux communities and gives them insight into the problems, this data can also be used to work out other ways to exploit the original vulnerability. 

If organizations running Linux are not up to date with their patch management, attackers can use that public proof-of-concept code as a ready-made starting point for their attacks.

Challenges of the Linux patching process

To manage Linux patching effectively, there are three elements that have to work alongside each other. 

The first process to get right is building an accurate IT asset inventory that can track hardware, operating systems and software, as well as any other services. This should provide a full list of what is in place and the current status of the assets.
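
As a rough sketch of what a single inventory record might contain, the snippet below gathers the basics from one host. It assumes an RPM-based system (Debian-family hosts would query dpkg instead), and a real inventory tool would capture far more, such as hardware details and running services.

```python
import platform
import socket
import subprocess

def inventory_snapshot() -> dict:
    """Collect a basic OS/software inventory record for this host."""
    try:
        # Installed packages; assumes an RPM-based distro (use dpkg -l on Debian).
        pkgs = subprocess.run(["rpm", "-qa"], capture_output=True,
                              text=True, check=True).stdout.splitlines()
    except (FileNotFoundError, subprocess.CalledProcessError):
        pkgs = []  # still record the host even if the package query fails
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "kernel": platform.release(),
        "package_count": len(pkgs),
        "packages": sorted(pkgs),
    }

print(inventory_snapshot()["hostname"])
```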

Once you have this confirmed, you can look at which vulnerabilities exist and which patches need to be installed. With so many new issues being discovered, it may not be possible to patch everything immediately.

Instead, you can prioritize the most pressing issues, whether because they are the riskiest, the most widespread or the most dangerous. What comes first will depend on your company, what is in place, and its appetite for risk.
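
One illustrative way to express that trade-off is to score each vulnerability on severity, exploit availability and spread, then sort. The weights and CVE-style identifiers below are invented for the example; a real program would tune the weights to the organization's risk appetite.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str           # hypothetical identifiers below, for illustration only
    cvss: float           # base severity score, 0-10
    exploit_public: bool  # proof-of-concept or exploit code is circulating
    asset_count: int      # how many inventoried assets are affected

def risk_score(v: Vuln) -> float:
    """Blend severity, exploitability and spread into one sortable number."""
    score = v.cvss
    if v.exploit_public:
        score *= 1.5                      # public exploit code raises urgency
    return score * (1 + min(v.asset_count, 100) / 100)

vulns = [
    Vuln("CVE-0000-0001", cvss=9.8, exploit_public=True, asset_count=40),
    Vuln("CVE-0000-0002", cvss=5.3, exploit_public=False, asset_count=300),
]
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, round(risk_score(v), 1))
```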

Too much tooling

One of the challenges of collecting the data required to discover your assets, scan for vulnerabilities, prioritize and remediate efficiently is that it may require multiple tools that do not communicate with one another.

Some may argue that the more tools the better, and many security professionals were once taught that quantity equals quality. While a safety blanket of multiple overlapping asset tools may sound reassuring, a way to ensure that no vulnerability or gap in defense is ever missed, it actually becomes more of a hindrance for IT and security teams to manage over time.

In reality, every tool you adopt will have its own overhead and its own way of categorizing data. When you compare data across tools and teams, it’s difficult to get accurate information in real time. 

Teams are also likely to double up on work as they have to manually correlate data before they even get to work on patching issues the tools have found. The workload facing IT teams is mounting, so taking out any duplication and automating processes should immediately pay off. 

For example, organizations that use different tools for discovering assets, running vulnerability management scans, prioritizing and patching will initially face the challenge of ensuring that all the different products can “agree” on how to identify a device. Without this “agreement”, reports cannot be generated, and remediation jobs cannot be initiated. 
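
The sketch below shows the kind of normalization that "agreement" requires: reducing each tool's device record to a common identity key before their findings can be joined. The field names and records are hypothetical, since every product exports its own schema.

```python
def device_key(record: dict) -> tuple:
    """Normalize a device record from any tool into a comparable identity key.

    Match on a stable hardware identifier first (here, the lowest MAC
    address), then fall back to a normalized short hostname.
    """
    macs = sorted(m.lower() for m in record.get("mac_addresses", []))
    if macs:
        return ("mac", macs[0])
    return ("host", record.get("hostname", "").strip().lower().split(".")[0])

scanner_rec = {"hostname": "web01.example.com", "mac_addresses": ["AA:BB:CC:00:11:22"]}
patch_rec = {"hostname": "WEB01", "mac_addresses": ["aa:bb:cc:00:11:22"]}

# Both tools describe the same box, despite different casing and hostname forms.
assert device_key(scanner_rec) == device_key(patch_rec)
```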

To complicate things even further, organizations that use multiple tools for these tasks will usually have to run time-consuming processes before their patch teams can deploy patches based on prioritized vulnerabilities.

This typically involves sending a report with a list of prioritized vulnerabilities to the patch team, who in turn have to research each vulnerability, understand what patches are available, assess which are relevant to the environment, and decide which should therefore be deployed.

This process can take time and requires a lot of heavy lifting from each team. Lengthy and complex patch management processes such as these are also likely to be the first to be de-prioritized when other, seemingly ‘more urgent’ tasks arise. 

This presents a danger for organizations that may unknowingly leave themselves open to attack because of vulnerabilities left unpatched for longer than necessary.

Unification holds the key to success

The community recognizes that this is a flawed process. As a result, more tools are now available to minimize some of the steps in this process, but most still fall short and require manual intervention somewhere along the way. 

Instead, if organizations can use one solution to scan for vulnerabilities, prioritize them and remediate them from a single console, the process becomes dramatically more efficient, and organizations can more easily keep on top of their patch management.

This would remove the need for manual research and reporting on each individual vulnerability and the associated patch for each individual system. Patches can be deployed at the click of a button, and an up-to-date report of remediated vulnerabilities documents the process and closes the loop.
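
In outline, that single-console workflow collapses into one loop, roughly as sketched below. The scan and deploy_patch functions, asset names and identifiers are all stand-ins for whatever a real platform provides.

```python
def scan(asset: str) -> list[str]:
    """Stand-in for the platform's vulnerability scan; returns finding IDs."""
    return {"web01": ["CVE-0000-0001"], "db01": []}.get(asset, [])

def deploy_patch(asset: str, vuln: str) -> bool:
    """Stand-in for patch deployment; a real tool pushes the actual package."""
    print(f"patching {vuln} on {asset}")
    return True

def run_patch_cycle(assets: list[str]) -> list[tuple[str, str, str]]:
    """Scan, remediate and report in one workflow over a shared asset view."""
    report = []
    for asset in assets:
        for vuln in scan(asset):                # prioritization would slot in here
            status = "patched" if deploy_patch(asset, vuln) else "failed"
            report.append((asset, vuln, status))
    return report                               # the audit trail that closes the loop

print(run_patch_cycle(["web01", "db01"]))
```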

Ultimately, teams need to create an efficient and effective workflow, both for proactive and reactive patching approaches, that runs across as many operating systems as possible. 

Rather than having separate tools for Windows and Linux, or for on-premises and cloud assets, integrating all your asset data in one place enables greater efficiency. It provides a comprehensive overview of what you have and what to prioritize, regardless of where an asset is hosted.

The threat landscape is constantly changing, so scheduled monthly or weekly rounds of scanning from multiple agents are no longer enough. Companies should strive for continuous, automated scanning so they can detect and remediate issues in real time.

This ensures IT and security teams are always working with the most up-to-date information, and it means remediation can be automated as well.
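
A toy version of that continuous loop might look like the sketch below, with scan and alert as stand-in callables. In practice agents run on the endpoints and stream changes to a central platform rather than polling from outside, but the principle is the same: new findings surface as they appear, not at the next scheduled scan.

```python
import time

def continuous_scan(assets, scan, alert, interval_s: int = 300) -> None:
    """Re-check assets on a tight loop instead of a monthly or weekly schedule."""
    known: set[tuple[str, str]] = set()
    while True:
        for asset in assets:
            for vuln in scan(asset):
                if (asset, vuln) not in known:  # alert only on newly seen findings
                    known.add((asset, vuln))
                    alert(asset, vuln)
        time.sleep(interval_s)
```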
