Ransomware – what have we learned five years after NotPetya?

In the early years of the millennium, IT-reliant organizations enjoyed an innocent expectation that our government intelligence services would target the “enemy” and not us. That naivety came to an abrupt halt in the summer of 2017, when we experienced the impact of advanced hacking tools, developed by the US government, in the hands of cyber criminals and hostile governments. More than $10 billion was lost to attacks built on tools like “EternalBlue”, and the world was forever changed. But what did we learn?

In this blog post, I will take you on a trip down memory lane. The end goal is to demonstrate how something simple can be effective. We do not need artificial intelligence or machine learning to make a substantial difference in our cyber defenses. We just need to add some common sense to our normal way of working to get results that matter. And an easy win is prioritized patch management. Enjoy the ride!

Looking back in time

In the innocent summer of 2017, I published an article in the Danish magazine “Effektivitet” called “Ransomware – det konstante kapløb”[1], roughly translated to English as “Ransomware – the constant race”.

In the article, I presented the evolution of the phenomenon of “ransomware”: malicious software designed to infiltrate IT infrastructure and encrypt files, followed by a message telling the victim “if you want your files back, pay up!”.

In the article, I took a trip down memory lane, back to the late 1980s, when the first publicly known encryption malware appeared during the WHO’s AIDS conference in 1989. I continued the journey up to the early 2010s, specifically the 2012-2013 period, when we started to see “police ransomware” targeting both private individuals and corporate organizations.

I touched upon the fact that the ransomware we saw in the period up until 2015 was more annoying than dangerous. We learned that backup was our friend and savior, and with decent disaster recovery and business continuity plans in place, the result of a ransomware attack was mostly time wasted on recovery actions.

I covered the many controls we could implement on the preventive, detective, and responsive sides. But let us be honest: as long as our recovery was in place, why did we need to spend time on all the other disciplines?

Gamechanger

But I also mentioned another important fact – that the majority of the ransomware we saw in the early and mid-2010s was unsophisticated.

Sure, we saw some targeted and advanced attacks in Europe. One example is the attack against the Danish energy company NRGi in the fall of 2015. The ransomware hit the administrative Windows systems and spread toward the backup, but the industrial control systems (ICS/OT) were luckily left unharmed, and NRGi could recover with only scratches on the surface.

The NRGi attack was described as a targeted attack, and throughout 2016 we did not see an escalation of similar attacks – at least none known to the public.

In the article, I reflected on the possibility of ransomware attacks maturing: “…but what if the infrastructure is compromised? What if the adversaries initiate the encryption from the ‘inside’? What if an advanced and persistent method is used, so that it takes a week or a month before data is encrypted? And what if, by the way, the backup is also hit? How many companies, large and small, can survive going back a month in production data?”

The future came

While I was researching the article, the world came to know “WannaCry”. WannaCry took mainstream ransomware to a whole new level. Built on leaked NSA (National Security Agency) zero-day exploits[2], and spreading by scanning for systems with exposed SMB[3] communication ports, the ransomware propagated like a worm from the moment it was unleashed on May 12th, 2017.

After only a few hours, a security researcher discovered a built-in “kill switch”: by registering a specific domain name, he stopped freshly infected machines from encrypting files and spreading the ransomware further.

Within only a few hours of activation, WannaCry managed to affect more than 200,000 computers across 150 countries. Several countries, led by the United States and the UK, pointed fingers at North Korea, as many indicators in the malware could be linked to the country. Whether it really was a North Korean-led attack conducted by the “Lazarus Group” will most likely never be fully resolved. But only a month after the WannaCry incident, other criminal parties had gained a huge interest in the leaked NSA tools.

The big bang

The 2017 article that prompted this blog post was published on June 26th, 2017. The day after, on June 27th, 2017, the world experienced yet another global cyberattack.

The ransomware known as “NotPetya” became one of the first globally known supply chain attacks. A risk scenario that the world would, in the following years, become both awfully familiar with – and frightened of!

NotPetya was actually destructive malware disguised as ransomware, despite the ransom note that met the users of infected computers. This kind of destructive malware is commonly known as “wiper malware”, and in 2022 it was put to use in the war Russia started against Ukraine. But the United States also has a history of using destructive cyber tools, e.g. the Stuxnet[4] worm back in 2010.

The NotPetya malware infected the master boot record of the targeted computer, overwrote the bootloader, and triggered a system restart. Before the devastating restart that left the hard drive encrypted and unusable, the malware had harvested passwords and spread to other computers on the network like a worm.

NotPetya crippled many international organizations, including Maersk and Saint-Gobain, and only luck kept Maersk from a full corporate shutdown[5]. Maersk found one domain controller backup in its Ghana office. By a stroke of luck, a blackout had knocked the server offline before the NotPetya attack, disconnecting it from the network. It contained the single clean copy of the company’s domain controller data. Without that clean copy, Maersk would have been forced to rebuild its entire Active Directory, a task easily taking several months, with a corporate standstill as the result.

Wired magazine[6] reported that the damages from NotPetya amounted to more than $10 billion – and it could have been a lot worse.

Supply chain attack

The supply chain attack originated in a compromise of the Ukrainian accounting application “M.E.Doc”, used by most companies operating in Ukraine to report taxes. The attackers simply spread the malware through the application’s update mechanism, like a software deployment service.

So, without the need for any user interaction, the NotPetya malware was likely running with system privileges on the servers and workstations where the application was installed, and from there it exploited the NSA’s EternalBlue tooling for further spread and privilege escalation.

Back to my reflection in the 2017 article: “…but what if the infrastructure is compromised? What if the adversaries initiate the encryption from the ‘inside’?” NotPetya did indeed compromise the infrastructure from the inside, and the encryption was initiated once the threat actor had full control over the attacked domain.

We saw what could happen – but did we learn from it?

To rub more salt into the wound: organizations could easily have avoided the devastating consequences of both WannaCry and NotPetya by performing one of the oldest and most fundamental disciplines of IT operations – patch management.

MS17-010 was the Microsoft security bulletin that fixed the flaw in the SMB protocol which made it possible for the malware to spread so rapidly. The initial patch for the SMB vulnerability was released on March 14th, 2017 – two months before WannaCry and three months before NotPetya.

Following WannaCry, Microsoft took the unusual step of releasing an out-of-band patch for out-of-support operating systems like Windows XP and Windows Server 2003, which at the time were unfortunately still widely used in both private homes and corporate environments.

So, companies and organizations did indeed have every opportunity to avoid the catastrophe!

Despite the critical severity of the vulnerability[7], many corporate patch and vulnerability management processes were not geared to handle a vulnerability of this magnitude. Furthermore, many networks were still flat as a pancake, with few or no barriers to slow such an attack. But where segmented networks and “zero trust” architecture are a challenge for most organizations, the patch management discipline should be straightforward.

So, what is the issue?

In 2021 more than 20,000 new vulnerabilities were recorded and published in the NIST National Vulnerability Database (NVD)[1].

From 2016 to 2017 the yearly number of reported flaws more than doubled, from 6,000 to 14,000, and since then it has continued to rise to an all-time high of 21,000 in 2021.

With such a massive influx of new vulnerabilities, and with the desire to do at least a minimum of testing before deploying, patching in due time is a challenge, to put it mildly. The NVD holds more than 170,000 vulnerability records in total, and with the yearly growth described above this number is expected to reach 200,000 by the end of 2022.

But can we just close our eyes and hit the “patch now” button on all systems? Well, if your infrastructure is more than a few years old and/or contains in-house developed software, I would strongly advise against doing this as the first choice. Chances are you will experience quite a few issues and spend a lot of time practicing the discipline of root cause analysis.

What you should do is follow a well-established patch cycle: receive notification of a new patch, build a package, and deploy it to your test environment, then follow the established rollout phases and reach all of your endpoints within a month, plus or minus a week. That would have made a difference in the WannaCry/NotPetya scenario described earlier in this blog post.
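To make that cadence tangible, here is a minimal Python sketch of a patch-cycle deadline check. The 30-day SLA, the function name, and the dates are illustrations of the timeline discussed above, not a recommendation of specific tooling.

```python
from datetime import date, timedelta

# Hypothetical service-level target: every endpoint patched within roughly a month.
PATCH_SLA = timedelta(days=30)

def sla_breached(released: date, fully_deployed: date | None, today: date) -> bool:
    """Return True if a patch was (or still is) rolled out later than the agreed cycle."""
    deadline = released + PATCH_SLA
    if fully_deployed is not None:
        return fully_deployed > deadline
    # Not yet deployed everywhere: the SLA is breached once the deadline has passed.
    return today > deadline

# Example with the MS17-010 timeline: the patch was released on March 14th, 2017.
# An organization that had still not deployed it when WannaCry hit on May 12th, 2017
# was almost a month past a 30-day patch window.
print(sla_breached(date(2017, 3, 14), None, date(2017, 5, 12)))  # -> True
```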

When everything is a priority, nothing is a priority

In 2019, research into the actual exploitation of vulnerabilities was published. It revealed that, at the time of publication, only 5.5%[1] of all published vulnerabilities had been exploited in the wild. Not surprisingly, CVSS severities 9 and 10 were the most targeted. And since a CVSS score of 9 or 10 typically also indicates that the vulnerability can be exploited remotely and without privileges, it makes sense to focus on these critical vulnerabilities first.

Despite the more than 170,000 registered vulnerabilities, no organization is exposed to all of them. Luckily, the huge vulnerability database covers a wide variety of software, only a fraction of which will be present in any single environment.

How does it all link together?

So, what is the idea of this blog post? Is it merely to point fingers and shout “do better”, or is it to create panic among the masses and say “you can’t avoid it”? Neither.

The idea is to argue for a mindset change in the patch and vulnerability management approach of organizations and regulators. We don’t need to patch everything at once. But we do need to patch the things that pose a current threat!

The US government agency CISA – the Cybersecurity & Infrastructure Security Agency – maintains a catalog of known exploited vulnerabilities[2]. And before I end up recommending only the government that caused the trouble in 2017 by having its zero-day exploit tools leaked, I should mention that there are other sources out there, e.g. MISP – an open-source threat intelligence platform[3] for information sharing – where the same information can be obtained.

Whether you are using CISA or MISP, you can extract a list of vulnerabilities that are being actively exploited. If you are in the lucky situation of having a vulnerability scanner in place, you should be able to obtain an overview of which of your IT assets have which vulnerabilities.
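To make that concrete, here is a minimal Python sketch of such a cross-reference. The KEV feed URL, the “cveID” field name, and the scanner export format are my assumptions – verify them against the CISA catalog page and your own scanner’s export before relying on anything like this.

```python
import json
import urllib.request

# Assumed location of CISA's Known Exploited Vulnerabilities (KEV) JSON feed;
# verify the exact URL and schema on the CISA catalog page.
KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_cves(url: str = KEV_FEED) -> set[str]:
    """Return the set of CVE IDs currently listed as exploited in the wild."""
    with urllib.request.urlopen(url) as response:
        catalog = json.load(response)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

def actively_exploited(findings: list[dict], kev_cves: set[str]) -> list[dict]:
    """Keep only scanner findings whose CVE appears in the KEV catalog."""
    return [f for f in findings if f.get("cve") in kev_cves]

if __name__ == "__main__":
    # Hypothetical vulnerability scanner export: one row per asset/CVE pair.
    findings = [
        {"asset": "fileserver01", "cve": "CVE-2017-0144", "cvss": 8.1},  # EternalBlue / MS17-010
        {"asset": "intranet01", "cve": "CVE-2020-0000", "cvss": 5.0},    # placeholder, not exploited
    ]
    for f in actively_exploited(findings, load_kev_cves()):
        print(f"Patch first: {f['asset']} is exposed to {f['cve']}")
```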

You might also be so lucky that you have performed a crown jewel assessment, identifying business-critical systems and services – and thereby the supporting infrastructure. Finally, you might have a place to bring it all together, like a Security Information and Event Management (SIEM) system.

With this recipe, you have the optimal circumstances for detecting when new vulnerabilities start to be exploited, where in your environment they would have an effect, and whether an affected IT asset is related to a business-critical system – and from that, a prioritization.
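As a naive illustration of that prioritization, the sketch below ranks scanner findings by active exploitation, crown jewel relevance, and CVSS score. The asset names and criticality labels are invented for the example; in practice this logic would live in your SIEM or vulnerability management tooling, fed with your own asset inventory.

```python
# Hypothetical output of a crown jewel assessment: asset -> business criticality.
CROWN_JEWELS = {"erp-db01": "business critical", "fileserver01": "important"}

def prioritize(findings: list[dict], kev_cves: set[str], crown_jewels: dict[str, str]) -> list[dict]:
    """Rank findings: actively exploited first, then crown jewel assets, then raw CVSS score."""
    def urgency(finding: dict) -> tuple:
        return (
            finding.get("cve") in kev_cves,        # exploited in the wild beats everything else
            finding.get("asset") in crown_jewels,  # business-critical assets before the rest
            finding.get("cvss", 0.0),              # finally, fall back to severity
        )
    return sorted(findings, key=urgency, reverse=True)

# Usage: feed it the scanner findings and the KEV set from the previous sketch, e.g.
# for f in prioritize(findings, load_kev_cves(), CROWN_JEWELS):
#     print(f["asset"], f["cve"])
```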

To summarize

The introductory story was long, and the solution quite short – is it really that easy? Of course not. But the reason for the long introduction can be found in the current threat landscape.

According to the recent Verizon Data Breach Investigations Report (DBIR) 2022[4], more than 13% of all data breaches are related to ransomware. And the number is growing rapidly year by year.

At the same time, the number of reported vulnerabilities is growing even faster. And (often) the same number of people within the organization are still allocated to address them. If we don’t approach vulnerabilities in a smarter and more structured manner, we will definitely lose the race against ransomware.

We can’t rely on recovery plans alone if we are hit by ransomware. Here in 2022, the encryption of files is just one part of the issue we face. Public disclosure of sensitive company and employee information is commonly used as part of the extortion, which eliminates the strength of the backup-and-restore approach. We must move left in the kill chain and stop an attack before it is too late. And intelligent patch management is a serious candidate for where to start.