Ransomware made headlines in 2017, with attacks shutting down the UK's NHS and costing Maersk shipping over $300m in lost revenue. Ransomware is a massive business for cybercriminals, helping drive the price of bitcoin from $1,200 to over $7,000 per coin. We often see ransomware as an unbeatable force, but with some common-sense controls and simple tricks, the damage can be reduced or even stopped. Join Kieran to learn some simple, free steps you can take to stop ransomware in its tracks.
So tonight I want to talk about ransomware, and some essentially free preventative measures you can take with your client systems to reduce the possible impact of an attack. When I spoke about ransomware back in early April at Experts Live, I really didn't expect how much things would change.
Just out of interest, who here has seen a ransomware attack within their organisation in the last 12 months?
Before we get into the content.
My name is Kieran Jacobsen, I am the Head of Information Technology at Readify and a Microsoft MVP for Cloud and Datacenter Management.
I have a website, PoshSecurity.com, where I write about PowerShell, Azure and information security. I also maintain PlanetPowerShell.com, a community PowerShell content aggregator.
You can find me on Twitter via @Kjacobsen.
Ransomware upgraded to a potential killer this year. On Friday the 12th of May, the world experienced the WannaCry outbreak. Within a day, more than 230,000 computers in 150 countries were impacted. Around 70,000 were owned by the UK's National Health Service. Systems including MRI scanners, blood storage fridges and theatre equipment were all impacted, resulting in non-critical emergency cases being turned away and ambulances being diverted.
Speaking with medical professionals, the common belief is that this attack would have impacted patient care and potentially caused a loss of life.
WannaCry was eventually stopped when Marcus Hutchins discovered a kill switch and purchased the required domain. We were extremely lucky that it contained a kill switch that was simple to activate.
WannaCry spread via exposed SMB services and an NSA-discovered vulnerability in SMBv1. It has been attributed to the North Korean government, based on intelligence gathering and on the initial infections originating in Asia.
It has been questioned whether this really was ransomware. The bitcoin infrastructure behind it wasn't as well structured as that of other ransomware like Locky. I often wonder what the intent was for WannaCry.
Then in June, Maersk was hit hard by NotPetya. If you don't know who Maersk are, they are a Copenhagen-based freight organisation, often described as the world's largest cargo container business. It is often quoted that they handle one in seven containers globally and a fifth of global freight. This attack resulted in delays in the shipping and distribution of many products worldwide.
NotPetya wasn't typical ransomware; by all accounts it was a large-scale attack designed to inflict maximum damage on its primarily Ukrainian targets. It presented itself as ransomware, but its goal was data destruction. These are often called wiper attacks: they are designed to permanently destroy or prevent access to a victim's data. NotPetya has been attributed to the Russian government, and some believe it was a cover for some other incident or operation.
NotPetya entered Maersk's network via their Ukrainian offices. The initial vector was a product called M.E.Doc, one of two Ukrainian tax platforms. Once inside, it spread across the network to four other offices. NotPetya completely shut down Maersk's network for several days, leaving ships wherever they were. The overall clean-up took two weeks.
In their latest update to investors, Maersk's CEO stated the attack could have been much worse, but that the total cost of dealing with it could be up to 300 million USD.
The truth of the matter is that the majority of ransomware still looks like this one: emails sent out in huge numbers with malicious attachments or links to malicious files. Ransomware accounts for 64% of all malware distributed via email, and the vast majority of that is the Locky family. Internet security researchers have witnessed some massive Locky distribution campaigns: Lukitus distributed Locky to 23 million email addresses in 24 hours in August, and another campaign hit 27 million in September. There are a wide variety of ransomware strains belonging to the Locky family, and those behind it seem to follow an agile, dynamic approach to these campaigns, rapidly iterating and swapping out emails and attack vectors. It almost seems like they are running a DevOps or agile methodology.
It is worth noting that some Locky campaigns seem bent purely on destruction, but others are still in it to make money.
It is these attacks I want to focus on tonight, as they really are a silent majority.
So the strategy I have been recommending is one where we reduce the risk of ransomware being successful. How do we do that? We make it harder: we place security controls, roadblocks you might say, that slow down the ransomware and prevent it from achieving its goal of infecting a user's machine.
I see this strategy much like the signs you see for theme park rides, the ones that say "you must be this tall to ride this ride". Right now, the sign is really low; anyone can just walk up and pwn us. We want to raise the bar high enough that those who want to attack us cannot, whilst ensuring that our users can still perform the tasks they need to. It's a delicate balancing act, one that will take time, but the rewards are worth it.
Tonight I am going to talk about 7 effective ways of raising that bar. These don't require you to buy Windows 10 Enterprise, you can do these with Pro, and you don't need to buy any additional software either. All of the configuration can be done with Group Policy, scripts, System Center or even DSC.
There are two basic preventative measures that I am not going to talk about tonight: network segmentation and patching. Neither is something you can overlook; in fact, if you are not practicing both of these, you really should invest in them first.
Network segmentation is crucial to ensure that ransomware, particularly those like WannaCry and NotPetya cannot spread across your networks, from clients to servers and everything in between.
Patching has been critical for a number of years. I realise some people cannot patch, but every time someone says they cannot patch, my next question will always be: then how are you mitigating these risks?
Macros are a simple and effective way to encourage a user to run malicious code on their system. Ransomware authors have proven time and time again that they can convince users to enable macros and run whatever code they want. I really struggle to find a legitimate use for macros. At this point, one has to wonder why Microsoft simply hasn't taken the step to a firm off-by-default stance. The impact of Microsoft making this move would be to dramatically cut off a vital source of infections for ransomware authors.
The code on the screen is the macro from earlier. It is obfuscated VBA that downloads a PowerShell-based dropper, which then drops Locky onto the victim's system. Blocking macros stops this dead in its tracks. No macros, no malware.
We can disable macros via Group Policy or Registry, and there are two different approaches we could take:
The preferred approach is to disable macros entirely
The alternative is to disable macros only in files that have come from the internet. Whilst this seems like a more frictionless approach, ransomware authors have already started instructing users on how to bypass these protections, so I don't recommend it.
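To sketch what this looks like in practice, here is how you might set the Word 2016 macro policy from PowerShell; these are the documented Office policy registry values, but the "16.0" version segment and the per-application key name will vary with your Office version and apps, so treat the exact path as an assumption to verify for your environment.

```powershell
# Sketch: disable Word 2016 macros via the Office policy registry values.
# Adjust "16.0" and the application name (Word/Excel/PowerPoint) as needed.
$key = 'HKCU:\Software\Policies\Microsoft\Office\16.0\Word\Security'
New-Item -Path $key -Force | Out-Null

# Preferred approach: disable all macros without notification (VBAWarnings = 4)
Set-ItemProperty -Path $key -Name 'VBAWarnings' -Value 4 -Type DWord

# Alternative (not recommended): only block macros in files from the internet
# Set-ItemProperty -Path $key -Name 'blockcontentexecutionfrominternet' -Value 1 -Type DWord
```

In a domain, you would push the same values with the Office ADMX templates rather than scripting them per machine.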
As administrators in Active Directory environments, we know not to use accounts that are a member of Domain Admins to browse the internet. We are comfortable having multiple accounts for different administrative security contexts. Then why the hell do we insist on browsing the internet with local administrator privileges?
Linux users don't run as root, and nor do Apple users (well, the less said there the better), so why do Windows users, and Windows power users in particular, insist on opening web pages as local administrator? Do you need local admin for cat pictures?
Some of the blame for this behaviour rests with Microsoft. The introduction of UAC would have been the perfect opportunity to make standard users the default. The argument always seems to be backwards compatibility. I am so sick and tired of this excuse.
Running as a standard user greatly reduces what ransomware, or any malware really, can do on your system. It is a significant roadblock for an attacker.
Now I am sure there are people sitting here thinking: he is mad, utterly mad, I need to be admin of my own workstation. Sure, you can have administrative privileges; just keep a general-use, non-administrative account for checking your email, cat videos and the latest clickbait news, and log in to an admin account when you need to install an application. You have two accounts: admin and standard user.
On my laptop, I have two accounts. My everyday account is my MSA, which I have also linked my work account to. If I want to make a change to the system or install software, I have a local Windows account with admin privileges. I run virtualisation tools, Docker, Visual Studio and VS Code in this manner.
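If you want to set up this split on a standalone Windows 10 machine, a minimal sketch with the built-in LocalAccounts cmdlets looks like the following; the account names are made-up examples, and it must be run from an elevated session.

```powershell
# Sketch: create a dedicated elevation-only admin account, then demote
# the everyday account to a standard user. Account names are examples.
$password = Read-Host -AsSecureString -Prompt 'Password for the admin account'
New-LocalUser -Name 'kieran-admin' -Password $password -Description 'Elevation-only account'
Add-LocalGroupMember -Group 'Administrators' -Member 'kieran-admin'

# Once the admin account works, remove the everyday account from Administrators:
Remove-LocalGroupMember -Group 'Administrators' -Member 'kieran'
```

Test that the new admin account can actually elevate before demoting your daily account, or you can lock yourself out of administration entirely.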
UAC has been an interesting security control since its introduction. I think the reason it is hated so much by users and IT professionals is simply because it was introduced with Windows Vista, and anything related to Vista is such an easy punching bag. Had it been introduced with Windows 7, I doubt people would hate it half as much.
UAC isn't a security boundary, but it is a critical security control. The problem with UAC is that it is a rather invisible mechanism, improving the security of Windows in ways that users and administrators cannot clearly see. The fact it still exists in Windows 10 shows that it serves its purpose.
UAC changed from Vista to Windows 7; whilst it was a minor change to improve user experience, it had a negative impact on our systems' overall security. The shipped default is now "notify me only when apps try to make changes to my computer". With this setting, you will be notified every time an application attempts to make a change to your computer; however, and this is important, you will not receive a notification when you attempt to make a change yourself. Microsoft made this change to reduce the notification fatigue seen in Windows Vista. In hindsight, our users were not ready for that many notifications, and our applications were so badly written that they produced far too many UAC elevation requests. It was also thought that users shouldn't be notified about changes they clearly knew they were making.
The higher "Always notify me" setting actually has some benefits: with it, most UAC bypasses fail, protecting our systems from more complex attacks. It is becoming more commonly accepted, particularly in enterprise environments, that administrators should configure both clients and servers to this higher level. I run all of my devices this way, and I really haven't seen any impact.
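As a sketch, these are the registry values behind the "Always notify" slider position; the same settings are exposed in Group Policy under Security Options. Verify the values against your own machine's slider before deploying broadly.

```powershell
# Sketch: set UAC to "Always notify" via the policy registry values.
# Run elevated; the equivalent Group Policy settings live under
# Computer Configuration > Security Settings > Security Options.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'

# 2 = prompt for consent on the secure desktop for all elevations
Set-ItemProperty -Path $key -Name 'ConsentPromptBehaviorAdmin' -Value 2 -Type DWord

# 1 = show the elevation prompt on the secure desktop
Set-ItemProperty -Path $key -Name 'PromptOnSecureDesktop' -Value 1 -Type DWord
```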
Let’s see how old you all are. Does anyone remember the ILOVEYOU or LOVE BUG worm back in May 2000? Does anyone want to own up to being infected?
I remember this one, and I remember how quickly it spread. The worm was extremely primitive; it was VBScript, after all. It was so effective simply because double-clicking on the script would execute it. The worm spread via email from machines in the Philippines to Hong Kong, Europe and then the US. The damage estimate was put at around 5 to 9 billion US dollars, with an estimated cost of 15 billion to remove. In 10 days, over 50 million infections were reported, roughly 10% of internet-connected computers. At the time it was considered one of the most destructive computer-related disasters.
What if I told you that ransomware has just started to use this technique? Yes, this technique is in use by some Locky ransomware campaigns. This year we saw some ransomware families distributing zip file attachments containing .js files. Much like VBS, double-clicking a JS file executes it.
Ever wondered why clicking on a .ps1 file opens Notepad and not PowerShell? Microsoft deliberately didn't associate an execution action with PowerShell scripts, as they knew it closed off an infection vector. Instead, a user or application needs to explicitly run a PowerShell script. I really don't understand why they haven't given the other script files the same treatment; my only suspicion is our common friend, backwards compatibility.
Thankfully we can change the default actions for common script files via group policy or the registry.
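A minimal sketch of the registry approach is below: it points the "open" verb for the common Windows Script Host file types at Notepad, mirroring the .ps1 behaviour. The list of file types is my selection, not an exhaustive one, and HKCR edits affect all users, so run it elevated and test before rolling out.

```powershell
# Sketch: make double-clicking common script types open Notepad instead of
# executing them. Run elevated; extend $types for your environment.
$types = 'JSFile', 'JSEFile', 'VBSFile', 'VBEFile', 'WSFFile'
foreach ($type in $types) {
    $key = "Registry::HKEY_CLASSES_ROOT\$type\Shell\Open\Command"
    # Replace the execute action with a harmless "view in Notepad" action.
    Set-ItemProperty -Path $key -Name '(default)' `
        -Value '"%SystemRoot%\system32\notepad.exe" "%1"'
}
```

Legitimate automation (scheduled tasks, logon scripts) is unaffected, since those invoke the script host explicitly rather than relying on the shell association.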
Previously, Microsoft released the Enhanced Mitigation Experience Toolkit, or EMET, for free. EMET applies a set of security mitigation technologies, special protections and obstacles that an exploit author must defeat in order to exploit vulnerabilities in the protected software. EMET isn't a guarantee that vulnerabilities cannot be exploited; in fact, researchers have discovered ways to bypass its protections. The idea is to make exploitation as difficult as possible.
With the Windows 10 Creators Update, Microsoft included some of these protections in the operating system, and they have since extended these protection technologies further in the Fall Creators Update. When Microsoft first introduced these changes in the Creators Update and announced that EMET was discontinued, some of us were rather disappointed. Microsoft has responded to that criticism, and I am happy to say things have been greatly improved in Fall Creators.
With Fall Creators, you can now configure exploit protection via the GUI, and I am told the Group Policy experience is a little better as well. Fall Creators also brings Attack Surface Reduction rules, additional controls for Office applications and macros, though unfortunately these aren't visible in the UI. Windows Defender SmartScreen has also been extended: administrators can now make use of IP reputation filtering in Windows Defender, also unavailable via the UI. Finally, there is a concept of Controlled Folder Access.
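To give you a feel for the PowerShell surface of these features, here is a sketch using the Fall Creators cmdlets; the ASR rule GUID is the documented "block Office apps from creating child processes" rule, but double-check it and the mitigation names against Microsoft's current documentation, and audit in a pilot group before enforcing.

```powershell
# Sketch: Fall Creators Update protections from PowerShell.
# Requires Windows Defender to be the active AV; run elevated.

# System-wide exploit protections (the EMET successor):
Set-ProcessMitigation -System -Enable DEP, BottomUp, HighEntropy

# Attack Surface Reduction: block Office applications from creating
# child processes (GUID is the published ID for that rule).
Add-MpPreference -AttackSurfaceReductionRules_Ids 'D4F940AB-401B-4EFC-AADC-AD5F3C50688A' `
                 -AttackSurfaceReductionRules_Actions Enabled

# Controlled Folder Access: stop untrusted processes writing to protected folders.
Set-MpPreference -EnableControlledFolderAccess Enabled
```

Both ASR and Controlled Folder Access support an audit mode, which is the sensible first step: you get event log entries showing what would have been blocked without breaking anything.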
I have deployed EMET previously to both production desktops and servers, and really haven’t seen any major issues.
Administrators like to argue with me on this idea; some find the idea of installing more browsers counterintuitive. The argument is that by increasing the number of applications you need to manage, you increase complexity and management overhead; more effort needs to go into patching, and so security decreases overall. I hate to pop people's bubbles, but Chrome and Firefox are most likely already running in your environment, completely unmanaged. How is that any better?
Your users are probably using Chrome, Firefox or even Safari on their home PCs, phones or tablets, so why should their work PCs be any different? In some cases users can even install these without administrator access, so if they are going to install them anyway, why not do it for them?
Chrome comes with 32-bit and 64-bit MSI-based installers. I recommend the enterprise 64-bit build, even on your home PC, as Chrome is probably still the most secure browser on the market right now; Edge is catching up, but it still has some distance to go. Installing via the 64-bit MSI also means you can be sure you have the 64-bit build, which comes with additional protections.
Even if you don't deploy Chrome, check out its Group Policy templates. With the policy objects, you can modify Chrome in a variety of ways, including installing or blocking specific extensions. The policies will even manage copies of Chrome that users have installed per user with the mainstream installer.
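Chrome reads the same policies from the registry that the ADMX templates set, so you can sketch a policy without the templates at all. The example below force-installs an extension; the extension ID shown is uBlock Origin's Web Store ID, but verify it (and the policy name) against the current Chrome Enterprise policy list before relying on it.

```powershell
# Sketch: force-install an extension via Chrome's registry-based policy.
# Run elevated; Chrome picks the policy up on next launch.
$key = 'HKLM:\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist'
New-Item -Path $key -Force | Out-Null

# Format is "<extension id>;<update URL>"; value names are just list indices.
Set-ItemProperty -Path $key -Name '1' `
    -Value 'cjpalhdlnbpafiamejdnhcphjbkeiagm;https://clients2.google.com/service/update2/crx'
```

You can confirm the policy applied by browsing to chrome://policy on the client.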
Firefox provides some enterprise-friendly long-term support builds, and you can use a third-party plugin to get Group Policy support too.
So on Sunday I decided to run a quick test. I decided to see what the Internet was like without an ad blocking extension.
On the left is news.com.au without any ad blocker. Chrome needed to perform 720 requests, almost 14 MB was downloaded, and the page took 41 seconds to load. On the right is the same page with uBlock Origin: 176 requests, 1 MB of data, and it took 2 seconds to load.
Blocking advertising in our browser clients has many benefits: not only do we reduce the eye strain and the boxes covering text, but we also improve browsing performance and, most importantly, make our users more secure. Advertising networks are essentially a platform for JavaScript distribution: your browser goes to the network's servers for the ads, but is often redirected off to the content creators' servers. It is here that things can go wrong, even with all the checks and balances the networks have put in place. In March 2016 we saw the New York Times and the BBC hit by ransomware malvertising; over the past years we have also seen sites like news.com.au and even Yahoo.
You can block ads at your edge using firewall, DNS or web proxy rules, or you can block them on your client devices. The advantage of client-side blocking is that you also protect your roaming users.
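One crude but dependency-free client-side option is the hosts file. The sketch below appends block entries; the two domains are placeholders I made up for illustration, and a real deployment would pull from a maintained blocklist rather than a hard-coded array.

```powershell
# Sketch: client-side ad blocking via the hosts file. Run elevated.
# The domains below are placeholder examples, not a real blocklist.
$hosts = "$env:SystemRoot\System32\drivers\etc\hosts"
$adDomains = 'ads.example.com', 'tracker.example.net'

foreach ($domain in $adDomains) {
    # 0.0.0.0 resolves the domain to an unroutable address.
    Add-Content -Path $hosts -Value "0.0.0.0 $domain"
}
```

This protects every browser on the machine, including roaming laptops, though it offers no wildcard matching, so DNS-based blocking scales better for large lists.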
Most people do not realise that there is ad blocking built into Internet Explorer with its "Tracking Protection" list functionality. It isn't as powerful as the more fully featured ad-blocking extensions in other browsers, but it makes a significant difference.
So that is everything for tonight. I want to thank you all for coming and listening to me tonight. Here are some links to various guides and more information. I will publish this on my website tonight, and the slides will be shared on the meetup page.
Thank you.
<pause>
Are there any questions?